
Re: How to block telnet access.




> > To give an example, you don't need to know a lot to run OpenBSD.  But
> > the security people at OpenBSD do know what they're doing.  (Notethat
> > I'm NOT saying that OpenBSD is 100% secure.)
> 
> then in your own words, openBSD is insecure (i.e. anything less then 100%
> is insecure)?

See http://www.openbsd.org/security.html.  Every OpenBSD release has had
new vulnerabilities discovered in it.

> > Sorry, I disagree.  When something is properly designed and small
> > enough, it is quite possible.  (Like I noted earlier, these days
> > ``properly designed'' probably means little reliance on vendor
> > libraries and other possibly insecure software.)
> 
> next thing you'l tell me that you can write a program that has 0 bugs...

Stop being silly.  There's no reason for a simple program to have bugs,
unless the programmer was incompetent.

> and that you can 'inspect and secure' each and every line of source code
> in a system that runs a few internet services? that is, each and every one
> of the few 100K source lines (sorry - the kernel itself is over 1M lines
> today, thought most are for drivers you do not use).

Where did you get that idea?  I said that it is possible to verify a program
that is ``small enough''.

This isn't magic.  It's computer science and mathematics.  Proving that an
entire system is correct is impossible in practice, because the verification
procedure itself would be complex enough to have bugs; in other words, as
we've already established, we can't inspect the entire system.

So, the solution is to implement the network services using applications
that are ``small enough'' so we can comfortably review them.  In practice,
this means separating the ``service'' into smaller ``tasks'', each of which

	1) Is coded securely.

	2) Is small, so it can be inspected.  (Which, btw, also means
	   that it is less error-prone.)
	
	3) Does not trust the other tasks.

Apart from the reduction in code complexity and the resulting ease of
analysis, there's also the significant benefit that if one of the tasks
does get compromised, the compromise does not spread to the rest of the
system.
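
To make this concrete, here is a rough sketch of the kind of task
separation I mean.  (Only an illustration: the unprivileged uid and the
sample request line are made up for the example.)  The privileged task
never touches attacker-supplied bytes; the parser runs as a separate,
unprivileged process behind a socketpair.

/*
 * Sketch: split a service into a privileged relay and an unprivileged
 * parser.  The two tasks do not trust each other and talk only over
 * a socketpair.  Uid/gid 65534 ("nobody") is an assumption.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

#define UNPRIV_UID 65534        /* assumed unprivileged uid */
#define UNPRIV_GID 65534        /* assumed unprivileged gid */

int
main(void)
{
	int sv[2];
	pid_t pid;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
		perror("socketpair");
		return 1;
	}
	if ((pid = fork()) == -1) {
		perror("fork");
		return 1;
	}
	if (pid == 0) {
		/* Parser task: drop privileges first, then read input. */
		char buf[128];
		ssize_t n;

		close(sv[0]);
		if (setgid(UNPRIV_GID) == -1 || setuid(UNPRIV_UID) == -1) {
			perror("drop privileges");
			_exit(1);
		}
		while ((n = read(sv[1], buf, sizeof(buf) - 1)) > 0) {
			buf[n] = '\0';
			/* Parse the request here.  A bug or compromise is
			 * confined to this small, unprivileged process. */
			printf("parser got %zd bytes\n", n);
		}
		_exit(0);
	}
	/* Privileged task: never parses attacker data, only relays it. */
	close(sv[1]);
	write(sv[0], "GET / HTTP/1.0\r\n", 16);
	close(sv[0]);
	waitpid(pid, NULL, 0);
	return 0;
}

If the parser does get broken into, the attacker is holding an
unprivileged process whose only channel to the rest of the service is
that socketpair.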

Finally, it is true that to be able to inspect the program, you have to
replace much of the system-related fluff.  Since a lot of system libraries
are not coded securely, this is actually a win.  (I already said this a
couple of times, but then again I already said most of the stuff above.)

> > Second, I'm trying very hard NOT to use terms like ``95% secure''.
> > Partial security isn't.  In other words, I don't believe that 95% or
> > 50% are ``not the same thing''.  They are.  Insecure.
> 
> but in a world where you cannot achive the 100%, still 95% is better then
> 50% (or you don't beleive in statistics of the chance of being attacked by
> a cracker that happens to land exactly on the security holes that exist in
> your specific system).

What exactly is this ``world where you cannot achive[sic] the 100%''?  

What's the threat model?

I specifically stated that I was dealing with a case where internal
threats are not an issue; i.e. THERE ARE NO UNTRUSTED USERS.  The design
above works for this case, because 

	1) The application interacts with the attacker only through
	   the network, making that network-handling code the
	   ``critical path''.

	2) The attacker cannot influence the flow of the program 
	   except by using networked input, which was dealt with
	   in (1).

In this case, it is possible to achieve ``100%'' security.
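
In code terms, that ``critical path'' can literally be one small routine
through which every attacker-supplied byte must pass.  A rough sketch
(mine; the length limit and the line-oriented command format are invented
for the example):

/*
 * Sketch: the single entry point for untrusted network data.  Reads one
 * command line, rejects control characters and overlong input, and
 * always NUL-terminates the result.
 */
#include <ctype.h>
#include <stddef.h>
#include <unistd.h>

#define MAX_CMD 64              /* assumed maximum command length */

/* Returns 0 on success, -1 on EOF, error or illegal input. */
int
read_command(int fd, char cmd[MAX_CMD])
{
	size_t len = 0;
	char c;

	for (;;) {
		if (read(fd, &c, 1) != 1)
			return -1;      /* EOF or read error */
		if (c == '\n')
			break;          /* end of command */
		if (c == '\r')
			continue;       /* tolerate CRLF line endings */
		if (!isprint((unsigned char)c))
			return -1;      /* reject control bytes */
		if (len >= MAX_CMD - 1)
			return -1;      /* reject overlong lines */
		cmd[len++] = c;
	}
	cmd[len] = '\0';
	return 0;
}

Everything past read_command() only ever sees a short, printable,
NUL-terminated string, so the rest of the program can be reviewed
without worrying about raw network data at all.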

Compare this network-only case to a setuid application, where verification
is a much more complex and difficult process, since the attacker
has much more control over the program's execution environment.  In
that case there is indeed no chance for a reliable solution, so we
have to fall back to the `minimizing damages' approach, i.e. keeping
abreast of new vulnerabilities and keeping our fingers crossed.
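
To see why the setuid case is so much harder, look at the sort of thing
such a program has to do before its first line of real work.  (Again
only a sketch of the usual hygiene, not a complete list; the PATH and
IFS values are just examples.)

/*
 * Sketch: a setuid program inherits its environment, file descriptors,
 * umask and more from an untrusted caller, so it must rebuild its own
 * execution environment before doing anything privileged.
 */
#include <fcntl.h>
#include <unistd.h>

extern char **environ;

static char *clean_env[] = {
	"PATH=/bin:/usr/bin",   /* example value */
	"IFS= \t\n",            /* example value */
	NULL
};

int
main(void)
{
	int fd;

	/* The caller chose our environment; throw it away. */
	environ = clean_env;

	/* Make sure fds 0-2 are open, so later opens can't land there. */
	while ((fd = open("/dev/null", O_RDWR)) != -1 && fd <= 2)
		;
	if (fd > 2)
		close(fd);

	/* ... only now start the privileged work ... */
	return 0;
}

And that still ignores argv, the umask, resource limits, the working
directory, signals and whatever else the caller controls, which is
exactly why verification is so much harder in this case.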

But in mathematical terms, that `minimizing damages' approach isn't
reliable; we are in a race that we can lose.  I'm saying that in this
specific threat model,
it IS possible to design a reliable solution.  The reasons this
usually isn't done are not technical.  They are either economic, in that
it's cheaper to deal with the occasional break-in than to design and
implement a secure solution, or else the threat model being used (even
if unintentionally) excludes sophisticated attacks.

Like I previously wrote, if your threat model is that you are only
facing unlucky script kiddies, then what you're saying is fine.  I'll
just point out, for completeness, that such a threat model has a high
chance of breaking down.

> and remember - the system is secure if it was never broken to - not if it
> is impossible to break into it (because this second option is never true).

This statement is false any way you look at it.

	1)  I have a FreeBSD box at home.  It's not connected to the
	    Internet.  I have considered and rejected the possibility of
	    the CIA breaking into my house to steal my box.  The box is
	    secure.

	2)  I have a SunOS machine connected to the Internet.  The root
	    account has no password.  No one has exploited this, yet.
	    Is it secure?

> > > that's why the sain rule is "first decide how important is the system and
> > > its resources to you, and based on that decide how much effort to spend on
> > > securing it".
> >
> > That's the economical side of the equation, not the technical side.
> 
> not as i see it, since i know it is _never_ possible to achive a
> completely secure (i.e. 100% secure) system.

Try to follow your own logic.

It's impossible to ``100% secure'' a system.  But it isn't impossible to
``50% secure'' it, or to ``95% secure'' it.  So what's the difference
between these two options?  Where does the extra 45% come from?

It comes from code which has been reviewed and secured.  However, you
claim that code can never be fully inspected and reviewed; by your own
argument, then, there cannot be different levels of security, since
everything is equally broken.  That is the contradiction.

The correct answer is that stable, well-defined parts of a system CAN be
inspected and secured.  But an operating system as a whole is much too
complex and transient to review.  Even the OpenBSD people keep discovering
things in their never-ending audit, and their code base is not going
through the dramatic changes and feature additions that other OSes are.

Therefore, as I keep trying to show, if your threat model is such that
attackers are only coming from the network, there is no reason in the
world why you cannot secure those parts of the system which will be 
under attack.  You don't have to secure the whole system.

> > It's possible to architect a system to be secure against the certain
> > threats we're talking about.  It may very well be that you can't
> > afford to do so; that doesn't make it impossible.
> 
> the effort is asymptotic - you'll need to invest an infinite ammount of
> effort in order to get very close to 100% secure, and even then you're not
> 100% secure.

Like I said, this is not magic.  If I implement a service in such a way
that I can trivially show that it is secure, why exactly is it NOT secure?

> but i tihnk this argument leads no where - you're the optimistic type of
> a sys admin, and i am not, and none of us will convince the other..

I'm not optimistic at all.  I understand the issues involved and want a
reliable solution.  I recognize where it is possible to solve the problem 
and where it currently is not, and try to architect my networks accordingly.

=================================================================
To unsubscribe, send mail to linux-il-request@linux.org.il with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail linux-il-request@linux.org.il