[clue-talk] hrmmm

Nate Duehr nate at natetech.com
Wed Aug 1 00:36:22 MDT 2007


On Jul 31, 2007, at 4:51 PM, Matt Poletiek wrote:

>> Yes, but to do in-depth security research, you need to get paid to do
>> so.  Only a small percentage of folks are going to work on this stuff
>> in their spare time, with the level of complexity we're talking about
>> nowadays.
>
> This kind of mindset limits human potential. Resources are
> obtained in more than one way, and this is often the case.

Okay, that's a fair discussion point.

>> The "evolution" has been going on for a couple of decades now.  When
>> will it "evolve" into something a magnitude better?  Where's the
>> fiscal incentive to do so?  Who's working on it?
>
> If you follow mailing lists like bugtraq and Full Disclosure, hell,
> even the milw0rm RSS, you will see that more often than not the idea
> of a security hole is presented by a nobody or by someone who wishes
> to remain anonymous; this is due to society's fear of this knowledge.

That comment could spin off a whole new topic of discussion: why are
people afraid of computers?  Is it because people aren't logical, or
because of interesting human-interaction problems between people "in
the know" and people who "don't have a clue"?  If those "in the know"
-- those who have either good training (nurture) or good intuition
(nature) about how computers actually REALLY work -- could openly
describe how they "got there", what percentage of those who "don't
have a clue" would attempt to climb the same mountain to gain the
same or similar knowledge levels?  Would they?

I contend that paid groups like SANS and others exist because people
don't want to climb that knowledge mountain alone.  But if a person
only sits and absorbs what a good training organization like SANS
presents (yes, I think they have very good material and courses), and
doesn't take a personal interest of some sort... the knowledge will
only take them so far.

> After the idea is presented, proof-of-concept code is developed by,
> again, either a nobody or someone who wishes to remain anonymous.
> There are a bold few well-known names, but they alone do not drive
> the evolution of the code. Code, in a sense, is an ever-evolving
> language of strictly objective, linear logic. When a machine is
> exploited, it is simply executing true logic which was not foreseen
> by the official developer.

Agreed.  And no single developer will ever see all the holes.  But
where's the effort up front (and the monetary and/or deep moral,
emotional drive) to work with others to make sure one's code doesn't
"suck" when it comes to security?  I don't believe that forced
"certifications" that someone else reviewed your code are a viable
option in anything except highly funded environments.  And the
in-between is where most people live, looking at a typical bell curve.
So how do you push "security" down into the middle of the bell curve?
That's the interesting (and maybe unanswerable) question that drives
my curiosity.

> This additional logic forces future generations to reconsider, hence
> the evolution. This occurs in both the commercial and open source
> communities, though much more quickly and efficiently in the latter.

So it's claimed.  I'm not going to go into that debate here, but I
can point out commercial products that evolved far beyond what the
open-source community could evolve... an example would be Asterisk.
It's laughably bad at telco when compared to a Lucent #5ESS Central
Office switch, but that doesn't mean it doesn't have its place.  At
the extremes (and Asterisk and a CO switch are about as far apart on
the bell curve as two "products" get), you see people getting paid
large sums of money on the high end to get things done, and on the
low end you see volunteers toiling away on a "pet" project.

But the "in the middle" projects are the ones most people use for
things... and many such projects seem to stall out under their own
weight as volunteers get over-extended beyond what's a "reasonable"
amount of time, and yet there's no funding to pay someone to pick the
project up and run with it.  The middle ground of the bell curve for
software always seems to be the "most dangerous" when it comes to
various design issues, security included.

>> Right.  But do they do that because of silly little catch phrases and
>> misconceptions like "virus" and "trojan horse"?  Could the industry
>> overall do a better job of explaining what's REALLY wrong, instead of
>> me seeing late-night commercials for "does your computer have a worm?
>> (ewwww!)" targeted at, and feeding off of, the clueless?
>
> Not the industry, but the community -- humanity separate from
> capitalism and the profit motive. It is the hobbyists that create
> the industry; it is the industry which mass-produces the technology.
> Those who really care will find the need to understand the solution
> instead of trusting their money. This doesn't have anything to do
> with industry, but everything to do with evolution. The extremes of
> the species test every direction before the masses follow.

Good point -- maybe we're getting at the same idea here, but I'm
attacking (or trying to attack) the "how do you make the in-between
better, permanently" question.

> As long as people are willing to throw money at their problems, there
> will be industry. It is up to the industry to get along with the
> community. You see this happening now in the security industry,
> however there are those who fear change and will do anything they can
> to stop it. This fear will either cripple or harbor this newfound
> human nature regarding our technology.

Fear stops a lot of things.  Definitely true.  I would be afraid to
dive into a project like, say, Asterisk -- even with my many years of
telco background and knowledge/experience -- because I would be afraid
that it would "eat me alive".  On the opposite side of that coin, I
could see someone with a passion for it really burning themselves
badly but adding great value to the project.  (In fact, watching some
of the mailing lists, there are already people doing that.  Whether or
not they'll realize any long-term personal gain for doing so -- and by
that I mean any kind of gain, even if their motivation is just "to
make things better" -- remains to be seen.)

Free World Dial-up is another example.  Great idea, neat project...  
and recently they finally had to come up with a plan that would "pay  
the bills"... and then carefully explain to their user-base that the  
"free" isn't going away, but there's some "better than free" services  
to be had if one pays a bit...

>> I know it's possible, and fully agree.  But I think most companies
>> and organizations using computers really don't understand the COSTS
>> involved.
>
> I would say anyone presenting guaranteed security against all
> automated attacks will receive the ears of many. It is an
> organization's desire to cling to the technology it has already
> adopted which forces compromises such as multi-leveled authentication
> and unnecessary overhead.

Hmmm... here we might disagree.  Sometimes I think the unnecessary
overhead isn't necessary from a technical standpoint, but it is badly
needed for people's acceptance.  Managers without a clue find RSA
keyfobs and multiple-layer VPNs comforting because there's something
they can see and hold and understand, versus, say, a just-as-secure
OpenVPN tunnel that would cost many factors less.  Technically there's
NO difference in security if both are implemented correctly -- but
that little keyfob in their hands works just like the real "key" they
lock their house up with at night.  So, faced with a decision to build
or not build some new project, they immediately assume that the only
way to do "security" is to continue the -- shall we call it
"tradition", since that seems to be the emotion at play here -- of
using an expensive keyfob-enabled Cisco PIX (complete with brand
loyalty, perhaps) rather than trusting that the OpenVPN box that cost
$500 to implement is "just as good".

> I find 2 consistencies in all the compromised systems I come across.
>
> #1. They are always fairly out of date... this does not cost money
> to solve.
> #2. They never have any decent form of memory protection... This,
> too, only requires a little bit of expertise to set up.

Same experience here.  No policy/procedure for maintaining legacy
system security.  However... there's a catch.  I've seen business
systems that could NOT be patched/upgraded because the vendor "moved
on" to the next version.  The original version works FLAWLESSLY for
the organization's purposes, but suddenly finds itself insecure in an
ever-more-hostile network environment.  Deploying SSH on Solaris 8
might be a so-so example of this... Sun as a vendor won't offer up
"official" Solaris 8 SSH packages, but they're "built-in" and
supported on Solaris 9 and 10.  Forced upgrades.  It happens in the
non-commercial open-source realm, too; I just can't think of a great
example right at the moment.
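And Matt's point #2 really is cheap to spot-check.  Here's a
hypothetical little Python sketch for one piece of it -- whether
address-space layout randomization (one inexpensive form of memory
protection) is turned on.  The sysctl path is the real Linux one, but
the helper itself is purely illustrative, and on a non-Linux box it
just shrugs:

```python
# Spot-check one cheap memory protection: ASLR, as exposed by the
# Linux sysctl kernel.randomize_va_space.  Illustrative only.

def aslr_status(path="/proc/sys/kernel/randomize_va_space"):
    """Return a human-readable ASLR setting, or 'unknown' off-Linux."""
    try:
        value = open(path).read().strip()
    except OSError:
        return "unknown"
    # Documented values: 0 = off, 1 = stack/mmap only, 2 = full (incl. brk)
    return {"0": "disabled", "1": "partial", "2": "full"}.get(value, "unknown")

print(aslr_status())
```

Ten minutes of that kind of poking costs nothing, which is exactly the
point -- the two consistent failures above are failures of attention,
not of budget.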

> There is no need at all to be so paranoid as to adopt legacy
> technologies to stay secure.
> Fear feeds that which is subject to fear.

Maybe that's REALLY a good root-cause type of question -- how to get  
people less fearful of computer security.  Very interesting question,  
that probably leads to even more questions.

It's fun to think about all of it from the big-picture vantage point,
but in the trenches it so often gets convoluted.  An example would be
mandated port-scanning: if the software says "medium" threat because
telnetd is enabled on a machine, even if no one EVER uses it... is it
really a "threat"?  To many "security companies" the answer today
is... yes.  Government agencies, too.  If it's open and listening,
it's a "threat", even if the logs clearly show that no one ever
accesses the machine that way and there are no exploits for the
daemon... (telnetd is a bad example -- it has had problems, but then
again, so has SSH).
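For what it's worth, the scanner's entire verdict boils down to a
connect test -- the port gets flagged merely for listening.  A rough
Python sketch of that logic (host and port here are placeholders, not
a suggestion to go scan anything):

```python
import socket

def port_listening(host, port, timeout=1.0):
    """True if a TCP connection to host:port succeeds, i.e. something listens."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# This is all the scanner "knows" -- whether anyone actually uses the
# daemon, or whether any exploit exists for it, never enters the verdict.
print(port_listening("127.0.0.1", 23))
```

Which is exactly why "open and listening" and "actual threat" are two
different questions that the checkbox reports keep conflating.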

--
Nate Duehr
nate at natetech.com





