“There’s lots of innovation going on in security – we’re inundated with a steady stream of new stuff and it all sounds like it works just great. Every couple of months I’m invited to a new computer security conference, or I’m asked to write a foreword for a new computer security book. And, thanks to the fact that it’s a topic of public concern and a “safe issue” for politicians, we can expect a flood of computer security-related legislation from lawmakers. So: computer security is definitely still a “hot topic.” But why are we spending all this time and money and still having problems?”
this is a week late article regurgitated from /.
First of all: Please spell check before commenting, especially since it is a one sentence slam on the article. The least you could do would be to spell correctly. Second of all: I think that those in IT can relate to this article. If we are on-staff IT we have certainly had managers and others want to use some technology without consideration for security. They want features and expect that nothing will go wrong as long as it is behind a Cisco Pix (R). I found the article to be good at demonstrating some of the security issues and the mindset of those in IT.
What about those of us that like to avoid slashdot? (for any number of reasons)
While I agree with some of what he says, he sure was unclear about how he planned to eliminate the need to ever patch software again!
Also, what’s with the sow’s ear analogy? He implies that patching only adds new code and leaves the bad code in. Unless I’m missing something, when code is patched, the code that doesn’t work correctly is replaced with code that does. Am I wrong?
Frankly, his insinuation that if we just programmed things correctly we would not have bugs to patch is ludicrous.
“Frankly, his insinuation that if we just programmed things correctly we would not have bugs to patch is ludicrous.”
Apparently you haven’t had experience with the hulking mobs of “professional” software whose authors invested fifty times more effort in marketing their product than actually writing it. A lot of companies don’t care one way or the other if their software is a piece of crap. As long as they can convince someone to install it throughout their company and rely on it to do even the simplest of tasks, they know they’re set for that juicy government contract and their executives get the huge bonuses they need so much.
Anyway, if the programmers are paid to make the app look cool instead of work correctly, then chances are, it won’t work correctly. Even in this day and age, security is considered an expendable virtue by many, especially the contractors who can just blame any problems on someone else and let their clients carry the burden of securing their woeful networks.
You both make good points, but the simple truth is that a modern program is so vast and complicated that it would require a ridiculous amount of man-power to “program it properly” i.e. make it 100% secure from the outset.
True, 100% security is totally unfeasible, perhaps impossible. But that’s not what’s needed. The point is that any security at all is better than none, and if security isn’t part of the design in the first place, “none” is exactly how much you’ll wind up with.
I think the guy is making the point most people’s CS101/CS201/CS* teachers try and make:
Do it right the first time, you can’t fix it later.
No matter how hard you mash this attitude into your brain, you still find yourself making occasional hacks “just to see how this looks.” Then you occasionally leave that there, forgetting you did it. And of course, no matter how hard you try to do it right the first time, sometimes your well laid plans were just plain wrong.
But I think it’s also a patching attitude: i.e., if you have to put the patch through major testing to convince yourself that your fix was good, then you should be re-reading your fix: you should know why what you did fixed the problem.
That sounds simple. But it’s not always so when you’re fixing someone else’s buggy code.
Many companies don’t give their people time and incentive to do this sort of thing. And they end up with mediocre patches, and code that the KDE project won’t even take (that was a joke).
The insinuation is that if programs were secure by design, patches would only be required for bugs, not design flaws. He points to QMail and Postfix as examples of programs that have required very few patches, because they are secure by design.
Bugs are probably always going to be there, but it’s about the way we fix them. It’s the method of fixing he is objecting to, not fixing in general.
Doesn’t the word ‘patching’ say it all? It’s a patch, not a cure or a replacement part… it’s a patch, it’s plaster in the wall, it’s duct tape on the wing.
The real question to ask is not “can we educate our users to be better at security?” it is “why do we need to educate our users at all?”
Because if you don’t have educated users, you don’t have a secure OS – period. As long as a user can run and/or install applications, his/her OS is insecure by default. There are a lot of narrow-minded people out there who think that if you fix the email attachment problems and fix/replace Internet Explorer and/or Windows, all security problems will magically disappear.
However, whatever you think about the war in Iraq, people often ask, “Well, if we weren’t fighting the terrorists in Iraq, what do you think they would have been doing for the past few years?” The same applies to crackers and virus writers. These people aren’t going to go away. Once you fix one security hole, they will keep poking and prodding until they find a way in, and they WILL find a way in if the user is not security-conscious … I don’t give a damn what OS they’re using. I think many people who have this ‘my OS can never be hacked’ mentality will one day be like the poor souls on the Titanic who swore it could never sink.
Oh, and educating users DOES work. If somebody runs an attachment or program because it promises them nude pics of J-Lo, they weren’t educated.
When computers were first developed they were intended to be used by experts. The majority of programmers still assume that users have an idea of what they are doing.
Carmakers assume that most drivers are drunken incompetent idiots with limited driving skill. That is why they put airbags, antilock brakes and automatic transmissions in cars.
Probably no more than 5% of computer users are competent enough to use a computer safely, and perhaps 1% really know what they’re doing (even a lot of IT professionals seem to have very little idea of what they’re doing). Maybe the programmers and hardware designers should assume all users are idiots, to be on the safe side.
Home desktop computer security is really easy to teach users. Anyone who is capable of the complexity of driving a car should be able to follow these steps.
1. Don’t do everyday tasks as the administration user.
2. Don’t open attachments that you weren’t expecting.
3. Don’t install software from non-reputable sources.
The thing is that the same people who won’t follow these rules are the same people who will buy a car without getting it checked out first, or let a plumber they didn’t call into their house.
– Jesse McNelis
I think the issue that the author points out is that users cannot be counted on to do these things, and it’s the system’s fault for making these things a security risk.
Your analogy about people who don’t follow these rules is slightly off base. Perhaps people who buy cars without checking them out deserve whatever they get, but their stupidity can often have the result of harming others (eg: the brakes on their car are shoddy). That’s why in most places, people are forced to get their cars inspected every year in order to keep their tags up-to-date. A similar argument could be applied to computers — bad users are a danger to other people (mindless DoS boxes, etc), so the system should not allow their behavior to subvert security.
His suggestion for an anti-virus program is great. Just make fingerprints of all executable code that is allowed to run, and don’t allow unknown code. Then you have a database of a few hundred programs instead of a database of thousands of malicious programs.
If you install (willfully) new code, how would you update the “known-good” database?
a) by hand – open db program, point to new executable
b) let the installation routine do it
a) won’t fly with users
b) if the installation routine can do it, then any malware can do it.
Interesting idea, but it comes with a dilemma.
“His suggestion for an anti-virus program is great. Just make fingerprints of all executable code that is allowed to run, and don’t allow unknown code. Then you have a database of a few hundred programs instead of a database of thousands of malicious programs.”
… which makes sense but requires an educated user.
Quite the opposite.
It means the user can be as dumb as you like – they can’t harm the system by running malicious code.
(Well, that’s the theory).
When a User rings up asking why latest-thingy won’t run, Tech-Support tells them “That is not authorised for use on our equipment.” (The computer has already reported “You are trying to run an unauthorised program. If you believe this program to be authorised, contact your Support technician to resolve the conflict.”)
This then leaves the User to justify to tech-support why they think it is Authorised for them to install Kazaa onto Company laptops.
DRM by another name?
Restricting what the user can install may well prevent them from putting on Kazaa, but will hardly protect against viruses and spyware which propagate through various holes and exploits.
One ‘exploit’ used to get through security is to get the User to allow it through (“Click here to install Comet Cursors”, “Are you sure you want to install Kazaa and Gator?”, “Click here to update your PayPal details”).
With locked-down systems, where only signed apps can run, the viruses and spyware will have a much harder time. Even if the user tries their best to install a virus/spyware/etc., the OS/security will not let it.
Yes, it is a form of DRM, but in the case of Company Laptops I don’t see the problem.
“Mr Johnson, this is your Company Laptop. It is pre-installed with the Applications we feel you will need. You have been trained in these apps. If you feel you need anything more installed, discuss it with your Supervisor. Unauthorised applications will not install. Breach of Company Security is a sackable offence”
Additional example: you don’t see check-out operators being allowed to install Solitaire or Poker on their tills. Couriers carry consignment readers. No extra apps on these. Why allow it on other devices?
Actually it doesn’t. However, it requires an educated system administrator. Some times the admin is in fact the user but the article seems more focused on companies with a dedicated admin. Then the users can be as stupid as anything.
“But why are we spending all this time and money and still having problems?”
Same goes for flood defense systems, insect repellents, etc.
The problem is still there.
We are building better defenses, but have not built the ULTIMATE DEFENSE yet.
Plus, in the case of computer security, the problem is evolving just as fast (faster?) than the security measures.
Malware (virus, spy, advert, spam) writers have funding from people with a vested interest and large purses.
Gone are the days when a computer virus would display an amusing picture, or make your floppy disk sound like a washing machine and report “Water in drive”.
Now viruses are built to stealthily copy your data (preferences, CC number, web activity, etc.).
Most of our problems are psychological and societal, not technological or material. Whoever comprehends this has seen the light, but the light seems very dim from the distance in comparison with the glitter of huge heaps of cash and power.
I think the ideas are largely good, with the major exception of “Educating users”.
Hehe, by good I mean that there is merit in calling those ideas “dumb”, but “Educating users” is not one of them.
To clarify my stance from my previous post regarding “Educating users”: if one can improve security by educating users, one should educate users; if one can improve security without needing to educate users, one should do that as well. Of course, it is better when only the latter is needed, since then experts won’t have to deal with users, but unfortunately, one often has to if one wants better security.
So, in other words, the author’s stance that our strategy should not be to educate users, because that is dumb, is itself clearly simplistic and dumb. In any case, there cannot be ultimate security if users are not educated. How is knowing not to click on an Anna Kournikova virus-infected e-mail not a social problem, while hacking is a social problem? By solely technical means, one can make it harder for users to run the attachment, but a ceiling is hit at some point, and from there one can progress only by educating users. Of course, that does not mean that one will succeed at educating users, but that is another matter, and there are various types of training and learning that I do not think have been tried and that may turn out to bear good results if the strategy is to educate users AS WELL AS do everything technically possible to solve the security problems.
Much of the medicine for these problems is out there. The approaches require us to slow down (the OpenBSD approach does not come as a veneer) and pay more. On the whole, we (in the wider sense) aren’t willing to do either of those things.
Basically this guy is complaining because gravity is downwards. He talks about how wonderful things could be if everything was different. The arguments are always the same.
None of this would be a problem if:
a) The users weren’t so dumb.
b) The programmers weren’t in a hurry. (or were as smart as he is).
c) The criminals weren’t criminal.
d) The nerds weren’t nerds.
e) Computers were smart and aware of their own actions.
f) The sky was more purplish.
g) All of the above.
This is a bit like saying that evolution isn’t working. We’d be better off just having one perfect sustainable species (one being in fact) right from the start, instead of a million (billion) or so imperfect ones, most of which unfortunately died out.
I thought the main points were:
1.) Program defensively – the program should only accept known GOOD input
2.) Firewall defensively – the network should only accept and transmit known good data
3.) Control input (mail) and programs within the system.
4.) Don’t install the latest and greatest until initial adopters have managed to get it to not blow up in their face. (Would he be ok with installing the latest and greatest on a development system not attached to the main one?)
A couple days ago I thought I’d try securing Firefox by moving it to a limited-user account accessible via SUID… I still haven’t found instructions or anything about how good an idea it really is.
Have you been getting spyware/junk installed via firefox?
There’s an easy way: Go into Control Panel, Users, and make yourself a Limited User. Then permission up when you need it.
Sometimes Windows permissions are stupid and make using this setup difficult, but you’ll get the hang of it eventually.
Somebody completely missed the point of the article.
a) The users weren’t so dumb.
That’s not what he says at all. He says that educating users will never happen, so don’t design systems that require them to be smart. This is an extremely important idea that is applicable to more than just internet security. For example, international development programs are finding that educating people about proper health practices has limited impact: people only retain a fraction of the knowledge they need to live safely. So a lot of effort is being placed into creating mechanisms that don’t require educated participation by users. For example, instead of teaching people to come to a hospital to deliver children rather than going to a village midwife, give the midwife money to bring the people who come to her to the hospital! When the human element is removed from the equation, the results become much more reliable.
b) The programmers weren’t in a hurry. (or were as smart as he is).
It’s a sad truth of the software industry that programmers aren’t held up to the same standards as other types of engineers. The guys who designed the 777 knew how it would fly before they bent a single sheet of aluminum. Why should software be any different? Why shouldn’t programmers have to prove the security of their system before they write a single line of code?
The main problem is that viruses etc. nowadays more often than not mimic valid apps. How do you lock down an application or process that is masquerading as or mimicking another? Things are far more complicated than this article gives them credit for.
The truth is that complete and total security will never happen, as this would mean that we would have to scratch out the software industry and kind of start again, which wouldn’t make political or financial sense.
This is just like asking when we are going to be able to stop repaving the roads. Well, as long as car tires make full contact with asphalt roads, we’ll need to keep repaving them periodically. Of course, we could build all roads out of concrete, and they would last far longer without much maintenance. Actually, it might even be cheaper in the long term, but that won’t make sense politically, as the initial expenditure is several times higher than with asphalt alone.
So, can it be done? Yes. Will it be done? No.
now it’s just an XML feeder into /.
removed from my favorites…
I get the feeling a lot of responders are missing the whole idea of the article. I personally know of someone who uses much the same philosophy on a network he maintains for a business his daughter owns. And what the author says does work quite well. He has never had a successful attack against the network, and has not had a virus or trojan successfully enter the system. The only user education he needs to do is when they call him to complain that they can’t install software that is not on the authorized list.
The point being, if the concepts contained in this article were followed we would not entirely eliminate the problem, but we would definitely make the problem a whole lot smaller and easier to manage.
Bill
The more I think of it, the more I realize computer security is 90% hype, 5% marketing and 5% paranoia. If we designed systems, protocols and software that functioned correctly to begin with, we wouldn’t need to assault poor users with an assortment of ill-designed tools that make their lives miserable and their computer usage hell.
No, most users are not dumb, uneducated and illiterate, as almost all of you have been alluding. I find such allusions insulting. Many of them just don’t find investing unholy amounts of time behind a computer monitor and keyboard sensually rewarding. And why should they?
Isn’t it the duty of these so-called “computer experts” to write software that is safe, secure and fault tolerant? Alas, they fail to meet their obligations. So they instruct users to scale hurdles, sacrifice their first-born child, and give offerings to GOG (god of geeks) at the dawn of the next equinox. Of course, if all else fails, just blame the “uneducated” users.
The voice of wisdom has spoken.
Let me add one thing: The often stated excuse that today’s software is so big and complex that it cannot be free of bugs is just that – a lame excuse. Programmers are trained exactly to deal with complexity, and if they aren’t, they are not fit to be programmers and should look for another job.
– Morin
You’ve never dealt with most computer users, and you’ve made that abundantly obvious. You’ve dealt with most “home administrators.”
“Isn’t it the duty of these so called “computer experts” to write software that is safe, secure and fault tolerant?”
Yep, it is. It’s also their duty to make it useful. Here’s the most secure C program ever:
int main(void) // no argc or argv, or they could provide input
{
return 0;
}
See, no features, no security problems. It’s the user’s job to RTFM and figure out how to use the provided features. Saying it’s the developer’s job to make perfect tools is like blaming Chevy for not making the Corvette safe. If they made it safe, nobody would want it because it’d be slow!
I don’t discriminate between “computer users” and “home administrators.” Anyone who uses a computer to accomplish tasks is a computer user. I also like to believe humans, in general, are extremely intelligent beings. With the right clues, motivation and guidance, they can accomplish great feats.
Most computer geeks I know have this condescending attitude towards users, or people, who are not computer savvy. Labels such as “uneducated,” “illiterate,” etc., are used to describe such users. Calling users “uneducated” because they do not climax off reading computer or software manuals doesn’t make “computer experts” intelligent.
On Linux, for instance, you need to be a farking networking guru to install, configure and maintain iptables correctly. Let’s follow my colleague’s logic. My father, an ambassador with two postgraduate degrees in Law and Diplomacy from Oxford University, is “uneducated” because he has neither the time, inclination nor motivation to learn how to write iptables rules to secure his computer.
According to you, it is his job to secure his system as well as RTFM. Well, how can he do so when he knows nothing about bits and bytes? I admit, sometimes I live in fantasy land. But I thought “computer experts” were paid to play around with ones and zeros.
If people had written software to function correctly, perhaps there would be no need for firewalls, virus scanners, spyware scanners, adware scanners, or elaborate intrusion detection systems, among the criminal amount of hideousness computer users need to stay abreast of. Do you really think my dad has RSS feeds to computer security bulletins?
You need to research what it takes to make any computer secure today on any operating system. It’s a bloody joke! Apart from the fact that “securing” your system will leave it half crippled, I can’t imagine anyone but a person married to their monitor and keyboard actually taking the time to do it. I know I don’t. And I consider myself a self-professed computer geek. We live in trying times, broda!
Well… first of all, it’s easier to do some methods of securing computers than you realize. One thing that makes this true is that not everyone needs the “maximum” amount of security for their computer, just like a lot of people don’t need the maximum amount of security for themselves (meaning bodyguards) or their house.
I agree that software developers need to do a better job. (A lot of recent programs have been pathetic.) However, we can’t protect you from everything ourselves. It’s like expecting car manufacturers to protect you from other drivers, from driving a car off a cliff, etc. Every industry has limits to what it can guarantee, and every industry requires its users to know something. (If you get hauled in for plowing your car into some pedestrian and you say “I couldn’t take the time to learn how to drive,” you aren’t going to get much sympathy.)
Finally, there are people that can help you besides software developers. You could always pay someone a small amount of money to help set up your computer(s), just like there are people you can pay to drive your car for you and people you can pay to fix it, etc. If you pay someone to help you, then you don’t need nearly as much knowledge or effort.
Actually “uneducated” is correct and it’s not a derogatory term if applied to a specific area like computers. Most of us are uneducated in car repair or house maintenance or law. We can’t know everything, so we learn a few of the maintenance basics and minor “do it yourself” tasks, and rely on experts to help us for the rest. There’s no shame in that.
The key problem when “uneducated” people think they know everything there is to know. My neighbour is a classic example. When he bought the house six years back, he bought into the whole “open concept” idea but he didn’t want to pay anyone to do it. He decided he could do it himself. He tore down several walls — some of which had load bearing beams. He wouldn’t listen to reason because he saw it done in magazines and TV. Although (fortunately) the whole house hasn’t collapsed on top of him, there are cracks on the walls and ceiling that indicate that things aren’t as they should be. However, as far as he’s concerned “magazines say that it’s just the foundation settling”, so he doesn’t see that he’s the problem. Fortunately, he ran out of money a few months after he started so he stopped damaging his property and risking his life more than he already has.
The “humble uneducated” should never be blamed in any field (including computers). If things go wrong for these people, it’s likely the fault of the experts who set things up for them.
It’s the “arrogant uneducated” that we have to worry about. Against those people, nothing can ever be fool-proof since those fools are so ingenious.
> Apart from the fact that “securing” your system will leave it half crippled,
On Windows, yes. Unfortunately, some software just doesn’t run unless you have admin or power user privileges. The reasons are many and have been documented elsewhere.
On Linux, you can have a fairly secure system that’s mostly transparent to users. Ubuntu is a good example of this. The “humble uneducated” should have no problems on this system once it’s properly configured and they are introduced to the Ubuntu guide and the Ubuntu community. At least, that’s been my experience.
“Most computer geeks I know have this condescending attitude towards users, or people, who are not computer savvy. Labels such as “uneducated,” “illiterate,” etc, are used to describe such users. Calling users “uneducated” because they do not climax off reading computer or software manuals doesn’t make “computer experts” intelligent.”
that’s because most geeks, when faced with a pop up window that says “warning: this attachment is a virus” and the email is from some random string of letters with the body text full of gibberish, will decide to delete the email.
apparently most users decide to install it.
and that’s what geeks mean when they complain about uneducated users. We don’t mean the people who can’t do an LFS install; we mean the people who consistently give out credit card numbers to Kenyan princes, and make all their passwords ‘12345’ despite having an admin telling them to stop that shit.
> that’s because most geeks, when faced with a pop up window that says “warning: this attachment is a virus” and the email is from some random string of letters with the body text full of gibberish, will decide to delete the email.
And by just thinking about that fact, you can already see where the problem is: The problem is that clicking on an “install” button can infect the system. Programs that are installed this way should not be able to ruin the system (if installing programs this way is possible at all).
> […] we mean the people that consistently give out credit card numbers to kenyan princes,
Thinking about it results in the observation that, to be user-friendly, knowledge of the credit card number should not give anyone power over your money. It suggests that other means of authentication should be preferred.
> and make all their passwords ‘12345’
Thinking about it results in the observation that passwords are not user-friendly, and that again better means of authentication should be preferred.
> despite having an admin telling them to stop that shit.
… and this tells you why: because people won’t stop that shit, because they have a life themselves to worry about. Think about these words please: “They have a life.” They aren’t machines to act by the will of the great admin.
– Morin
1.
“The Internet has given a whole new form of elbow-room to the badly socialized borderline personality”
The author has no idea what borderline is.
2.
I like the idea of permit by default… it’s about freedom… it’s about life… if you deny by default, Wikipedia wouldn’t exist… you’d need a keycard to enter any building or cross any street… and so on… because you’d be denied from moving anywhere you’re not explicitly allowed… nice world, and still someone can crack you anyway, usually…
If the end user has to RTFM then the programmer has not done his job. The manual should only be required for those less than obvious features that most people don’t use anyway. Let’s face it, the software companies are always touting how user friendly their program is. And user friendly is equated with definitely not having to read a manual before you can use it. The author of the article was correct in his assessment that we are approaching security backwards. The only way to have a reasonable chance of a secure system is to be a step ahead of rather than a step behind the bad guys.
> If the end user has to RTFM then the programmer has not
> done his job.
For simple software, sure.
For more complex software, that’s not practical or even possible in many cases, since the software may eventually have to interact with devices or OS configurations that haven’t been invented yet.
Think of it this way. When you get a car, you need to get a driver’s license before you can safely take it on the road, and read the maintenance manual. However, you don’t need to learn auto repair as long as you know where to hire an appropriate auto repairman and take it in for service every once in a while.
When you get a new house, you have to read the service manuals for the house and pay attention to advisories from City Hall or the Fire Hall about any required upgrades. However, you don’t need to become a plumber, carpenter, construction worker, or interior designer. You just need to know how to get in contact with one if the need arises.
Ditto with health. When you’re born, you need to learn the basics of health and some basic first aid. However, you don’t need to get a medical degree as long as you know how to reach a doctor should the need arise.
It would be nice if life never forced you to RTFM, but sometimes it does and if you chose not to even read the Coles Notes (or Cliff Notes) version of RTFM, the responsibility for the consequences rests solely on you.
>> If the end user has to RTFM then the programmer has not
>> done his job.
> For simple software, sure.
> For more complex software, that’s not practical or even possible in
> many cases since the software may have to eventually interact with
> devices or OS configurations that haven’t been invented yet.
The complexity of the software has nothing to do with it. You should be able to use the basic functions of your software without having to read a manual. That is what the concept of an intuitive interface is all about. The way most manuals are written these days, the average user doesn’t understand them anyway. And I am not saying the user is stupid, just that the manual is usually as poorly written as the code is.
The point of the article as I perceive it is that we are in the security mess we are in because much of the software has security coded as an afterthought. I agree with his assessment that we are going about security the wrong way. In a business setting very few people need to be able to install software. Lock down the applications you need and forbid anything else. That concept, as well as the rest he puts forward, makes sense and could go a long way towards securing most networks.
Next time you fix a friend’s computer that is loaded with 200 different spyware processes, let me know how smart they are.
Users are quite stupid, because they install the most insane garbage… oooh, pretty cursors and smiley faces and…
then they wonder why their computer doesn’t work.
” #4) Hacking is Cool”
This guy doesn’t even know the difference between a hacker and a cracker. Encouraging someone to take apart their VCR is not going to encourage computer crimes. A ‘security expert’ should know this stuff.
PS Hacking IS cool.
Yes it is.
My prediction is that the “Hacking is Cool” dumb idea will be a dead idea in the next 10 years. I’d like to fantasize that it will be replaced with its opposite idea, “Good Engineering is Cool” but so far there is no sign that’s likely to happen.
This article misses the target on several points, but this one jumps out at me immediately. The thought that “Hacking is Cool” is a new idea is almost laughable, and the notion that it will go away in 10 years is even more so. I am assuming that the author is talking about script kiddies or crackers; he isn’t very clear on this point. Cracking is driven by peer pressure and, to a lesser degree, the mainstream media. Just go browse some IRC logs sometime and you will get an idea of just why this isn’t going to go away. These people are driven by the idea that they can get even for being bullied, or flex their technological “muscle” by stealing some personal information or controlling someone’s webcam. Script kiddies are not going to go away until their scripts stop working, and hacking (cracking) will still be “cool” as long as peer pressure drives it. I don’t see either of these going away in 10 years.
“Unless I’m missing something, when code is patched, the code that doesn’t work correctly is replaced with code that does. Am I wrong?”
If you’re talking about binary patches, then the bad code IS left in place. A branch is placed at the start of the bad code that jumps to the patch; at the end of the patch, execution branches back past the broken section. So in the case of any binary patch from Microsoft, even for the OS, the bad code is left in place.
Source code patches are usually quite different. There’s no reason they can’t branch like the above, but almost always the bad code is removed or replaced. The bad code no longer exists.
Hey, that’s another point for OSS security!
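To illustrate the detour idea the comment above describes (a hypothetical sketch only, with invented function names, not how any real binary patcher is implemented): the buggy routine stays resident, but a redirect installed at its “entry point” sends execution to the fix instead.

```python
# Hypothetical sketch of a detour-style binary patch: the bad code is
# still in memory, but calls are redirected to the replacement routine.
# All names here are invented for illustration.

def buggy_discount(price):
    # original "bad" code: applies a 50% discount instead of 5%
    return price * 0.50

def fixed_discount(price):
    # the patch: the correct 5% discount
    return price * 0.95

# the "branch at the start of the bad code": a redirect table
detours = {buggy_discount: fixed_discount}

def call(func, *args):
    # check for an installed detour before executing;
    # fall through to the original routine if there is none
    return detours.get(func, func)(*args)

print(call(buggy_discount, 100))  # the fix runs; the old code stays resident
```

A source patch, by contrast, would simply delete `buggy_discount` and rename `fixed_discount` into its place, which is the distinction the comment draws.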
“His suggestion for an anti-virus program is great. Just make fingerprints of all executable code that is allowed to run, and don’t allow unknown code. Then you have a database of a few hundred programs instead of a database of thousands of malicious programs.”
“The main problem is that viruses, etc. nowadays more often than not mimic the valid apps. How do you lock down an application or process that is masquerading as or mimicking another? Things become far more complicated than I think this article gives credit for.”
I agree with both of you. We need a program like ProcessGuard built into the OS, using SHA-256 (Secure Hash Algorithm) hashes to fingerprint executables, plus something like AES (Advanced Encryption Standard) where actual encryption is needed, to prevent code modification and termination.
http://www.diamondcs.com.au/
http://www.eweek.com/article2/0,1895,1859751,00.asp
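As a rough sketch of the fingerprinting idea (the file paths and function names are assumptions for illustration, not any real product’s API): hash every approved executable once, then refuse to run anything whose hash is not in the allowlist.

```python
# Minimal sketch of hash-based executable allowlisting (default-deny).
# The allowlist store and paths are invented for illustration.
import hashlib

def sha256_of_file(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_allowlist(approved_paths):
    """Fingerprint every approved executable once."""
    return {sha256_of_file(p) for p in approved_paths}

def may_run(path, allowlist):
    """Default-deny: only known fingerprints may execute."""
    return sha256_of_file(path) in allowlist
```

A few hundred hashes of approved programs is a far smaller database than signatures for every known piece of malware, which is the point the first quoted comment makes; the second comment’s objection is that malware can still replace or impersonate an approved binary before it is fingerprinted.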
The statement that “the software should be written secure from the start” is naive at best. We have been building houses for thousands of years, and still 95% of them can be entered with a rock through the sliding glass door. Even a house with bars on the windows and doors can be entered very easily by using a battery-operated reciprocating saw on almost any exterior wall. Why do we allow this? Because when the numbers get counted, it is cheaper to take the risk of being robbed than it is to truly secure a house. Heck, people get killed and raped regularly due to the insecurity of housing, but still they don’t insist on secure housing.
The fact is no one really wants to pay the cost of truly secure software. (which may mean removing all functionality) As with most things, you figure out where the best cost/functionality/safety ratio lies, and you go with it.
I will admit, sometimes I write bad code. I am lucky enough to work for a very good, smart manager. When we start a new application, he gets the functionality requirements, and I tell him the cost/safety variables. When he can, he gets the funding to make safe, reusable code, but he is also smart enough to understand that sometimes “doing it right the first time” means that it won’t be done at all, and you will be left with a far worse situation than if you write the bad code.
Because he sees both the long AND short term repercussions, we have been able to consistently improve functionality, security, and speed of development.
Let me just ask… How many of you have removed the locks on your front doors because they don’t keep out thieves?
These ideas aren’t dumb; what’s dumb is focusing on them and believing they will work. They should still be deployed, but the focus should be on the solutions the author provides. Doing it right from the beginning is always a good idea.
Quiet news day, so let’s repost something off Slashdot from a few days ago.
The author is talking about enterprise network security. Concerns such as “complex software” and “what do you do if you want to install software” don’t really matter, because we’re talking about the oceans of users whose job involves running an email client, an office suite, and nothing else. These people still, in the majority of cases, have way more access than they need to way more resources than could possibly be of legitimate use to them.
I suggest you all take a look at the about the author page:
http://www.ranum.com/
before dismissing the author as clueless.
And I am sorry, but *not* educating the millions of people who buy computers has tremendous costs for the IT sector, costs which I think can be minimized by education.
“My prediction is that in 10 years users that need education will be out of the high-tech workforce entirely, or will be self-training at home in order to stay competitive in the job market. My guess is that this will extend to knowing not to open weird attachments from strangers.”
But what about people who are not in the high-tech workforce and just use e-mail, for example? Do they not need education so that they do not run an Anna Kournikova virus-infected attachment?
So the argument goes: many people use e-mail; in 10 years, those who do not learn not to click on attachments will stop using e-mail, they deserve it, and it is good for society.
Wait a moment!!
Most DDoS networks and attacks exist precisely because criminals so easily take advantage of such uneducated users.
And who bears the costs? The highly trained IT experts, of course, whose dismissive attitude that users will never get educated creates those costs in the first place.
There are other costs, too: by assuming that users will never get educated, you may drive too many users off the Internet, and that can have very negative consequences, because the Internet, with its abundance of useful information, can be a great boon to people.
Thinking more, I have come to the conclusion that the problem is too little education, and the solution is more education. Notice that the article’s premise that “education should have worked by now” is not well justified. How many of you or your kids have been taught in school not to open e-mail attachments from an unknown sender (at least), no matter how alluring they may be? Not many, I think. So perhaps the real culprit is little or almost nonexistent education, not that users are too dumb to ever be educated.
I agree with the author’s points, even if the implementation is expensive and possibly not practical.
The prevailing comments seem to disagree with the section on User Education. My take: look at user education from the perspective of worm, virus, and exploit writers. What user education does malware depend on?
Think about it. There’s no “training”, “RTFM”, or “intelligence” expected of the user in the design of malware. Yet look at malware’s success.