“Security and validation are critical issues in computing, and the next fifty years will be harder than the last. There are a number of proven programming techniques and design approaches which are already helping to harden our modern systems, but each of these must be carefully balanced with usability in order to be effective. In this talk, Alan Cox, fellow at Red Hat Linux, explores the future of what may be the biggest threat facing software engineers, the unverified user.”
Too short, too obvious, nothing new… though still worth reading :) Repetition is the mother of learning.
The article is an “audio article,” so to speak: the mp3 linked from the article is 20 minutes long and has to be listened to for the full content.
The audio talk doesn’t say anything new either, but it’s a nice recap of the problem and the various ways to mitigate it.
I like the idea of having security rules ‘per component’ and not only per application; the combination of both kinds of rules could be quite powerful.
The summary mentions capability-based security (I can’t be arsed to download and listen to the mp3). Capability-based security is also the basis of security in EROS and Coyotos.
http://www.coyotos.org/
Capability based security really makes a lot of sense. If handled well, it should mean a program never does what you don’t expect it to do. This program wants to contact a website and transmit data? I prefer not. This screensaver wishes to have write-access to files? Maybe some other time.
It also means that file-viewer programs don’t unexpectedly create root-shells when handling purposely malformed data.
The problem with EROS, apparently, was that its implementation was costly in performance terms. But that should be avoidable.
Capability-based security will, I think, one day be considered as vital to a modern operating system as memory protection is considered now. To my knowledge, memory protection also costs performance.
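The difference the posts above describe can be sketched in a few lines. This is a hypothetical object-capability illustration in Python, not the actual EROS or Coyotos API: instead of ambient authority (“any code may open any file, contact any host”), a component receives only the capabilities it was explicitly handed, so a file viewer physically cannot write files or reach the network through them.

```python
class ReadCapability:
    """Grants read-only access to one specific file.

    The holder gets exactly one operation: read(). There is no write
    method and no way to reach other files through this object, so a
    component given only this capability cannot exceed it.
    """

    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path, "rb") as f:
            return f.read()


def view_file(cap):
    """A 'file viewer' component: it can only do what its capability allows.

    Even if this function is buggy or malicious, the worst it can do is
    read the one file it was granted -- no root shells, no uploads.
    """
    data = cap.read()
    return len(data)
```

A screensaver in this model would simply never be handed a write capability in the first place, so the question “maybe some other time?” never even arises at runtime.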
> …explores the future of what may be the biggest threat facing software engineers, the unverified user.
So much for privacy. I’m appalled at what I’m reading about the potential of Trusted Computing combined with EFI.
http://www.lafkon.net/tc/
The next fifty years? Sounds more like a throwback to the 1970s, except without David Gries and with no knowledge of Multics or any of the Burroughs systems.
Randomizing doesn’t fix bugs; it obscures them.
Fault containment is a fine thing, but first you have to have a fault model.
Doesn’t anybody read the literature from more than four years ago anymore?
> Randomizing doesn’t fix bugs; it obscures them.
And? What’s your point?
If the randomisation prevents malware from executing, that is still a win.
I’m not sure I understand your criticism: the talk was a reminder of well-known security principles, but they were true fifty years ago and they will still be true for the next fifty.
The problem is in building a *usable* secure OS, not in security itself.