This series explores the sort of technologies we could use if we were to build a new computing platform today. The first two parts covered the hardware and the core OS. In this third part we look at security, the file system and file management, and throw in a couple of other random ideas for good measure.
Security
<rant mode="on"> I intensely dislike the industry's tendency to blame users for security problems; it is a cop-out, a dereliction of responsibility. The security problems are created by the computer industry and it is up to the industry to fix them. Viruses, Trojans and other nasties have been around for many years; how can users be at fault when the industry has done seemingly nothing to defeat them? A car user is not expected to know the inner workings of a car in order to drive, so why should he know the inner workings of a computer? Security is not simple [Security], but that is no excuse for blaming the user. </rant>
If there are patches to be downloaded, the system should – by default – check for them daily. Of course, if the system was properly designed in the first place you wouldn't need many patches. Microsoft was warned about potential security problems many years ago; did they do anything about it? Whose fault is that? That said, there are more secure OSs than Windows, especially the Unix-based OSs such as Linux / *BSD and OS X (despite thinly disguised marketing efforts that say otherwise).
In any new system security should be built in from the start. The system should assume that everything wants to cause damage and restrict a program's potential to do so. I don't believe in building a single wall, as it can be broken through; lots of walls are much harder to get through and present many opportunities for repelling an attack. Security should therefore be considered in all parts of the system, and no one part should be relied upon. All parts of the system should be secure by default.
Virus Scanning
Scanning everything with a virus scanner should be a standard part of the system, not an add-on. Even if there are no known viruses (or "virii") for the platform, the system should scan every incoming file to see whether it, or any part of it, is executable.
Sandbox all new files
All files, not just e-mail and web downloads; that way they can't do any damage. If an executable is not run immediately, mark it so that it can be sandboxed the first time it is run. FreeBSD can already sandbox applications using Jails [Jail].
You could go further and sandbox everything at all times, proactively looking for things to sandbox: if an application attempts to download an executable file and execute it in memory, the system should either prevent or sandbox this behaviour.
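As a rough sketch of what first-run sandboxing might look like at the system-call level, the fragment below confines an untrusted binary using only chroot() and privilege dropping. This is a stand-in for a real FreeBSD jail, not how jail(2) itself works; the sandbox path and the "nobody" UID/GID are made-up examples and the call needs root to work at all.

```c
/* Minimal sketch of first-run sandboxing, loosely in the spirit of
 * FreeBSD jails but using only chroot() and privilege dropping.
 * The sandbox path and the unprivileged UID/GID are example values. */
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

int run_sandboxed(const char *program)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: confine the file system view, then drop privileges
         * before the untrusted binary ever runs. */
        if (chroot("/sandbox/untrusted") != 0 || chdir("/") != 0)
            _exit(1);
        if (setgid(65534) != 0 || setuid(65534) != 0)   /* "nobody" */
            _exit(1);
        execl(program, program, (char *)NULL);
        _exit(1);                  /* exec failed */
    }
    return 0;                      /* parent: child now runs confined */
}
```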
Don’t allow programs to delete all files
Also, don't allow programs access to the backed-up files (see the File System section). This will prevent a virus or errant program from deleting all the files in your home directory. If one tries, the files should be moved to backup instead; the system should monitor the file system for this type of behaviour, warn the user when it is detected, and give them the option of restoring the files and either disabling the application or confining its actions to specific files. Deleting backups should be a privilege only the user has; no application should have this ability.
Such a scenario is possible on almost all operating systems, and it has happened to me in Linux: back in 2000 an alpha release of the Opera browser crashed and removed the contents of my home directory. Needless to say, I wasn't exactly happy about it. I expect an alpha to be unstable and missing features; I did not (and do not) expect it to take out my home directory. If an application can be so badly behaved by accident, think what a malicious programmer could do. Currently applications can by default cause as much damage as they wish in your home directory; the system should prevent this.
Automatically identify all files
The system should analyse each file itself and warn the user when the identification doesn't match what the file claims to be; if there is a text-based identifier (such as a file extension), correct it, but tell the user. Text-based file identifiers can be, and are, abused and should no longer be relied upon.
Don’t run services as Root
This is impossible to achieve on most Unix-based systems due to the design of the kernel. It is one of the advantages of a microkernel that you don't need to run services as root [Microkernel], reducing their ability to do damage.
Limit external access
Despite being aimed at the desktop, our system may want some server-like abilities, such as the ability to be controlled from outside. This should be possible, but only via an encrypted method and preferably only with a dedicated application. Some non-encrypted connections would of course still be possible, e.g. Web or FTP servers.
Indirect program launching
A program that launches other programs should not be able to launch them directly; the launch would go through the interaction engine (described in a later part of this series), which launches the program on its behalf. This sounds restrictive, but making program launches indirect means they won't return to a terminal once done, so an attacker cannot get access to a terminal by externally crashing a program.
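A hypothetical sketch of how such an "interaction engine" might launch programs: the requesting program never execs the target itself, and the engine starts it in a new session with no controlling terminal, so there is nothing to "return to a terminal" if the target is crashed. The function name is invented for illustration.

```c
/* Hypothetical sketch of indirect launching by an interaction engine.
 * The target runs in its own session with no controlling tty. */
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

int engine_launch(const char *path, char *const argv[])
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        setsid();              /* new session, detached from any terminal */
        /* A real engine would also apply sandbox policy here. */
        execv(path, argv);
        _exit(127);            /* exec failed */
    }
    return 0;                  /* the engine itself keeps running */
}
```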
Exported Services
If I run a web server it automatically becomes visible to everyone on the outside unless it has been firewalled off. I propose a "tunnel" be added to our system, and only services which have been explicitly "exported" along this tunnel can be accessed from the outside world. This would mean the web server would have to be explicitly exported before it could be reached from outside. The tunnel could also be monitored so that what's going in and out can be tracked and displayed; if something undesirable is running, its export can be disabled (automatic exporting by programs should not be possible).
Quite how a tunnel would be implemented is open to question. One possibility is to have two network stacks: the internal stack is in contact with the programs in the system, while the external stack is in contact with the external network interfaces. The tunnel sits in between, connecting the two stacks. You could run a router, firewall, NAT (Network Address Translation) and other services on the external stack yet have the inner stack completely isolated from the Internet. When you do connect you could go via NAT, which itself adds another layer of security.
Security in the file system
"Invisible" file backups could be used which only the system would be able to access; they would never be presented to applications. Only the user, via the system, would be able to manipulate them (e.g. to delete confidential data). A full OS backup could exist like this, so if an attacker manages to circumvent the controls preventing them from changing system files, the system would detect the difference and replace the changed file with the correct one. The invisible OS backup files should not be presented to anyone, not even the user.
Code level security
Another method of improving security is to improve the code in the system: audit [Audit] the code for errors and bad code. Better code means fewer bugs, and fewer bugs mean the system is more stable and more secure.
Another approach is to enforce correctness in either the language itself or at runtime. Some programming languages make it easy to write insecure code [BadCode], but one way of improving matters is to add additional modules at compile or run time which check the program as it runs. Perl already does this in its "taint" mode [Taint], and it may also be possible in languages like C [Code].
These techniques could be taken further: if libraries include potentially bad functions, why not remove and replace them so they are not in the system API? Scan executables for dubious function calls and, if they are present, refuse to run that software. It'll make porting software a royal pain in the arse, but software will be better for it.
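One way a platform SDK might implement this kind of banning, sketched below under the assumption of a GCC-style toolchain: poison the dangerous string functions so any use is a compile-time error, and offer a bounded replacement. strlcpy() is the OpenBSD-style replacement; platforms without it would have to ship their own.

```c
/* Sketch: ban "potentially bad functions" at compile time and provide a
 * bounded replacement instead. Any later use of a poisoned name is an
 * error, so ported software is forced onto the safe API. */
#include <string.h>

#pragma GCC poison strcpy strcat sprintf gets

void copy_name(char *dst, size_t dstsize, const char *src)
{
    /* strlcpy always NUL-terminates and never writes past dstsize
     * (OpenBSD-style; supply your own on platforms that lack it). */
    strlcpy(dst, src, dstsize);
}
```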
Trying to modify an existing system to accommodate this level of security will not be easy and is liable to break applications and irritate users. This is one of the benefits of starting anew: there is nobody to annoy. It will be interesting to watch the market's reaction to Service Pack 2 for Windows XP.
Security is a difficult subject, but restricting the system defaults like this and making a paranoid system which constantly checks itself and everything coming in will make life more difficult for attackers. Locking the system down from unnecessary changes and keeping the bugs to a minimum means security is built right into the system, not added after the fact where it can never be as effective. It also works: OpenBSD [OpenBSD] has been using this philosophy, and at least some of these techniques, for years.
Of course power users will not like a system like this but there’s no reason why controls could not be added to allow developers and power users more access to and control of the system.
The File System
Of course for our new platform we should have the best file system we can build. It should ideally have some of the following features / functionality:
- 64 bit
- Journaled
- Non-fragmenting
- Automatic analysis and identification of files
- Arbitrary metadata for files with indexing to facilitate high speed searching
- Support for “live” searches – that is it should monitor the file system for changes and tell interested applications about the changes
This may sound like a radically advanced file system, but I have just described "BFS", the file system introduced to the world in BeOS. There has been a great hullabaloo about OS X "Tiger" adding search technology and Windows Longhorn using a database as a file system (an approach Be tried and later abandoned due to poor performance). This is another example of existing technology spreading through the industry [Indexing], even if it's old news to some of us!
I don't, however, think that Be or BeOS users ever fully utilised the capabilities of BFS, and in that respect support for its capabilities could have been better integrated into the system. The apparently very similar technology in OS X Tiger looks like it'll be a lot more integrated and thus more useful to the end user.
Gnome Storage [Storage] wants to go further and translate files when storing them, so you can read them with different programs without needing to support different formats. Similar functionality was available with Datatypes on the Amiga and Translators in BeOS, though these were used by applications directly. I'm not sure I like the idea of doing this as part of the file system: it adds a performance cost every time something is read or written, and the use of XML means conversions to and from non-linear structures, which computers are inherently bad at handling efficiently (throwing a faster CPU at it will not help; it's a memory latency problem).
Search: Finding it better
If I've just downloaded pictures from a USB stick and want to edit them, I have to either find them and activate the editor, or run the editor and manually find them with the file requester. This could be made easier by adding a few buttons to the file requester which use the search functionality: press "latest files" and they'll just appear. You would never need to go looking for files again.
Of course this won't work for everything; for metadata to work properly it needs to be added to the files, and there are limits to how much of this can be done automatically. Adding metadata will need to be done at least partially manually, and that becomes a user interface issue. Nobody is going to add metadata to every file, so a window could appear with a set of options to help the process; the context, as well as the type of file, will be important in deciding which options to offer. Past actions could also be analysed so the system can guess and present options based on those.
This need not be as complex as it sounds: if I download an image it can give me options of Astronomy, Humour, Technical, Cool, Geeky, Personal etc. If it's really clever it can look at the originating web site and make a guess, i.e. anything from NASA is likely to be in the Astronomy category (this would require the web browser to store the URL as metadata with the downloaded image).
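To make the idea concrete, here is a small sketch of a browser tagging a downloaded image with its source URL as metadata. BFS had typed attributes for exactly this; as a stand-in the sketch uses Linux extended attributes, and the "user.source-url" attribute name is an invented convention, not a standard.

```c
/* Sketch: record where a download came from as file metadata, so later
 * searches and categorisation can use it. Linux xattrs stand in for
 * BFS-style attributes; the attribute name is made up. */
#include <sys/xattr.h>
#include <string.h>

int tag_download(const char *path, const char *url)
{
    return setxattr(path, "user.source-url", url, strlen(url), 0);
}
```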
Metadata can also potentially be used for all sorts of other, less obvious things. If you are running a second OS, as this system allows, it will expect files in certain places. Your file system layout may not agree with this, i.e. you may not want to add a new partition to your hard disc. You could use metadata to create a virtual file system: write the desired path as metadata but put the file in a directory set aside for the second OS's files. When the second OS wants a file, the file system gets it from this directory by looking for a file with that piece of metadata.
This has interesting security possibilities as the second OS will not be reading directories directly so even if it tries asking for something else it will never get anywhere as there will be no other files with the correct metadata.
You could also create a file system layout from metadata by doing the above in reverse. We could do the same with normal metadata: to go back to my downloads example, let's say I've saved a few humour and astronomy images in my downloads directory. A single command could move them to the right place, i.e. home/nick/pictures/humour or home/nick/pictures/astronomy.
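A sketch of what that single command might do, again using Linux xattrs as a stand-in for real file system metadata: read a hypothetical "user.category" attribute and move the file into the matching pictures directory. The attribute name and paths are invented for illustration.

```c
/* Sketch: file a download into the directory named by its category
 * metadata, e.g. .../pictures/humour or .../pictures/astronomy. */
#include <sys/types.h>
#include <sys/xattr.h>
#include <stdio.h>

int file_by_category(const char *path, const char *filename)
{
    char category[64], dest[512];
    ssize_t len = getxattr(path, "user.category", category,
                           sizeof(category) - 1);
    if (len < 0)
        return -1;                       /* no category metadata set */
    category[len] = '\0';
    snprintf(dest, sizeof(dest),
             "/home/nick/pictures/%s/%s", category, filename);
    return rename(path, dest);           /* move into the matching place */
}
```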
I'd like to see other functions added to the file system, such as automatic backups, a technique which has been used on big iron for years. Again metadata could help out here: store the backups in another location with the original path in metadata. This would save your directories from holding multiple copies of each file and rapidly becoming unreadable.
Searching could also be enhanced by storing parts of files in metadata. The system could search for files containing specific strings but look in the metadata index first for speed (Apple is doing this in Tiger).
File Management
Of course all these files will still need to be managed. One of the most useful applications I used back in my Amiga days was a "dual panel" file manager. There were many of these, the most famous being SID and Directory Opus v4 [DOpus4] (note: not the later versions, which changed the layout). I've never found an equivalent on any other platform, though you can get some for Unix/Linux [LinOpus]. You can also simulate a dual panel manager by placing two Finder windows side by side in OS X 10.3, using drag and drop in place of the copy button (I have two Finder windows permanently set up like this).
The dual panel approach is very easy to use, and if it is implemented well you can navigate around the system at good speed. I've never found a better way to manage files; you can of course use the desktop (spatial or browser) or even a command line, but I don't find any other method works as well or as quickly.
Of course we can now improve on it by adding support for drag and drop and metadata. You could have a music button which lists all your music files irrespective of where they are; the same could be done for pictures and other data formats. Actions could be enhanced: double clicking a file launches it with a viewer / player, double clicking on the directory display brings you to the file's location, and dragging a directory to the other panel opens that directory there.
History and hierarchy lists would allow you to go upwards or backwards to previous/other locations. Adding the ability to navigate via menus could allow faster jumps to other parts of the system.
It's difficult to describe how powerful yet easy a program like this could be. DOpus and its like were very good at their task, to the point that I never used the desktop or the shell. Combined with metadata searching these tools could be even better.
Loading and saving files can also use some of the same technology (but not the dual panels). A standard file requester could allow fast navigation and allow searching of metadata to suggest the best location to retrieve or store a file.
Other Random Ideas:
Building a new OS means new concepts can be tried out; here are a couple of them:
Large Page Sizes
Existing operating systems still have hangovers from the hardware they were originally designed on, e.g. memory management typically deals in page sizes of 4 KBytes. This seems a little small given today's memory requirements, which run into hundreds of megabytes for user desktops and gigabytes for workstations. I'd suggest the use of a flexible page size starting in the order of 32 KBytes. This does have the disadvantage of handing out larger memory blocks, but that can be reduced by allocating smaller memory requests inside already allocated blocks. It could be done by embedding a small memory manager into applications at runtime: when an application wants memory, the memory manager checks whether there is sufficient room in its already allocated blocks and, if not, asks the OS for more.
This will not only make efficient use of memory but also reduce OS calls and context switches. Using large pages also shrinks the entire page directory, so more of it can be held on the CPU; page directory look-ups for data can then be done without needing to go to main memory as often. In fact, if paging to disc is disabled (quite possible with today's memory sizes) it's possible the entire page directory will fit on the CPU, giving a performance boost as it will never need to be flushed.
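A rough sketch of the in-application memory manager described above: grab memory from the OS in large (here 32 KB) chunks and hand out small requests from the current chunk, only going back to the OS when it is exhausted. A real allocator would also handle freeing, large requests, thread safety and so on; this only shows the basic idea of avoiding an OS call per allocation.

```c
/* Sketch of a tiny per-application sub-allocator over large OS blocks. */
#include <stddef.h>
#include <sys/mman.h>

#define PAGE_CHUNK (32 * 1024)

static char  *current;           /* current large block from the OS */
static size_t remaining;         /* bytes still free in it */

void *small_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;          /* 16-byte alignment */
    if (size > PAGE_CHUNK)
        return NULL;                           /* big requests handled elsewhere */
    if (size > remaining) {
        /* Only now do we make an OS call / context switch. */
        current = mmap(NULL, PAGE_CHUNK, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (current == MAP_FAILED)
            return NULL;
        remaining = PAGE_CHUNK;
    }
    void *p = current;
    current   += size;
    remaining -= size;
    return p;
}
```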
The Grid
The BeOS media kit allows you to arbitrarily redirect the output of media software, this for instance allows you to add an audio filter or effect to an MP3 player, even if the MP3 player has no support for audio effects.
The Grid is the same thing but extended to the entire system, and even across multiple systems. If I want to save a file on a different computer, support for this would usually need to be built explicitly into the file system. The Grid would make it transparent: an extra drive appears in your file save requester, you select the drive and press save as normal, the file system just saves as normal, and The Grid pushes the data across to the other computer.
The Grid could also enhance the existing media kit by moving some processing to the other system. You could conceivably even spread programs across multiple systems. Provided an application or OS service knows how to communicate via The Grid, the system will take care of the communication's transport. The Grid could do away with the need to transfer data across a full network stack, as The Grid will itself act as a simple network.
Again this comes back to the idea of simplicity, instead of having multiple applications supporting the functionality it is placed into the OS so anything can use it. Such a principle is also found in the Unix successor “Plan9” [Plan9], indeed The Grid could make good use of Plan9’s Plumbing [Plumbing] concept.
Conclusion
There are various trade-offs to be considered when contemplating a new system: we all want good performance, but would we trade it for better security and stability? In today's systems that question was asked years or even decades ago, and the answer was no. Today's CPUs provide 99.9% of the population with more power than they can use, and there'll be heaps more when multi-core CPUs reach the desktop. With increasing numbers of viruses and attacks I'd accept the trade; it's not as if anyone outside of benchmark fanatics will notice the difference. I for one would like to use a system which is damn near impossible to hack and damn near impossible to crash.
The system I am describing is not so much about new ideas as about taking the best ideas, old and new, and putting them together to make something better than has gone before. Having a new system gives us the ability to do this, and as the system allows other OSs to run alongside, we get a combination giving us the best of both worlds. But we can experiment beyond the bounds of OS design; we can also explore other areas and improve on those as well. It's quite telling that even today very few, if any, have got usability right. It's not an easy subject, more a question of opinions than absolutes. The Mac is famous for its usability, and on that very topic…
A correction:
In Part 1 my description of the Macintosh did not attribute its ease of use to Jef Raskin's original vision of the machine, of which it was a major part. This has since been corrected; my apologies for the error.
37 years ago, Jef Raskin was one of the people who introduced the concept of usability to the world; it's taken the world a long time to catch up. In Part 4 I cover how we can make our new OS more usable.
References / Further Information
[Security] Security
[Jail] FreeBSD Jails
[Microkernel] Microkernels have security & stability advantages.
[Audit] Auditing code is a method the OpenBSD group uses to make their system more secure.
[BadCode] Embedding security in programming languages.
[Taint] Perl has a Taint mode.
[Code] Suggestions for improving security by embedding runtime modules.
[OpenBSD] OpenBSD has a reputation for security.
[Indexing] Description of various document indexing systems.
[Storage] Features in Gnome Storage.
[DOpus4] Screenshot of Directory Opus 4.
[LinOpus] Worker, a dual-pane file manager for *nix (screenshot). Another, called Gentoo (not the Linux distro), also for *nix (screenshot).
[Plan9] Plan9, the successor to Unix. An overview.
[Plumbing] The Plumbing concept from Plan9.
© Nicholas Blachford July 2004
About the Author:
Nicholas Blachford is a 33 year old British ex-pat, who lives in Paris but doesn’t speak French (yet). He is interested in various geeky subjects (Hardware, Software, Photography) and all sorts of other things especially involving advanced technologies. He is not currently working.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
“Sandbox all new files”
Although sandboxing "all new files" would be difficult, I have said for a long time that Joe Users should be given a secure, sandboxed email client as an alternative.
I also agree with your point about checking for updates. For instance, a MyDoom variant is having its way with the net right now; my email client should know this before downloading my email for today.
My AV utility that was updated at the end of last week gave no warning at all to the several infected zip files in my inbox this morning.
“Don’t run services as Root”
I actually agree with this point, because if the application is remotely exploited, it does not have access to the entire system.
It also helps to jail these services.
“Code level security”
I think too much emphasis is placed on this, and not enough is placed instead on secure defaults. For instance, a password free ssh server that is not running is several times more secure than a patched IIS install that is running.
I've always liked the idea of using modern object oriented languages to bring into being very fine grained Principle of Least Authority (or Principle of Least Privilege) programming. Basically, make each object so that it can only access what it MUST have access to in order to function. This way, breaking into one object in a program (if you even could) gives you little access to anything else.
There’s a big WMV video, intro to such things, here: http://www.erights.org/talks/skynet/index.html
And of course, capabilities might eventually be part of a mainstream OS. (I’d take them over an access list…)
Formal methods would be a HUGE help. Program verification or proving need to be made cheaper. (Let’s all start working on it?)
Finally, my biggest gripe on security is a simple one: run less software. The more unnecessary code (especially services) in an OS, the more insecure it is. Great advantages of a skeletal OS are that it’s usually faster, easier to maintain, easier to learn, more secure… just better.
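To make the per-object idea above concrete, here's a toy sketch in C. It's purely illustrative: a component is handed a capability that can only read one already-opened file instead of a path (which implies everything the user can do). A plain Unix process can't really enforce this; real capability systems do it at the language or kernel level.

```c
/* Toy illustration of per-object least authority: the holder of a
 * read_cap can read that one file and nothing else. */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

typedef struct {
    int fd;                 /* opened read-only; holder never sees the path */
} read_cap;

read_cap grant_read(const char *path)
{
    read_cap cap = { open(path, O_RDONLY) };
    return cap;
}

ssize_t cap_read(read_cap cap, void *buf, size_t n)
{
    return read(cap.fd, buf, n);   /* reading is all this capability allows */
}
```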
And you had no tools to fsck it. Read this interesting article on BeFS: http://duncan.focuseek.com/2003/01/mozilla/
Ludo
—
http://images.linspire.com/RunLinspireSong/RunLinspire.swf
You get security by removing insecurities, not “adding security.” Adding features should be a last resort and not considered a good sign.
I think the process should go like this:
1) Can we do without something? If yes, don’t add it; remove it if it’s already been added.
2) Can the overall structure (remember: elegance) be designed to take care of security problems? Try to make the design as broad, universal, and non-per-insecurity-hackish as possible.
3) Audit as a process, not just once a release. (But don’t ever assume any code is friendly.)
4) Add additional security features as a last resort for those few things that fall through the cracks. Consider every additional security feature (sandboxing, anti-virus, etc.) you need to add, a failure.
“You get security by removing insecurities, not “adding security.”
Very well said.
Security is not simple, [Security] but that is no excuse for blaming the user
B.S. Any machine must have an operator competent in its use for it to function as designed.
Unfortunately, security will equate to either difficulty or simplicity. Security exists to make certain actions difficult. One could theoretically have a system that updated itself automatically and prevented the user from intervening whatsoever in its operation, but the question of flexibility and trust (of the manufacturer) arises.
In addition, the sheer flexibility of computers is a major enemy of security. Complexity is the enemy here. Limited function appliances are the solution, and will fill the needs of a large percentage of the population currently acting as spam and virus spewing zombies.
As an aside, what system admin or OS architect in their right mind would allow the core system software to run and be stored on a partition/disk that is read-write throughout operation??? How could one have any expectation of integrity of the system?
No, antivirus software in itself is a cop-out. Scanning incoming files is OK, if asked to, but the auto-scans of the whole disk are obnoxious and a huge cop-out.
Running services not as root? A good idea; possibly make a "sandboxed root" for services, one that doesn't have that much access. But here's the problem: some things need full access. Although I bet with most services you could get away with having almost no write permissions. Of course, cutting unnecessary services is still the best way to protect yourself here.
A good idea, possibly make a “sandboxed root” for services.
I think this is the thrust of most ACL-based implementations in Linux. This is the NSA SELinux stuff.
Basically, you give users and programs (which can appear as users to the system) only as much access to resources as they require. Everything else is locked off tight.
Of course, the hard part is writing, organizing, understanding, and checking all of the security policies that result. It's a lot of work apparently (I haven't tried).
Of course, if you want “sandboxing” today, you can use fakeroot or something like User-Mode Linux (giving each service its own user-space kernel and filesystem).
The car analogy (i.e. drivers don’t really need to know anything about the internals of a car) doesn’t work very well unless you extend it by suggesting that people are trying to attach bombs to the bottom of the car and are putting water in the gas tank and siphoning fuel, all simultaneously. Girtherobot is right: the only real way to allow totally unknowing users to safely use computers is to turn them into fixed-task machines without the ability to install software. Too many viruses (or viri if you prefer made-up words) are based on social engineering and not on system vulnerabilities.
So far this article series has been interesting, and several comments made by the author coincide with points I've seen on many boards. I really look forward to part 4 on usability, because that's where I see a lot of improvement needs to be made. Thanks for the great insight.
For years I have maintained that what's needed to help security is hardware-assisted threading and VM protection, and software designers to better jail aspects of applications: for example, the rendering preview window within Outlook should be a jailed thread, with access only to a visual bitplane and a tight IPC channel into the rest of the application. Any bad code (whether software defect or virus) is then fully contained within the jailed thread, which can easily die and recover. The hardware has to enforce the lockout.
I agree that software designers need to be better able to jail aspects of applications, but different people seem to have different ideas of what the security granularity should be.
Thread level is better than the simple per-app level, but threads do not map well to where security borders should be drawn. And the side effect of people putting threads where they don’t belong to get security benefits, unnecessarily complicating programs, might actually make software quality worse where it should be better.
I like per-object security because the security interface idiocy of things like Java WebStart goes away when it’s implemented at such a fine level, and app programming doesn’t get as complicated or as easily abused as per-thread security.
Virus Scanning
No way. I never used them, they don’t cover all the cases and they slow down the whole system.
Sandbox all new files
You can't do this for everything and forever. Apps are supposed to communicate and collaborate; you can't isolate everything.
Don’t allow programs to delete all files
Well, that's what file permissions and the recycle bin were invented for, right?
Don’t run services as Root
This is impossible to achieve on most Unix based systems due to the design of the kernel.
Wrong. You can run updatedb as a normal user – you just won't get the files you can't access. A per-user updatedb is entirely possible, just as it's possible to make the 1-1024 TCP port range usable by normal users – with ACLs. Nothing impossible here – services don't run in kernel space.
Large Page Sizes
Already done; no huge wins for normal apps. P4s can in fact use a page size larger than 4KB – but not for everything, it's limited. It's used mainly for databases: there's no performance advantage derived from this for most apps.
Radicalizing the "mechanism in the kernel, policy in userspace" approach for general purpose OSs would be a big win. Getting the VFS, filesystems and TCP/IP layer into userspace would be great – that's where they belong; leave the rest in the kernel, without microkernel sickness.
When the article said the filesystem was 64-bit, what did he mean? What does a 64-bit filesystem offer?
Maybe I'm an idiot and just didn't see the link to the first two parts, but in case anyone else didn't see them either:
http://www.osnews.com/story.php?news_id=7676
http://www.osnews.com/story.php?news_id=7703
Uh, hello!! 4KB page sizes are a CPU architectural design and have nothing to do with the OS. The x86 supports 4KB and 4MB pages. It makes more sense to use OS page sizes that match CPU page sizes; it makes page fault handlers work that much better for virtual memory systems…
sparc32 had 4KB
sparc64 has 8KB
x86 has 4KB + 4MB
alpha has 8KB
This isn't something you can change.
Using 4MB pages on Windows is lunacy.
I'm already very much anticipating the next generation; it's called Longhorn. I do hope Nicholas gets what he wants.
I intensely dislike the industry’s tendency to blame users for security problems, this is a cop-out, a dereliction of responsibility. The security problems are created by the computer industry and it is up to it to fix them. Viruses, Trojans and other nasties have been around for many years, how can the users be at fault when the industry has done seemingly nothing to defeat them?
You might not *like* it, but the simple *fact* is that the biggest security hole in the system is the user. This is hardly unique to computer security, either; it applies equally to the physical world.
A computer is designed to do what the user tells it to – and when the user tells it to start up a service listening, send out a thousand emails or delete a bunch of files, there’s no way for the OS to know whether or not the user *really* wants to do that, or whether it is a malicious program that the user has inadvertently activated.
If end users didn't do silly things like deliberately running malicious code, at least three quarters of the world's computer security problems would disappear instantly. If they'd keep their machines patched as well (or if an automated system were ubiquitous), that would probably eliminate around 90% – 95% of computer security issues.
If there are patches to be downloaded the system should – by default – check for them daily.
Of course, then the developer gets blasted for “taking over users’ machines”.
Of course if the system was properly designed in the first place you wouldn’t need many patches.
Bullshit. Patches fix *coding bugs*, not design flaws.
Microsoft was warned about potential security problems many years ago, did they do anything about it?
Windows NT.
Who’s fault is that? That said there are more secure OSs than Windows especially the Unix based OSs such as Linux / *BSD and OS X (despite thinly disguised marketing efforts that say otherwise).
If (or when) those OSes have more than an insignificant market penetration and a significantly lower number of security alerts, that’s an assertion you’ll be able to make that might actually have some substance behind it.
All files, not just e-mail / web downloads, that way they can’t do any damage. If an executable is not run immediately mark it so when it is run it can be sand-boxed first time. FreeBSD can sandbox applications using Jails.
So what about those applications that require interaction with the rest of the system to do anything useful (ie: most of them)?
You could go further and sandbox everything at all times and proactively look for things to sandbox – i.e. if an application attempts to download an executable file and execute it in memory, the system should either prevent or sandbox this behaviour.
Fantastic. So I can't run an installer or self-extracting archive from a network location or, if I do, I can't tell it to install or unpack anywhere outside of a sandboxed directory structure (which I then have to find – assuming it's even accessible from outside the sandbox at all which, to maintain security, it shouldn't be).
Have you even *considered* the usability implications of this ? How about the disk space requirements for keeping multiple copies of core system files and libraries around for every little binary you want to run (assuming you’re trying to emulate jail(8)) ?
Also, don’t allow programs access to the backed up files (see File System section).
So I can't delete files to free up space? So obsolete files in the hidden system backup can't be deleted?
This will prevent a virus or errant program from deleting all the files in your home directory.
No, it won’t. If the user can delete files, a program the user runs (ie: a virus) can also delete those files.
If it tries the files should be moved to backup, the system should monitor the file system for this type of behaviour, warn the user when detected and give them an option of restoring files and either disabling the application or containing it’s actions to specific files.
So, basically, a system level “Trash Can” ?
Deleting backups should be a privilege only the user has – no application should have this ability.
Please explain the difference between “the user” and “an application run by the user”.
Don’t forget to explain where third party file management utilities fit into your picture.
A program that launches other programs should not be able to launch programs directly, the launch would go through the interaction engine (described in a later part of this series) which launches the program. This sounds restrictive but making program launches indirect means they wont return to a terminal once done, an attacker thus cannot get access to a terminal by externally crashing a program.
Most shells already do this and it won’t help.
Incidentally, how is the shell going to fit into your equation ? It is, after all, an application that launches other applications.
If I run a web server it automatically becomes visible to everyone on the outside unless it has been fire-walled off.
[…]
The Tunnel could also be monitored so what’s going in and out could be tracked and displayed, if there is something undesirable running it can have it’s export disabled (automatic exporting by programs should not be possible).
Firstly, how is this different from existing systems wherein services must be enabled and data can be monitored in transit ?
Secondly, if automation isn’t allowed, how do you propose large numbers of machines be efficiently managed ?
One possibility is to have two network stacks: The Internal stack is in contact with the programs in the system. The External stack is in contact with the external network interfaces. The tunnel sits in between connecting the two stacks. You could run a Router, Firewall, NAT (Network Address Translation) and other services on the external stack yet have the inner stack completely isolated from the internet.
I’m not entirely sure what you’re trying to do with this, but what you’d described is basically the difference between a local network adapter (ie: 127.0.0.1) and an external network adapter that already exists on pretty much every OS in use.
When the article said the filesystem was 64-bit, what did he mean? What does a 64-bit filesystem offer?
64 bit file addressing, allowing Honkin’ Great Big Files(tm).
Most FSes already do this and have been for quite a while now.
“Invisible” file backups could be used which only the system would be able to access, they would never be presented to applications. Only the user via the system would be able to manipulate them (i.e. to delete confidential data).
If the user can delete them, then a program the user runs (ie: a trojan) can delete them.
A full OS backup could exist like this so if an attacker manages to circumvent the controls preventing them from changing system files, the system would detect the difference and replace the changed file with the correct one. The invisible OS backup files should not be presented to anyone, not even the user.
XP’s System Restore, in other words.
So, if no-one can delete them, how in the heck do you clean those files if a virus/trojan/whatever *does* manage to get itself in there ?
Of course for our new platform we should have the best file system we can build. It should ideally have some of the following features / functionality:
[…]
* Automatic analysis and identification of files
This does not belong in the filesystem. It has nothing to do with the storing of data on disk. It is something that should probably be implemented in the shell (since that’s the only place it really matters) or perhaps as a piece of underlying OS functionality that third party shells and such could utilise, but it is *not* a filesystem function.
This may sound like a radically advanced file system, but I have just described the file system “BFS” which was introduced to the world in BeOS.
You’ve also described NTFS and to a lesser degree ReiserFS, HFS+ and probably numerous others.
Gnome Storage [Storage] wants to go further and translate files when storing them so you can read them with different programs without needing to support different formats. Similar functionality was available with Datatypes on the Amiga and Translators in BeOS though these were used by applications directly. I’m not sure if I like the idea of doing this as part of the file system […]
Again, it doesn’t belong in the filesystem (and I’m sure the people from those groups you mention didn’t have it there). It’s arguably something that would be useful in I/O libraries, however – but then you’d just get the usual crowd complaining about $SOFTWARE_DEVELOPER classifying their data types for them.
Most of this is already doable within Linux. (I assume BSD and so on is similar, but I am not very familiar with them).
Sandbox all new files – This is an application level problem. If you make it a kernel level problem, creating and editing files that you do want to execute (scripts, development, etc) could be a lot more difficult.
Don't allow programs to delete all files – But what if you want to? How do you set a policy that distinguishes between a trojan deleting files and a user deleting all his files after copying them to another computer?
(On the Opera thing: it was _alpha_ software. This is where _you_ should have run it as a different user or within a chroot.)
Automatically identify all files – I assume you mean extensions. Decent shells already do this.
Don't run services as Root – Most properly set up systems don't. Apache on Debian, for example, runs by default as user www-data. Even things like Samba run as root only to listen, and then fork a process as the relevant user. If you create a user called 'can fork to other user'… you effectively have root, don't you?
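For example, the usual listen-as-root-then-drop pattern looks roughly like this (purely a sketch; UID/GID 33 is www-data on Debian and is just an example):

```c
/* Sketch: use root only to bind a privileged port, then permanently drop
 * to an unprivileged account before touching any network data. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

int start_service(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(80);            /* needs root to bind */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        return -1;

    /* Root is no longer needed: drop it for good before accept()ing. */
    if (setgid(33) != 0 || setuid(33) != 0)
        return -1;

    listen(s, 16);
    return s;                      /* accept() loop then runs unprivileged */
}
```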
In any case, these aren’t exactly hard to build into existing systems.
Limit external access – We call this SSH. It supports console, file transfers, and gui (X) access, all over a default of 768 bit encryption.
Indirect program launching – I believe the ‘interaction engine’ you describe is generally called the kernel.
Exported Services, tunnel – You mean like iptables?
Exported Services, internal stack. – See pipe, and bsd sockets.
Security in the file system – Invisible to what? Invisible to everyone except root? The only way to get around this is to enable a sort of 'super-super-user'. While the idea of automated replacement of modified system files is a good one, it's not a substitute for keeping root secure in the first place and locking system files into read-only mode.
What if the super-super user gets hacked? Then you can’t get access to that!
Code level security – Auditing, good. Checking the code at run time is going to add to the system load. And – importantly – how do you differentiate dubious function calls from ‘legal’ ones? What differentiates the delete function of a trojan from that in a file manager?
“…no reason why controls could not be added to allow developers and power users more access to and control of the system.” Possibly including the developers of viruses for example? Oh wait, that kinda means that all the extra security precautions are pointless doesn’t it?
The File System – While none of these are bad ideas, Linux, and unix in general, has little problem with using different file systems.
Search: Finding it better – Interesting ideas. However, it requires little other than adding a few features to existing file selectors and file managers.
File Management – While what you describe here appear to be good ideas, they are enhancements of existing applications.
Large Page Sizes – Other comments have already addressed this.
Flexible page size – Yea gods! Do you want to write the code for this?
Disable paging to disc – Decent OS’s already do this when they can. They don’t tend to touch the swap file until they run out of memory.
The Grid – See gstreamer.
RE: my opinion By Diego
…filesystems, TCP/IP layer in userspace…
If by userspace you mean as normal users (as opposed to micro kernel service space)… how… and why?
* Posted after three hours of sleep and no caffeine. Some of this may make no sense at all.
A car user is not expected to know the inner workings of a car in order to drive, why should he know the inner working of a computer?
But a car user, or owner, is responsible for the security of his car, not the car industry.
Actually in the US cars are required to meet certain crash test ratings and safety requirements before they can be sold on the market.
The user however is required to drive responsibly or face repercussions, so it is a gray area where the burden is shared.
Of course users will always be a weakness in security, but there are technical solutions that would avoid many of the security problems we have today.
I think an OS should support ownership of copyrights as fundamentally as it does ownership of files. So for example, if I run only GNU software on my machine there should be three accounts: mine, root, and gnu. If I then install another bit of software, like, say, The Gimp, then there should be another account created (guess it would be called gimp-team or something). There should be a central repository where I can go and look up the license on any piece of software. When I run the software it should run as the account that owns it, but the data that it produces should be owned by my account. Obviously the concept of an account might not be finely grained enough, but you get the idea. That way there's absolutely no doubt over who owns what.
Actually in the US cars are required to meet certain crash test ratings and safety requirements before they can be sold on the market.
The user however is required to drive responsibly or face repercussions, so it is a gray area where the burden is shared.
Not at all. It is the car manufacturer’s responsibility to make a safe vehicle, it is the driver’s responsibility to drive safely.
Of course, trying to compare making (and driving) a car – a relatively simple, single-purpose mechanical device to writing (and maintaining) and operating system – a massively complex multi-purpose piece of software, is a bit silly IMHO.
Of course users will always be a weakness in security, but there are technical solutions that would avoid many of the security problems we have today.
What technical solutions are you proposing that don’t have a massively negative impact on usability ?
I’ve got the perfect sandboxed mode for this great new revolution. Ok, here are the steps, just follow along carefully:
1.) Look at your keyboard and notice the wire on the back.
2.) Get a pair of wire cutters
3.) Cut the wire between your keyboard and your computer.
4.) Repeat 1-3 for your mouse.
5.) Leave the room and never return.
Ok, now you can’t do anything stupid on your computer. In fact, you might want to build a big rubber room as well…to protect you from the real world.
What in the hell are you rambling about? What is the point of making a GNU account? So that if you make something it's not assumed to be free, or what? Your post confuses me. Whenever you make something it's automatically yours, unless you are under a production agreement with a company or university.
http://www.osnews.com/story.php?news_id=7676 : “Building The Next Generation, Part 1: Hardware”,
http://www.osnews.com/story.php?news_id=7703 : “Building The Next Generation, Part 2: The OS”
Maybe good to add to the blurb?
“If the user can delete them, then a program the user runs (ie: a trojan) can delete them.”
If you are only using user-level security (or, to a lesser extent, an access control list) like traditional Unix and Windows then, yes, this is true. But systems can be built with approximately object-level security using capabilities, where everything works via message passing, and it can mathematically be proven that a trojan cannot do everything the user it runs under can. Social engineering would likely be the only way to attack such a system.
In an effort to keep from repeating myself, I’ll say that you should look at how to make security easier for the user. (Assuming it’s not totally behind the scenes.) I think Bastille Linux is a good step in that direction.
I don't know if I'm alone in this, but I consider situations in which users accidentally trash their own systems a security problem.
rm * .foo (note the accidental space) is a good example. This kind of thing needs to be fixed for me to call any system secure.
Don’t run services as Root
This is impossible to achieve on most Unix based systems due to the design of the kernel.
The design of UNIX kernel does not have any idea what a “service” is. Services, to the UNIX kernel, are processes just like any other. And processes in UNIX can belong to a privileged user, or a normal user. Thus, UNIX kernel design does not make it impossible to run a service as a normal user. And indeed, many kinds of services on UNIX-based systems today are run as normal users.
However, be it UNIX or some microkernel design, not all services can be run as normal users — and this has nothing to do with kernel design, but security. If you have a service that needs to authenticate incoming connections, for example against the system’s user database, the service must be run as a privileged user: how else could it access the user database? Running on microkernel does not change anything in this scenario.
What is, arguably, better in microkernels is the fact that you can (and often must) run device drivers and the like as normal user-level processes instead of in-kernel (which is the UNIX design). The advantage is that a bug in the device driver does not bring the whole system down. This concept is completely orthogonal to whether a service runs as root or as a mere mortal user.
A car user is not expected to know the inner workings of a car in order to drive, why should he know the inner working of a computer?
Well, to drive, a car user is expected to know what each pedal does, what the steering wheel does, how to use the gears (if the car is not automatic), and so on, just as the computer user needs to know what buttons to press to make things happen, how to type, and so on.
As for internals, a car user does not need to know how the engine works, but neither does a computer user need to know how his harddisk is connected, and how the CPU communicates with it. Nor does the computer user need to know (for OS), what DLLs are, how the OS links things together, and so on.
But if the computer user starts to modify his OS, or starts installing new applications, or starts to experiment and try with things, he needs to know what can be done and what can’t, and what is dangerous, and how things work.
Just like a car user: you expect every driver to know better than to switch to first gear (or reverse) while driving at 120km/h – those things are dangerous to the driver and the car. You expect each car owner to know what kind of gasoline the car uses. And if the car user starts installing gadgets in the car or the engine, he needs to know a great deal.
Granted, there are still problems for the computer user: badly designed E-mail apps let the viruses and stuff like that do what they want. This is a problem. But if you use a decent E-Mail app, it does not run attachments automatically, and if the attachment is executable, it asks you. At the point when you are confronted with question “do I want to execute this or not”, you are in the same situation as the car driver, who thinks what would happen if he did something stupid.
The problem with the car analogy is that we have had cars around a lot longer, and driving a car is a simpler task than using a computer, which is a very complex system.
I intensely dislike the industry’s tendency to blame users for security problems, this is a cop-out, a dereliction of responsibility. The security problems are created by the computer industry and it is up to it to fix them.
Ehm, no. Most security problems are misconfigurations and other errors by humans, the users themselves.
On a philosophical level, security problems are created by inappropriate freedom. I guess you already see the difficult catch-22 here, which you totally evaded.
If there are patches to be downloaded the system should – by default – check for them daily.
My OS has this ability as an option which is turned off by default. If I go on vacation, I would certainly turn it on on my home server. However, doing this by default creates security issues itself: a DNS attack is an easy one, IP reroutes by hostile ISPs and hostile governments are another. Therefore the packages need to be signed and the transfer encrypted using a good technique like GPG or an SSL certificate, and even that isn't secure if the system got cracked between the 24-hour updates. Moreover, installing software via the Internet by default is Evil, as otherwise the user doesn't know what happened when anything breaks. My OS has a non-standard software package tool which lists bugs and changelogs. If I see updates, I check what has been fixed and what bugs remain or are new; that rocks: I'm the captain and I decide if I install a certain new piece of software or not.
Of course if the system was properly designed in the first place you wouldn’t need many patches.
Agreed, but number of patches says nothing about number of flaws fixed.
It should assume everything wants to cause damage so restrict a program’s potential to do so.
[…]
You could go further and sandbox everything at all times and proactively look for things to sandbox – i.e. if an application attempts to download an executable file and execute it in memory, the system should either prevent or sandbox this behaviour.
I think you'll like OpenBSD, OpenVMS, other hardened OSes and LaGrande. IOW: this exists, has been worked on for years, and is nothing new. At least, it's very similar to what you propose.
Anyway, how are you going to allow your browser to download a file and allow the execution of this file, aka "install new software"? There are tons of other examples which make your dream unrealistic except in niche environments. This is because security isn't good friends with freedom and user-friendliness. I don't see how your proposal will improve all _three_, but if you're eager: check out PaX/GrSecurity, Systrace (an example config for Mozilla exists), SELinux, OpenVMS, etc. ACLs go _much_ further than what you describe.
Don’t allow programs to delete all files
Files are generally not deleted by programs; they are either moved to a directory (trash bin) or unlinked ("deletion"). Both are up to 100% recoverable.
Again, a decent ACL system would work. However that’ll require loads of time to configure for every GUI application and people are generally more interested in doing all what you described on highly secure production server environments.
If it tries the files should be moved to backup
/trash or /backup, no big deal.
Deleting backups should be a privilege only the user has – no application should have this ability.
What if i have uid 0 and my program is named /bin/bash?
If an application was so bad that it could do such a thing think of what a malicious programmer could do.
Yet people download the weirdest binary-only software from the Web or obscure places like P2P programs. Do you think they really care about the security of the software? Or do you think that, according to them, the assumptions are worth the cause?
This is impossible to achieve on most Unix based systems due to the design of the kernel.
Bullshit. It is alive and kicking on Debian GNU/Linux, Gentoo, and many other Unices and Unix-like OSes. Together with privsep it is the alive and kicking core of OpenBSD and FreeBSD. For specific parts of the kernel _itself_ (which I think is what you meant) it is impossible because of the monolithic design of the kernel, I fully agree. With a microkernel, the hierarchy has other problems because of its freedom, and that has implications; stability could be one of them.
Some non-encrypted connections would of course be possible i.e. Web or FTP servers.
FTPing as root or another system user is stupid; FTPing as a user is slightly less stupid, but still not smart. Think of hostile ISPs and hostile governments. You are from the EU, I understand? All your Internet data will be stored for a year in the near future; it already happens in some EU countries. The freedom of the government and its peers, set against the securing of the private data of its citizens, destroys the privacy of every Netizen; whether she is a terrorist or Aunt Tillie is not important. Do you really want your configuration data and/or password to be sent in cleartext? Security includes such hazards as well, and if you want to be paranoid like you seem to be …
If I run a web server it automatically becomes visible to everyone on the outside unless it has been fire-walled off. I propose a “tunnel” be added to our system and only services which have been explicitly been “exported” along this tunnel can be accessed from the outside world.
Basically a firewall with deny-all, accept-none rules on. Gamers will love that; they are fascinated by the culture of firewall configuration.
You could run a Router, Firewall, NAT (Network Address Translation) and other services on the external stack yet have the inner stack completely isolated from the internet.
A DMZ? A bridge? Seriously, it sounds like you want a combination of the two. You run the server on the DMZ, and you run the desktops on the other side of the 3-port bridge. The Internet is on the third side, and it is regarded as Evil. The firewall – which could be the bridge itself – acts according to this Evilness. That is a basic configuration.
When you do connect you could go via NAT which itself adds another layer of security.
Try AuthPF.
“Invisible” file backups could be used which only the system would be able to access, they would never be presented to applications. Only the user via the system would be able to manipulate them (i.e. to delete confidential data). A full OS backup could exist like this so if an attacker manages to circumvent the controls preventing them from changing system files, the system would detect the difference and replace the changed file with the correct one. The invisible OS backup files should not be presented to anyone, not even the user.
MS Windows XP does something similar. Heck, even any properly configured *NIX system does this, with the exception that the system administrator(s) have the power to revert to back-ups. Windows XP does have this cool feature which "rolls back" a patch, meaning the original contents from before the patch was applied are restored. Nice when you've installed a buggy patch, or in this case not the back-up you wished for. You evade the question of how often a back-up should be made.
“Another method of improving security is to improve the code in the system, audit [Audit] the code for errors and bad code. Better code means less bugs and less bugs means the system is more stable and more secure.”
Again you'll like OpenBSD, OpenVMS and other hardened solutions, but I doubt everyone will run such a system as their desktop.
“but it may also be possible in languages like C [Code].”
Basic tools exist to detect common vulnerable programming errors in C; others exist to detect memory leaks.
“These techniques could be taken further – if libraries include potentially bad functions why not remove them and replace them so they are not in the system API.”
Hello strncpy and strncat. See http://www.google.com/search?q=cache:kIdJ2plGHcQJ:www.dwheeler.com/… paragraph 6.
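For anyone who hasn't been bitten by this: strncpy does not NUL-terminate when the source fills the buffer, which is exactly the kind of trap the replace-the-bad-functions argument is about. The sketch below contrasts it with an strlcpy-style call; my_strlcpy is a local illustration written to mirror the semantics of OpenBSD's strlcpy, not the library function itself:

/* Why strncpy is easy to misuse, and what an strlcpy-style replacement
 * looks like. */
#include <stdio.h>
#include <string.h>

/* Copy src into dst (size bytes), always NUL-terminating, and return
 * the length of src so the caller can detect truncation. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t srclen = strlen(src);

    if (size > 0) {
        size_t n = (srclen >= size) ? size - 1 : srclen;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}

int main(void)
{
    char buf[8];

    /* strncpy does NOT NUL-terminate when src fills the buffer... */
    strncpy(buf, "0123456789", sizeof(buf));
    /* ...so forgetting the next line leaves buf unterminated: */
    buf[sizeof(buf) - 1] = '\0';

    /* The strlcpy-style call is always terminated and reports truncation. */
    if (my_strlcpy(buf, "0123456789", sizeof(buf)) >= sizeof(buf))
        fprintf(stderr, "input was truncated\n");

    printf("%s\n", buf);
    return 0;
}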
It also works, OpenBSD [OpenBSD] have been using this philosophy and at least some of these techniques for years.
But it does have drawbacks: not all software in Ports is audited, while users do prefer some of that software. Systrace is not enabled by default; you'll have to do all the configuration yourself. Neither Systrace nor OpenBSD, however, run every application as a different user, as you propose. No sane Joe User will run OpenBSD with Systrace, privsepped and chrooted, and you'd better be careful not to give them default behaviour which is uncalled for (see your auto-update proposal and my criticism of it, for example). Normal people do want a functional OS.
The File System
Also see ReiserFS4.
I have no particular interest in the rest of the file system discussion.
As I said, I don't think you've written a bad article despite my criticism. I actually found it quite good, but I felt you left out various existing and ready-to-rumble stuff. You could have told the reader far more about things like ACLs and hardening in general, or about more specific software like OpenBSD, SELinux, Systrace, Jail, chroot, privsep and GrSecurity/PaX, and given links on secure C programming. There are even top-notch books in paper form on this subject.
My bottom line is that what you propose is already possible on UNIX, albeit in a rather time-consuming manner. That's what it is, and that is why your proposals are not practical. People who are paid for their security work will incorporate your (already available) suggestions; they've done so in the past and will keep doing so. Joe Users, however (as opposed to ultra-secure-to-be servers), will not, or will be forced to, with all kinds of implications (see the Le Grande criticism), and that is the main flaw of your arguments.
If you are only using user-level security (or, to a lesser extent, an access control list) like traditional Unix and Windows then, yes, this is true. But systems can be built with roughly object-level security using capabilities, where everything works via message passing, and such systems can mathematically be proven to make it impossible for a trojan to do everything the user it runs under can.
So how does the computer magically know what the user really (*really*, truly) wants to do ? Half the time the user isn’t sure himself…
I think what you mean is that it's possible to make a security model that doesn't allow a process to do anything it hasn't been explicitly allowed to do, which is where the usability problem comes in – computers that require signed-forms-in-triplicate just to copy a file or browse the web aren't going to be very popular (as anyone who has tried to enforce a secure-passwords policy will attest).
Social engineering would likely be the only way to attack such a system.
A trojan *is* social engineering – it's the epitome of social engineering.
Quite how a tunnel would be implemented is open to question. One possibility is to have two network stacks: The Internal stack is in contact with the programs in the system. The External stack is in contact with the external network interfaces. The tunnel sits in between connecting the two stacks. You could run a Router, Firewall, NAT (Network Address Translation) and other services on the external stack yet have the inner stack completely isolated from the internet. When you do connect you could go via NAT which itself adds another layer of security.
Obviously this guy has some good points, but here he demonstrates little knowledge of network structure. What's the point of hiding services on an "internal" network ring? The threat to security begins with services exposed to the outside, and these services need disk and system access too, outer ring or not. So you have the traditional "stealth ports + jailed process", which works very well, against "stealth ports + two layers of network + jailed process anyway"? This is several levels of complexity worse, and no security gain at all. Sure, NAT adds security, but only when the NAT is on another device that has no access to the system. To expose a service to the outside through NAT you inevitably have to do port forwarding, and here begins the potential security flaw: the app that listens and reacts on that port. There's no other way, at least with the current implementation of the internet protocol. No matter how many levels of network structure you want to put between the outside net and the system internals, if you want a foreign computer to access a service (like a web server) on your system you have to make sure that the web server can receive outside data without requesting it.
Chris wrote "Running services not as root? A good idea, possibly make a "sandboxed root" for services. One that doesn't have that much access. But here's the problem. Some things need full access. Although I bet most services you could get away with having almost no write permissions. Of course, cutting unnecessary services is still the best way to protect yourself here."
Solved by OpenBSD: the binaries don't need to be SUID and have all rights; they only get what's needed to get the work done.
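For readers who haven't seen it, the OpenBSD-style pattern is roughly the following in C: do the privileged setup first, then permanently give up root before touching untrusted input. The "_service" user and the /var/empty chroot directory are placeholders for this sketch; a real daemon would also drop supplementary groups with setgroups():

/* Rough sketch of the "start privileged, then drop everything you don't
 * need" pattern used by OpenBSD-style daemons. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>

static void drop_privileges(const char *user, const char *jail_dir)
{
    struct passwd *pw = getpwnam(user);   /* look up the unprivileged user
                                             before we lose /etc */
    if (pw == NULL) {
        fprintf(stderr, "unknown user %s\n", user);
        exit(1);
    }

    /* Lock the process into an (ideally empty) directory... */
    if (chroot(jail_dir) != 0 || chdir("/") != 0) {
        perror("chroot");
        exit(1);
    }

    /* ...then give up root.  Group first, then user: once the uid is
     * dropped we can no longer change the gid. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("drop uid/gid");
        exit(1);
    }
}

int main(void)
{
    /* Privileged setup (bind low ports, open log files, etc.) goes here. */
    drop_privileges("_service", "/var/empty");

    /* From this point on the process handles untrusted input with no
     * root privileges and no view of the real file system. */
    return 0;
}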
The threat to security begins with services exposed to the outside, and these services need disk and system access too, outer ring or not. So you have the traditional "stealth ports + jailed process", which works very well, against "stealth ports + two layers of network + jailed process anyway"? This is several levels of complexity worse, and no security gain at all. Sure, NAT adds security, but only when the NAT is on another device that has no access to the system. To expose a service to the outside through NAT you inevitably have to do port forwarding, and here begins the potential security flaw: the app that listens and reacts on that port. There's no other way, at least with the current implementation of the internet protocol. No matter how many levels of network structure you want to put between the outside net and the system internals, if you want a foreign computer to access a service (like a web server) on your system you have to make sure that the web server can receive outside data without requesting it.
(NAT is horrible anyway.)
What he describes seems like a bridge, though not a so-called hardware bridge but a bridge in the OS itself. Something like what is currently possible with a VM. Of course you can add several layers of security to such a bridge, like a packet scrubber (PF, Hogwash) or a NIDS (Snort or a proprietary one). With a price: time, money and knowledge, none of which Joe the Desktop User wants to spend on something like this (my main point of criticism in my earlier posts).
First I'd like to thank Nicholas Blachford for another well-argued and thought-provoking piece. Personally I agree that by default all code should be in a fine-grained sandbox. If a program needs more access, then if the user trusts it the user can give it more access. If the user doesn't know what they are doing then perhaps they should be thinking twice before downloading a program from the internet and giving it unlimited access anyway. The default should always tend towards security.
Another commenter mentioned putting the basic sandboxing inside the message passing of a rich object-oriented framework that programmers use to create applications, using it rather than API calls to the lower operating system, which would be kept to a minimum (Linux 2.0, for example, only had a couple of hundred system calls, so adding hooks into the sandbox so that they can't be accessed by untrusted code should not be impossible).
This sounds like a good idea: it places the security below the sight of most programmers, so they can write what they want to do without having to worry about security at all, as their program won't be allowed to do anything bad. They just have to remember to handle exceptions, which a good coder would do anyway. This already exists in .NET and Java and could be added to the message-passing system of Objective-C, with the caveat that function pointers in Objective-C and .NET allow it to be bypassed by a skilled hacker.
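As a toy illustration of such a hook: application code calls a framework-provided sandbox_open() instead of open(2), and the framework checks a per-application profile first. The function name and the allow-list are invented for this sketch, and a userland check like this is exactly what the function-pointer problem lets a hostile program walk around, so real enforcement would have to live in the message-passing layer or the kernel:

/* Toy illustration of putting the security check below the application's
 * sight: application code is expected to call sandbox_open(), never
 * open(2) directly.  Profile and paths are hypothetical. */
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>

/* Paths this (hypothetical) application profile may touch. */
static const char *allowed_prefixes[] = {
    "/home/user/Documents/",
    "/tmp/",
    NULL
};

/* The framework call application code uses instead of open(2). */
int sandbox_open(const char *path, int flags)
{
    for (int i = 0; allowed_prefixes[i] != NULL; i++) {
        size_t len = strlen(allowed_prefixes[i]);
        if (strncmp(path, allowed_prefixes[i], len) == 0)
            return open(path, flags, 0600);
    }
    errno = EACCES;    /* outside the profile: refuse rather than ask */
    return -1;
}

int main(void)
{
    if (sandbox_open("/etc/passwd", O_RDONLY) < 0)
        perror("denied as expected");
    return 0;
}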
Having just read up on FreeBSD Jails, they don't seem fine-grained enough in some respects and too restrictive in others. Basically they seem to have been designed to solve a different problem: a hosting company setting up a single server box that can run many different servers for their clients' use, without the clients being able to get access to one another's servers when they do their administration.
You're right, and I understand your point here. From a structural point of view NAT is a horrible hack that should be avoided at all costs. From a user's point of view, NAT increases security (as long as the NAT device is secure) and can yield better network usage if the NAT device is configured properly.
I see a danger in what you're saying, apart from complexity (all those security methods could be embedded in an integrated app). NAT security relies on two facts: the first is that there's no way to contact a machine behind NAT without port forwarding; the firewalled machine is the only one that could open that connection. The second is that the security solution would have to run (be it a kernel thing or embedded apps) on the same computer you're trying to protect, which would be very resource hungry, apart from the security risk that implies. And you would have not only one firewall to worry about, but as many as you have computers, duplicating efforts. As I said, the main threat to security is not being accessible through the internet, but the app that is listening. Imagine that you have an FTP server, with a Snort IDS running and some other active network monitoring and content-blocking apps. All these apps are catching the data flow from the outside and doing things with it… like having a buffer overflow. Clearly the FTP server should be the weakest one, and the most probable break point, but you have some other apps processing network data destined for it. You want to offer web services? The web server must be accessible from outside. No matter how many security methods you put between the internet and your web server, it still has to be visible from outside.
DMZ, port stealthing, jailed services and automatic updates are IMHO a solid security policy when leaving it to an automatic device. Of course there are more elaborate (and stronger) methods, but IMHO if you want to secure a machine in a finite amount of time, that is the way to go. I prefer to have my network stack doing the things it is supposed to do, not providing me with security. After all, the internet protocol was never designed for security, so why play with it? Have a traditional network stack and keep the security outside it.
Anyway, that’s only my point of view after all.
“So how does the computer magically know what the user really (*really*, truly) wants to do ? Half the time the user isn’t sure himself…
I think what you mean is that it's possible to make a security model that doesn't allow a process to do anything it hasn't been explicitly allowed to do, which is where the usability problem comes in – computers that require signed-forms-in-triplicate just to copy a file or browse the web aren't going to be very popular (as anyone who has tried to enforce a secure-passwords policy will attest)."
That is pretty much what I mean. Don't allow a process to do anything it hasn't been explicitly allowed to do. But you can avoid the "signed-forms-in-triplicate" / "interface hell" problem if you do it at a fine enough grain. You can actually do it in a way that is almost totally invisible to the user.
It’s been done by researchers and looks good in the initial phase. Marc Stiegler made a capability based secure desktop for DARPA that is invulnerable to viruses and trojans. You don’t have to deal with security dialog boxes, passwords, certificates, dongles or any other bothersome interruption.
Just use it like you would an unsecured desktop. Only difference is, it’s one of the most secure you’ll find.
http://www.skyhunter.com/marcs/skynet.wmv is an introduction to the concept.
"A trojan *is* social engineering – it's the epitome of social engineering."
Social engineering gets you to run the trojan. But I’m talking about something that once the trojan runs, it’s powerless to do any harm.
The only way to attack something like that is to trick some idiot into thinking “format c:” speeds things up. That’s what I mean by social engineering.
That is pretty much what I mean. Don't allow a process to do anything it hasn't been explicitly allowed to do. But you can avoid the "signed-forms-in-triplicate" / "interface hell" problem if you do it at a fine enough grain. You can actually do it in a way that is almost totally invisible to the user.
I’m not sure where you get that idea. As security becomes finer grained, it becomes *more* intrusive into the UI and day to day activities, not less.
It’s been done by researchers and looks good in the initial phase. Marc Stiegler made a capability based secure desktop for DARPA that is invulnerable to viruses and trojans. You don’t have to deal with security dialog boxes, passwords, certificates, dongles or any other bothersome interruption.
Well, I found this with a quick google:
http://www.skyhunter.com/marcs/capabilityIntro/index.html
Quote:
So Melissa, upon starting up, would first find itself required to ask you, “Can I read your address book?” Since you received the message from a trusted friend, perhaps you would say yes – neither Melissa nor anything else can hurt you just by reading the file. But this would be an unusual request from an email message, and should reasonably set you on guard.
Next, Melissa would have to ask you, “Can I have a direct connection to the Internet?” At this point only the most naive user would fail to realize that this email message, no matter how strong the claim that it came from a friend, is up to no good purpose. You would say “No!”
Unfortunately, this suffers the same flaws as existing systems:
1. The final decision rests in the hands of the ignorant user.
2. It works under the assumption that the end user has enough knowledge to make the right decision.
3. It assumes the absence of social engineering.
All a trojan has to do to circumvent this system is say in the email that it will attempt to make an outside connection to “authenticate against our servers to ensure your eligibility for our offer and protect your privacy” and to please allow this connection.
Not to mention the intrusive messageboxes all the time – just how long do you think it will take people to simply start clicking “Yes” or “I Agree” to every box that pops up, like they do now ? I’d peg it at less than a day.
As usual, whenever someone starts talking about the latest great security system that’s going to save the world, I’m reminded of this webpage:
http://www.rhyolite.com/anti-spam/you-might-be.html
The *technical* problems of making a secure system were solved decades ago and are sufficiently present in every OS currently available. But the *technical* problems are easy. The biggest security hole in the system – and the hardest one to solve – is the *social* problem of the end user.
Social engineering gets you to run the trojan. But I’m talking about something that once the trojan runs, it’s powerless to do any harm.
Of course it can – all it has to do is convince the user that the prompts the "security system" throws up are completely normal and harmless, and to agree to them. Voila, instant access to anything the user has access to.
Given the eager willingness of people to click on just about anything that has words like “free”, “nude” and “college girls” in them, I don’t think that would present much of a problem.
The only way to attack something like that is to trick some idiot into thinking “format c:” speeds things up. That’s what I mean by social engineering.
Which, again, is not especially difficult. For example, you might remember the “sulfnbak.exe” hoax.
Of course, (truly) destructive malicious code (like targeting servers) is old & busted – the new hotness is assembling millions of end-user machines for DDoSing or spam relaying.
NAT security relies on two facts: the first is that there's no way to contact a machine behind NAT without port forwarding; the firewalled machine is the only one that could open that connection.
The “security” offered by NAT relies on no-one “bad” having control of the upstream router(s). If/when they do, they can almost always direct traffic into the network(s) behind the NAT device.
The second is that the security solution would have to run (be it a kernel thing or embedded apps) on the same computer you're trying to protect, which would be very resource hungry, apart from the security risk that implies.
The resource overhead is insignificant. A first-generation Pentium can quite easily perform NAT and firewalling for dozens of machines on a 100Mbit network. On a modern machine – particularly one on a simple broadband connection – the processing overhead of host-based NAT and/or firewalling should barely be measurable.
And you would have not only one firewall to worry about, but as many as computers you have, duplicating efforts.
That shouldn’t be a problem for a properly managed network, as any such system could be centrally managed (eg: using Group Policies).
The "security" offered by NAT relies on no-one "bad" having control of the upstream router(s). That was the first thing I said: (as long as the NAT device is secure). The two underlying security factors of NAT that I mention have that embedded.
The resource overhead is insignificant. If you only perform NAT yes, but NAT is not enough to keep a system secure. Real time dataflow analysis could be very resource hungry on a very busy networked machine.
That shouldn’t be a problem for a properly managed network, as any such system could be centrally managed (eg: using Group Policies).
I was not speaking about admin methods. But every computer on a private LAN having a firewall seems like a bit of overhead to me, bearing in mind that they would be protected from the internet by the main NAT device. Controlling 1 device or controlling 100 devices, even with automatic behaviour… hmm, I prefer to go on with only one firewall. A properly configured LAN is very secure as long as you do not trust your clients and have a monitoring system to track and head off any potential inside threat, for instance MITM attacks.
Seldom have I seen such a high concentration of clueless morons, including the author of the article. The only one here who seems to understand something about security is de Selby.
> Most security problems are misconfigurations and other errors
> by humans, the users themselves.
No! Most security problems arise because program designers make it extremely easy for people to make the wrong choices, inadvertently more often than not. Or do you mean that it’s not at all Microsoft’s fault that a Windows program can create a window that looks exactly like the login screen or like your email client password prompt or a network drive password prompt?
> You might not *like* it, but the simple *fact* is that the biggest
> security hole in the system is the user.
> […]
> A computer is designed to do what the user tells it to
So you mean that when a user executes “rm *>o” instead of “rm *.o” it’s primarily the user’s fault that all his source code just vanished to the great bit-bucket in the sky? Or that a delete is recursive if you run “rm *” and happen to have a file named “-r” in the same directory?
Most popular OSes, especially unices, seem to be designed such that even the tiniest mistake would have extreme consequences.
> If end users didn’t do silly things like deliberately running
> malcious code
Oh, how silly of the user to run the file named “THIS IS A VIRUS.exe”! Really, how often have you seen malicious code announce itself and give you an option to disable e.g. reading your accounting files and sending them over the network? At most you’ll see a dialog box saying that the program you’re about to run isn’t “trusted”, and this dialog box is more or less identical to twenty other ones that you needed to answer “Allow” to in order to do everyday tasks, such as run Windows Update.
There is no such thing as a trusted program! There are only different levels of trust. You don’t trust a new screensaver to read your accounting files, nor do you trust it to use the network. However, you do trust it to run in fullscreen mode when you’re not providing user input. You also trust your accounting program to read your accounting files, but perhaps you don’t trust it to connect to the internet.
Somehow most of these idiots that think they know something about security have reached the obviously incorrect conclusion that code is either "trusted" or "untrusted". I can't for the life of me understand how anyone would believe such a stupid thing. And these same idiots apparently also think security means lots of ACLs, and that having a fine security granularity must mean more work for the user and higher degree of complexity codewise.
It’s not like the solution has been hidden for the last few decades or anything. Good ideas have been shown to work orders of magnitude better than current systems, both theoretically and practically. There have even been several successful implementations, but still people just refuse to see, even if you throw it in their face. One can give URLs to security papers, or to successful implementations, but people just refuse to listen.
Slightly adopted from a saying about horses: You can try to lead an ignorant “security specialist” to a good solution but you can’t get him to not ignore it.
Google keyphrase of the day: capability-based security
No! Most security problems arise because program designers make it extremely easy for people to make the wrong choices, inadvertently more often than not.
No, the security problems arise because people *make* the wrong choices – usually numerous times (every time an email worm goes around I’m reminded of that scene from The Simpsons with Bart trying to grab the doughnut (?) and getting zapped).
Or do you mean that it’s not at all Microsoft’s fault that a Windows program can create a window that looks exactly like the login screen or like your email client password prompt or a network drive password prompt?
Not really, no. Or are you proposing third-party applications shouldn’t be able to present password prompts ?
So you mean that when a user executes “rm *>o” instead of “rm *.o” it’s primarily the user’s fault that all his source code just vanished to the great bit-bucket in the sky?
Yes, although your example is (by necessity, since it’s the only way to support your reasoning) ridiculously extreme – ignorant users rarely use commandlines and most linux distros (and even commercial unixes these days) default to aliasing “rm” to “rm -i”.
Or that a delete is recursive if you run “rm *” and happen to have a file named “-r” in the same directory?
Again, a stupidly corner-case example and, again, even if the command prompts for confirmation, it can’t stop the user typing “yes”.
The underlying point that you’re trying to make here – that destructive behaviour should default to “are you sure” – is quite valid (and I’ve never argued otherwise). However, you still can’t stop the user from answering “yes”, even if they don’t mean it or don’t understand the consequences.
Most popular OSes, especially unices, seem to be designed such that even the tiniest mistake would have extreme consequences.
Well, certainly Unix is "designed" (in as much as unix is designed at all) to be used by people who know what they're doing and is heavily biased towards automating actions – hence there is little feedback and prompting (and there are certain benefits to this). More user-friendly OSes, OTOH, usually err on the side of too much caution.
Deleting your files and answering “Yes” to the “Do you want to delete this” prompt isn’t a “tiny mistake”, it’s a honkin’ great *big* mistake, so please don’t use it as an example of “the tiniest mistake having extreme circumstances”. It’s like feeding the wrong documents into a paper shredder, or accidentally faxing something instead of photocopying it.
The fundamental issue here is that computers are multipurpose devices and a significant proportion of the population – at a very basic level – can’t handle that concept. There’s a reason “appliances” are so popular.
Oh, how silly of the user to run the file named “THIS IS A VIRUS.exe”!
Yep, pretty much.
Really, how often have you seen malicious code announce itself and give you an option to disable e.g. reading your accounting files and sending them over the network?
Pretty much *every* email-borne virus I've ever seen (or even read about) has announced itself as – at best – questionable, with weird subject lines, atrocious spelling and grammar, clearly bogus offerings and the like.
Added to that activating them requires answering “Yes” to a dialog (that defaults to “no”) warning of the dangers of opening dodgy attachments. People merrily answer “Yes” all the time, so I really can’t see a dialog saying “Program [X] is trying to access the network, allow it” is going to get a different response.
At most you’ll see a dialog box saying that the program you’re about to run isn’t “trusted”, and this dialog box is more or less identical to twenty other ones that you needed to answer “Allow” to in order to do everyday tasks, such as run Windows Update.
Uh huh, and somehow the dialogs saying “This program wants to do [X]” *aren’t* going to look just like the twenty other dialogs ? Are they going to look like the hundred-odd dialogs that will start popping up every five minutes that just happen to have a different thing where [X] is ?
There is no such thing as a trusted program! There are only different levels of trust.
So there’s no such thing as a trusted program, except the ones that you trust [to a certain degree] ?
You don’t trust a new screensaver to read your accounting files, nor do you trust it to use the network.
Do you have any idea how popular screensavers that download new pictures every [minute|hour|day] off the web are ?
Do you think the average user knows the file paths and names of “the accounting files” ? (More importantly, do you think they should have to ?)
You also trust your accounting program to read your accounting files, but perhaps you don’t trust it to connect to the internet.
So no integration between your accounting package and your bank’s online banking, then ?
Somehow most of these idiots that think they know something about security have reached the obviously incorrect conclusion that code is either “trusted” or “untrusted”. I can’t for my life understand how anyone would believe such a stupid thing.
Because at some point the code has to be run, or else it is useless – that is the point at which it must be “trusted”.
OTOH, "trust" is not a binary state and no-one is suggesting it should be. Certainly, some current security models (Unix's superuser concept) treat it as such, but more advanced models like SELinux or NT's ACLs don't. As you imply, levels of trust exist and need to be delegated – the problem is that the delegation of that trust fundamentally lies (and must remain) with the user, and the user is often not capable of making an appropriate decision.
And these same idiots apparently also think security means lots of ACLs, […]
These are security *mechanisms*, not “security”. Security is a process – one that is centred and reliant on the end user. That is why the end user is the biggest security hole in any system.
[…] and that having a fine security granularity must mean more work for the user and higher degree of complexity codewise.
Any evidence you have suggesting the contrary would be fascinating, because it certainly goes against any theory I’ve ever heard of.
Google keyphrase of the day: capability-based security
Still won’t stop the user from deleting the wrong file, or allowing malicious code to run.
> No, the security problems arise because people *make* the wrong
> choices – usually numerous times
Get a clue! Email worm problems are almost solely caused by the morons who made the OS and email client in such a way that email attachments are executed with the authority to do whatever they want (e.g. to send your accounting files over the network).
> > Or do you mean that it’s not at all Microsoft’s fault that
> > a Windows program can create a window that looks exactly like
> > the login screen or like your email client password prompt
> > or a network drive password prompt?
>
> Not really, no. Or are you proposing third-party applications
> shouldn’t be able to present password prompts ?
Are you serious or just trolling?
Of course applications should be able to present password prompts. (That is, if they have the capability to do so, e.g. by having a capability to open some kind of window. Remember, if you decouple authority and designation you'll sooner or later get a confused deputy.)
What applications shouldn’t be allowed is to create a window that is (visually) indistinguishable from another program’s password prompt, or worse, from the system login screen.
> > So you mean that when a user executes “rm *>o” instead of
> > “rm *.o” it’s primarily the user’s fault that all his source
> > code just vanished to the great bit-bucket in the sky?
>
> Yes, although your example is […] ridiculously extreme –
> ignorant users rarely use commandlines
Obviously that example wasn’t about ignorance. I was trying to highlight how a very simple mistake (which in this case was “releasing the shift key a microsecond too soon”) can have disastrous consequences in a system “designed” by morons.
> It’s like feeding the wrong documents into a paper shredder,
> or accidentally faxing something instead of photocopying it.
No, it’s like a company making a combined paper shredder, fax machine and photocopier with one single paper feeder, and then making the “mode” switch have the labels ‘1’, ‘2’ and ‘3’ for the three different modes. Sooner or later someone will mix up those numbers and think ‘1’ means ‘copy’ instead of ‘shred’.
> activating them requires answering “Yes” to a dialog (that defaults
> to “no”) warning of the dangers of opening dodgy attachments
After answering "Yes" in the same dialog twenty times when your workmates have sent you some internal documents, or whatever, the answering becomes pretty automatic. When you then receive another email from a workmate with the subject "check this out", an attachment and no message body, you pretty much double-click the attachment and choose "Yes" before you've even noticed there was a security requester there.
Damn it, there shouldn't be any frigging "Sure you want to run this attachment?" prompts!!! The email client should just run the attachment, but of course it shouldn't give all its capabilities to the program.
> People merrily answer “Yes” all the time, so I really can’t
> see a dialog saying “Program [X] is trying to access the network,
> allow it” is going to get a different response.
Oh, please! There shouldn’t be any “Program [X] is trying to access the network, allow it?” prompt. Get a clue already! If you don’t separate authority from designation a program has no way of even requesting access to the network without having permission to access the network.
> Are they going to look like the hundred-odd dialogs that will
> start popping up every five minutes that just happen to have
> a different thing where [X] is ?
It was more or less exactly this I was referring to when I said "these same idiots apparently also think security means lots of ACLs, and that having a fine security granularity must mean more work for the user".
Get this: There won’t be dialogs every five minutes.
> So there’s no such thing as a trusted program, except the ones
> that you trust [to a certain degree] ?
Programs aren’t either trusted or untrusted. All programs are trusted to some degree. I would trust even a known virus with the single capability to use 10 seconds of CPU time.
> > You don’t trust a new screensaver to read your accounting
> > files, nor do you trust it to use the network.
>
> Do you have any idea how popular screensavers that download
> new pictures every [minute|hour|day] off the web are ?
No, I don't. However, if there are such screensavers then perhaps screensavers should generally be given a capability to contact the site they were originally downloaded from, or perhaps some user-specified site. Or perhaps you should instead make a "downloader" system service to which you give screensavers a capability.
Still, it’s completely unacceptable to give each downloaded screensaver access to everything you (as a user) have access to.
> Do you think the average user knows the file paths and names
> of “the accounting files” ?
No. So?
> (More importantly, do you think they should have to ?)
No. So?
If the screensaver and the accounting program isn’t one and the same then I don’t see why they would be given capabilities to access each other’s files.
> > You also trust your accounting program to read your accounting
> > files, but perhaps you don’t trust it to connect to the internet.
>
> So no integration between your accounting package and your bank’s
> online banking, then ?
Sure, but I most certainly wouldn't give the accounting program a capability for unrestricted network access. At least not by default.
> […] that is the point at which it must be “trusted”.
> OTOH, “trust” is not a binary state and no-one is suggesting
> it should be.
Au contraire. Quite a few suggest it should be. Very seldom can you choose anything other than either "run with all privileges that I have" or "don't run at all". This is one of the problems, and such idiocy is what I'm opposing.
> the problem is that the delegation of that trust fundamentally
> lies (and must remain) with the user, and the user is often
> not capable of making an appropriate decision
I disagree. More often than not this process can be made very intuitive and almost transparent. E.g. consider a text editor. Do you think it has to have the capability to open all text files that you (as a user) have access to? You probably do. Well, that's completely wrong. It doesn't. It only has to have a capability to use the "Open file" OS service, which in turn will give the editor the capability to open the file(s) selected by the user. (Notice how the user won't have to go to some security dialog and select which files the editor has access to. The authority is implicit in the designation.)
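You can already see the shape of this on plain Unix if you let file descriptors stand in for capabilities: a trusted chooser opens whatever the user picked and hands the editor only the open descriptor, never a path or general file-system access. The sketch below uses standard SCM_RIGHTS descriptor passing over a Unix-domain socket; the function names, the socketpair() wiring and the /etc/hostname example file are made up for illustration, and unlike a real capability system nothing stops a Unix process from simply open()ing other paths with its ambient authority:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

/* "Chooser" side: send an already-open descriptor over an AF_UNIX socket. */
static int send_capability(int sock, int fd)
{
    struct msghdr msg;
    struct cmsghdr *cmsg;
    char cbuf[CMSG_SPACE(sizeof(int))];
    char dummy = 'F';
    struct iovec iov = { &dummy, 1 };

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;            /* "here is a capability" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return (sendmsg(sock, &msg, 0) == 1) ? 0 : -1;
}

/* "Editor" side: receive the descriptor; this is all it ever gets. */
static int recv_capability(int sock)
{
    struct msghdr msg;
    struct cmsghdr *cmsg;
    char cbuf[CMSG_SPACE(sizeof(int))];
    char dummy;
    struct iovec iov = { &dummy, 1 };
    int fd = -1;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg != NULL && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));

    return fd;
}

int main(void)
{
    int sv[2];
    char buf[128];
    ssize_t n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 1;

    /* Chooser: the user picked /etc/hostname, so open it and hand over
     * the descriptor, and nothing else. */
    send_capability(sv[0], open("/etc/hostname", O_RDONLY));

    /* Editor: the received descriptor is the only file it can touch. */
    n = read(recv_capability(sv[1]), buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    return 0;
}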
> > […] and that having a fine security granularity must mean
> > more work for the user and higher degree of complexity codewise.
>
> Any evidence you have suggesting the contrary would be fascinating,
> because it certainly goes against any theory I’ve ever heard of.
How about the file dialog described above? How about any purely object oriented system? (Pure OO implies not having ambient authority.)
Exactly which theories state that finer security granularity implies more work for the user and/or a higher degree of complexity codewise?
> > Google keyphrase of the day: capability-based security
>
> Still won’t stop the user from deleting the wrong file,
Correct. Those issues are almost orthogonal.
> or allowing malicious code to run.
Perhaps it won’t prevent malicious code to run, but it most certainly limits the damage such code can cause, at least by an order of magnitude.
Since you seem to be google-impaired I’ll give you some links to study:
Paradigm Regained: http://www.erights.org/talks/asian03/
MicroEssays on Capability Theory: http://www.cap-lore.com/CapTheory/
That is pretty much what I mean. Don't allow a process to do anything it hasn't been explicitly allowed to do.
(It goes much further than processes. It goes all the way down to instruction calls — ask the NSA.)
Do not forget we are talking about home desktop users here. We are talking about people who, at layers like this, by default do not want to judge whether something is explicitly allowed or not. The home user doesn't want her computer system to go down at 20:00 while she's e-mailing her aunt, yet the home user doesn't buy herself a UPS or a generator. Various other examples are available. Furthermore, who is she to decide, in millions of case-by-case situations, whether she should or should not be able to do what? "What?" is the question to which she will not be able to find an answer; it'll be a nightmare she doesn't want to experience. In every example, it is others who decided she has to make a choice (which is both freedom and oppression depending on how you look at it).
"> Most security problems are misconfigurations and other errors
> by humans, the users themselves.
No! Most security problems arise because program designers make it extremely easy for people to make the wrong choices, inadvertently more often than not. Or do you mean that it’s not at all Microsoft’s fault that a Windows program can create a window that looks exactly like the login screen or like your email client password prompt or a network drive password prompt?”
I agree with drsmithy on this (and various other points drsmithy brought up). If I'd stated "security problems are errors by humans", would you have argued programmers aren't humans?
“There is no such thing as a trusted program!”
There is no such thing as ignorance or trust either. These are merely part of the judgements on which humanity tries to function as-is. In order to function you have to make judgements, and judgements based on trust and/or ignorance are part of that in our everyday life. It is impossible to live without judgements. Because of this, people decide to trust elements of life (rationally or out of ignorance, of which the former eventually boils down to ignorance / misplaced trust). I trust my car, even though I do not know every aspect of how it works. I trust the trees beside my house not to fall on my house while I'm sleeping. I trust my coffee ain't poisoned. The key for each human is to find, in every judgement, a way of thinking where one is able to believe that the trust she has in an element is not misplaced. It is a balance, and some are better at making such judgements on specific elements (like the one we are describing) than others; however, even the greatest paranoiac has to make judgements, even when the chance of it going wrong boils down to a very, very low percentage.
Let me continue: because of the above…
Somehow most of these idiots that think they know something about security have reached the obviously incorrect conclusion that code is either “trusted” or “untrusted”.
…people judge, based on their insights, whether code is trusted or not. As you might be aware, given your examples, Joe Desktop isn't very good at making such judgements, leading to various problems. Because of this, people design / have designed a variety of systems which take freedom ("opportunities", as you put it) away from Joe, making decisions for Joe himself or forcing him to make decisions. Whether you think this is Good or Bad, it isn't the Holy Grail.
Don't even start on things like AI. They are not 100% able to fulfil trust either. Nothing is. End philosophy.
If security vs freedom were such a simple issue, there would be no freedom and only security in a dictatorship, and only freedom and no security in an anarchy. The system we would live in would be both secure and free; the current systems are a mix of both. It is also the reason why there is no factually correct, non-debatable definition of "freedom", "oppression", "secure", "dictatorship" or "anarchy". I'd have loads of fun arguing this analogy, but it is my current opinion that I'll have to take a look at the trees beside my house, or rather that it is more constructive to leave this open as food for thought.
drsmithy, you and I disagree on a philosophical level. Is it mature to use your freedom to humiliate (flame) the other discussion partner(s) at this level of discussion? I hope we'll have a mature discussion, and furthermore I am very much interested in your philosophy, but please, no such examples as you posted in the post I replied to. I'd rather have one in-depth example in which you describe your philosophy, or have you explain your philosophy; that's something different from the arrogant tone of "use Google", which I find not applicable in a discussion like this. If you understand what has been told, you are able to tell it to us in your own words. Thanks.
> If i’d have stated: “Security problems are errors by humans”
> would you have argued programmers aren’t humans?
Don’t be silly. Of course humans are the cause of security problems. However, users aren’t the ones to blame for the mess we’re in at the moment.
> > There is no such thing as a trusted program!
>
> There is no such thing as ignorance or trust either.
> [loads of semi-philosophical ramblings about trust]
I already explained what I meant by what I said. Let me do it again: The choice should not be between “I trust this application with all my data and all my resources and the life of my first-born” and “I won’t run this application at all“.
Just because you don’t trust your email attachment with the only copy of your thesis doesn’t mean you don’t want to see the funny animation that your workmate says it displays if you execute it.
The levels of trust should be more along the lines of “I trust my accounting application with my accounting files and with its own configuration file” and “I trust my network sniffer with full read-only access to my networks and with read/write access to its own configuration file“. Of course it’d be nice to be able to specify such detailed security profiles for each application that you run, but most likely you would have one standard “application security template” (or perhaps a few ones), so that a novice user won’t have to make all these choices.
So far this is equivalent to ACLs, but then comes the important part:
No ambient authority! This means that a program can’t do anything it hasn’t got a capability to, and it can only get a certain capability from a program that has that capability. In other words, a program won’t be able to give another program a capability it doesn’t have. (Yeah, I know this sounds obvious, but to many OS designers and so called “security experts” this is news, even though it’s decades old tried and tested “wisdom”.) This means that if an email client doesn’t have a capability to read your love letters then even if a virus would get complete control over the email client it still won’t be able to read those love letters. Also, if a program holds some capability that could be misused (e.g. one to access the network), then it’s quite easy for the programmer to hold this capability in a very small part of the program. This means that there is very little code that needs to be free of bugs, which means that security audits are orders of magnitude easier. Now compare this to OSes such as linux, windows and macos where if you breach one tiny part of a program you gain all capabilities of the user running that program, and if that user happens to be root then you’re hosed. You see how one solution scales whereas the other does not?
> [is it] mature to use your freedom to humiliate (flame) the other
> discussion partner(s) on this level of discussion?
No, it isn’t. I didn’t mean to flame, even though I was (consciously) a bit harsh. I just get so pissed off when people just refuse to see the solution to the mess we’re in, even though it’s shown to them time and again.
> that’s something different than the arrogant tone of “use Google”
> which i find not applicable in a discussion like this.
I never said just “use google”! I told the exact phrase to search for with google!
(Hello?! The wheel is spinning but the hamster is dead?)
> If you understand what has been told, you are able to tell it in
> your words to us.
Many others have better insights than I and have already said it much better than I ever could. Besides, this is hardly the right place for lectures (with the 8000 character limit and all). I’m quite certain that you’d rather read “Paradigm Regained” (which is really nice and authored by the lecturer Jonathan S. Shapiro, father of EROS (the Extremely Reliable OS), and Mark S. Miller, father of E (a secure, distributed programming language)) or some other papers/essays.
By the way, sorry if I got the two nicks wrongly mixed up.
> This means that a program can’t do anything it hasn’t got a
> capability to, and it can only get a certain capability from a
> program that has that capability.
Another big difference between ACLs and capabilities is that a program can throw away a capability or replace it with a weaker capability. E.g. even if you give your email client all the capabilities that you have when you start it, it would probably only store a few, such as limited access to the network and access to the local mailbox dir, and ignore the rest of the capabilities, such as full-screen mode (thus throwing away those capabilities).
Think about the consequences of this. Exactly this makes it possible for novice users to not have to tell programs what they can and cannot do, while still having a system orders of magnitude more secure than what it would be if based on ACLs. At the same time advanced users can make their systems even more secure by explicitly specifying different permission profiles for different applications.
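A crude Unix approximation of that attenuation, again with descriptors standing in for capabilities. The mailbox path and the parent/viewer split are invented for the example, and, unlike a real capability system, nothing here prevents the child from re-opening the path by name using its ambient authority; it only shows the "keep the strong handle in one small place, hand out weaker ones" discipline:

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* The full-strength capability lives in one small, auditable place. */
    int mailbox_rw = open("/home/user/mail/inbox", O_RDWR);

    /* The component that only displays messages gets a weaker one:
     * a separate descriptor opened read-only. */
    int mailbox_ro = open("/home/user/mail/inbox", O_RDONLY);

    if (mailbox_rw < 0 || mailbox_ro < 0) {
        perror("open");
        return 1;
    }

    pid_t viewer = fork();
    if (viewer == 0) {
        /* Viewer: throw away the stronger capability entirely.  Whatever
         * goes wrong in here, this process cannot modify the mailbox
         * through the descriptors it holds. */
        close(mailbox_rw);
        /* ... run the display code using only mailbox_ro ... */
        _exit(0);
    }

    close(mailbox_ro);   /* the parent keeps only what it needs */
    return 0;
}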
Security based on ACLs only is based on infallibility of the programmers and, to a lesser degree, the users. It also requires more work for the users (having to set up detailed security profiles for apps to be secure), for programmers (having to create sandboxes and taking other security precautions) and for security auditors (having to audit the entire source code instead of just those minimal portions of it that contain important capabilities).
Now tell me, why do most OS designers and so-called "security experts" still stick to ACLs even though that paradigm has been shown time and again not to work, and the capability paradigm has been shown to work very well? It's not as if any of this is "hidden knowledge" or anything. I can't see any other reason for such stupidity except just that, overwhelming stupidity.
And how is it different from Systrace: http://niels.xtdnet.nl/systrace/
Example rule: http://niels.xtdnet.nl/systrace/usr_sbin_httpd
More on the website. Please, no replies with "not every application is systraced". We assume that is the case in a sane environment.
The only difference is that on the one hand you have it built into the OS itself, while with Systrace it is an addition to a previously flawed design. However, the rules for the system you describe were still built by someone.
Anyway, I'd like to know more about the example you provide: you have an e-mail client. This e-mail client has read/write access to the mailbox and is allowed to make connections to server:143 over TCP and server:25 over TCP. The user decides, after she got an e-mail, that she wishes to execute a program (which is supposed to be an official update for the OS). Hence, she executes that. The program requires a capability. From where is the e-mail program able to draw that capability? The user is in power to allow this execution, yes? Or is it impossible to download the executable and execute it? Why not, if it is not a virus? Because it boils down to the security vs freedom pragmatism I posted earlier.
PS: I’m still reading the 2nd URL.
> how is it different from Systrace: http://niels.xtdnet.nl/systrace/
From what I read on the webpage I noticed the following problems with Systrace:
– An application can’t throw away capabilities it doesn’t need. This combined with the above means that applications will have way too much authority.
– Designation is decoupled from authority, which might lead to the confused deputy problem.
– The authority profile is more or less static. This means that you have to give an application the authority to do everything it ever might have to do. E.g., a text editor would have to have read/write “capabilities” to all text files in the system. (The alternative is to change the profile all the time, but I doubt anyone would want to do that.) Of course you could just answer thousands of (almost identical) security requesters, but after a while answering would become so automatic that you wouldn’t even see what they are asking for.
– It’s a bitch for the user to set up! The user will either have to write those rule files or answer hundreds, if not thousands, of security requesters. Either way the user has to know a lot about the system and the applications.
– An application may run another application with greater authority than itself. E.g., although a worm might not have access to your love letters or the network, it might use a text editor to read the love letters and a browser to send them over the network. Here we have a confused deputy again.
All in all, this simply won’t work for anyone but a few extremely careful and knowledgeable users, who value the security way more than the time this solution requires of them. (And even then a bug in Systrace could bring it all down.)
> However, the rules for the system you describe were still build
> by someone.
Not exactly. The effective rules are implied by the usage of resources. Designation is coupled to authority, remember?
> Anyway, i’d like to know more on the example you provide: you
> have an e-mail client. This e-mail client has read / write access
> on the mailbox, is allowed to make connections to server:143
> over TCP and server:25 over TCP. The user decides that after
> she got an e-mail, that she wishes to execute a program (which
> was to be an official update for the OS). Hence, she executes
> that. The program requires a capability. From where is the
> e-mail program able to draw that capability? The user is in
> power to allow this execution, yes? Or is it impossible to download
> the executable and execute it? Why not, if it is not a virus?
First of all, a program has to get all its capabilities from somewhere else (i.e., no ambient authority). It usually gets most of them from the program that runs it. Thus it mostly matters which program runs the email attachment. There is a world of difference between the shell executing something (which it does only on behalf of the user and only when requested to) and the email client executing something.
Naturally the email client won't have a capability to modify system files. However, probably not even the shell would have that capability, so this particular case is indeed tricky. The shell might, however, have a capability to access a system update thingy, and the user could give the attachment to it, which, after verifying some signature or something, could update the system files. Thus the problem is turned from a security problem into primarily a user interface problem. I.e., how should this be presented to the user? I don't know.
> Because it boils down to the security vs freedom pragmaticism
> i posted earlier.
Not exactly. The user is free to do anything he/she wants, but the email client might not be. Even if it was it could give something of that up, which it would if the people who designed it are even remotely sane.
> PS: I’m still reading the 2nd URL.
You might want to read “Capability Myths Demolished“, which explains some of the things you seem to be wondering about: http://zesty.ca/capmyths/
> The user decides that after she got an e-mail, that she wishes
> to execute a program (which was to be an official update for
> the OS). Hence, she executes that. The program requires a
> capability. From where is the e-mail program able to draw
> that capability?
In my reply I assumed you meant that she got the OS update as an attachment, but now I see that you didn’t say that she did. However, if she didn’t then I don’t see what the email client has to do with the whole thing.
Are you serious or just trolling?
Of course applications should be able to present password prompts. (That is, if they have the capability to do so, e.g. by having a capability to open some kind of window. Remember, if you decouple authority and designation you'll sooner or later get a confused deputy.)
What applications shouldn’t be allowed is to create a window that is (visually) indistinguishable from another program’s password prompt, or worse, from the system login screen.
Ok, so if *applications* can throw password prompts, what stops *malicious applications* from doing the same thing ? Who manages the universal table of “password prompt windows” so that the OS developers know what type of windows they are to disallow custom drawing of ?
You’re waving your hands in the air and saying “applications shouldn’t be able to create windows like X” while completely ignoring a) the *legitimate* need to be able to draw “windows like X” and b) not saying who is going to keep track of all the windows X represents.
No, it’s like a company making a combined paper shredder, fax machine and photocopier with one single paper feeder, and then making the “mode” switch have the labels ‘1’, ‘2’ and ‘3’ for the three different modes. Sooner or later someone will mix up those numbers and think ‘1’ means ‘copy’ instead of ‘shred’.
As I said, there’s a reason unix works the way it does – efficiency and automation. The more “are you sure” prompts you put in for the user, the more work it takes to use the software and the less it can be automated.
After answering “Yes” in the same dialog twenty times when your workmates have sent you some internal documents, or whatever, makes the answering pretty automatic.
So when you’re deleting a hundred files in a directory and getting asked “are you sure” every time, how is that different ?
Damn it, there shouldn't be any frigging "Sure you want to run this attachment?" prompts!!! The email client should just run the attachment, but of course it shouldn't give all its capabilities to the program.
How about when the attachment has *legitimate* (or even just non-malicious) need for more capabilities ?
Oh, please! There shouldn’t be any “Program [X] is trying to access the network, allow it?” prompt. Get a clue already! If you don’t separate authority from designation a program has no way of even requesting access to the network without having permission to access the network.
It was more or less exactly this I was referring to when I said "these same idiots apparently also think security means lots of ACLs, and that having a fine security granularity must mean more work for the user".
Get this: There won’t be dialogs every five minutes.
So how do arbitrary third party application get the capabilities they need to run ? Evidently, they don’t ask the user…
No, I don't. However, if there are such screensavers then perhaps screensavers should generally be given a capability to contact the site they were originally downloaded from, or perhaps some user-specified site. Or perhaps you should instead make a "downloader" system service to which you give screensavers a capability.
And this is where the cracks start showing.
Who decides what sites ?
Who decides what programs ?
Who decides what capabilities ?
What’s stopping a malicious application masquerading as a legitimate application to acquire the privileges it wants ?
Still, it’s completely unacceptable to give each downloaded screensaver access to everything you (as a user) have access to.
All a spam zombie needs to be able to do is create network connections. Hardly something uncommon.
> Do you think the average user knows the file paths and names
> of “the accounting files” ?
No. So?
So how do they know which files their application should or shouldn’t be accessing ? How are they going to differentiate between legitimate and illegitimate activity ?
If the screensaver and the accounting program aren’t one and the same then I don’t see why they would be given capabilities to access each other’s files.
You can’t conceive of a piece of multi-stage malware (say, a “file organiser” able to read every file coupled with something network-enabled – like the aforementioned screensaver – to copy them away) ?
Sure, but I most certainly wouldn’t give the accounting program a capability for unrestricted network access. At least not by default.
Again, who decides how much access ? Who can change it ? How much can they change it ? Who verifies that application X has only as much access as it needs ?
I disagree. More often than not this process can be made very intuitive and almost transparent. E.g. consider a text editor. Do you think it’ll have to have the capability to open all text files that you (as a user) have access to? You probably do. Well, that’s completely wrong. It doesn’t. It only has to have a capability to use the “Open file” OS service, which in turn will give the editor the capability to open the file(s) selected by the user. (Notice how the user won’t have to go to some security dialog and select which files the editor has access to. The authority is implicit in the designation.)
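To make that concrete, here is a rough sketch (in Python, with invented names like OpenFileService and FileCapability) of authority following designation: the editor never handles a raw path it could abuse, it only receives a capability to the one file the user picked.

    # Sketch of a "powerbox"-style open-file service. Hypothetical names throughout.
    class FileCapability:
        """An unforgeable handle granting access to exactly one file."""
        def __init__(self, path):
            self._path = path             # the designation stays hidden inside

        def read(self):
            with open(self._path) as f:
                return f.read()

    class OpenFileService:
        """Trusted system service: shows the dialog, returns a capability."""
        def __init__(self, ask_user):
            self._ask_user = ask_user     # stand-in for a real file-picker dialog

        def open_file(self):
            chosen = self._ask_user()     # the user designates a file...
            return FileCapability(chosen) # ...and that designation *is* the authority

    class TextEditor:
        """The editor holds only a capability to the open-file service."""
        def __init__(self, open_service):
            self._open = open_service

        def load(self):
            return self._open.open_file().read()   # this file and nothing else

    editor = TextEditor(OpenFileService(ask_user=lambda: "notes.txt"))
    # editor.load() would now read notes.txt -- and only notes.txt.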
As I said, the decision of which file to operate on, remains with the user.
> Still won’t stop the user from deleting the wrong file,
Correct. Those issues are almost orthogonal.
Untrue. After all, malicious, file-deleting code is just another way for the user to accidentally delete the “wrong thing”.
Perhaps it won’t prevent malicious code from running, but it most certainly limits the damage such code can cause, at least by an order of magnitude.
Perhaps I’m not explaining what I mean sufficiently well. I’ll try again.
Really, all malware is just a programmatic, automated tool for allowing users to shoot themselves in the foot. There is little – if anything – malicious code does that a user wouldn’t do themselves in the process of using the computer at some stage. You insist users will be able to continue to use their computers as they do now – or with even less restraint – yet do not explain how the computer will be able to tell the difference between actions deliberately initiated by the user and identical actions initiated by malicious code pretending to be the user.
You also brush over (if not completely ignore) significant issues critical to making capabilities-based security even remotely workable:
* Who decides what applications can do what ?
* Who delegates access permissions for unknown programs ?
* What stops third party software vendors from simply giving their applications all access to everything to make their development jobs easier ?
* Does anything stop software from prompting “on the fly” for capabilities permissions ?
* Should users *ever* be given unrestricted access to all aspects of the system ?
I know what capabilities-based security is; I just don’t think it’s the be-all and end-all, since it still (as it must) allows users to do things that could be bad, it still allows bad things to happen. There is also a *significant* management and delegation overhead that you are ignoring, which severely impacts its relevance to unmanaged environments.
> > Are you serious or just trolling?
> > Of course applications should be able to present password
> > prompts. (That is, if they have the capability to do so, e.g.
> > by having a capability to open some kind of window.
> > Remember, if you decouple authority and designation you’ll
> > sooner or later get a confused deputy.)
> > What applications shouldn’t be allowed to do is create a window
> > that is (visually) indistinguishable from another program’s
> > password prompt, or worse, from the system login screen.
>
> Ok, so if *applications* can throw password prompts, what stops
> *malicious applications* from doing the same thing ? Who manages
> the universal table of “password prompt windows” so that the OS
> developers know what type of windows they are to disallow custom
> drawing of ?
Get this, you moron: Application windows cannot be allowed to be indistinguishable from each other! If they are then they are free to spoof. How hard can this simple fact be for you to grasp? There is no frigging “password prompt ACL”. Why don’t you read the stuff I linked to and get a clue?
> You’re waving your hands in the air and saying “applications
> shouldn’t be able to create windows like X” while completely
> ignoring a) the *legitimate* need to be able to draw “windows
> like X” and b) the question of who is going to keep track of
> all the windows X represents.
No. There is no legitimate need for being able to make a window that is indistinguishable from another application’s window, and certainly not one that looks like the system login prompt.
> > No, it’s like a company making a combined paper shredder,
> > fax machine and photocopier with one single paper feeder,
> > and then making the “mode” switch have the labels ‘1’, ‘2’
> > and ‘3’ for the three different modes. Sooner or later someone
> > will mix up those numbers and think ‘1’ means ‘copy’ instead
> > of ‘shred’.
>
> As I said, there’s a reason unix works the way it does – efficiency
> and automation. The more “are you sure” prompts you put in for
> the user, the more work it takes to use the software and the
> less it can be automated.
So you mean that it’s inefficient to have different buttons for copying and faxing, and a separate feeder for shredding? And tell me, what on earth do “are you sure?” prompts have to do with anything?
> > Damn it, there shouldn’t be any frigging “Sure you want to
> > run this attachment?” prompts!!! The email client should just
> > run the attachment, but of course it shouldn’t give all its
> > capabilities to the program.
>
> How about when the attachment has *legitimate* (or even just
> non-malicious) need for more capabilities ?
That depends on what kind of program it is, of course. E.g., if it’s a program that the user wants to have then the user may install it in whatever way you are supposed to install applications on that particular OS.
> > Oh, please! There shouldn’t be any “Program [X] is trying to
> > access the network, allow it?” prompt. Get a clue already!
> > If you don’t separate authority from designation a program
> > has no way of even requesting access to the network without
> > having permission to access the network.
> >
> > It was more or less exactly this I was referring to when I said
> > “these same idiots apparently also think security means lots
> > of ACLs, and that having a fine security granularity must
> > mean more work for the user”.
> > Get this: There won’t be dialogs every five minutes.
>
> So how do arbitrary third party applications get the capabilities
> they need to run ? Evidently, they don’t ask the user…
Normal applications are given lots of capabilities by the shell when they are executed by it. Some of these capabilities are callbacks, such as the “Open file” dialog box capability.
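A rough sketch of what I mean (names invented for illustration): the shell builds a small bundle of capabilities per application at launch time, and the “Open file” callback is just one entry in that bundle.

    # Sketch: the shell hands each application a per-launch capability bundle.
    def open_file_dialog():
        """Stand-in for the trusted file-picker callback capability."""
        return "letter.txt"               # in reality: whatever the user selects

    def clipboard():
        return "clipboard contents"

    class Shell:
        def launch(self, app, extra_caps=None):
            caps = {
                "open_file": open_file_dialog,   # designation-based file access
                "clipboard": clipboard,
            }
            caps.update(extra_caps or {})        # e.g. a network cap for a browser
            return app(caps)                     # the app receives *only* this dict

    def text_editor(caps):
        # The editor can only do what its capabilities allow.
        return "editing " + caps["open_file"]()

    print(Shell().launch(text_editor))           # -> editing letter.txt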
> > No, I don’t. However, if there are such screensavers then
> > perhaps screensavers should generally be given a capability
> > to contact the site they were originally downloaded from,
> > or perhaps some user-specified site. Or perhaps you instead
> > should make a “downloader” system service to which you give
> > screensavers a capability.
>
> And this is where the cracks start showing.
>
> Who decides what sites ?
Ultimately the user, but the OS provider may default to all installed applications being able to contact the site they were downloaded from (by the system download service, perhaps).
> Who decides what programs ?
Huh? The user may install whatever programs he/she wants. Yes, he/she may also install malware.
> Who decides what capabilities ?
Ultimately the user, but the OS provider may provide defaults.
> What’s stopping a malicious application masquerading as a legitimate
> application to acquire the privileges it wants ?
Nothing. However, it won’t be executed directly (e.g. by an email client or a web browser) with those privileges, or even with any direct means to obtain them.
> > Answering “Yes” in the same dialog twenty times, when your
> > workmates have sent you some internal documents or whatever,
> > makes the answering pretty automatic.
>
> So when you’re deleting a hundred files in a directory and
> getting asked “are you sure” every time, how is that different ?
It isn’t! That’s why you shouldn’t ask a hundred “Delete”/”Don’t delete” questions when the user selects a hundred files for deletion. This is quite obvious, but still the unix rm command does exactly what it shouldn’t do, making the -i switch more or less useless, and more annoying than the MS paperclip.
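For comparison, a trivial sketch of the sane alternative: one summary question for the whole selection instead of one per file.

    # Sketch: confirm a batch of deletions once, not once per file.
    def delete_files(paths, confirm):
        if not confirm("Delete %d selected files?" % len(paths)):
            return
        for p in paths:
            print("deleting", p)          # stand-in for the real removal

    delete_files(["a.txt", "b.txt", "c.txt"], confirm=lambda msg: True)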
> > Still, it’s completely unacceptable to give each downloaded
> > screensaver access to everything you (as a user) have access
> > to.
>
> All a spam zombie needs to be able to do is create network
> connections. Hardly something uncommon.
Untrue. It has to be able to create network connections to specific sites, none of which it was downloaded from.
Still, if the user has installed a spam zombie (with the default privileges) and the default privileges include capabilities to create arbitrary network connections and the user executes the spam zombie, then the zombie will indeed be able to do its malicious task.
> > > Do you think the average user knows the file paths and names
> > > of “the accounting files” ?
> >
> > No. So?
>
> So how do they know which files their application should or
> shouldn’t be accessing ? How are they going to differentiate
> between legitimate and illegitimate activity ?
Users do have at least a vague memory of what they called their saved documents. Most users are fairly sure that e.g. MSVDX32.DLL isn’t something they want to try to open in their text editor.
Less ignorant users have a very good idea of which files are safe to open and are thus able to protect their computers from that kind of threat. However, as it is now, each program automatically has a lot of ambient authority, including the ability to try to open any file and the authority to open all files that the user has access to.
> > If the screensaver and the accounting program aren’t one and
> > the same then I don’t see why they would be given capabilities
> > to access each other’s files.
>
> You can’t conceive of a piece of multi-stage malware (say, a
> “file organiser” able to read every file coupled with
> something network-enabled – like the aforementioned screensaver
> – to copy them away) ?
One could, but it would be quite hard and would raise a lot of warning flags in the heads of all but the most ignorant users.
Not only would you have to get the user to install a file manager (which is a bit different from other programs in that it needs more capabilities than normal programs, and thus complete novice users might not even be able to give it the authority it needs to operate properly), you would also have to get the user to install the screensaver, and then you’d have to get those programs to talk to each other. (The last part isn’t that hard, though, once the file manager gets write access to the screensaver’s “configuration file”.)
> > Sure, but I most certainly wouldn’t give the accounting program
> > a capability for unrestricted network access. At least not by
> > default.
>
> Again, who decides how much access ? Who can change it ? How
> much can they change it ?
Again, ultimately it’s the user who decides how much access to give to which application, but the OS supplier would provide half-decent defaults, so that ignorant novice users won’t have to make choices they can’t make.
> Who verifies that application X has only as much access as
> it needs ?
The user, if he/she wants to and knows how.
The most important part, however, is that a well-behaving program can throw away all capabilities it doesn’t need. Thus there is a lot less chance that a malicious program could use another program to gain the authority it wants. It would have to use bugs in several different programs to get their different capabilities. Adding to the unlikeliness of this, authority is highly modular, so only very small parts of a program need to be bug-free in order not to leak out the important capabilities. All in all this results in a level of security orders of magnitude higher than the current crop of OSes provides. It’ll never be perfect, since humans make mistakes, but an improvement of that magnitude deserves the developers’ attention.
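As a sketch of what “throwing away capabilities” could look like in practice (hypothetical names again): the program keeps the few capabilities it actually needs and drops the rest at start-up.

    # Sketch: a well-behaving program drops the capabilities it doesn't need.
    def start_image_viewer(caps):
        needed = {k: caps[k] for k in ("open_file", "draw_window") if k in caps}
        caps.clear()                      # throw the rest away for good
        return needed

    all_caps = {
        "open_file": lambda: "photo.png",
        "draw_window": lambda: "window handle",
        "network": lambda host: "socket to " + host,  # a viewer never needs this
    }

    viewer_caps = start_image_viewer(all_caps)
    print(sorted(viewer_caps))            # ['draw_window', 'open_file'] -- no network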
> > > Still won’t stop the user from deleting the wrong file,
> >
> > Correct. Those issues are almost orthogonal.
>
> Untrue. After all, malicious, file-deleting code is just another
> way for the user to accidentally delete the “wrong thing”.
Oh, sorry, I misunderstood you. If you were talking about file-deleting code then it certainly is very much about computer security. I believe my comment below should answer your question. (Keep in mind, though, that a “Delete file” requester capability is different from an “Open file” requester capability.)
> > I disagree. More often than not this process can be made very
> > intuitive and almost transparent. E.g. consider a text editor.
> > Do you think it’ll have to have the capability to open all
> > text files that you (as a user) have access to? You probably
> > do. Well, that’s completely wrong. It doesn’t. It only has
> > to have a capability to use the “Open file” OS service, which
> > in turn will give the editor the capability to open the file(s)
> > selected by the user. (Notice how the user won’t have to go to
> > some security dialog and select which files the editor has
> > access to. The authority is implicit in the designation.)
>
> As I said, the decision of which file to operate on, remains
> with the user.
Yes, and the user has to choose that file. A program can’t just try to open a file (e.g., “C:\Windows\System32\srvmgr32.dll”) hoping for the user to press “Allow” in the security warning popup. I find it unbelievable that you can’t see the difference between answering “Allow” on some security warning popup (similar to hundreds before it) and something like having to select “srvmgr32.dll” after navigating to C:\Windows\System32.
Also, I don’t think the file manager would even have capabilities to access the system directories, thus any file requesters it provides also lack this ability. Therefore the user wouldn’t even be able to navigate to C:\Windows\System32.
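A sketch of that kind of attenuation (again with invented names, and with a string check standing in for what a real capability OS would enforce with unforgeable handles): a requester built on a directory capability simply cannot reach anything outside the directory it was given.

    import os

    # Sketch: an attenuated directory capability.
    class DirectoryCapability:
        def __init__(self, root):
            self._root = os.path.realpath(root)

        def list(self, rel="."):
            return os.listdir(self._resolve(rel))

        def open_text(self, rel):
            return open(self._resolve(rel))

        def _resolve(self, rel):
            p = os.path.realpath(os.path.join(self._root, rel))
            if p != self._root and not p.startswith(self._root + os.sep):
                raise PermissionError("outside the granted directory: " + rel)
            return p

    home_cap = DirectoryCapability(os.path.expanduser("~"))
    print(len(home_cap.list()), "entries visible to the requester")
    # home_cap.open_text("../../Windows/System32/srvmgr32.dll")  # -> PermissionError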
You seem to be completely irrational in your delusions that the users would be responsible for the current mess (security-wise) that we are in. Snap out of it! Go read the stuff I linked to.
> > Perhaps it won’t prevent malicious code from running, but it most
> > certainly limits the damage such code can cause, at least
> > by an order of magnitude.
>
> Perhaps I’m not explaining what I mean sufficiently well. I’ll
> try again.
>
> Really, all malware is just a programmatic, automated tool for
> allowing users to shoot themselves in the foot. There is little
> – if anything – malicious code does that a user wouldn’t do
> themselves in the process of using the computer at some stage.
> You insist users will be able to continue to use their computers
> as they do now – or with even less restraint – yet do not
> explain how the computer will be able to tell the difference
> between actions deliberately initiated by the user and identical
> actions initiated by malicious code pretending to be the user.
It actually hurts a bit inside me to see someone write such nonsense. (You like that, don’t you?)
It’s in security based on ACLs that authority is bound to the user. Now re-read that sentence. And again.
In capability-based security it’s possible to do stuff like having an “open file” requester like I’ve described several times already.
Now go read the stuff I linked to.
> You also brush over (if not completely ignore) significant
> issues critical to making capabilities-based security even
> remotely workable:
>
> * Who decides what applications can do what ?
This is orthogonal to the issue. You have exactly the same problem with security based on ACLs.
Still, I’ll tell you once again that the user is the one that has the final word in this, and the OS supplier will provide semi-decent defaults. You don’t need different security profiles for all applications. That’s something for advanced users who want that extra bit of added security.
> * Who delegates access permissions for unknown programs ?
Huh? Any program that holds a capability may give it to any other program that it can communicate with, provided the channel between them supports the transfer of capabilities.
This is answered in the stuff I linked to. Go read it.
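To illustrate what delegation over such a capability-carrying channel might look like (a toy sketch, invented names): a program that holds a capability can simply send it to another program it can talk to, which can then exercise it.

    import queue

    # Sketch: a channel that can carry capabilities between two programs.
    channel = queue.Queue()

    def read_photo():
        return "bytes of photo.png"       # stand-in for a real file capability

    def program_a(chan):
        chan.put(read_photo)              # A holds the capability and delegates it

    def program_b(chan):
        cap = chan.get()                  # B now holds it too...
        return cap()                      # ...and can exercise it

    program_a(channel)
    print(program_b(channel))             # -> bytes of photo.png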
> * What stops third party software vendors from simply giving
> their applications all access to everything to make their
> development jobs easier ?
Authority is not for the software vendors to give. Only an entity that has a capability may give it, or a subset of it, to another entity.
I know there are special applications that need some particular bit of extra authority. When they should request this (e.g. when an application is installed) is up to the OS developers. This is indeed an opportunity to take advantage of human error, but I’m sure it can be done in such a way that it will raise warning flags in the heads of all but the most ignorant users. Right now warning dialogs are shown all the time, and thus users are accustomed to answering “Allow” each time if they want to get anything done. This will have to change, and it can, as soon as authority is bound to designation.
> * Does anything stop software from prompting “on the fly” for
> capabilities permissions ?
Uh, of course! You still don’t understand even the basics about object-capabilities. If designation is bound to authority then there is no way for an application to even ask for something it doesn’t have authority to do.
Go read the stuff I linked to.
> * Should users *ever* be given unrestricted access to all
> aspects of the system ?
First of all, in a capability system authority isn’t bound to users.
That said, yes, at least some users have unrestricted access to the hardware and thus to everything. Within the constraints of the OS, well, I don’t know. I don’t think there would ever be any one entity (except the kernel or somesuch) with all capabilities.
> I know what capabilities-based security is
With all due respect, no, you most certainly do not.
What you’ve written above shows quite clearly that you haven’t even grasped the basics.
Go read the stuff I linked to.
> I just don’t think it’s the be-all and end-all, since
> it still (as it must) allows users to do things that could be
> bad, it still allows bad things to happen.
True, it’s not perfect, but since it’s several orders of magnitude better than the current mess, I still think we should go down that road. Now.
> There is also a *significant* management and delegation overhead
> that you are ignoring, which severely impacts its relevance to
> unmanaged environments.
Excuse me? There is considerably less overhead since calls won’t have to check who you are and whether you are authorized to do a certain thing or not. Your idea of security as being something added (“managed”) just highlights your ignorance of capability-based security.
Go read the stuff I linked to!