Every hard-core OS aficionado has done it: Laid out a grand scheme for creating the perfect OS. Taking all the best features and attributes from the OSes we love, and making sure to assiduously avoid the pitfalls of the OSes we don’t. Maybe our goals were modest, and we just wanted a slightly tweaked version of an existing OS. But sometimes we’re feeling ambitious, and we have large, creative ideas for revolutionizing computing. Long-time OSNews reader and contributor J. Scott Edwards just couldn’t help himself, and he has set about to not only plan, but to try to build his dream OS.
This is my third attempt at writing this all down. The first was too detailed. The second was still too long. This time I’ve decided to just lay out my ideas, and try to keep them clear and brief. This is more or less just an overview. There are just too many details to try to squeeze them all in here. Many of the ideas are not really new, some have been talked about before. I’m just trying to figure out how to put them together into one system.
I’ve been working with computers for 30 years and they certainly haven’t gotten any easier. In fact I’m pretty sure they have gotten harder. True, I’m doing things now that I never dreamed of doing then. But it feels like, while they’ve become 1,000 times more powerful, they have also become 1,000 times more complex.
I’ve tried almost every major operating system out there and I always seem to run into difficulties. So I have been thinking: “What if we started over from scratch and re-thought how an operating system is put together?”. One of the first operating systems I ever dealt with was Exec 8 on a Univac 1108. It contained most of the concepts used by all of today’s major operating systems. Now you are probably thinking “it didn’t have a GUI!”. That’s very true. What I’m talking about are the underlying concepts: processes, files, how programs run and interact, things like that.
I have been brainstorming for some time now, and I have some ideas that I would like to share. They are just ideas right now; some or all of them may never work. It is possible that I have failed to think of some aspect of an idea that makes implementation infeasible. But I feel the need to get them out there for several reasons. One, I am hoping that if they are published somewhere they will quit going around and around in my head and I can get some sleep. Two, I don’t want a scene like the one with Microsoft and SenderID.
One last thing I would like to say up front. I am going to mention some software that I use, as examples, such as Linux, OpenBSD, gcc, K3B, etc. Please don’t take that as my disparaging that particular piece of software. I am just using what I am most familiar with. I still think it is excellent software, which I use every day and will continue to use for the foreseeable future.
So without further ado, I guess I’ll jump right in with both feet.
Simplicity
I have been reading the book Better, Faster, Lighter Java and although the author is talking specifically about Java, he might as well be talking about computing in general.
Chapter 2 is titled “Keep It Simple” and explains the value of simplicity. It even goes on to say that “Simplicity may be the core value”. I wish I could reproduce or link to the first section of chapter 2 because I think it applies to all of computing.
So that is my primary goal: to keep the entire system as simple and elegant as possible. When complexity is inescapable, contain it in one place, and don’t allow it to infect the rest of the system.
Flat Files and Relational Databases
The biggest thing that I can think of that adds complexity to computer programs is storing the data. It seems to me that any non-trivial program devotes a good portion of its code just to messing around with files. There are configuration files and data files, which most of the time cannot be used directly by the program, and have to be interpreted and converted into the internal format of the application.
An alternative to storing data in flat files is to store it in a database, most commonly a relational database. While this has several advantages over flat files, the program has to do queries on the database and then usually reformat the results into the program’s internal data structures. This gets really messy with Object Oriented programming. There are of course Object Oriented databases, but I am not aware of any that have gained wide acceptance, especially ones that are Open Source like MySQL and PostgreSQL.
My idea is to have all of the information stored on the disk in the native Object format of the program. That way instead of having to constantly convert and interpret data, the application can just access that object directly. And when that data is needed in a flat file format, you have a converter App (object) that can access the internal data and convert it to a flat file type of format.
For example, let’s say you have some compressed (with Ogg Vorbis or whatever) audio objects on your computer. And you want to burn an audio CD which can be played in a normal audio CD player. You would create a playlist object and connect the output (more on this later) to the input of a Ogg Vorbis converter object and then into the Audio CD burning object.
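The connection idea above can be sketched in miniature. This is a hypothetical sketch in Python, not a real API; the class names (Node, Playlist, OggDecoder, CDBurner) and the string stand-ins for audio data are all invented for illustration:

```python
class Node:
    """A processing object with one output that feeds the next node."""
    def __init__(self):
        self.downstream = None

    def connect(self, other):
        self.downstream = other
        return other  # returning the target allows chaining: a.connect(b).connect(c)

    def emit(self, item):
        if self.downstream:
            self.downstream.receive(item)

    def receive(self, item):
        self.emit(item)  # default behavior: pass the item through

class Playlist(Node):
    def __init__(self, tracks):
        super().__init__()
        self.tracks = tracks

    def play(self):
        for t in self.tracks:
            self.emit(t)

class OggDecoder(Node):
    def receive(self, item):
        # Stand-in for real decoding: tag the track as raw PCM audio.
        self.emit(f"pcm({item})")

class CDBurner(Node):
    def __init__(self):
        super().__init__()
        self.burned = []

    def receive(self, item):
        self.burned.append(item)

playlist = Playlist(["track1.ogg", "track2.ogg"])
burner = CDBurner()
playlist.connect(OggDecoder()).connect(burner)
playlist.play()
print(burner.burned)  # → ['pcm(track1.ogg)', 'pcm(track2.ogg)']
```

The point of the sketch is that the playlist, decoder, and burner know nothing about each other; they only agree on the shape of the data flowing between them, so any compatible object can be spliced into the chain.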
Where Is It Stored?
Another problem that I would like to solve is the “where is that data?” or “what did I do with that file?” problem. Just yesterday I was looking for a spreadsheet that had some information I needed in it. At one time I had it stored on my desktop machine. But I’ve changed that machine several times in the past year and reloaded different operating systems several more times. So I finally put it on my file server. But when I looked for it, I didn’t see it anywhere. Worse yet I couldn’t remember exactly what it was named. I had to do many searches before I finally found the stupid thing. I probably wasted 20 minutes looking for it and as time went by I started worrying that I had lost it and wondering when (if?) I had backed it up. I just don’t need this kind of stress!
My thought on that is to remove the distinction between different partitions, disk drives, file servers, CD and DVD media, etc. No more C:, D: and E: partitions. No more host7:/shared… No more mounting the CD drive. It’s all just one big object storage space. If it’s out there, the machine can find it. If it’s public information it can even be obtained over the Internet. If it was stored on a tape, CD or DVD backup and that media is not in the drive, the OS will tell you on what media that object was stored; you can find that media and insert it, and the App can continue without even knowing what happened. If you don’t have access to that media at the time, you should have the option of stopping that App right where it is and restarting it later at the same exact point when you have the media (more on this later).
It seems that Google, Apple, and others are moving toward making this a reality on today’s operating systems, so that’s good news.
Relating to Each Other
Another problem with Flat Files is the lack of information about the data in the file. The BeOS had a database built into the file system, which was a nice addition, and I understand that Microsoft’s Longhorn was going to have some sort of database built in before it was scaled back. I also recently discovered that the Linux ext3 filesystem allows additional “attributes” to be added to a file.
But I would like to go beyond all of that. My idea is to take an object as defined in current languages like C++ and Eiffel and add some additional information to it. Information such as when the object was created, who created it, md5 checksum, some protection bits, the usual meta-data that a file would have in the OS now.
In addition to those, the most significant addition is the reference table. This is a table maintained by the OS which lists all of the objects which reference that object. That provides a couple of benefits. First, it allows you to easily find any object that references a given object. Second, it allows one to determine which objects are still in use on a system. If no other active object on a system refers to an object (other than the directory), it could be moved to backup storage and deleted from the active system.
For example, going back to the music example, all of the music objects would reference the artist that created them. All of the Beatles tracks would reference the Beatles Artist object. The Beatles Artist object would not specifically have references to songs, because songs would be only one type of object that could refer to the Beatles. We wouldn’t want to encumber an Artist object with explicit references to every possible type of object that could refer to it. We can’t even know every type of object that might reference it. And if we did that, we would have to change the Artist class every time some other class was created that referred to the Artist class. Instead, every object will just have a generic list that can point to any type of object (the “ANY” class in Eiffel). This list will be automatically maintained by the OS and updated as the program runs. And when you want to find all of the songs you have by The Beatles, you simply get the Beatles object and search its reference list for objects of the class “Track”.
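A minimal sketch of how such an OS-maintained reference table might behave, with a generic Obj wrapper standing in for the object store (all names here are hypothetical). The store records the back-reference automatically whenever one object points at another:

```python
class Obj:
    """Toy object with an OS-maintained back-reference list."""
    def __init__(self, cls_name, **fields):
        self.cls_name = cls_name
        self.fields = {}
        self.referenced_by = []  # maintained by the "OS", not by the app
        for k, v in fields.items():
            self.set(k, v)

    def set(self, key, value):
        self.fields[key] = value
        if isinstance(value, Obj):
            # The act of storing a reference updates the target's table.
            value.referenced_by.append(self)

    def referencers(self, cls_name):
        """Find every object of a given class that points at this one."""
        return [o for o in self.referenced_by if o.cls_name == cls_name]

beatles = Obj("Artist", name="The Beatles")
t1 = Obj("Track", title="Let It Be", artist=beatles)
t2 = Obj("Track", title="Hey Jude", artist=beatles)
review = Obj("Review", subject=beatles)

# The Artist object never listed its tracks; the table already knows.
print([t.fields["title"] for t in beatles.referencers("Track")])
# → ['Let It Be', 'Hey Jude']
```

Note that the Review object shows up in the same table without the Artist class ever being changed to accommodate it, which is the point of the generic list.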
Versions
I have also been considering keeping versions of each object when that object is modified. So instead of actually changing that object, it clones it and then modifies the clone. That way if you have made a mistake and need to go back, or if you want to see what changes have been made, you can view the differences. I.e., you have built-in version control, like CVS.
To go along with this there should be some mechanisms for cleaning up old objects that are no longer required. For instance, suppose you have been modifying some source code: version 7 worked, and then you made several changes in versions 8 through 12, of which 9, 10, and 11 had bugs that you don’t want to keep. When you are finished you should be able to remove those unneeded versions.
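The clone-on-modify scheme above might look something like this sketch, with Python standing in for whatever the object store would really do; the version-numbering and pruning API is invented for illustration:

```python
import copy

class VersionedObject:
    """Every modification becomes a new version; nothing is overwritten."""
    def __init__(self, data):
        self.versions = [copy.deepcopy(data)]   # version 1

    def modify(self, new_data):
        # Clone-on-write: the old version stays; the new state is appended.
        self.versions.append(copy.deepcopy(new_data))
        return len(self.versions)               # the new version number

    def get(self, number):
        return self.versions[number - 1]

    def prune(self, *bad_versions):
        """Remove versions known to be broken (9, 10, 11 in the example)."""
        self.versions = [v for i, v in enumerate(self.versions, start=1)
                         if i not in bad_versions]

src = VersionedObject("version 1 of hello.e")
for n in range(2, 13):
    src.modify(f"version {n} of hello.e")

assert src.get(7) == "version 7 of hello.e"  # the version that worked
src.prune(9, 10, 11)                         # drop the buggy ones
print(len(src.versions))                     # → 9
```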
Disk space is extremely cheap now. I just checked and you can buy a 200 Gigabyte hard drive for just over $100. And Sony’s Blu-ray discs will be able to hold 23 Gigabytes of data per layer. Given that kind of storage space I think saving multiple copies of your work just makes sense.
Security
Security is one aspect that I haven’t gotten completely figured out. The things I have thought about are as follows:
- Different parts of an object can have different levels of security. For example if you had a Person object, some of the info might be public, for example the person’s name and e-mail address. Then other parts might be accessible by a smaller group such as your co-workers, for example your cell phone number. Then other information may only be accessible by a select few, like your Social Security Number.
- All non-public data will be encrypted whenever it is not in RAM, whether it is stored on any media or transferred over the network.
- There needs to be some sort of group mechanism that allows people to be grouped, similar to the Unix/Linux group mechanism.
- There should be different levels of security as well. For example, if you are connected as yourself, you can access some data just based upon your login authentication. Then there would be a higher level of security where, even though you are logged in as you, you have to enter another pass phrase to access that specific data. Possibly an even higher level of security would use one of those dongles or something. Mac OS X does this to a certain extent with regular access and “administrator” access to install software and make major configuration changes. I would like to have these multiple levels within a single person’s account. For example, you may have a normal level of security where you are writing a report on global warming. But you would want to have a higher security level for an object that contains your Social Security number. I want to incorporate something like SE Linux, which defines “policies” of who can do what. For example, there should not be one all-powerful user who can read any object on the system, even the ones containing your SSN.
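As a toy illustration of per-field security levels on a Person object, the levels, names, and API below are all assumptions rather than a design:

```python
# Three invented clearance levels, lowest to highest.
PUBLIC, GROUP, PRIVATE = 0, 1, 2

class SecureObject:
    """Each field carries the minimum clearance needed to read it."""
    def __init__(self):
        self._fields = {}   # name -> (required_level, value)

    def set(self, name, value, level=PUBLIC):
        self._fields[name] = (level, value)

    def get(self, name, clearance):
        level, value = self._fields[name]
        if clearance < level:
            raise PermissionError(f"{name} requires level {level}")
        return value

person = SecureObject()
person.set("name", "J. Scott Edwards", PUBLIC)
person.set("cell", "555-0100", GROUP)      # co-workers only
person.set("ssn", "000-00-0000", PRIVATE)  # select few

print(person.get("name", clearance=PUBLIC))  # anyone can read the name
try:
    person.get("ssn", clearance=GROUP)       # co-worker clearance is not enough
except PermissionError as e:
    print("denied:", e)
```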
RAM Caching
I also want to change the way RAM is used. I want to use the main RAM simply as a fast cache for the objects that are stored on the Hard Disk (or, if there is no hard disk, objects loaded from the network), so that the whole image of a running application is kept as up to date as possible on the disk. As an object is executing in RAM, it should be copied back to disk as soon as possible.
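The write-back idea can be sketched as follows, with a plain dict standing in for the persistent object store (all names here are hypothetical):

```python
import json

class ObjectCache:
    """RAM as a write-back cache over persistent object storage."""
    def __init__(self, disk):
        self.disk = disk      # stand-in for the persistent object store
        self.ram = {}

    def load(self, oid):
        if oid not in self.ram:                 # cache miss: fault it in
            self.ram[oid] = json.loads(self.disk[oid])
        return self.ram[oid]

    def update(self, oid, obj):
        self.ram[oid] = obj
        self.disk[oid] = json.dumps(obj)        # write back immediately

disk = {"doc1": json.dumps({"text": "draft"})}
cache = ObjectCache(disk)
doc = cache.load("doc1")
doc["text"] = "final"
cache.update("doc1", doc)

# Simulate power loss: RAM is gone, but disk holds the latest state.
assert json.loads(disk["doc1"]) == {"text": "final"}
```

Because every update reaches the store right away, losing the contents of RAM costs at most the operation in flight, which is the property the next paragraph relies on.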
There are several benefits to this scheme. One is that if the power is lost, the loss is less significant, which brings me to one of the other ideas…
More Continuity
Another thing I want to try is to give applications more continuity. It seems like applications now still work the same way they did on batch processing systems years ago: they start up completely fresh each time, read their configuration files and inputs, do their processing, write their outputs, and stop.
Instead of starting up from nothing each time and then being terminated as if they never existed, I think it would be better if (at least some) applications kept their state intact. It would be more like turning them on and off.
For example, I wonder if a word processing program couldn’t be made more like a typewriter? Say you have a piece of paper in the typewriter and you are in the middle of typing something. You get a phone call and get interrupted and have to fly to Istanbul, you shut it off (assuming it’s an electric typewriter) and leave it. Six months later you come back, turn it back on and it’s right where you left it.
While it’s true that most word processors have a “recent documents” feature that lets you re-open the document, it is usually not exactly the same thing. And what if the power fails? Can you easily get back to where you were?
I think it should be possible to have applications that you can just turn on and off and they are just like you left them.
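As a rough sketch of the turn-it-off-and-on idea, here Python’s pickle stands in for the persistent object store; the Editor class and the power_off/power_on helpers are invented for illustration:

```python
import pickle

class Editor:
    """All application state lives in one object."""
    def __init__(self):
        self.text = ""
        self.cursor = 0

    def type_text(self, s):
        # Insert at the cursor, then advance it, like a typewriter.
        self.text = self.text[:self.cursor] + s + self.text[self.cursor:]
        self.cursor += len(s)

def power_off(app):
    return pickle.dumps(app)       # the state persists outside RAM

def power_on(snapshot):
    return pickle.loads(snapshot)  # resume mid-thought, months later

ed = Editor()
ed.type_text("Dear Sir, I am writing to")
snapshot = power_off(ed)           # phone rings, off to Istanbul

ed2 = power_on(snapshot)           # six months later
ed2.type_text(" inquire...")
print(ed2.text)                    # → Dear Sir, I am writing to inquire...
```

With the object store described earlier, the snapshot step would not even be explicit; the state would simply already be on disk.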
Deeper Down
Most of what I have discussed so far is at a higher level, really on top of the operating system. In fact my initial plan for doing a proof of concept is to build the above Object store on top of Linux and/or one of the BSD’s.
But then I want to take the Object paradigm further down. I would like to have all of the operating system be object oriented, except the very core. I envision having something like a Java Virtual Machine at the core. When the machine powers up it loads the virtual machine, most likely with something like a Just In Time compiler for speed. Everything after that is object oriented.
What’s in a Language
I know that some people disagree that Object Oriented programming has delivered what it was supposed to. But I have found that if you use the right OO language it has significant benefits to productivity.
Ok, I’m going to break one of my tenets and get down on C++ a bit here. I know it’s possible to write good code in C++. It’s possible to write good code in Assembly Language. The thing is the amount of difficulty involved. I have worked on many C++ projects, but I have never worked on one where I felt it was productive. Even the projects I have done all by myself in C++ have been disasters. I have found that usually one of two things happens: the developers stick to a small subset of the language, so it’s really just C with a few objects thrown in here and there; or it becomes so complex that most people have difficulty understanding it, and it’s a nightmare to add features or maintain. I remember on one project a developer complaining that to add one simple feature he had to modify 42 different files. When code becomes that complex, it’s a bad thing. In his book Thinking in C++, Bruce Eckel even says that C++ is more complicated than Ada. There are many sites on the internet about the problems with C++, for example: Why C and C++ Are Bad. In my opinion a programming language should make it as simple as possible to write code that is well designed and easy to maintain. I feel C++ falls short in this area.
I know 99% of the programmers in the world are going to tune out right now. But in my experience I have found Eiffel and Design by Contract to be an absolute boon to programming.
Let me give an example of where I think Design by Contract would be a major improvement. One of the biggest causes of security holes in software is the buffer overrun. And I think the fault lies in the C language: it does not check anything. And programmers are not perfect; they make mistakes, and C allows them to make mistakes. I believe that if you use a language like Eiffel, which checks for error conditions, software would be more secure. (One caveat here is that you could not turn these checks off; they must always be in place. I would gladly trade a slight performance hit for security.)
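Here is a rough approximation of that idea in Python, with explicit precondition checks standing in for Eiffel's require clauses. The Buffer class is invented for illustration; note the checks raise errors themselves rather than using Python's assert statement, which (unlike the always-on contracts proposed above) can be disabled with the -O flag:

```python
class Buffer:
    """A fixed-size byte buffer whose writes are contract-checked."""
    def __init__(self, size):
        self.size = size
        self.data = bytearray(size)

    def put(self, index, value):
        # Precondition (Eiffel: require valid_index: 0 <= index < size).
        if not (0 <= index < self.size):
            raise AssertionError("precondition violated: index out of range")
        # Precondition (Eiffel: require valid_byte: 0 <= value <= 255).
        if not (0 <= value <= 255):
            raise AssertionError("precondition violated: not a byte")
        self.data[index] = value

buf = Buffer(8)
buf.put(3, 0x41)              # a valid write succeeds
try:
    buf.put(8, 0x41)          # one past the end: C would silently overrun
except AssertionError as e:
    print("caught:", e)
```

The out-of-range write that in C becomes a silent memory corruption (and potentially an exploit) is here stopped at the call boundary, which is the whole argument for contracts that cannot be compiled out.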
Now that I’ve said all of that, I want to say that Eiffel is not exactly what I want to program this in. It is the best language that I have used. It embodies simplicity, correctness, and ease of programming. Programs are easier to debug and need less debugging. But I would like to change the programming paradigm as well.
One thing I would like to do is have the documentation embedded in the programming language. (Now I’m sure that any remaining programmers are tuning out.) I think there needs to be some way that the documentation and help are programmed right into the programming language. It should not be a separate thing. I hate documentation as much as the next programmer: here you’ve gone to this incredible amount of work writing this great program, and now you have to go back and document it, which may be as much or more work than writing the program. It sucks! So I think it would be better if creating the documentation and help were built right into the program, in the same place. Then it would be done at the same time as the program.
And along those lines, I think it should be such that you can embed other types of things besides text in the program itself. I think you should be able to just put a drawing, a chart, or even a picture right in the source code. I think the source code should just be another object which can contain any other types of objects.
One thing that occurred to me about storing the code as objects instead of text is that you could display/edit the code in your preferred format. If you like the curly brace scene as seen in the C family of languages, you could view the program like that:

if (a == b) { c++; }

It could even prevent all those arguments over where the curly braces go:

if (a == b)
{
    c++;
}

And if you prefer an easier to read syntax:

if a = b then c := c + 1; end
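One way to picture this: the program is stored as a small tree of objects, and each reader's editor renders it in their preferred surface syntax, so brace placement becomes a per-user display setting rather than a property of the file. A toy sketch, where the node classes and renderers are invented for illustration:

```python
class If:
    def __init__(self, cond, then):
        self.cond, self.then = cond, then

class Eq:
    def __init__(self, left, right):
        self.left, self.right = left, right

class Assign:
    def __init__(self, var, expr):
        self.var, self.expr = var, expr

def render_c(node):
    """Render the object tree in a C-family surface syntax."""
    if isinstance(node, If):
        return f"if ({render_c(node.cond)}) {{ {render_c(node.then)} }}"
    if isinstance(node, Eq):
        return f"{node.left} == {node.right}"
    if isinstance(node, Assign):
        return f"{node.var} = {node.expr};"

def render_eiffel(node):
    """Render the same tree in an Eiffel-like surface syntax."""
    if isinstance(node, If):
        return f"if {render_eiffel(node.cond)} then {render_eiffel(node.then)} end"
    if isinstance(node, Eq):
        return f"{node.left} = {node.right}"
    if isinstance(node, Assign):
        return f"{node.var} := {node.expr}"

tree = If(Eq("a", "b"), Assign("c", "c + 1"))
print(render_c(tree))       # → if (a == b) { c = c + 1; }
print(render_eiffel(tree))  # → if a = b then c := c + 1 end
```

The stored object is the same in both cases; only the rendering differs, so two programmers arguing over syntax are literally editing the same code.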
Object Editor
Because everything is stored as an object, there needs to be a simple editor, something very similar to a text editor, that allows you to view and manipulate objects. It should work on any object (assuming you have permission to view and edit that object). It of course won’t be able to view or edit parts of the object that are encrypted, unless you have the key.
For example: many programs have a configuration file; now they will have a configuration object, and you can change the configuration this way, similar to editing a configuration file with a text editor today.
And the OS Has Continuity Too
I think the same thing should be possible with the whole operating system that is done with the apps. It should restore its saved state when it starts up. It should basically come back to where it was when you turned it off.
My memory is so bad that I pretty much have to leave my machines running all the time. That has been one nice thing about Linux: I have been able to leave it running for months, and when I come back it’s just as I left it. The bad thing is when the power fails. It usually takes me an hour to get back some semblance of what I was doing, and even then it is never exactly everything I had going on. I just can’t see why it’s not possible to have the machine come back in almost the same state it was in when the power was lost. Obviously if you are in the middle of typing something, some stuff is going to be lost. But why do we have to start over from nothing?
Modules and Software On Demand
One of the problems I’ve had on several occasions in Linux is the installation of software. Often I have wanted to install a piece of software to do one function. But when I went to install it, say via RPM, it had a dependency list a mile long. Sure, “apt” and “yum” can handle the dependency thing in most cases. But it still bugs me to have all of that additional stuff, that I never use, on my hard drive wasting space.
As an example the other day I had done a minimal installation of SuSE on a machine (because it had very little disk space). I wanted to use K3B just to create data CD’s and DVD’s. However, when I tried to install it, it wanted MP3 libraries, Ogg Vorbis libraries, FLAC libraries, the list was quite long. I did not need any of this stuff on that machine, it was completely superfluous.
I think the software should be set up in smaller modules which will work independently. I further think that only a minimal amount of software should be installed on the system when it is started up. Then when you need to perform a certain task that software is installed automatically (with your authorization of course) and ONLY the software you actually require gets installed.
I alluded to this before, but I want to mention it again. I would like to develop a mechanism for independent modules to communicate with each other, similar to the way in Unix/Linux you can hook the standard input and standard output together in a pipe. But it would have other types of interfaces (audio/video/whatever) that you connect at run time to perform some other function. For example, if you had, say, a filter plug-in, instead of just being able to use that plug-in inside the audio editor, you could use the plug-in at the OS level and plug it between the output of some device and the mixer. Or perhaps you could plug it into the mixer.
Can’t Change Everything
While it would be tempting to try to change everything about a computer to see if it could be improved upon, I have decided there are limits to what makes sense.
The hardware is, well, still “hard”. It would be expensive to change the hardware, and it would make it impossible for anyone without the special hardware to use the software, making the whole point of open source software moot. So we have to stick with the existing hardware.
I feel like the same holds true for the networking. It doesn’t make sense to use something other than TCP/IP. TCP/IP seems to be flexible enough to support everything I have envisioned. Then we can communicate with the rest of the world. That can only be a good thing.
Endian-ness
One of the problems that I have not decided how to handle yet is the endian-ness of different machines and data. Since the machine has access to the class definition of each object, it could automatically convert data to match the current processor.
The dilemma is which order it should be stored in. Or should it always be stored on disk in the format of the CPU the disk is connected to? It is tempting to always store the information in one format, say big-endian, to match the network byte order. But since the most common machines (x86) are little-endian, they would always have to take the hit.
Another alternative would be to do as is done now and store the data on the hard drive in the native order for the CPU. This has speed benefits: the data only needs to be byte-swapped when it goes over the network.
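The trade-off can be illustrated with Python's struct module, which exposes both native byte order and big-endian "network byte order"; either way, a reader that knows the layout (here, the format string standing in for the class definition) recovers the same value:

```python
import struct
import sys

value = 0x12345678

native  = struct.pack("=I", value)   # native order, as stored on local disk
network = struct.pack("!I", value)   # big-endian network byte order

# On a little-endian x86 the two byte sequences differ...
if sys.byteorder == "little":
    assert native != network

# ...but because the layout is known, reading back always agrees,
# which is what would let the OS convert objects automatically.
assert struct.unpack("=I", native)[0] == value
assert struct.unpack("!I", network)[0] == value
```

In the native-order scheme, the swap happens only in the second unpack, i.e., only when an object actually crosses the network.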
User Interface
I haven’t worked very much on the details for the GUI. I would certainly like to go beyond the 2D interface. Sun’s Looking Glass project looks kind of fun. It will be interesting to see what happens there.
I had envisioned something beyond that, where you didn’t really have windows but more like different rooms, like a music room, a library, a tv room, etc. But as I said I haven’t put as much effort into this as the underlying pieces.
One thing I would like to get out of building software from separate, connectable modules is that an app could have different interfaces without the app itself having to change.
For example, an application would have a control connection to it. It would not have the interface coded into the app itself. Then you could connect an interface, either GUI or command line, to it. Depending upon your preference.
Where to Start?
This is the big question. Should I take the top down approach or a bottom up approach? I read once that “Real programmers do it middle out!”. Or perhaps start at both ends and see if I can meet up in the middle somewhere.
Actually my current plan is to build the Object store on top of an existing OS. Find out if it is going to work at all and how well it works. In fact I don’t see any reason that one couldn’t build practically the whole OS on top of another OS, similar to User Mode Linux.
One other thing I plan to investigate is how well the object paradigm works at the lowest levels of the operating system. It seems like a natural fit, but I could be wrong. One of the things in Linux that seems kind of strange to me, although it seems to work pretty well, is the simulation of a SCSI device for IDE and SATA drives. It does seem like extra work.
It appears to me that this came from programs such as ‘cdrecord’ wanting to talk to a SCSI device, although I’m sure there are other good reasons. But in Object Oriented land it would seem like the application (cdrecord) could be written to talk to a generic object (CD-Drive), which could be instantiated as either a SCSI or IDE object. So I want to investigate this further to see if that would be possible, or if I’m up in the night.
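A sketch of how that might look, with a hypothetical generic CDDrive interface and bus-specific subclasses; the command strings are purely illustrative, not real SCSI/ATAPI traffic:

```python
from abc import ABC, abstractmethod

class CDDrive(ABC):
    """The generic object that an application like cdrecord would see."""
    @abstractmethod
    def write_block(self, data: bytes) -> str: ...

class SCSIDrive(CDDrive):
    def write_block(self, data):
        # A real driver would issue a SCSI command here.
        return f"SCSI WRITE(10) {len(data)} bytes"

class IDEDrive(CDDrive):
    def write_block(self, data):
        # A real driver would issue an ATAPI command here.
        return f"ATAPI WRITE {len(data)} bytes"

def burn(drive: CDDrive, data: bytes):
    # The application only ever sees the generic interface.
    return drive.write_block(data)

print(burn(SCSIDrive(), b"\x00" * 2048))  # → SCSI WRITE(10) 2048 bytes
print(burn(IDEDrive(), b"\x00" * 2048))   # → ATAPI WRITE 2048 bytes
```

The burn function never learns which bus it is talking to, so no SCSI emulation layer is needed; the instantiation decides.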
I also plan to use as much of existing Open Source as possible. It makes no sense to have to write all device drivers from scratch (and it would take years). So I want to see what drivers can be used from Linux or any of the *BSD’s and perhaps wrap them with an object interface. This seems like the beauty of Open Source, that you don’t have to start completely from scratch. At a minimum you can at least look at how someone else has done something and try to improve upon it. Standing on the shoulders of giants if you will.
Wrap Up
Well, this is already longer than I had intended it to be. And there are more details that I would really like to include. I have started a project at SourceForge for further details. It will be at http://nwos.sourceforge.net/ when I get it set up. I intend to put up copies of my hand scribbled notes, some drawings and some more details in the next few days.
As I stated in the introduction, none of this may actually work. Or it may be so excruciatingly slow that it’s unusable. On the other hand with processors now running over 3 GHz, I cannot imagine that would be the case. Seriously, when I’m balancing my checkbook the CPU is probably not too burdened. I know sometimes (like rotating photographs) that you need all the speed you can get. But in the general case I would gladly trade some horsepower for some ease of use and reliability.
In the end this is really all about the famous “Itch I have to Scratch”. Since I deal with computers all day, almost everyday, I constantly wonder “does this have to be this complicated?” and “is there a better way?”. Perhaps there isn’t, perhaps computers, difficulty, and complexity all go hand in hand. But I just have to see for myself. I really cannot give up the idea that there must be a simpler way. It keeps haunting me. Therefore, I have to prove to myself that it either can or can’t be done.
When I submitted my GrannyOS project, specifically designed for old people, they rejected it.
Agree with most of the general things that you pointed out. You are not alone; I know quite a few people with the same goal and same direction (bringing the OO concept down to the OS level) but with some slightly different plans. Just to get to know your ideas better, can you tell me about the general architecture that you are thinking about and the features that you see as the main requirements you want your OS to address?
Just reviewed the project page on SourceForge; it seems like you have picked Eiffel as the language of choice besides C and assembly (which will probably be used in low-level stuff only). I would like to know the rationale behind this decision. (Just to say that I only know Eiffel is an OO language which is statically typed and has a few fans.) It would be great if you could tell me the other languages you possibly considered before choosing Eiffel.
Do you have any particular ideas about other stuff (user interface, file system, distribution model, multiprocessing, …)?
I don’t remember the Granny OS, but if you submit your detailed plans for a new OS, that are well thought out, we’ll post them.
if my understanding is correct, plan 9 treats everything as a file, including other servers and hd’s on a network.
you might want to look into some of their whitepapers for some good starting references on your fs.
Erm, if you haven’t checked out Squeak (a smalltalk computing environment) and SqueakNOS (Squeak with “no” OS underneath) you really should. They’ve been doing a lot of the stuff you mention for years.
You have some great ideas.
Unfortunately (or fortunately, depending on how you look at it), most of what you mentioned has been thought about and implemented in full. Check out the Eros project for most of what you want. KeyKOS was the precursor to Eros and also has many of the ideas (since ~1982).
It features:
– a single unified namespace for objects (instead of one for the in-memory system and a file hierarchy for the on-disk system)
– passive and continuous checkpointing so that you can pull the plug, and 30 seconds later you are where you were at (exactly)
– security between objects (which are technically called capabilities)
– etc…
It does not have a relational model built on it, but I would imagine you could do that. I doubt it has a competitive GUI, but its point is research, not widespread adoption.
Anyway, I would give you a link, but you can google as well as I.
Give it an honest read (especially the SOSP paper), and you will see the similarities. Hope that helps.
Is it just me, or does it strike anyone else that the people who can’t stand having to remember where they put files are the same people who can never refer to things by name? Instead they refer to things as “bring me that thingamajig” or “where’s that hooseywhatsit”. The entire concept of the file system, directories and files, denotes referring to things with exactness. If you go to a row of file cabinets, can you just reach in and grab something and automatically get what you want? You need to know which cabinet, which drawer, and the label on the file. That’s the concept. I fail to see this as a drawback. I understand that in the modern system we have a seemingly unending number of files littered about, which is why I do support adding metadata to the fs space. However, I do not support throwing all our data into one giant DB system as many people have suggested. (thankfully the article didn’t)
i guess my point about not-an-os is that a good object database should be pervasive beyond the os level. it should exist on many os’s and be taken advantage of by applications in many domains, not simply by programs for one os. to inherently tie a good, presumably networkable object database to a single os would be criminal, and would ensure one’s own uselessness.
ultimately the object layer should become more and more what a user would consider the operating system by its mere functionality. the kernel just becomes another abstraction layer your object db must adapt to, and gui becomes a layer on your odbs to manipulate objects.
…when will Linux see some of these ideas implemented? :/
So I don’t know where to start.
Still, the idea of building it as a kind of virtual machine eliminates your endianness problem, in a way, since the virtual machine would deal with that. Also, the virtual machine could then run on top of any other OS…
The “everything is an object with defined inputs and outputs that you can link together (either from the command line or visually)” idea basically takes the Unix command-line philosophy into a visual environment: small parts that together become a bigger whole.
I also like the idea of having the state of the program continually saved to disk. Still, it’s a bit of a performance hit. But outside of gaming there are few tasks these days that really tax the system; most of the CPU time is spent waiting for user input anyway.
The separation of GUI and app is also interesting. This would allow anyone, in theory, to write their own desktop, tailored to how they work. I have been thinking about a similar system ever since I tested that Haystack project. It would let me call up an image manipulation app when handling any graphical object, anywhere in the OS.
Folders become basically just lists of objects, and the same object can be in many lists (kind of like the way you can link the same object to many places in the filesystem in Unix). In fact, now that I think about it, this is basically Unix, with a microkernel (object drivers) and a database-based filesystem (should be simple enough, but I would rather wrap it around a flat filesystem in case of system recovery, kind of like how ext3 is ext2 with a journal and can be mounted as ext2 if needed; you mount it with the database loaded or not). All inputs and outputs of the apps must be data streams, however (unless it’s sound or imagery, it should be ASCII or Unicode), and the GUIs must be able to interpret these data streams. Hmm, why on earth am I thinking Tcl/Tk here? Also, it kind of reminds me of KParts…
Get in contact with freedesktop.org and start work on a desktop standard for unhooking the GUI from the base app core and the needed data streams. This is basically the only thing needed (outside of the database-like filesystem) to bring your ideas to most existing Unix-equivalent OSes…
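The continuous save-state idea mentioned above can at least be sketched at the application level. Below is a minimal, hypothetical checkpoint pair in Python (the helper names are invented): pickle is the serializer, and an atomic rename ensures that pulling the plug mid-write never loses the previous checkpoint.

```python
import os
import pickle
import tempfile

# Hypothetical checkpoint helpers -- names are illustrative, not from any real OS.

def save_checkpoint(state, path):
    """Atomically write the program's state to disk."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX: either old or new checkpoint survives

def load_checkpoint(path, default=None):
    """Resume from the last checkpoint, or start fresh if none exists."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default
```

A real OS-level implementation would of course checkpoint whole process images transparently; this just illustrates the save/restore contract.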
*rant off*
Yes I did submit.
Many times with all the details.
I gave up waiting and redirected my efforts on other projects.
Basically it was that:
An operating system for the elderly.
That is very accessible and simple to use.
And I would do some market-research to see which programs would be their favourite (Gramps? Easy Game? Safe browsing)
Very visual and iconic.
Email would be spam protected.
And so on.
Babyishly easy to use and operate.
Totally Free.
And possibly with free support too – if I or others can find the extra time.
I would have to dig up my documents; it’s been a long time now.
“Erm, if you haven’t checked out Squeak (a smalltalk computing environment) and SqueakNOS (Squeak with “no” OS underneath) you really should. They’ve been doing a lot of the stuff you mention for year”
Being a Squeak fan and user for a while, and being fair at the same time, I should point out that Squeak (regardless of how innovative it is) is still far from being a practical operating system, and the SqueakNOS project’s last update was in 2001; the number of developers never reached four, and after almost four years it is still in pre-alpha status. Although Squeak has many neat things, I think starting another project and reusing those ideas for operating systems should not be discouraged at all.
KeyKOS did a good job as a proof of concept, but as you mentioned it was just an academic project, and we don’t see anybody using it in practice nowadays (and it wasn’t even the first; Apertos and many others started before it).
If the fact that somebody has done something similar before could stop everybody, we wouldn’t have Linux, Apple, Ferrari, or any of the other cool things that we have today, because most of them were based on previously existing ideas.
Summary of the article: current operating systems still leave sooooo much to be desired, you want a)… b)… c)… d)… etc. So much to do, so little time.
There appear to be many people like you (and myself, I might add), and many projects, with many hobby OSes at best getting to the bootloader or ‘hello world’ stage. Just look around on the web, and you can find a huge number of dead/abandoned projects.
I suggest you spend significant time determining what exactly it is you want. IMHO that is the hard part. Many hobby OSes get rewritten from scratch time and again. To me, that’s just a sign the creator didn’t really know what he wanted to make (or changed his mind continuously during the rewriting). Of course, if rewriting time and again works for you, as a way of finding out what you want, then go for it. When you know exactly what you want, you sort of have a definition of your system. When it’s all vague, you’d better break things down further. Limit the scope of your project to what you can chew (at least for initial attempts), and do that well. Making a small but solid improvement somewhere is better than trying everything at once but not getting anywhere. Maybe you think you want an OS, but find out you really want an alternative UI or desktop. Knowing that can save you a lot of time. As Carl Sagan once said: “To make an apple pie from scratch, you must first create the universe”.
It may be a better idea to join an existing project than to try to do everything yourself. There do exist some really interesting (and functioning!) projects out there. Exokernels, full OSes based on higher-level/safe languages (i.e. no C or C++), standalone Forth systems; you name it, it’s been done. So: reuse the work of others where possible. Need a bootloader? Sure, you can write your own, but why not use e.g. GRUB to start with? You can replace it later if you want.
The ‘OS Resources’ link on the left might be some start. And of course… Google is your friend.
If it gets reviewed and accepted I will get this going:
But it was rejected twice(?) pending further review.
Registration description: An operating system (or distribution) running on a Linux kernel. The user base is the elderly (60+), although the system may appeal to other groups (complete novices and internet cafes). The system is designed to be highly visual and easy to use. There are no menus, just clickable icons/links allowing users to navigate up and down a task tree. There shouldn’t be more than 7 icons at any given time. The whole design should be very intuitive, and the testers old people, possibly without prior experience with computers. The tasks that can be accomplished by this O/S should be of interest to the elderly (diaries, emailing, message boards, live radio, etc.). The system should at all times hide unnecessary complexity from the user. Security should be paramount, shielding the users from hackers and viruses. The O/S should be very light and stable so as to be installable on older machines. The installation should have an “advanced option” for technical volunteers to fine-tune the system when assisting the user. The O/S is static with no upgrades (except for any critical security pack). The installation CD itself can be “self-installing” (unless he/she opts for the advanced mode). There will be a run-from-CD version of this O/S for those who just want to try it first. The project should be tested by the old-age community, with feedback coming from educators and psychologists alike.
If anyone wants to copy the idea – I don’t mind.
Just make it very nice and accessible.
Do you mean rejected by OSNews or rejected by Sourceforge?
AFAIK OSNews articles do not require a “registration description” so I guess you mean SF.
Please note that David is responsible for OSNews not SF!
And AFAIK SF accepts almost everything. They actually accept even the most useless, idiotic crap. You should read the SF docs more carefully. Probably you just entered invalid data somewhere.
The guy complains about all the things about C which I love: the amount of control it gives you over things at a base level. He also complains that it lets you shoot yourself in the foot; what, are programmers all stupid?
No I submitted to SF
Sorry for the confusion!!
Rejection Cause: Unable to approve on initial review; further review pending.
I don’t know I then tried a few times more and then gave up.
Thanks for clarifying things.
Wow, J. Edwards, your ideas are so similar to ones I used to entertain… it’s scary. Even down to the Eiffel stuff. I read Bertrand Meyer’s big OO book myself and was very impressed.
Since then my opinions have changed a little. I’m less enthusiastic about the design by contract idea. Design by contract is useful for documenting what exactly you intend your methods to do, but I think it’s less useful in making sure your code is error-free.
I think we can expect more of our methods. Design by contract just puts tests around them. It would be better to prove the correctness of pieces of code before they are run.
Tall task, yes. I suspect that programming has much to learn from math, especially discrete math. When you build this new OS, perhaps you can prove parts correct?
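For readers unfamiliar with what design by contract looks like outside Eiffel, here is a minimal sketch in Python. The `contract` decorator and its `require`/`ensure` parameters are invented names for this illustration, loosely mimicking Eiffel’s preconditions and postconditions; they test assertions at runtime, which is exactly the distinction drawn above from proving code correct in advance.

```python
import functools

# A toy design-by-contract decorator: check a precondition on the arguments
# before the call, and a postcondition on the result after it.
def contract(require=None, ensure=None):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if require is not None:
                assert require(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if ensure is not None:
                assert ensure(result, *args, **kwargs), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return decorate

# Example: a square root that documents and checks its own contract.
@contract(require=lambda x: x >= 0,
          ensure=lambda r, x: abs(r * r - x) < 1e-6)
def isqrt_float(x):
    return x ** 0.5
```

Calling `isqrt_float(-1.0)` fails the precondition at runtime, which is useful, but as noted above it is a test around the method, not a proof that the method is correct for all inputs.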
I’m having a tremendous problem finding anything in your article that I agree with, and most of it I’ve heard many times before. There’s just too much for me to begin commenting, at least for now. So I’ll just end with a big:
Good luck!
(You’ll need it)
Bah. Almost everything in this article is technical. You are never gonna build a dream OS if you approach everything from a technical point of view.
Be creative and try not to look at existing stuff!
C libraries are what he meant. Linux kernel coders don’t use the standard C libraries because of overflows.
It seems to me that what the author wants is exactly what the guys at http://www.unununium.org are trying to implement.
Actually MacOS has a lot of the features you mention, or will have some in the new release (Tiger): http://developer.apple.com/macosx/tiger/
It will be possible for developers to store information in a SQLite database, using an easy-to-use API.
Meta data can be used to relate objects (files), even more efficiently using smart folders in the next OS X release. You will be able to create queries using a simple interface and all related files will be shown inside the folder whenever you open it: http://www.apple.com/macosx/tiger/search_finder.html
Automator in Tiger can be used to pipe the output of one app to another app; again, this will be very easy to use: http://www.apple.com/macosx/tiger/automator.html
In Mac OS, when applications are coded in Cocoa (Objective-C), the interface is quite separated from the controller and model classes, and this will be improved even further in Tiger (OS X 10.4) with Core Data. Whenever one designs an interface in Interface Builder, one also creates actions (for buttons, etc.) and outlets (for views, etc.). The controller interacts with the outlets and actions (the way the MVC pattern is meant to be used): http://cocoadevcentral.com/articles/000080.php
This proposal doesn’t seem all that simple to me. Particularly the fine-grained way the operating system gets into the business of managing objects could grow into a source of plentiful complexity both of implementation and API.
For an example of radical simplicity in OS design, have a look at http://www.oberon.ethz.ch/
Your proposal reminded me strongly of what (little) I’ve read about OS/400. In particular the ram as cache as persistent storage idea made me think of OS/400’s “single level store”:
[quote]
The AS/400 stores objects in a single address space, encompassing virtual memory and the file system. Each object has a single permanent address to which it is referred by all users and processes. Real memory functions as a cache for objects located on disk, managed by the SLIC, invisible to the MI or to OS/400. Programs access objects via the object’s permanent address, rather than copying portions of an object into a user’s address space, then using the temporarily assigned address in this address space. The system assigns all objects a permanent address in the virtual address space. Objects are referenced by name or by pointer, with the virtual addressing mechanism being responsible for moving the object into real memory.
[/quote]
You’ll find that paragraph and more at http://pages.sbcglobal.net/vleveque/AS400_Arch.doc
I think using a VM is not very good, since it will have a big performance hit.
I am thinking of creating an OS (after I tried from scratch and only made a boot loader and a basic kernel: text I/O, memory allocation, process management, and basic read/write from the HD)
based on the FreeBSD kernel, but changing it from a monolithic kernel to a nanokernel that runs FreeBSD on top of it (like Mac OS X),
while I change the X server (it is a very old design; it needs a big overhaul because it is slow, takes a lot of memory, and is very complex) and the WM on top of it to a simpler one more “connected” to the OS (like Mac OS X), while still keeping compatibility with FreeBSD, the Linux-base module, and the Unix-like structure.
One thing that I think is very important is to change the structure of the root partition, because it has a lot of directories and that makes it very hard to see where every file needs to be.
And I agree with the idea “Keep It Simple”, because that way I not only have code that can be read, it will run faster and have fewer bugs.
And I recommend you use an existing kernel and not reinvent the wheel.
Good luck, and take only the best ideas, not all the ideas, so you can focus on the best way to do it.
It’s nice to see another poster familiar with UNIVAC operating systems. That’s what I’m still coding in for a living, though the OS is now called OS2200. With a decent shell (CSHELL), file manager (VSH), scripting language (CALL), and fullscreen text editor (UEDIT), it’s still a relatively nice environment.
I’ll get back to actually reading your lengthy article, but I had to toss out this silly comment first. 🙂
However, the problem with the approach is that though it adds convenience for programming (no file I/O code required, etc.), each object-oriented language has its own semantics and quirks. The biggest challenge will be to converge on a set of OO semantics that all languages running on the system conform to. Also, it might be risky to stick to the OO paradigm if the OS is supposed to be used with other programming models too, e.g. functional programming or the newer distributed-communications programming models. All of this gets pretty messy. However, if you want to look more into data persistence and other cool stuff that could go into an OS, take a look at tunes.org. I am sure you won’t be disappointed.
A good way to approach this problem would be not to start making an OS from scratch, but to use an existing OS like Linux and make a new “userspace” in it. Currently the userspace in Linux is predominantly ‘C’. All binaries ultimately depend on libc, and you know how the whole thing works. Anything else (Python, Perl, OCaml, Lisp, etc.) requires a ‘C’ layer beneath it to function. You could remove this dependency completely by making a compiler and linker toolchain. Consider Java as an example (though not a very good one, as it does have C dependencies). Java has its own language, compiler, and linker. Hence it is possible to imagine a userspace implementation of Java which loads and runs its own programs natively (using syscalls only) and does not depend on the ‘C’ userspace. Then you might have the freedom to do cool API and language tricks.
I previously thought a lot about these issues. I am personally in favour of “automated” data storing and retrieval (data import and export libraries) rather than pushing the abstraction all the way to the language runtime level. Using RAM as a cache for the disk would be a cool idea, but it also means strictly using memory images on disk, which might not be a good idea for a number of reasons. However, one might still think of using a data exporter to flush the moment some RAM needs to be reacquired (instead of pushing all the raw memory data onto the swap).
Before I conclude, let me just warn you that it is very easy to fall into the trap of “object orientation is convenience” myth. However, a deeper understanding of typing theory and how OO fits into it will tell you just how incomplete the current OO paradigms are.
Anyways, this post is not a means to discourage you. Hope that your ideas turn into something concrete and well thought out.
Ritesh
Control is good if you’re writing low-level stuff like a kernel or drivers, as there you need to map directly to addresses. But when you’re writing an app you don’t want to worry about memory cleanup or buffer sizes; you just want the app to work. This is the essence of what he was trying to say, I think. Just look at the number of remote exploits that are buffer overflows (virtually all of them). If the system did automatic size checking on all traffic, that attack method would go away (although at a cost in performance).
Fine control is good, when you need it. Other times it just gets in the way of what you are trying to do…
In many ways this is an ideal situation.
If you have, essentially, a boatload of non-volatile RAM, you never have the issue of serializing data out to a different medium (at least not as an everyday task like loading and saving a file).
You simply work on your internal structures just like any other piece of internal data. Say your program has options that you’d normally store in something like the Registry, or an XML file, or whatever.
In a pervasive system, this problem is gone. Once a program is “installed”, it’s essentially there forever, always “running”, but simply “swapped out” when idle.
The classic example of this is something like a Smalltalk image. “Everything is an object”, and always alive, running, and kicking. When you snapshot the image, you “freeze it” in time for revival later.
Now, imagine the new 64-Bit systems, with their ungodly address spaces. Imagine taking your 120GB drive.
int fd = open("/dev/hda", O_RDWR);
void *heap = mmap(NULL, 120ULL*1024*1024*1024, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_SHARED, fd, 0);
and away you go…
It seems like a panacea, but while it can be argued that writing programs would be easier because we no longer have all of these marshalling problems of moving data across the boundary from “disk” to “ram”, in the end you would end up with the exact same problems that you have today.
A zillion “files” (or, now, objects), all in different formats that require particular programs to understand them. You still wouldn’t be able to find anything, you’d still have problems manipulating the files in an “easy way”, and you’d still have vendors who may write code to the letter of the “interoperability” rules but are still horrible “citizens” in the computing environment, because they don’t go all the way with the spirit of interoperability.
Think about Microsoft’s object models for its documents. Microsoft doesn’t document its Word format, but it does document its programming interface, and any VB programmer on the planet can make those documents jump through hoops by leveraging Word directly to manipulate the document, just like you’d do to manipulate an XML document.
The problems aren’t necessarily the systems themselves, but simply the domain of the problem.
Simple example: I have 700,000+ files on my machine, not including those stored in ZIP/tar files. As a whole, 700,000 disparate, loosely grouped, but mostly unrelated files are just a flat-out hard thing to organize and wrap your head around.
The two framed pictures of my wife on my desk are easy to organize, but two large boxes of thousands of random photos at home are not.
It’s not a systems problem, it’s a data problem.
I would think that the creation of a directory structure sorting files by “last touched” date instead of by location (each with a pointer back to the original file) would be a good way to handle something like this at the filesystem level.
That would make referencing “the spreadsheet I played with yesterday” a relatively simple task.
A background process constantly creating other types of indexes into the filesystem might also be interesting and useful, especially if the files contain some form of metadata indicating creator, general file type, or other bits of information which describe the actual nature of the file (and not just its name, size, and date).
Perhaps a number of filesystems already do this type of thing, I really don’t know…
I think the evolution from hierarchical filesystems toward database-like filesystems has already begun (Apple: Spotlight; GNOME: Storage; Microsoft: WinFS; and many other attempts).
It might be slower than traditional filesystems in the beginning, and it might at first be just a layer on top of, say, ext3 or NTFS, but it seems it will find its own separate path once database technology allows access to stored objects at speeds comparable to the hierarchical organization of files.
I’m sorry to say, but this object model in the article sounds very familiar. Where have I heard of it? Right: Microsoft’s COM and ActiveX. Everything is based on objects; examples like getting info out of a media file are currently implemented as DirectShow (=ActiveX) filters in Windows; (automatically) installing missing (generalized) objects reminds me of ActiveX in IE; saving objects to disk is equivalent to ActiveX object persistence; using/storing objects in remote locations is similar to DCOM; etc.
Reads just like Plan 9, to me.
Just as your article states, everyone who has used a computer for a long time has dreamed of changing little things. A little tweak there, a fix here. It all boils down to actually doing it. I’m all for dreaming up an OS (I’ve done it myself), but if you actually pull it off, or at least some of it, then you will really shine. It seems you have taken the first and brave step, though: allowing other people to know about it. I let a few of my friends know about my OS (similar to your OOP design, but with more VM/User Mode Linux thrown in to allow it to talk to, or be, any other OS when needed) and was laughed out of the room.
Hmmm… ok, I’ve read the story, here’s how I think about this kind of stuff.
In the olden days of computers there was no kernel; programmers spoke directly to the hardware. But as computers got faster and the abstraction proved useful, we added another layer with the kernel API.
Then on top of kernels came other layers, like programming libraries, a GUI layer, a network layer, and these provided abstraction.
Some abstractions have come along but weren’t deemed necessary at the time, so the layer failed. This proved that abstraction isn’t always good, whether because the programmers/managers didn’t understand the value, because it added complexity, because of bad code, etc.
An operating system is an ever changing thing. It’s not just the kernel layer, the registry, the gui, browser… recently it’s also been the VM layer, of Java and .Net (and, I guess, Parrot).
These don’t sound like revolutionary ideas to anyone, I hope. I didn’t find this story particularly revolutionary either, but I think that’s a good thing. Good technology tends to be evolution, not revolution.
So with the current stack of operating systems layers it actually sounds like what you’re wanting here is feasible, as opposed to an operating system dream.
With a VM layer in the operating system you can serialize objects to disk, and you’ve got a working, time-tested OO model in Java and .Net (maybe IKVM is the bridge). Also, both languages can do (as I understand it) aspect-oriented programming, functional programming, etc.
So I went through and picked out a few ideas to see whether they were plausible at the vm layer, or possibly above as a library.
Data storage
Serializing objects is supported in .Net/Java. Can be done now.
Where’s it stored?
The unix home directory still seems good, and depending on the setup can be hosted remotely on Subversion which provides versioning. Better gui integration is needed, but Subversion does it well for me.
Maybe it needs more pervasive use of OpenLDAP to provide location independence, as the NZ Police Force does.
Indexing… there’s a lot of software for this. We just need it better integrated into the GUI. It’s not just matching strings in files; we need structured responses from email data, chat logs, everything.
Security
Sure, programmers could say that the kernel must understand OO in order to securely implement the OO security model that you propose; however, lots of applications implement security in databases and code, not at the kernel level. If it’s done correctly, as it is done every day, then it could be done at any level.
SELinux doesn’t get to the object layer, I think. But then I don’t think it should be done there, it sounds more like a VM level security like sandboxing.
Network connections being encrypted, I completely agree. Most people beyond dialup can support it.
Ram Cache of OO
I think generally this is a good idea — in that you’re considering ram as a version that’s derived from disk. I haven’t heard of any performance problems with this yet but I’d like to hear some benchmarks.
Deeper down
I think this is a bad idea. I don’t see a benefit from combining the layers here (or maybe you haven’t articulated what benefits this would give). I don’t think these layers should be blended. I don’t think an OO kernel would be practical, because I trust Linus’s opinion that they do OO-type stuff where they want, but often it’s in the way at that layer.
I think it should be done at the VM layer.
Code syntax / design by contract
I don’t think code syntax is something that should be expressed in OO.
Design by contract is a good idea, but I think it’s a social thing that shouldn’t be forced.
Object editor
I think that beyond mime-types we’ve got more complex objects coming out that we need editors for. I mean, an object store may be as simple as a registry/gconf, but with an array of vorbis audio bytearrays do you have an “object editor” or do you just pass them to Audacity? I think, obviously, you’d pass them to audacity — because editing them isn’t something easy to write. But we do need a way of editing, say, serialized Vorbis Audio files and saving them back into the serialized version. So it’s more a framework that handles this, I think.
The framework that handles serializing, updating, loading into ram, would be interesting to see. I think the caching in ram is already done by many operating systems though, if a file has been touched recently — right?
Endian
VM handles this.
Interfaces
As text is, by its nature, two-dimensional, I think most of the time will be spent looking at a 2D figure on screen. That 2D figure should be able to be spun around, maybe with some badass lens-flare GIMP effects. In fact, the GUI should be 3D to support scaling, rotation, and transparency of windows. But I just don’t think it’ll help users for most current desktop applications, because they’re 2D, and I’ve never heard of or seen a mockup of anything that looked like a 3D interface that people would use (but I’m willing to be proved wrong).
The Looking Glass project is the same old 2D-windows-spinning-around thing… that’s good tech, and I’d want that at the core of the GUI, done with OpenGL and stuff like that.
In conclusion…
They don’t sound like big dreams, and I think you’ll get them all in ten years or so.
While I haven’t gotten around to playing with it myself yet, the versioning mechanism you envision sounds very much like what currently exists in (Open-)VMS.
I’d imagine that if this ever did happen, it’d be sufficiently revolutionary to require its own programming language.
“Id imagine that if this ever did happen, it’d be sufficiently revolutionary to require its own programming language.”
Huh, why?
have been done before, somewhere. The trick is to bring them all together into one whole system without making a mess, stepping on too many toes, and, lastly, making sure that it plays nice with everyone, not just the creators and whatever other stuff they supply…
This is in fact being done in Linux (as the whole OS; if one strictly wants to label it the GNU way, it would be GNU/Linux/(KDE|GNOME)/whatever other big bits there are in there). Most tech in that OS is not revolutionary or innovative, but it is an evolution of ideas tried before that worked, bringing them together in a working whole. It’s evolutionary software rather than revolutionary software, and at the same time protected by a licence that both gives power to the community and protects the community from being exploited.
So what is needed now is to get Beagle or something like it to work, and maybe slap a system in there that allows me to take a KPart and run it inside a GNOME GUI, or something similar from GNOME. Basically the biggest idea in the whole article is separating the GUI from the app core so that the core can be called from anywhere, any time.
PalmSource has recently started to look into using Linux as the base of its next-gen Palm OS. This means they will most likely try to port in a kind of run-in-place system, so that you run the app from the memory area it’s stored in. This allows for typewriter-style workings, in that you can turn the system off, turn it back on, and the system will just continue from that location (kind of like sleep on a laptop, in fact, only without the power requirement).
First of all, a huge portion of your rant can be summarized as “object persistence.” This is an open field of research, and you might benefit from reading some of the numerous papers and articles on it. Several languages, including Java, have some academic or prototype persistence implemented for them, but these are certainly nowhere near what you would need.
There are some implementations that claim to do object persistence by mapping to an RDBMS (see hibernate.org), but this is not persistence in its purest form. Rather, it is a functional hack.
Also, I’m obviously not a researcher myself, so I’m not familiar with the advances in Plan 9 and Inferno out of Bell Labs. From what little I know, though, they do some of the OS-level things you’re talking about as well. Making Limbo objects in Inferno persistent actually sounds like it’s there, aside from some specific details. Be sure to check them out:
http://www.cs.bell-labs.com/plan9dist/
http://www.vitanuova.com/
As someone whose interests overlap with some of your ideas, I don’t discourage you from trying. But you should definitely prioritize your plan of action to tackle the highest-risk features first and work in incremental stages. That way you minimize the chance that a significant problem in a critical component will require you to rewrite everything else and compromise elegance, correctness, or functionality for compatibility. Or worse, start over or give up in failure.
I myself haven’t gotten any sleep this last month trying to think of a better OS in terms of ease and usability. Now, you’ve got some great ideas, but the point is to find other people to contribute too. Joining a project isn’t a bad idea at all; maybe Unununium, why not, it sounds promising.
But for me, Linux has already taken its route and is like a stone house where the only thing left to do is put on the roof.
If we need an iron house, then I think we should build the main thing again and name it binux or finux or whatever. Yet the main idea remains the same: grabbing the best ideas from around and putting them together (open-source software and the GPL are prerequisites for that).
Now, from a strategic point of view, what I see is that if Microsoft is to be taken down, we require a massive movement (such as Linux) to keep it all under one name. If this project pops up here and there, we lose the war…
So there’s your dilemma! To Linux or not to Linux?
“If we need an iron house then I think we should build the main thing again and name it binux or finux or”
His whole argument was geared towards not seeing new operating systems in the traditional ways (everything is a file and blah blah). We definitely don’t need yet another Unix clone.
About GrannyOS
I am not sure why the fact that SF did not publish your project discourages you. There are obviously thousands of websites that would have given you free web space.
I think it is a great idea actually. Go Granny Go.
Do you really think that you’ll be able to implement an entire OS yourself alone? Vaporware?
To say the least, I think it’s quite nice that an OS is to be implemented in Eiffel; yes, it is a great language, but I think this is the _WRONG_ move. First, why it is a good move:
a) A lot of people agree that Linux is complex; there are a lot of mechanisms and abstractions that could be expressed directly in an object-oriented programming language. I even _dared_ to ask this on lkml, and a lot of people did agree with me.
So as a means of expressing various abstractions, and for maintainability, it is a _very_ good move.
Now, why it is a bad move:
a) The guys on lkml also pointed out that the primary reason[s] they _hate_ C++ are:
1) It is not a natural language for them.
2) The code is _not_ as efficient compared to what a C (GCC) compiler can generate.
So though your code could be _highly maintainable_, it might not be efficient. Also, if you want to bring in a lot of developers, I think C++ might be a better choice.
—
Imanpreet Singh Arora
PS–> I am not sure why, after submitting articles, a lot of authors just go into hibernation. If you posted an idea, you obviously wanted some feedback on it, and as the author you need to respond to the questions that arise from the discussion.
Mail–> [email protected]
It’s gmail.com
…is of course a good vision. However, a while ago there was an article here or on /. about the rejection of a C++ patch for Linux. It would allow C++ code in the kernel (not force everything to be rewritten). The reason it was rejected was plain and simple: performance. AFAIK, Torvalds’ argument against it was the lack of good C++ compilers on the platforms Linux runs on. C does not have this problem, according to him. Hence the rejection of a C++ patch, no matter how lovely it may sound.
Same goes for a microkernel, but I guess we’ve all read that discussion from 1991.
For a good-performing microkernel, you might want to look at AmigaOS. I heard it’s known for both its flexibility and performance. It is a microkernel; I read that filesystem support is actually a set of libraries on the system (in userspace, that is). I don’t know if it’s C or C++.
I guess you could always design it for C++ and then write it in C using structs and some other hacks…
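A minimal sketch of what that “structs and hacks” approach can look like (all names here are invented for illustration): emulating a C++ virtual method table with a struct of function pointers in plain C.

```c
/* Hypothetical "vtable" of operations, like a C++ class's virtual methods. */
struct fs_ops {
    int  (*open)(const char *path);
    void (*close)(int fd);
};

/* An "object": data plus a pointer to its operation table. */
struct filesystem {
    const char *name;
    const struct fs_ops *ops;
};

/* One concrete "subclass": a RAM filesystem. */
static int  ramfs_open(const char *path) { (void)path; return 42; }
static void ramfs_close(int fd)          { (void)fd; }

static const struct fs_ops ramfs_ops = { ramfs_open, ramfs_close };

/* "Virtual dispatch": the caller never knows the concrete type. */
static int fs_open(const struct filesystem *fs, const char *path)
{
    return fs->ops->open(path);
}

static void fs_close(const struct filesystem *fs, int fd)
{
    fs->ops->close(fd);
}
```

This is essentially what the Linux kernel itself does with its `file_operations` tables, so “OO in C” is less exotic than it sounds.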
I consider kernel programming to be a subset of hardware programming. At that level of programming, the object-oriented paradigm of composing software is fragile, inefficient, and complex.
I strongly suggest you shift your thinking from the concept of everything being an object to that of a component-oriented paradigm. I also advise you to use a language that is small, light, structured, and fast. Assembly, C, and Forth come to mind. I think you are approaching the problem from an extremely high-level perspective.
From my observation, there haven’t been too many successful kernels designed from that kind of a perspective, at least in practice. And that’s another problem with object-oriented programming languages and paradigms. They only sound good in theory.
Free support? Ummm, no? Have you ever tried to support a complete novice, for free? It would cost far too much time.
I like this article, because it stipulates some of the key issues that made me start with JNode.org a long time ago.
JNode is a Java OS that is based on the idea of having an all-OO (Java) OS, and that tries to make things as simple as possible.
So contact me, maybe we can help each other.
Ewout
I think OpenCroquet may be the UI you’re looking for; it has rooms and worlds, and it can handle current 2D apps with no rewrite. It needs a beefy graphics card, but that’ll be no issue in years to come.
I don’t think writing an OS from the ground up would be a feasible idea. Why not implement it on top of another existing open-source operating system? As someone has already commented, some of the ideas you have presented are being implemented slowly by existing operating systems.
Many of the ideas you have mentioned are already being worked on. Why not work on top of an existing open-source operating system? A new Linux distribution would definitely be a good idea. You can look around for Linux projects which are in ‘some way similar’ to what you have mentioned in your article, and then throw them together into your distribution. This will allow you to save millions of hours of work, and your ideas will be implemented much more quickly, though maybe not in full. But it is about evolution, not revolution, as has been mentioned already.
I am sure that as time goes on and technology improves, you will discover that what you want could have been implemented in some other way, and not as you are presently thinking. You will discover new possibilities and better ways to make things work. This is what the existing operating systems are trying to do: ‘evolve’ into something better.
But if you want to start from the ground up, which I do not recommend, then I think all I can do is wish you luck and a very long life.
The author proposes introducing OS features like “native” object storage and such in order to make typical OS stuff (file systems etc.) transparent to high-level languages. I was very enthusiastic about that idea too, but later came to the conclusion that there are only two ways to do it:
1. Integrate programming and OS environment *completely* – as with Smalltalk.
2. Don’t.
Basing an OS on an OO relational database is not a bad idea, but IMO it does not tackle the real problems facing any OS today.
These are support for the enormous range of hardware that is around (how many printer models are there?) and support for proprietary codecs, streaming media protocols, and secret file formats. Most of these belong to businesses that want to see big money before they will even answer your phone calls.
Without such support, a new OS has little chance of being usable except in specialised embedded applications.
First, maybe you should remove the ‘simplicity’ bit; simple for whom, users or developers? Because the implementation of many things you discussed is definitely *not* simple!
Let’s take an example: the usage of RAM like a cache of your HDD, writing modified data to the HDD as soon as possible.
Your laptop users will *hate* you: having the disk in permanent use will kill the battery very fast!
So to be usable, a balance must be found between writing to the disk ASAP and battery usage.
Also, suppose you’re viewing a video; playback will stutter because all the background apps write their own state changes to the disk, causing lots of head seeks. Great!
All these problems are solvable of course, but they are definitely not simple!
About AmigaOS: doesn’t a big part of the performance come from *not* having memory protection?
If so, thanks but no thanks!
Note that I didn’t say that was the only part; I know that DragonFly uses some ideas coming from AmigaOS, such as doing some system calls in the context of the caller.
A lot of what you have written about here is also written about very interestingly in the various different books about the IBM/Apple “Taligent” project of the late 80s/early 90s.
One of the main problems this project exposed (apart from management issues, and market readiness) was that an object/RPC approach is too often bound tightly to the implementation languages of choice, and is intrinsically difficult to ‘scale-out’ across the network, which is where much of today’s computing takes place (albeit with some fairly clunky interfaces between your web browser and your word processor, for instance!)
I’m not really sure that what we need is a “new operating system” at all. There are a variety of kernels available which make various different trade-offs (some necessary, some unnecessary); there are a variety of choices in terms of messaging or RPC protocols, and language bindings; there is a whole *heap* of research needed around interoperable information transform and persistence models (we were *nearly* there with a workable solution with WinFS, but still not quite – particularly in the distributed case)
There’s work in both the MacOS and Windows worlds on better visualization stacks (from hardware up to presentation layer), and I’d *love* to see some more innovative work coming out of the Linux/OpenSource folks there; the forks and branches of X just don’t really cut it for the next generation, there needs to be a more fundamental, but interoperable rethink there.
Then there’s the applets and tools that are now considered “part” of the operating system, and the way that they can leverage this plumbing to deliver user value. And that’s the key. User value. What problems are you trying to solve? How does this technology help solve them?
Starting from scratch is really not going to get off the ground. You end up having to make all sorts of compromises just to get bootstrapped, and you would be better starting from an existing platform.
If you always end up porting the same tool chains and applications then we’ll always have the same OS. And we seem to always make sure the old code bases work because that’s the easiest thing to do. A new OS will need to break away from the current infrastructure to do something really different.
WinOSX a la MacOSX would be *the* desktop OS. Plenty of drivers, a larger community, the best of both worlds, no hardware commitments.
Your idea of serializing the in-memory objects to disk is a good one, one I am using in an application today, but be aware it has one serious shortcoming: flexibility.
Once an object is on the disk, its “schema” is somewhat locked. Future versions of the application can read the object only as long as fields, properties, and methods haven’t been added or removed. If such changes have been made to the object, then a custom tool to convert the object from version 1.0 to 2.0 must be run on each object to bring it up to date. So the thought of keeping up a suite of conversion tools for every object of every app on the OS seems a bit daunting.
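One common mitigation, sketched below with invented names, is to prefix every serialized object with a version tag, so a reader can detect an old layout and run an in-place upgrade instead of failing:

```c
#include <stdint.h>

/* Hypothetical on-disk header written in front of every serialized object. */
struct obj_header {
    uint32_t magic;    /* identifies the object type    */
    uint32_t version;  /* schema version of the payload */
    uint32_t length;   /* payload size in bytes         */
};

#define DOC_MAGIC 0x444F4331u /* "DOC1" */

/* Version 1 of a document object's payload. */
struct doc_v1 { int32_t caret_pos; };

/* Version 2 added a field; readers must check header.version
 * before overlaying a struct on the payload. */
struct doc_v2 { int32_t caret_pos; int32_t scroll_pos; };

/* Upgrade a v1 payload to v2, defaulting the new field. */
static struct doc_v2 doc_upgrade(const struct doc_v1 *old)
{
    struct doc_v2 d;
    d.caret_pos  = old->caret_pos;
    d.scroll_pos = 0; /* sensible default for the field v1 lacked */
    return d;
}
```

The upgrade tools still have to exist, but with an explicit version tag they can be small, mechanical, and run lazily the first time an old object is touched.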
If you always end up porting the same tool chains and applications then we’ll always have the same OS … A new OS will need to break away from the current infrastructure to do something really different.
I agree. It really is the applications that make the OS look great. An OS worth writing is one which would allow applications to do things which are too slow or impossible to do with current OSes.
Also, personally I would advise not to lock yourself to an interpretation of OOP. It will only create monstrosities like Java. It is easy to create an abstract interface to a specific one. It’s just gruesome trying to do it the other way round. For example, you can easily write a fancy Java class over a well-designed C library. How on Earth is C code supposed to call a Java library in any sane manner? What happens when you want/need to do a low-level thing, but have to somehow emulate it with a high-level interface? Abstractions are that, simplifications specialised to a certain usage. For this they are great, but not as a general programming base.
Endian-ness is a non-problem. If you don’t byteswap differently-endianed data, then your program won’t work anyway. Is it too much to ask the application programmer to handle this?
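For what it’s worth, handling it by hand is indeed a one-liner per field: decode multi-byte values bytewise in a fixed order rather than overlaying a struct on the buffer. A sketch:

```c
#include <stdint.h>

/* Decode a 32-bit big-endian value from a byte buffer. Because it reads
 * byte by byte, it gives the same result on any host, regardless of the
 * host's native byte order. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
            (uint32_t)p[3];
}
```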
Virtual machines: just don’t bother. Compiling and distributing different executables again is a non-problem. For the tiny gain of not having to distribute different executables, your users have to suffer sssssssslllllooooowwwww programs (never mind JIT) and the developer has to port an extra piece of software per platform (a compiler per architecture vs. a compiler for the VM and a VM per architecture!) This is mindless extra work for you and your user’s CPUs. VMs are only really useful for running old programs for old systems for which you don’t have the source (consider 68K emulators for PowerPC Amigas).
I am not sure I understand what the author means by “object-oriented databases”. Does this mean that tables invoke each other’s methods, and new tables inherit other ones? Surely not! Please, someone explain why this would be any advantage over a normal relational database. The author’s complaint about having to formulate queries and reformat results is not a consequence of relational databases but of SQL. It would be great if there was a relational database library with a proper programmable interface and native data types (no more text strings please!)
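A typed, programmable query interface of the kind described might look roughly like this sketch (all names invented): predicates are plain functions operating on native types, with no query strings to build and no result text to re-parse.

```c
/* A toy typed "query" over an in-memory table: no SQL strings. */
struct row { int id; int age; const char *name; };

/* A predicate is just a function over native types. */
typedef int (*row_pred)(const struct row *);

/* Roughly "SELECT COUNT(*) FROM rows WHERE pred(row)" as a function call. */
static int table_count(const struct row *rows, int n, row_pred pred)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (pred(&rows[i]))
            count++;
    return count;
}

static int adults(const struct row *r) { return r->age >= 18; }
```

The type checker catches what SQL would only reject at runtime, which is the point of the complaint above about text strings.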
The author expresses the desire to store data in “the native object format” of the program. Isn’t this just ‘serialisation’? I agree with not having to translate to and from the disk with some text or binary file format, which is quite wasteful.
These are bloated and complex OS ideas. Maybe MS might be interested.
“WinOsX a la MacOsX would be *the* desktop OS.Plenty drivers,larger community,best of both worlds,no hardware commitments.”
The Rhapsody Project joins with The Next Information Archive.
http://next.z80.org/
hylas
> Also, personally I would advise not to lock yourself to
> an interpretation of OOP. It will only create monstrosities
> like Java.
Agreed. An interpretation always means decisions which one might disagree with, e.g. no multi-dispatch in Java. Then those create another language where all these decisions are different, and all the nice interoperability problems pop up.
… that was one of the nice things in C (though I usually rant about C). It built a common ground for *all* applications, nothing had to evade it (except for very tiny pieces of optimization and very few OS internals). It would be nice though to see something similar, but without the problems of C (e.g. platform-dependent; needs virtual address space for security; …)
> Endian-ness is a non-problem. If you don’t byteswap
> differring endianed data then your program won’t work
> anyway. Is it not too much to ask the application
> programmer to handle this?
This, on the other hand, is very nicely handled in Java. And DON’T ask the app programmer for it. You underestimate their stupidity.
> Virtual machines: just don’t bother.
80% disagree! Admitted, you can get things working without a VM, and source-code portable (though not binary portable). But slow? How are VMs slow, unless they *interpret* a program? Don’t let yourself be confused by bad Java implementations. There’s a lot more to VMs than Java.
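For context on where interpretation overhead comes from: in a toy stack-machine interpreter like the sketch below (opcodes invented), every bytecode instruction pays for a trip through the dispatch loop, and that loop is precisely what a JIT compiles away.

```c
#include <stddef.h>

enum op { OP_PUSH, OP_ADD, OP_HALT };

/* A toy bytecode interpreter. The per-instruction cost is one trip
 * through the switch; a JIT translates the bytecode to native code
 * once and eliminates this dispatch entirely. */
static int vm_run(const int *code)
{
    int stack[16];
    size_t sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```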
What you want is a persistent object store.
Matthew Adams wrote:
> “There’s work in both the MacOS and Windows worlds on better visualization stacks (from hardware up to presentation layer), and I’d *love* to see some more innovative work coming out of the Linux/OpenSource folks there; the forks and branches of X just don’t really cut it for the next generation, there needs to be a more fundamental, but interoperable rethink there.”
Maybe I’m way off, but something like GGI?:
http://www.ggi-project.org/
I’m a Linux fan, but I really like that “Hibernate” feature in Windows XP. Isn’t this what the author is talking about? Does Linux have such a nice feature, even as a third-party tool?
This seems to offer a part of the solution.
What it really needs is a root-and-branch re-engineering of the whole stack, from the device drivers up; and the processor has to play nice too.
For modern multimedia apps you need a bunch of things that are heading into the hard-real-time realm (like guaranteed processor time for decoding, properly display-synced refresh, etc.)
Then you need a layered API on top of that that lets you gain control at the *appropriate* level for your app. Might be lower for, say, a video editing suite, or complex animation system, might be *much* higher for your QuickBooks clone.
The Longhorn stack (as it evolves) is going some way to addressing this problem. I can well believe that MacOS has something similar in the offing – especially as they control the hardware platform. I’m guessing that the big problem the OSS community has here is getting the DD manufacturers to play ball with even the current graphics stack.
…is basically dumping the memory to the HD and then putting the computer to sleep. The next time it comes up, it will read the dump back into memory and continue from there. It’s a more secure way of doing a laptop sleep, as memory isn’t lost if the battery runs low, but it is slower than coming out of sleep, as it has to be read off the HD. I can’t say if Linux has this, but if it doesn’t, I don’t think it would be hard to put in.
One of the ideas in the article is to take this to a per-app level, so that if an app (or the whole computer) is shut down, the app’s state is dumped to the HD. A simple task for stuff like a word processor, as it can just save the file being worked on in a holding file and then mark off somewhere that the file should be loaded back in when the app is started. Maybe save some other small bits, like where the caret is at the moment and so on, but that’s about it. Other stuff may well be best off starting fresh when the computer comes back on.
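As a hedged sketch of that per-app idea (the file name and the record fields are invented), the “holding file” could be as little as a small state record dumped on shutdown and reloaded on startup:

```c
#include <stdio.h>

/* Hypothetical per-app state record: which file was open, where the
 * caret was. A real app would version this record (see the schema
 * discussion elsewhere in the thread). */
struct app_state {
    char open_file[256];
    long caret_pos;
};

/* Dump state on shutdown; returns 0 on success. */
static int state_save(const char *path, const struct app_state *s)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(s, sizeof *s, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}

/* Restore state on startup; returns 0 on success. */
static int state_load(const char *path, struct app_state *s)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(s, sizeof *s, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}
```

Note that dumping raw structs like this is exactly the non-portable, schema-locked format criticized above; it only illustrates how little machinery the basic mechanism needs.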
And a small comment on that GGI stuff: very interesting. The screenshot of that multiwindow GL Doom was impressive.
And I want to again state that the idea of having everything as an object looks more and more like a retake of the Unix way: everything is a file. For isn’t a file, in many ways, the base definition of an object?
Now, if one could bring the Unix command-line pipelining onto the desktop, so that one could see graphically where the inputs and the outputs go, then we are one step closer. Then maybe put a graphical GUI creator on top of that, so as to route outputs and inputs to different parts of the GUI. In many ways I’m thinking of a rival to Visual Basic, if the GUI created is saved as maybe XUL or similar and can later be read back by an engine inside, say, KDE and/or GNOME. It would be interesting to see what would come out of it, and what uses people would find for it.
Hell, with the GUI editor working the right way, one can have it change command-line switches for the different parts based on manipulations of the GUI presented to the user. And you can even embed any kind of scripting language you feel like, as long as it can work with command-line data streams.
Hmm, kinda makes me think of Tcl/Tk, but without the close bindings of one to the other…
Some of the ideas you have presented are good. Major OSes are already working on making things easier. For example, recovery of the state of an object in the event of a crash is already working in MS Office. Also, WinFS, the FS as a database, will surely come out in Longhorn, though it has been stalled for the first release. But you have assumed the developer community as the only users of some of the features, like the object editor and security. This is a basic flaw. Like you, I would love to sacrifice performance for functionality: a bit slower for better features, security, etc. But new OSes are not just meant for the PC alone. You must fit a new OS anywhere a chip can be, and this is the essence that Windows and Linux, at least so far, have somehow been able to fit.
Just a last word of encouragement: the one idea I really loved was saving files as serialized objects as well as in a general-purpose standardised format for interoperability. This would definitely offer a bit of a performance enhancement as well as interoperability. In fact, this particular idea could even be followed by individual applications rather than the OS itself.
Your thoughts about software on demand made me think of the ‘zero-install’ idea ( http://zero-install.sourceforge.net/ ). Zero-install is quite an interesting idea: files/binaries get downloaded as soon as you ask for them and only get cached, not installed. Maybe you should check it out…
Your article is almost too silly to comment on, except for the comment “too silly to comment on.” Go read a book on OS design, or start with UNIX or Mach and see what can be improved. It’s obvious you have no domain experience in an OS or, for that matter, an object-oriented DB.
I don’t want to deal with the file menu anymore; this menu is only present in applications because the OS needs to load a document from disk into memory or to save it from memory to disk. I really don’t care about the constraints of having a hard disk and a memory. I just want to find my documents easily and to work (with the ability to do versioning).
Microsoft did it in MS Money, for example (no need to load or save).
Again, I don’t want to shut down my computer and restart all my applications; I just want to continue the work I was doing before the shutdown (like on my Palm Pilot, for example). Why not store the state of the OS and applications before the shutdown and restore this state at reboot? (This goes further than the suspend feature, because it survives a reboot.)
Those are interesting changes to implement in the next OS.
Cocoa in OS X has a lot of what you talked about. The entire Cocoa/Objective-C interface is very well done. Additionally, the entire system works in the Obj-C runtime: objects can be loaded in or edited dynamically. It also has that ANY type which you referred to (id).
As was mentioned by a previous poster, the interface is almost completely separate from the code. In fact, by opening up the program folder (“Show package contents”), you can modify the interface directly with Interface Builder. You can even create your own objects and link them into the program interface dynamically.
OS X stores a lot of info as property lists (XML); every object has the capability to write out a description of itself into one of these XML files. There is also the Property List Editor, which is an app that lets you edit these files. So, to edit an object, all you would need to do is send it an export-info message, edit the resulting file, and instruct the object to load the file back in again. While this would obviously need some work, a lot of what you were talking about is already there (in some form or another).
One experience I have had with continuity is with IBM’s OS/2. There you can define a workspace in which your source files, documents, etc. are stored. When you want to take a break, you just close the workspace and all the opened documents/files are closed too. When you return, just open the workspace and everything pops up as it was!
That I would like to see in a new OS too!!
Best wishes from Latvia
All of you people interested in this need to look at the work in progress of the brilliant Richard Kulisz on his operating system to save humanity. Currently he is involved in the First Initiative, as described at the original Wiki URL http://c2.com/cgi/wiki?FirstInitiative. The project in general is discussed at http://c2.com/cgi/wiki?BlueAbyss and the linked pages.
Take a look at BeOS R5 PE (free at http://www.bebits.com) and Zeta (www.yellowtab.com) 100 USD.
They have most of what you imagined.
hi
Regarding your idea of always-open applications and the possibility of returning to the exact state the application was in the last time it was used:
magnetic RAM would solve this:
http://www.theregister.co.uk/2003/06/10/ibm_infineon_build/
I really hated to say anything about the language on SourceForge, because what I envision is not really any existing language. But I had to pick something, and Eiffel is the closest relative that I could think of. I have looked at many, many languages over the years: Algol, Ada, Bliss, C#, Java, Modula-2, Pascal, Perl, Plus, Python, Ruby, Sather, and several others. They all have their good and bad points. What I would like is to have the program stored on the machine in some sort of “Program Object” instead of a textual language. Then, when a human wants to view it, there is a translator to a textual/graphical representation. So if you prefer, you can view it in an Eiffel-like mode or a C++-like mode, whichever you are more comfortable with.
One thing that I think I left out of the article, and that I have been thinking of, is having different languages at different levels. For example, if you are dealing with low-level stuff like DMA registers, there would be some way of representing registers that is more descriptive than a C struct. But I am not sure you would want/need that at the application level.
I have been planning to put more details on the project page at SourceForge, but other things have come up that have been getting in the way. I will hopefully get that done in the next couple of weeks.
Plan 9 was one of the things I had intended to talk about in the article. In fact, it may have been in one of the previous versions of the article and I forgot to include it in the final version. I was looking at Plan 9 after I had started working on this and was pleasantly surprised that many of the ideas I had were already in it. The one thing that appears to be fundamentally different is that Plan 9 still uses the flat file system, but many of the other concepts are very similar. I definitely need to study Plan 9 further.
For all of you folks out there wanting to re-invent the OS: it’s already been done by IBM, and it’s called OS/400, or i5/OS now, and it runs on the AS/400 or iSeries hardware. It’s funny that all of you are asking for it now, since this has been available for the past 20 years or more. Sadly, IBM has no idea how to market the finest platform on the face of the earth, but then again, we already knew that.
The hardware runs Linux (Red Hat & SuSE) and you can do most UNIX stuff on it too, with PASE. You can also run AIX on the hardware in a partition, but OS/400 is great, and all the commands are very structured, so that if you can’t remember a command you can pretty much figure it out by thinking about what you want to do.
To work with the system status – WRKSYSSTS
Work with the active jobs on the system – WRKACTJOB
Work with the printer queues – WRKOUTQ
Work with user profiles – WRKUSRPRF
See how simple? And the file system is a DB, and the CPU only does the work it is supposed to do, not all that other stuff Intel servers do. But then again, as I write this, you are all probably looking for a desktop solution?
I looked at Squeak several years ago, but it seems like I had trouble getting it to work. I have just downloaded the latest Linux files and I will give it another go. Thanks for the information.
This looks quite interesting. I was disappointed that the news had not been updated since 2001 (or so it appears). But I have downloaded the files and I will certainly take a look at them. It is always good to look at where others have been.
Certainly most of my problem is that my memory is terrible. It is also a problem of quantity. I have 6 computers at home that I use on a regular basis, and several others that I don’t use as often. Couple that with 4 at work, with 7 or 8 different hard drives that I swap in and out. I have trouble just remembering which machine something is on, let alone where on a particular machine. Worse yet, they all have different operating systems on them (OpenBSD, FreeBSD, Red Hat Enterprise Linux 3, Fedora Core 1, 2, and 3, Mandrake 10.0, SuSE 9.1, Mac OS 9.something, Mac OS X, Windows 2000), all with different directory structures, so things are not always in the same place when I go from one to the other. Another complicating factor is that I am constantly running out of disk space, so I have to move files off the systems to CD-R or DVD-R. And I have done a terrible job of organizing these discs. Looking around me right now, I have 11 piles of unorganized discs containing probably 60-70 discs. Add to that several hundred more discs that are stuck on spindles in no particular order, plus my terrible memory, and, well, it is sometimes difficult to find things. Filing cabinets have the advantage that people usually put them in some particular order.
I guess my point about not-an-OS is that a good object database should be pervasive beyond the OS level. It should exist on many OSes and be taken advantage of by applications in many domains, not simply by programs for one OS. To inherently tie a good, presumably networkable, object database to a single OS would be criminal, and would ensure its own uselessness.
I agree with you completely. I probably didn’t make it clear in the article that my intent was to have the object DB runnable on top of any OS, but that it could also be taken down to the OS level itself (self-hosting).
…when will Linux see some of these ideas implemented? :/
Actually, I started implementing them on Linux already. Unfortunately other higher priority things have preempted it for the time being. Hopefully, these other things will be resolved soon and I can get back to it.