“Developers are working to create experimental open-source operating systems with modular microkernels using the C# programming language. The SharpOS and Cosmos projects both announced their first major milestone releases last month, demonstrating the technical viability of the concept. Although some previous research has been conducted in the area of VM-based operating systems, the Cosmos and SharpOS projects break a lot of new ground. One particularly notable prior effort in this field is Microsoft’s Singularity experiment, a research project that began in 2003 with the intent of creating a managed-code operating system that uses the Bartok compiler and leverages static analysis and programmatic verifiability to ensure high dependability.”
The news so nice they scooped it twice ;D
Writing an OS in C# is like writing an OS in Java.
It’s possible… but either way, I’m kicking your ass.
Unix… C… End of discussion.
Unix and C are not the be-all-end-all panacea for operating systems. They just happen to be some of the most successful over the past 30 years. That’s not to say something else, with different paradigms, can’t take a stab.
Though I kind of feel bad for the Lisp crowd out there. They were the originals (as far back as my memory goes), with Lisp machines. I still get lost in all the parentheses, but it’s still at least interesting to study for perspective.
Ah, LispM… Such awesome hardware and environments. I wish they hadn’t gone the way of the dodo
If people format their LISP correctly then it is very readable. For some reason people like writing their LISP(Like(this) which(is(stupid)))
Though, with a mathematics background, LISP is actually kind of easy to read like that… I did it every day with composite functions.
I think a number of lessons have been learned since the heyday of UNIX and C. For one, type-safe languages can allow for better optimization since the compiler can statically know what pointers can or cannot be aliased.
It’s really worth looking beyond UNIX… it wasn’t handed down by God to K&R from a mountain in the 70s.
Blasphemy is still illegal in the UK
Right… so it will be twice as fast to create a modern OS that is as stable and functional as Unix because of the language advancements made over C since then. That does not make Unix a poor OS or a dead OS, or even an outmoded OS… it has simply been around long enough that there is almost nothing wrong with it (this is not including Userland apps.)
C# is to C as C is to ASM. Go back in time 20 years and you would have heard the same thing as you just said, just replace C with ASM, and UNIX with ITS.
20 years ago? That’d only be 1988.
All versions of Unix had been written in C since 1973. VMS had always been written in a combination of BLISS and C. Pretty much anything developed since then would have been almost entirely written in C or something similar.
This includes even OSes for home computers starting around 1984, such as the Amiga’s OS (BCPL and C, with bits of assembly), the Mac OS (Pascal and assembly), or pretty much anything except DOS. Even Windows (such as it was back then) would have been written in either C or Pascal.
The basic difference is that the transition from assembly to C had huge advantages, and minimal (if any) disadvantages. They ran no slower, but were much easier and faster to develop, didn’t require rewriting for new architectures, and so on. Most importantly, an OS written in C would be compatible with the same OS written in assembly, and would be able to run existing applications, or have existing applications adapted to it.
Using something like C# offers more advantages (mostly in regard to safety and potential security, with a very small possible speed advantage), but crippling disadvantages – basically, it’s incompatible with every piece of software developed over the past 40 years, and it’s nearly impossible to adapt existing code to run on it.
There are possible work-arounds, but they have unacceptable security or performance penalties.
Sorry, I should have said 40 years (I wasn’t really thinking). A longer timeframe actually supports my argument though.
You are wrong here. They ran way slower. But it was a higher level, and more importantly, a portable language. C was the big, revolutionary thing UNIX had going for it when it came out. K&R kind of decided that the machines were fast enough at that point that the additional cost in performance was outweighed by the numerous advantages.
Here is a quote from the Wikipedia article on Unix:
In 1973, Unix was rewritten in the C programming language, contrary to the general notion at the time “that something as complex as an operating system, which must deal with time-critical events, had to be written exclusively in assembly language”.[4] The migration from assembly language to the higher-level language C resulted in much more portable software, requiring only a relatively small amount of machine-dependent code to be replaced when porting Unix to other computing platforms.
(http://en.wikipedia.org/wiki/Unix#1970s)
I’m not quite sure what you are talking about here. It is very easy to work with C code in C# (here is a site that gives you method signatures for most of the common Windows APIs: http://pinvoke.net/). What C# has going for it over C is improved security, stability, interoperability, and, theoretically, portability. The price you pay is some additional overhead at runtime for actually managing the process.
The argument isn’t about interop, it is about performance.
Speaking purely from the point of view of a Managed OS, I’d say you lose said advantages when you start P/Invoking unmanaged code.
The ideal situation would be to disallow non-managed code completely.
That is true, but the parent implied there is no way to interop with older C stuff.
Singularity also provides HIP (Hardware Isolated Processes) that can be used in lieu of SIPs to isolate native code from the kernel and other programs. So it is entirely possible to use native code in the place of garbage-collected managed code.
OTOH, managed OSes are hopefully going to bring us to a day when we can forget about TLBs, virtual memory protection, and all the other costly artifacts of unsafe languages. Computers could become about 10-20% more efficient at accessing memory if they didn’t have to go through the whole virtual-to-physical translation that’s necessitated by allowing pointers as data elements. Compatibility in this world would be maintained by just having one or two compatibility cores on the machine running a hypervisor or VM with the old OSes. An OS distributed as bytecode would also allow specialization of the software at the end-user’s machine, making the particular ISA of a processor matter a little less as long as it meets the needs of the program.
Managed code in the kernel can work fine with unmanaged code in userland. As long as the kernel space is protected, it will be hard for a buffer overrun to cause problems system-wide.
This is true, but unmanaged code can compromise the entire design philosophy of zero-copy IPC.
Once you’re in the unmanaged realm, you can no longer make assumptions about the security of said application, even if it is just IL converted to x86.
Since C# intrinsically has no support for pointers (minus the unsafe keyword) and uses a Common Type System, you can make a great deal of assumptions regarding the security of the assembly.
However, in watching a Singularity video (a reply to my original post piqued my interest in Hardware Isolation), I saw how this scenario could make sense.
Consider the following:
You can run unmanaged code inside a Hardware Isolated Process, and the rest of the proofed C# code in Software Isolated Processes, to minimize the performance penalty incurred from context switches and IPC.
You would also have the option of running two processes in the same hardware context (but with Software Isolation and message-based IPC) to retain a certain level of security.
It’s a great compromise which Singularity makes, and frankly I think this stuff is the future of Operating System Development.
Be it C#, or some successor years in the future; it’s obvious this paradigm is one that has great potential.
Are you saying that a C# OS can’t run binaries compiled from C++? I will be interested to see if a C# OS actually has limits on how you write programs for it. I think there will actually not be any problems as long as the proper userland/kernel-space division is made; the fact that they are making it a microkernel tells me that they plan to make it as interoperable as possible with other language environments. All they need is a proper communication system to talk to the kernel, and it should be invisible to the needs of many developers.
The story you are reporting on should be more clear. There are 4 links in there and from reading your post I don’t know which is which and what is what.
There are advantages to writing an OS in managed code. The biggest two are:
– You can run everything in kernel space: no syscalls, no context switching. Because the code is managed, the processes are still separate.
– You can pass objects directly to and from the kernel or between processes. Because processes and the kernel live in different address spaces, current kernels have to use a lot of tricks to pass data around: copying, the stack, messages, etc. With managed code, in many cases you can just pass a pointer.
You would only be able to work in kernel space if the application was also managed. If the devs want to put that kind of restriction on their developers, fine, but not many people will like that.
When one writes in Java, the code is “managed” by the JVM. The term “managed” is not used in Java circles, but you know what I mean. What is managing C# code? I thought it was some .NET framework. And the .NET framework is supposed to run in an OS, usually Windows, or Mono under UNIX. So, how is an OS written in managed code supposed to run and boot?
DG
The OS is technically not managed… yet. The kernel is converted into the intermediate language by Visual Studio, and then they use an in-house tool to convert that IL to x86 opcodes and build the executable header information.
So it may not be truly managed yet, but they can still make great security assumptions when converting to x86.
The next logical step would be getting as much runtime support as they can in the IL-to-x86 compiler and use that to build proofed low level kernel components.
From there it’d be possible to build an in-house VM capable of running true managed code.
Why not work with some chip researchers to create a code manager in the chip? One that is more extensive than the buffer-overrun protection that Intel and AMD chips have now?
What people really should be saying is that C#/MSIL is verifiable. The verifiability of the code allows the VM (or OS in this case) to allow the code to run in the same process as the kernel without causing it to do silly things like overwrite critical bits of memory, blowing up the stack or heap, changing the current instruction pointer etc.
The JVM equivalent for .Net is called the Common Language Runtime: http://en.wikipedia.org/wiki/Common_Language_Runtime
There are some noticeable differences in architecture between it and the JVM though. Google is your friend.
As someone said earlier, the more important aspect of IL is that it contains enough information for memory safety to be verified at compile and runtime. In the case of Singularity, the IL is actually compiled down to machine code ahead of time by a trusted compiler (either the Bartok optimizing research compiler or the Phoenix compiler that’s the backend of the next Visual C++). Work is being done to keep the type information next to the ASM code so that the compiler does not really need to be trusted and only a smaller verification system needs trust.
Ultimately you do need to trust the correctness of some pieces of code… and of the hardware.
The design of the .NET framework is that they first compile down to a platform- and processor-independent Common Intermediate Language (CIL); this CIL bytecode can either be run via an interpreter, or (this is the important part) compiled a second time into native code (as is done most often, trading off portability for speed). This may seem roundabout, but since you only have to worry about optimizing the compilation of CIL, instead of all the umpteen .NET languages, you can get very strong optimization.
In the past 10-15 years, there have been some interesting ideas developed in operating systems. There are two ideas that stick out in my mind.
One, demonstrated by Tanenbaum’s MINIX 3, shows how a rigorously microkernel based OS can produce a system that is highly stable and scalable. Traditional OSes keep drivers in the precious kernel space. MINIX 3’s novelty is that it banishes the driver to user space so that if it wreaks havoc, it will not affect the entire system. For example, the disk driver could crash, yet the OS could gracefully restart the driver transparently without any intervention from the user.
Two, demonstrated by Bell Labs’ Plan 9, shows how a highly sophisticated and complicated distributed system could be developed from a simple concept of file servers. Plan 9’s novelty is that it breaks the traditional concept of a computer as merely a box on your desk and institutes a system whose pieces could be physically separated by thousands of miles. Currently, if I want to access music from my home computer, I’d have to copy all my files to a disk and transfer them. If the world ran on Plan 9, I could just “mount” the local computer’s sound driver file to my home computer, and play the music off the home computer.
Currently, OSes build tough security around each individual process. Bringing “managed code” into the OS level can greatly simplify the OS yet fortify its security. MINIX 3 and Plan 9 looked at previously held concepts and attempted to improve them. I think managed OSes are following in this same tradition. I think those who roll their eyes and say managed code is too slow, takes up too much memory, or whatever, are blinding themselves to new paradigms. I don’t know if managed code at the OS layer is feasible or not, but I strongly believe it’s worth the OS community’s time to give it a chance.
I like to see experiments using Java, or in this case C#, as system programming languages.
Even though I’m a C/C++ fan, many of the security problems we have nowadays are because of their lack of safety, plus programmers who don’t really know how to use them properly.
Many operating systems have been written in other languages: Smalltalk, Oberon, Modula-2, Modula-3. Even the Mac was once upon a time coded partially in Pascal!
However, C and nowadays C++ have always kept their throne as OS development languages.
That might change in the future, who knows. That’s why I like to see these experiments bring something new to the table.