House is a demo of software written in Haskell, running in a standalone environment. It is a system that can serve as a platform for exploring various ideas relating to low-level and system-level programming in a high-level functional language.
Every operating system project has to build its own drivers and driver system. It is difficult to build a stable and reliable kernel in an unsafe language like C. It would be so much better if we had a hardware abstraction layer in a secure language, and all drivers were written for that subsystem: then all an OS has to care about is interfacing with the HAL, while the rest, like scheduling and memory management, is handled by the OS itself (which can be written in any language). This would make it so much easier to play with new OS concepts. Hardware manufacturers would only need to write a driver for each architecture instead of each OS. Just think about the possibilities. We wouldn’t need to care about drivers anymore.
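To make the idea concrete, here is a minimal Haskell sketch of what one driver interface in such a HAL might look like. The class name and operations are invented for illustration; they are not taken from House or any existing system.

    import Data.ByteString (ByteString)

    -- Hypothetical HAL-level driver interface: a block device on any
    -- architecture implements this one class, and the OS above only
    -- ever talks to the class, never to the hardware directly.
    class BlockDevice d where
      blockSize  :: d -> Int                         -- bytes per block
      readBlock  :: d -> Int -> IO ByteString        -- read block n
      writeBlock :: d -> Int -> ByteString -> IO ()  -- write block n

A manufacturer would write one instance of this class per architecture; every OS built on the HAL would get the device for free.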
Is Haskell fast enough to make this possible or do we have to hope that projects like DST (http://dst.purevoid.org) will become more successful?
Ah crap. A link to the old DST website when it’s pretty much dead!
BTW: DST has been running on top of the O’Caml VM instead of using the native code compiler for quite some time now, so it’s in a similar boat to Haskell, though I’ve yet to try the JIT compiler runtime…
I don’t think even hand-optimized assembly is fast enough for full hardware abstraction. Look at all the optimization that goes into writing a graphics driver, and all the extra extensions that go towards bypassing bits of the kernel. Look at the fact that we’re still using monolithic kernels mainly because the overhead of system calls on them is lower (and, yes, part of it is code inertia: “it works, why replace it?”; but part of it is simply that monolithic kernels are faster).
Also, once you get this “abstraction layer” written, you’ve more or less got an entire OS, just with a pluggable scheduler and maybe a pluggable VFS (memory management would need to be inside the abstraction layer, or you can’t really do DMA and memory mapping). You may not even get a pluggable VFS, because if you’ve abstracted away the hardware, you probably can’t do things like wear levelling on flash drives, or optimizing the physical allocation on hard drives.
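For illustration, a “pluggable scheduler” in such a layer might reduce to an interface like the following Haskell sketch; all names here are hypothetical.

    -- Hypothetical pluggable-scheduler interface: the abstraction
    -- layer calls these hooks, and each OS personality on top
    -- supplies its own instance (round-robin, priority-based, ...).
    data Thread = Thread { threadId :: Int, priority :: Int }

    class Scheduler s where
      enqueue  :: Thread -> s -> s        -- a thread became runnable
      pickNext :: s -> Maybe (Thread, s)  -- choose the next thread, if any

    -- A trivial FIFO instance, just to show the shape.
    newtype Fifo = Fifo [Thread]

    instance Scheduler Fifo where
      enqueue t (Fifo ts)    = Fifo (ts ++ [t])
      pickNext (Fifo [])     = Nothing
      pickNext (Fifo (t:ts)) = Just (t, Fifo ts)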
Assembly code isn’t much faster than good C code. The main problem is vectorization (SSE), but a really good compiler (maybe with something slightly less powerful than whole-program analysis) could give you speed comparable to hand-optimized assembly code.
The argument about monolithic kernels is flawed, because with a secure language (in particular: no arbitrary pointers like in C) you can pretty much eliminate the overhead of micro-kernels: when the type system guarantees isolation, components can share one address space, and what would be an IPC round-trip becomes an ordinary function call. Even if that is not enough, we could have something like a hybrid kernel (drivers as modules loaded into the kernel). That would be basically as fast as a “real” monolithic kernel, but without most of the inflexibility and complexity.
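A toy illustration of that point (all names invented): with language-enforced isolation, a “server” can be invoked directly, with no trap and no message copy.

    -- Hypothetical: a file-system "server" in a safe-language kernel.
    -- This record of functions is its entire IPC surface.
    newtype FsServer = FsServer { handleRead :: FilePath -> IO String }

    -- The "client" side of the IPC: in a classic micro-kernel this
    -- would be a message send plus a context switch; here it is a
    -- plain function call, because the type system already prevents
    -- the client from scribbling on the server's memory.
    readViaIpc :: FsServer -> FilePath -> IO String
    readViaIpc server path = handleRead server path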
Also, you forget that speed is not everything. Drivers and other insecure kernel code are responsible for a lot of system crashes. A more secure language (see Sing#, which is used in Singularity) could solve a lot of problems. Maybe you could optionally enable a garbage collector (good GCs with native-code compilers can be pretty fast; don’t compare this to Java’s performance). If you really need maximum performance you could still combine it with insecure, assembly-like code.
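Haskell’s FFI is one existing way to do exactly that combination. A minimal sketch, using the standard C memcpy as the stand-in for the “insecure” hot path:

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.Ptr (Ptr)
    import Foreign.C.Types (CSize)
    import Foreign.Marshal.Alloc (allocaBytes)
    import Data.Word (Word8)

    -- Drop out of the safe world for one hot primitive: the rest of
    -- the system stays in the type-checked language, and only this
    -- narrow, audited window is unsafe.
    foreign import ccall unsafe "string.h memcpy"
      c_memcpy :: Ptr Word8 -> Ptr Word8 -> CSize -> IO (Ptr Word8)

    main :: IO ()
    main = allocaBytes 16 $ \dst -> allocaBytes 16 $ \src -> do
      _ <- c_memcpy dst src 16
      putStrLn "copied 16 bytes outside the safe world"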
I also don’t think it’s bad that the layer is basically an entire OS. Who cares? That means less code for the actual OS sitting on top of the layer, and more drivers for all OSes. There is so much code that every OS reimplements in a very similar way. The only differences are the way plug-and-play works, the VFS, the scheduler, the VM subsystem, processes, threads, single vs. multiple address spaces, security, and maybe a few minor things I forgot. The abstraction layer could offer everything that is important. There is absolutely nothing to stop it from also supporting what you mentioned at the end (flash-drive wear levelling and physical allocation on hard drives).
We have so much duplicated code in every OS. The development time could be spent on really important things. Why does nobody sit down and write a cool layer in a secure language? (And no, please let’s do away with stupid C/C++ for system-critical code!)
Also check out this: a public-domain Haskell-based file-server / OS. Also very small. The two sections of interest here are titled “Generic Zipper and its Applications” and “Zipper-based file-server / OS”:
http://okmij.org/ftp/Computation/Continuations.html#zipper
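For readers who haven’t met the zipper those sections refer to, here is a minimal list zipper in Haskell; this is the general idea only, not the generic version described at the link.

    -- A zipper is a cursor into a data structure: elements to the
    -- left of the focus (stored reversed), the focus itself, and
    -- elements to the right. Moves and edits are purely functional
    -- and O(1) at the cursor.
    data Zipper a = Zipper [a] a [a]

    left, right :: Zipper a -> Maybe (Zipper a)
    left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    left  _                    = Nothing
    right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
    right _                    = Nothing

    -- Replace the element under the cursor.
    set :: a -> Zipper a -> Zipper a
    set y (Zipper ls _ rs) = Zipper ls y rs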
The real problem behind operating systems is the use of C as the base language; if another, safer imperative language is used, then the problems of C go away.
Using a functional language for writing an O/S is certainly an admirable task, but why bother with all the functional tricks when there are languages like Ada which are imperative but safe?
To put it differently: if you take Haskell and put assignment in it, you still have a fine language that does not have the problems of C, but you also have a lot more speed (try sorting a table with 100,000 records in a functional way!) and you do not need to twist your brain to find solutions to problems…
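For what it’s worth, here is what that challenge looks like in plain Haskell; a sketch only, with a made-up record type for concreteness.

    import Data.List (sortBy)
    import Data.Ord (comparing)

    -- A made-up record type, just to make the table concrete.
    data Record = Record { key :: Int, payload :: String }

    -- The purely functional sort of the 100,000-record table:
    -- no assignment anywhere. It is O(n log n), though it allocates
    -- where an in-place imperative sort would mutate.
    sortTable :: [Record] -> [Record]
    sortTable = sortBy (comparing key)

    main :: IO ()
    main = do
      let table = [Record k (show k) | k <- [100000, 99999 .. 1]]
      print (key (head (sortTable table)))  -- prints 1

(Haskell also already offers controlled mutation, e.g. the ST monad and mutable arrays, for the cases where an in-place sort genuinely wins, so “putting assignment in it” is less of a change than it sounds.)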