It seems like Microsoft Research is really busy these days with research operating systems. We had Singularity, a microkernel operating system written in managed code, and late last week we were acquainted with Barrelfish, a “multikernel” system which treats a multicore system as a network of independent cores, using ideas from distributed systems. Now, we have a third contestant, and it’s called Helios.
Helios is also a project coming out of Microsoft Research, and it’s described as complementary to Barrelfish – in the research sense of the word. Helios is based on Singularity, but introduces support for satellite kernels, remote message passing, and affinity. The research paper written by the team behind Helios describes the operating system as follows:
Helios is an operating system designed to simplify the task of writing, deploying, and tuning applications for heterogeneous platforms. Helios introduces satellite kernels, which export a single, uniform set of OS abstractions across CPUs of disparate architectures and performance characteristics. Access to I/O services such as file systems are made transparent via remote message passing, which extends a standard microkernel message-passing abstraction to a satellite kernel infrastructure. Helios retargets applications to available ISAs by compiling from an intermediate language.
The paper once again goes into quite some detail, a lot of which I simply do not fully understand (I’m looking at you, our dear and loving readers, again). What I do understand is that Helios and Barrelfish complement each other. “Barrelfish focuses on gaining a fine-grained understanding of application requirements when running applications, while the focus of Helios is to export a single-kernel image across heterogeneous coprocessors to make it easy for applications to take advantage of new hardware platforms,” the paper reads.
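For readers who want something more concrete than the abstract, here is a toy C sketch of the remote-message-passing idea. It is emphatically not Helios’s actual interface (which builds on Singularity’s channel abstractions); every name below is invented for illustration. The thing it tries to show is that the application sends the same “open” message whether the file-system service lives on the main kernel or on a satellite kernel; only the delivery path behind the channel changes.

```c
/* Illustrative sketch only -- NOT Helios's real API (Helios builds on
 * Singularity's channel-based message passing).  The point: with remote
 * message passing, application code sends the same message whether the
 * file-system service runs on the main CPU's kernel or on a satellite
 * kernel (e.g. on a programmable NIC); only the transport behind the
 * channel differs.  All names here are invented. */
#include <stdio.h>

struct message {
    const char *op;     /* e.g. "open", "read" */
    const char *path;
};

/* A channel is just "something that can deliver a message somewhere". */
struct channel {
    const char *kernel;  /* which kernel hosts the service endpoint */
    void (*deliver)(const struct channel *, const struct message *);
};

/* Local delivery: an in-memory hand-off on the same kernel. */
static void deliver_local(const struct channel *c, const struct message *m)
{
    printf("[%s] %s(%s) handled via local message passing\n",
           c->kernel, m->op, m->path);
}

/* Remote delivery: the message is marshalled and shipped to a satellite
 * kernel (only simulated here with a print). */
static void deliver_remote(const struct channel *c, const struct message *m)
{
    printf("[%s] %s(%s) handled via REMOTE message passing\n",
           c->kernel, m->op, m->path);
}

/* Application code: identical no matter where the file system lives. */
static void app_open(const struct channel *fs, const char *path)
{
    struct message m = { "open", path };
    fs->deliver(fs, &m);
}

int main(void)
{
    struct channel local_fs  = { "CPU kernel",           deliver_local  };
    struct channel remote_fs = { "NIC satellite kernel", deliver_remote };

    app_open(&local_fs,  "/tmp/a");   /* same call...                      */
    app_open(&remote_fs, "/tmp/a");   /* ...transparently crosses kernels  */
    return 0;
}
```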
What do all these experimental operating systems from Microsoft’s research department mean? Well, individually, they mean very little. However, if you take a few steps back from the painting, I think all this could signify that the Redmond giant is looking at the future, a future where computers will have lots of different processing cores which may not all share the same instruction set – the GPU, for instance, which mostly just sits there wasting electrons in most machines.
The Windows NT base system as it exists now is a pretty rock-solid piece of work which can certainly take on the competition, but as time progresses, there will come a moment when NT is no longer the right choice. I think what we’re seeing here is Microsoft hard at work looking not just at Windows in 2013, but at the operating system in general in 2020.
I was very excited when they announced Singularity, and was happy to see an open source release on CodePlex later on.
Unfortunately, development seems to have stalled right now. The last commit was on Nov 14, 2008.
Unlike the last two prototype OSes(?), this one actually had potential, since I’m already writing command-line C# applications, which it apparently supports.
Hope they resurrect the project soon.
The reason it has died, like so many of Microsoft’s efforts to ‘embrace’ open source, is the licence they chose. Take Singularity: it is licensed under the Microsoft Research License, so no one is allowed to turn it into even a community-oriented operating system, let alone take it the final step to a commercialised product.
Unfortunately, Microsoft can’t seem to let go of their code, or at least allow others to embrace it and turn it into something useful without Microsoft getting something directly out of someone else’s enhancements. What they should have done is release it under an LGPL or even a CDDL licence and encourage the development of an ecosystem around it.
It died because it’s a RESEARCH OS; no sane person with that much know-how is going to spend time doing research for FREE! Since there are tons of similar projects with a more user-friendly approach, it’s pretty clear that the people with an interest are concentrating on those.
I thought it was MS-PL, but after reading your comment I checked, and it actually is licensed “not for commercial use”.
On the other hand, open source projects are stalled too (SharpOS, JOS).
I actually liked the idea of virtual-machine-based hardware drivers, but I guess that’s for another time.
This is not a _prototype_ operating system. It’s a research one.
The difference is that a prototype has some chance of actually making it out into the real world. Helios, on the other hand, has zero chance of seeing the light of day.
Microsoft will take what they learned, patent it to make sure nobody else can use those ideas, and then may or may not introduce some of the features in later versions of NT.
—————
Despite what people want to believe, software development is an evolutionary process, not a revolutionary one.
NT has been under constant development since 1989, with userland portions dating back to the early ’80s.
Mac OS X has been under constant development since 1985 with the founding of NeXT Computer, with lots of BSD portions.
The BSDs themselves have been under development since 1977.
BSD is, in turn, based on concepts from the original Unix, which was started in the late 1960s and early 1970s.
Unix, in turn, was based on Multics concepts developed through the 1960s and on a video game called “Space Travel”. Of course, Linux started as a combination of the Linux kernel (started in 1991) and GNU (1984).
And Linus learned his OS development from Minix, which started in 1987 as a textbook example of an OS for college OS design courses.
Of course, Solaris is a release of System7 Unix, which is a direct descendant of the first port of Unix from PDP-11 assembly to C. And earlier SunOS versions were BSD-based.
——————-
Microsoft NT is about the most modern OS you’re going to see that has had any widespread success. Every OS since then has been a commercial failure.
For years and years, the best design researchers could come up with was the microkernel. The only successful kernel of that type is the one used by QNX (started in 1980). (If OS X counts as a microkernel, then so does NT.)
And the only reason it was successful is that the microkernel design allowed deterministic scheduling (aka realtime), but it could not scale to acceptable performance on desktops (the message passing just had too much overhead).
—————-
My point is, ultimately, that these things Microsoft is working on are pure research. They are not designed to be used, are not meant to be useful; they are just playthings for trying out new ideas.
If the things they create cannot be implemented in a productive manner in NT then they will never get used.
>> Of course, Solaris is a release of System7 Unix, which is a direct descendant of the first port of Unix from PDP-11 assembly to C. And earlier SunOS versions were BSD-based.
System7 unix? Do you mean System V Release 4 (SVR4)? Or UNIX V7?
>> Microsoft NT is about the most modern OS you’re going to see that has had any widespread success. Every OS since then has been a commercial failure.
Of course, by your UNIX rationale, NT is really a descendant of VMS, which dates back to 1975, which was in turn a descendant of RSX-11, dating back to 1972, leading of course back to RT-11 in 1970. I’m sure some DEC aficionado can neatly tie this back to TOPS-10 or DECSYS going back to 1963/1964. Doesn’t sound that much more modern than UNIX…
>>System7 unix? Do you mean System V Release 4 (SVR4)? Or UNIX V7?
Both, of course, as V7 is a direct ancestor of SVR4 (via System III and 32V).
Like the way I’m a direct descendant of both my father and my great grandfather.
I think Tru64 has a microkernel too.
Eh…
Methinks you need to look beyond what you can run on your own desktop. I can guarantee you that there are more microkernel installations out there than there are desktop and server computers in the world.
You think QNX is that widely used? I’d also forgotten that VxWorks is a microkernel.
So yeah… a shitload of them in embedded systems.
——-
But having personal, first-hand experience with QNX… they are pretty detestable things to work with and on, and they can’t scale up very well.
Are you serious? QNX powers everything from high-tech medical equipment, down to VCRs, and all the way back up to the Space Shuttle’s robotic arm. What do you mean “it doesn’t scale”?
Again – there is more to computing than desktops, laptops, and servers. There’s a much larger area of computing where general-purpose stuff like NT, Linux, and Mac OS X doesn’t dare to go, for fear of getting curb-stomped.
Internally, Singularity transitioned to an incubation project called Midori. There’s not much info available about it externally, though you can find a few articles here and there.
Thom,
I would say that Helios does sound like a cool OS. But I have some concerns that I’m hoping folks can answer for me:
Applications. Applications are the killer issue, because the best operating system in the world won’t succeed in the marketplace without some killer application(s). What applications can run on Helios?
If it were, say, an open source Unix-based operating system, I think it would be checkmate, with a huge volume of open source software ready to go. You could do something like Debian, with all applications precompiled and a package manager that automatically handles dependency issues. Or you could do something like the BSDs, with their ports collections.
But if the OS is a) not Unix-like and b) proprietary, how do you quickly ramp up developer interest and get developers to write code for Helios? What’s the incentive if you’re not sure it’s going to be around five years from now with a large enough market share to be profitable for proprietary software?
Availability. How hard is it to get access to Helios to evaluate the environment? I know it’s early, but even betaware is worth looking at if it sort of works on some hardware out there.
Uh, it’s a research project. Experimenting with new ideas and being a proof of concept for new designs is the point. Supporting lots of apps, being readily available, competing with mainstream OSes, etc., isn’t.
There was an OS called Helios back in the early ’90s, built specifically to provide a Unix-like OS on Transputer arrays – hence massively parallel. When the chip died, the OS was abandoned soon after. I only ever saw it at a VR show in London around 1990.
I also have to wonder why Singularity hasn’t continued, or whether the experiment is considered “done”.
Google “Helios for Transputers”.
Good job Haiku got done before the BeOS name got recycled too.
Perhaps the Microsoft people have been playing Deus Ex. There was a Helios supercomputer in the game. BTW, any information on Deus Ex III?
It kind of sounds like “Singularity meets Plan 9”.
I was thinking the same.
Multi-kernel systems? User-mode network stack? Plan 9 had this almost 20 years ago.
Add a dash of Oberon and you have MS research, 15 years later.
Plan 9’s failure makes me feel like crying. It’s how Unix should have gone, with everything still being a file. These days most Unixes have bits bolted on that ignore the fact that it’s a Unix system. They only look at that part in isolation, not at how it fits with the whole. Simplicity before optimality, because what’s optimal today might not be tomorrow, and what might be optimal in isolation might not be when taken as a whole. Micro-optimization rather than macro-optimization. On top of that, simplicity isn’t just easier to use but easier to maintain.

When Unix was young, it really was everything through a simple/generic abstraction (i.e. the file) in a single naming system (the filesystem) that all tools could work with. ALSA and PulseAudio are what we have now in Linux, whereas we should have something like OSSv4 and an X Audio plugin (which has also been done (http://www.chaoticmind.net/~hcb/murx/xaudio/), but is unloved). And we have a separate API for sockets, whereas on Plan 9 there was the /net folder, with sockets as files. Glendix wouldn’t be enough to bring Linux up to the Plan 9 design; it needs to come from Linus and the kernel itself.
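As a concrete illustration of the /net point above, here is a minimal sketch of dialing a TCP connection purely through files, the way Plan 9 does it. It is written as ordinary POSIX-style C and assumes a Plan 9-style /net tree is mounted; on a real Plan 9 system you would simply call dial() from the C library instead, and 192.0.2.1 is only a placeholder address.

```c
/* Minimal sketch of Plan 9's "sockets as files", written as plain POSIX C
 * for readability.  Assumes a Plan 9-style /net tree is mounted; a real
 * Plan 9 program would normally just call dial(). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char dir[32], path[64], buf[512];
    const char *connect_msg = "connect 192.0.2.1!80";
    const char *request = "GET / HTTP/1.0\r\n\r\n";
    int ctl, data;
    ssize_t n;

    /* Opening the clone file allocates a new connection; reading it back
     * tells us which numbered directory under /net/tcp/ we were given. */
    ctl = open("/net/tcp/clone", O_RDWR);
    if (ctl < 0) { perror("open clone"); return 1; }
    n = read(ctl, dir, sizeof dir - 1);
    if (n <= 0) { perror("read clone"); return 1; }
    dir[n] = '\0';
    while (n > 0 && (dir[n - 1] == '\n' || dir[n - 1] == ' '))
        dir[--n] = '\0';                 /* trim any trailing whitespace */

    /* The connection is controlled by writing plain text to the ctl file. */
    if (write(ctl, connect_msg, strlen(connect_msg)) < 0) {
        perror("connect");
        return 1;
    }

    /* The byte stream itself is just another file: /net/tcp/<n>/data. */
    snprintf(path, sizeof path, "/net/tcp/%s/data", dir);
    data = open(path, O_RDWR);
    if (data < 0) { perror("open data"); return 1; }

    write(data, request, strlen(request));
    while ((n = read(data, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(data);
    close(ctl);
    return 0;
}
```

The entire network stack is driven by opens, reads, and writes on ordinary files in one namespace, which is exactly the uniformity being mourned above.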
The Glendix folks are hoping to get their Plan 9 compatibility features integrated into mainline Linux if possible. It wouldn’t fix the non-file-like APIs but it would mean that some of the nice Plan 9 APIs (e.g. the pseudo filesystems for various things) become available to Linux applications too. Assuming they get any of it upstreamed!
Even if it’s just a reasonably clean set of patches that distributors and/or users could apply, that would still be useful.
No download link?
By the way, I got Barrelfish to compile under Ubuntu 9.04 and run under QEMU. Are the instructions for accomplishing this worth sharing?
Please do! If you don’t post it, could you email me at holt (dot) justin 173 (at) gmail (dot) com?
I would be interested in seeing those instructions.
Reading this and really the last two weeks of news at OSNews, I am starting to get the feeling that operating systems are coming back into style.
It seems like just a few years ago, whenever an alternative operating system was presented on OSNews, there would always be at least one post about how we already have Linux/BSD/Windows/[insert OS here]. This would always lead into someone saying that there is no longer a need for alternative operating systems.
However, with the introduction of multi-core processors as well as the rise of mainstream multi-processor computers, there has been a large push to really make programs (including OSes) more parallel.
It is great to see that there is interest in locally distributed operating systems. In my (relatively un-researched) opinion, unless fabrication technologies and/or materials dramatically increase speed, the only way to move personal computers forward is to increase parallelism.
This is why I have been enjoying the news so much recently. With not one but THREE open source (depending on who you talk to) projects coming out of Microsoft, as well as an Alpha 1 release from Haiku (BeOS arguably being the first “parallel” OS), it has been a fun time to be an operating system geek.
I definitely agree with bebop; this is moving forward with a new way to think about operating systems. I’ve had many a discussion over exactly what an operating system IS; although some people think it’s the shell, most agree that it’s the kernel, and others have other opinions.
I also like Johnny’s point about applications – the OS doesn’t succeed without applications (though they may be back-end applications, they’re apps nonetheless).
But I think the point of these research OSes is just that – research. When discussing distributed and network programming, one can easily begin to see that a modern-day desktop is, in many ways, a distributed system. A video card has its own dedicated processors and memory, hard drives are getting their own processors for internal encryption – shoot, even the computer’s main memory is not straightforward anymore, not with virtual memory, memory management units, etc. Thus far we’ve managed to wrap these hardware devices underneath a single kernel, but that’s only because we were thinking inside the box – going along with tradition.
I think BarrelFish is aimed at opening the possibilities at treating a “personal computer” as a distributed system, while Helios is aimed at taming the cloud as a much larger distributed system.
Thom mentions that Microsoft has its eyes on 2020, and I agree completely. These projects are most likely meant to figure out how best to work with these micro and macro distributed systems, so that by the time MS comes out with “Windows MicroCloud” and “Windows MacroCloud” they’re backed by a solid decade of experience, not two years of hacking on a previous version of Windows (ahem, WinMe and Longhorn).
Sorry for the long reply – I’m just extremely excited about this stuff!
Well, actually, hardware development is going in the opposite direction of what you’re saying. Everything is getting sucked into the CPU and made generic.
It’s all about Moore’s law.
Moore’s law says the number of transistors in a processor doubles about every two years. This is due to improvements in lithography, the quality of silicon ingots, and shrinking process sizes.
As the quality of silicon goes up, wafers get larger. Larger wafers mean less waste and cheaper production. Higher purity increases yields. Shrinking processes and higher quality lithography mean more elements can be stuck in a smaller and smaller area.
The best CPU design people have been able to create so far is a RISC design, which is fundamentally a very small, very fast core. Modern x86 processors are RISC at their core and use extra silicon to create a machine code translation layer for the legacy CISC machine code.
And so, since the best CPU design is a relatively small core that runs fast with a large cache, using all the extra silicon area for more and more CPU cores was the logical conclusion.
However, there is a limit to that usefulness. People just are not that multitask-oriented.
Then on top of that you have memory limitations, and the number of I/O pins you can squeeze into a mainboard-to-CPU interface is fundamentally limited.
So the next step is sucking more and more motherboard functionality into the processor. AMD did it with the memory controller; Intel has recently followed suit.
The next step is to suck the GPU and most of the northbridge into the central processor. Intel is already doing that with the newer Atom designs in order to be competitive with ARM.
The age of the discrete video card is passing. There will be no special physics cards, no audio acceleration, no nothing.
On modern architectures even the term “hardware acceleration” is a misnomer. Your OpenGL and DirectX stacks are almost pure software, or at least will be in the next-generation stuff. All “hardware acceleration” is nowadays is software that is optimized to use both the graphics processor and the central processor.
Pretty soon, memory bandwidth requirements and latency issues will mean that sticking a huge GPU and video RAM on the far end of a PCI Express bus becomes prohibitively expensive and causes too much overhead. So the GPU will just be another core on your central processor (well… actually, more than likely, just larger blobs of dozens and dozens of tiny, extremely RISC-like cores that will get described as “the GPGPU cores”).
The future of the PC is “SoC” (system on a chip), which is already the standard setup for embedded systems due to the low price and high efficiency that design offers.
Instead of having the CPU, northbridge, southbridge, etc., all the same functionality will be streamlined and incorporated into a single hunk of silicon.
Then your motherboard will exist as a mere break-out board with all the I/O ports, a place to plug in memory, and voltage regulation.
It’ll be cheaper, faster, and more reliable. The only differences between a desktop PC, a smartphone, and a laptop will be form factor, the types of I/O included by default, and energy usage.
The discrete GPU will survive, mostly in high-end systems, for a long time, but even that will pass, as modern NUMA architectures mean you can still have pretty much unlimited numbers of multicore CPU/GPUs in a single system.
(There exist high-end Linux systems with over 4000 CPU cores in a single computer.)
———-
What you’re talking about is an extremely old-fashioned computer design.
The mainframe had a bare OS at the center, running on a relatively weak central processor. The central processor box had a number of different connections that could be used for almost anything, often multiplexed across a wide variety of very intelligent hardware: network boxes, tape boxes, DASD units, etc., each with its own complex microcode that offloads everything. This means mainframes have massive I/O capabilities that can be fully utilized with very little overhead.
Of course all of this means they are huge, expensive, difficult to maintain, difficult to program for, and are largely now legacy items running software that would be prohibitively expensive to port to other architectures.
That sentence isn’t correct: RISC is a kind of instruction set (visible to the compiler) which allows efficient hardware usage by the compiler.
An x86 compiler cannot access the “RISC core” inside an x86 CPU, so it’s not really a “RISC core”; it’s just that RISC CPUs and x86 CPUs share a lot of silicon.
Huh? You can also use multiple cores to accelerate a single piece of software, but yes, it’s difficult to program.
Probably, but note that GPUs now have cheap, huge memory bandwidth (thanks to their fixed on-board memory configuration) that the GCPU won’t have at first…
It’s possible to use different algorithms that need less memory bandwidth, but first-generation GCPUs won’t be competitive with high-end GPUs.
You’re splitting hairs. To paraphrase, what you’re saying is:
The core of the modern x86 is not “RISC”; it just uses the same design as RISC CPUs. It’s a design philosophy in my eyes: instead of a lot of complex instructions, you use a CPU with a small set of fast instructions, and you depend on your compiler to get it right.
Intel and AMD processors have logic that takes the x86 instruction set and breaks it down into RISC-like micro-instructions that are then executed by the rest of the processor. You can think of it as a hardware just-in-time compiler, or something like that.
So you’re agreeing with me, then.
Yes, memory bandwidth is an issue with IGPs.
But the problem with the current design is that, with more and more applications using the GPU as a “GPGPU”, you will never really have enough dedicated memory on the card. On a modern composited desktop you’re looking at massive amounts of video RAM needed to cache all those application window textures and whatnot.
It’s the same reason why, on a modern system with 8GB of RAM, OSes still insist on having swap files and swap partitions: to make things go faster you want to use as much RAM as possible.
So all that latency stuff adds up.
So instead of burning hundreds of thousands of cycles on BOTH your CPU and GPU shoveling megabytes’ worth of data back and forth over PCI Express during normal application use, you end up with all the cores sharing the same cache.
Then, instead of spending 200 dollars or whatever on a dedicated external video card, that money can be spent on increasing the memory bandwidth from main memory to the processor, making all that fast dedicated video RAM part of your normal main memory.
edit:
Imagine an application that uses GPGPU instructions and CPU instructions in the same execution loop.
Since the GPGPU is only fast at certain things, it would be desirable to easily program using both the GPU and the CPU.
So with a dedicated, separate video card, each time you execute that loop you’re burning through far more cycles just moving data back and forth over the PCI Express bus than it actually costs to execute the work.
By integrating the GPU and the CPU into the same processor as separate cores, and then using the same memory and cache for both, a much slower CPU and GPU could massively outperform an otherwise faster dedicated video card for that sort of thing.
And it would be much easier to program for…
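To put that argument in rough code form, here is a toy C sketch. Nothing in it is a real GPU or driver API; fake_pcie_copy and gpu_kernel are stand-ins that only model where the work and the data movement happen. The discrete-card loop pays for two bus transfers every iteration, while the integrated, shared-memory loop does the same work with no copies at all.

```c
/* Hypothetical sketch (not a real GPU API): contrasts the per-iteration
 * cost of a discrete GPU (explicit copies across PCI Express) with an
 * integrated GPU sharing the CPU's memory and cache.  The helpers below
 * are stand-ins, not a real driver. */
#include <stdio.h>
#include <string.h>

#define N (1 << 20)            /* 1M floats per iteration */

static float host_buf[N];      /* data living in main memory (CPU side) */
static float card_buf[N];      /* stands in for dedicated video RAM     */

/* Stand-in for a PCIe transfer: on real hardware this is where the
 * "shoveling megabytes back and forth" cost lives. */
static void fake_pcie_copy(float *dst, const float *src, size_t n)
{
    memcpy(dst, src, n * sizeof(float));
}

/* Stand-in for the GPU portion of the loop body. */
static void gpu_kernel(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

/* Stand-in for the CPU portion of the loop body. */
static void cpu_step(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] += 1.0f;
}

int main(void)
{
    /* Discrete-card model: every iteration pays two copies across the bus. */
    for (int iter = 0; iter < 10; iter++) {
        fake_pcie_copy(card_buf, host_buf, N);   /* host -> video RAM  */
        gpu_kernel(card_buf, N);                 /* GPU part           */
        fake_pcie_copy(host_buf, card_buf, N);   /* video RAM -> host  */
        cpu_step(host_buf, N);                   /* CPU part           */
    }

    /* Integrated (shared-memory) model: same work, no copies at all. */
    for (int iter = 0; iter < 10; iter++) {
        gpu_kernel(host_buf, N);   /* GPU cores work on main memory      */
        cpu_step(host_buf, N);     /* CPU cores see the same data/cache  */
    }

    printf("host_buf[0] = %f\n", host_buf[0]);
    return 0;
}
```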
@dragSidious: Very nicely described. I was also thinking about winmodems, and how increasingly even expansion cards offload processing onto the CPU. Also, in the past you used to have separate math co-processors, but these days they are built into the CPU. So things have been heading in this direction for quite a while now.
But I still think integrating the GPU into the CPU will take a lot of time. Intel doesn’t seem to be as good at making GPUs as it is at CPUs. So the likes of Nvidia and ATI will have the edge for many years to come.
I don’t remember precisely, but back when there was a possibility that the Amiga would use the QNX kernel (1998), I studied their Neutrino architecture a bit, and it was exactly like that – everything in the OS was an interconnected network of managers, passing messages here and there. Well, maybe you mean something different.
I like it when companies try new ideas publicly, or even old ideas under new conditions. This might only make the computing world better…
The paper was interesting but, for some of us, nothing we have not seen two decades ago, back when mainframes and minicomputers loaded intelligent peripheral controllers during the boot process. I just hope they don’t try to patent concepts this old as something new…
The RISC processor was originally designed to provide a low-cost processor for these limited-requirement roles (though today’s RISC processors follow a different definition of “RISC” than the one proposed in the 1980s).