The Singularity team at Microsoft has recently published a new article concerning their research OS. It is a concise introduction to the Singularity project. It summarizes research in Singularity 1.0 and highlights ongoing research for Singularity 2.0.
Singularity is one of those OSs I’m really interested in.
Too bad there’s no open testing.
“Vista’s not selling! Quick, throw some Linux FUD out! OK, good – now, talk about our latest vapourware.”
“Vista’s not selling! Quick, throw some Linux FUD out! OK, good – now, talk about our latest vapourware.”
So what you’re saying is that MS Research can’t talk about any skunkworks-type projects? Besides, Singularity is at the complete other end of the spectrum from their mainstream computing offerings.
Anyone interested in systems programming should find Singularity quite fascinating.
One can’t believe three-quarters of the things they say about products of theirs that actually exist; so why on Earth would I be interested in something from them that not even they have the chutzpah to release into the wild?
Do you honestly have nothing better to do than troll every article that gets posted that’s even vaguely related to MS? And in any case, you can take 3/4 of the stuff the linux fanboys say about their projects with a grain of salt as well, so that’s hardly one-sided.
Do you honestly have nothing better to do than troll every article that gets posted that’s even vaguely related to MS?
I’ll stop pointing out the hypocrisy and ineptness of Microsoft and its shills when the shills stop posting bull about Linux.
And in any case, you can take 3/4 of the stuff the linux fanboys say about their projects with a grain of salt as well, so that’s hardly one-sided.
Actually, lying about the capabilities of one’s favourite OS is one thing I will give Microsoft credit for being better at than us. Thank God.
Twenex – it is ironic to see how you run from forum to forum spreading FUD about Microsoft while blaming them for FUD.
I don’t run, this is only one forum, and it can hardly be FUD.
Fear? Microsoft? Hah. If they could code, maybe.
I think he and Supreme Dragon have a FUD spreading campaign racket going on together. Oddly I used to enjoy Twenex’s comments…is he just getting bitter in his old age?
1. Supreme Dragon and I may see things the same way, but we certainly don’t have any kind of “racket” going on between us. 2. My comments are the same as they’ve always been; it’s just that Windows shills get away with it, whereas people who object to Windows don’t. 3. I’m not old, and I’ve always been bitter. With good reason.
No, believe us, you used to be way more reasonable than you’re being lately. And I’m not attacking you, I used to really like your comments (You probably noticed that I’m one of your fans).
Despite my ideas about MS and Windows, I must admit that Singularity is really, really interesting. It’s not as revolutionary as many may think (it builds on many existing ideas), but it’s nice to see MS investing resources in something really new. Hell, this could save them from the huge beast Windows is right now, and that’s a good thing (for them and for the users of their products). It’s a shame there’s no public testing, even at this early stage of maturity, though
As an OS testing addict, I’d love to give it a try
Seriously.
I’d love to see what MS’s money is buying them. I’m interested to see how well it stands up in the wild relying on invariants rather than protected memory. That sounds good on paper, but the real proof is when the rubber meets the road.
Ok, I’ve got to fire up some VMs to get these shakes to go away.
Singularity predates the whole Vista thing.
Singularity predates Vista. It’s not vapourware, it’s a research operating system.
Their time would be better used removing the gigabytes of bloat and DRM/activation/WGA from Vista.
I care; this is a very interesting kernel for anyone who has a serious interest in operating systems.
html version by google: http://64.233.183.104/search?q=cache:aP5T7ouvMBIJ:research.microsof…
“I care; this is a very interesting kernel for anyone who has a serious interest in operating systems.”
Why would anyone who has a serious interest in operating systems still believe anything MS says? Here is a quality kernel:
http://www.kernel.org/
MS is a big company. This is a research paper, not a press release from the marketing department.
This might be interesting too
http://lambda-the-ultimate.org/node/2307
From the blurb in the first link, it looks like this borrows somewhat from the microkernel idea. Although I’m sure the idea extends beyond traditional microkernel design.
Looks interesting. Although I’m curious to see what a company that struggled so mightily with changing some of the fundamental design structures of their Windows OS could do with this on a production level. It seems like a big departure from the status quo.
It is based somewhat on the idea of microkernels, from the article:
“The first foundational feature common to Singularity systems is the Software-Isolated Process (SIP). Like processes in many operating systems, a SIP is a holder of processing resources and provides the context for program execution. However, unlike traditional processes, SIPs take advantage of the type and memory safety of modern programming languages to dramatically reduce the cost of isolating safe code.
Figure 1 illustrates the architectural features of a SIP. SIPs share many properties with processes in other operating systems. Execution of each user program occurs within the context of a SIP. Associated with a SIP is a set of memory pages containing code and data. A SIP contains one or more threads of execution. A SIP executes with a security identity and has associated OS security attributes. Finally, SIPs provide information hiding and failure isolation.”
On your second remark: this is a research project afaik and is not meant to go to production.
Also from the PDF..
As for the research project going production, I was speaking hypothetically. Sorry if I didn’t make that clear.
Who says it can’t be a real product; Windows NT started off as a project, a better UNIX than UNIX – now it’s the basis for Windows.
To me, it looks very Plan9’ish. Let’s remember, guys: all operating systems suck badly. UNIX has its own issues, hence the reason Plan9 was designed; Windows sucks, but due to inertia it’s going to stay.
Singularity will most likely start from the bottom and move its way up: embedded first, servers next, and then meet in the middle – the end-user desktop.
As for Singularity today, it’s a research project because it has a number of levels of development – it’s an entirely new concept; each level needs to achieve a certain degree of functionality, and each has a milestone. There’s no use shipping something that does it all but does it terribly, so it first has to reach the point where it can do everything current-generation operating systems can do, and exceed them.
Who says it can’t be a real product; Windows NT started off as a project, a better UNIX than UNIX
Hmm. Didn’t quite go to plan, then, did it?
Hmm. Didn’t quite go to plan, then, did it?
Twenex, get over yourself. Windows NT does stuff plain UNIX simply doesn’t do, like per-object ACLs, and a microkernel-like system where several components (printing, video, audio, etc.) run in userspace, making the whole system more crash-resistant. On top of that, Cutler had his hands in it, and let’s just say that Cutler ain’t stupid.
If you want to bash Windows NT, bash its userland and the crappy design decisions made by Microsoft, but not the NT system itself. It’s a well-designed and clean environment, certainly more advanced than plain UNIX – and thus “a better UNIX than UNIX”.
Mmm… not entirely correct. It’s a kernel, more like Linux if I could make some sort of comparison. So how is a kernel (an apple) a better OS than an OS (an orange)?
Mmm… not entirely correct. It’s a kernel, more like Linux if I could make some sort of comparison. So how is a kernel (an apple) a better OS than an OS (an orange)?
I’m talking about Windows NT, not the NT kernel.
No. You were talking about the NT system itself, as I quote below. Your words.
No. You were talking about the NT system itself, as I quote below. Your words.
Yes, the NT system. NT is more than just the kernel.
Like?
Twenex, get over yourself. Windows NT does stuff plain UNIX simply doesn’t do, like per-object ACLs, and a microkernel-like system where several components (printing, video, audio, etc.) run in userspace, making the whole system more crash-resistant.
That depends on your definition of “plain UNIX”. “Plain UNIX” of the NT era probably DOES qualify as “doing” ACLs.
As for the “get over yourself” comment, next time save it for a (Windows) troll.
On top of that, Cutler had his hands in it, and let’s just say that Cutler ain’t stupid.
I think it was Armando Stettner who said that ever since the first time Cutler saw Software Tools (an old Cygwin-like package for bringing Unix capabilities to VMS-era OSes), everything he has done has been dedicated to doing things the un-Unixy way. That may not be stupid, but it seems kinda sad. Not to mention going against the tide of history. And for all I know it might well account for the MS corporate culture we see today.
(Of course if DEC had had the foresight to open source VMS, we might all now be in a *very* different position.)
If you want to bash Windows NT, bash its userland and the crappy design decisions made by Microsoft, but not the NT system itself.
If you look over my post again, you’ll see that I did not bash the NT kernel. NT and Unix are both *much* more than the kernel, however.
Windows NT does stuff plain UNIX simply doesn’t do, like per-object ACLs, and a microkernel-like system where several components (printing, video, audio, etc.) run in userspace, making the whole system more crash-resistant.
Hmmmm, no. NT has always been based on a monolithic kernel, and video, sound and other drivers have traditionally always had a habit of bringing down the system. They’ve tried to change this with Vista, but I’m not entirely sure what they’re realistically trying to achieve there apart from zero compatibility with older drivers and hardware.
You can argue Unix/Linux has been better in the area of video stability since X is run in userspace.
On top of that, Cutler had his hands in it, and let’s just say that Cutler ain’t stupid.
Cutler wanted to make everything VMS-like, but once Windows compatibility was on the agenda that put paid to that.
They will just do what they did with NT.
I read that stuff just a couple of days ago.
They’ve got some pretty decent (if long) videos, too.
I’d suggest skipping the first and maybe second since they give a short summary at the beginning of the following videos.
Their design decisions are pretty radical.
For example, Singularity does not need hardware memory protection, since everything is verified statically and resides in SIPs (software-isolated processes).
That means they can get a 100% secure system and still pass around pointers.
I really recommend checking out their docs and videos!
That means they can get a 100% secure system and still pass around pointers.
Not really… It is illegal for code in a SIP to dereference a pointer into another SIP’s memory. The whole concept of Singularity is that you can’t pass pointers across SIP boundaries. You have to use contract-based message-passing channels to exchange data, and mutable objects may not be shared between SIPs.
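To make that concrete, here’s a rough sketch in Rust (purely an analogy of mine – Singularity’s channels are written in Sing# against contracts, and none of the names below are its real API). The point is that the only thing crossing the boundary is a well-typed message, never a raw pointer into the sender’s memory:

```rust
use std::sync::mpsc;
use std::thread;

// The "contract" in miniature: the only messages this channel may carry.
// (FsMsg and its variants are invented for illustration.)
enum FsMsg {
    Open(String),
    Close,
}

fn main() {
    let (tx, rx) = mpsc::channel::<FsMsg>();

    // The "server" side sees only well-typed messages, never a raw
    // pointer into the client's memory.
    let server = thread::spawn(move || {
        for msg in rx {
            match msg {
                FsMsg::Open(path) => println!("open {}", path),
                FsMsg::Close => break,
            }
        }
    });

    tx.send(FsMsg::Open("/etc/motd".into())).unwrap();
    tx.send(FsMsg::Close).unwrap();
    server.join().unwrap();
}
```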
The Singularity approach relies on a fantasy world where programmers rigorously define the behavior of their programs and how they may interact with other programs. It would be great if this were true, and many top developers do approach this level of rigor in their design process. But the vast majority of software development is governed by deadlines, features, and economics.
Singularity’s designers discovered that the fundamental barrier to reliable software systems is interdependence, just as the fundamental barrier to reliable programs is side-effects. But we cannot live without interdependence any more than we can live without side-effects. For many years we’ve had programming languages that only allow side-effects under controlled circumstances, and they have largely remained academic curiosities. Now Microsoft has developed an operating system that only allows interdependence under controlled circumstances, and it is perhaps even less likely to escape its research environment.
I firmly agree that programmers should be much more careful and explicit about the way they share data. But the solution is not to regulate developers through a mandatory isolation model. Reliable software can only come through voluntary adoption of more rigorous design processes and communication methods. The ideas that emerge will be rooted in the realization that interoperability and integration are powerful justifications for reliable IPC. D-BUS and SOA are two examples of how real-world requirements can drive the adoption of more robust and reliable data-sharing strategies.
I appreciate Singularity as an important exploration of how virtual machines can be used to govern data access within a hardware address space. It’s the beginning of a journey toward a new model for virtual memory management where the scope of a thread dynamically defines its contextual address space.
Operating systems must provide methods for processes to consensually share data with other processes. It’s up to the processes to define the terms under which data may be shared. It’s not the role of the operating system to get involved in mediating contracts between processes. Its role ought to be limited to enforcing these contracts.
Now Microsoft has developed an operating system that only allows interdependence under controlled circumstances, and it is perhaps even less likely to escape its research environment.
It’s research… What they do now doesn’t have to become an end product by itself, but the results could be used in upcoming projects that are intended for wide release…
Yes, they cannot pass every pointer and they cannot do unsafe casts, of course.
What they _can_ do is take a data structure from one SIP and transfer it to another by just passing the pointer to it. This means that the one process loses ownership of this data structure and can no longer access it, while ownership is transferred to the other process.
This means that you can safely transfer large data structures at very little cost by just passing pointers.
This is how SIPs communicate over channels if I didn’t misunderstand the videos.
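If I understood the videos right, this is essentially what move semantics give you, so here’s an illustrative Rust sketch (my own analogy, not Singularity’s API): sending a structure over a channel moves ownership, only the pointer travels, and the sender can’t touch the data afterwards:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    let big = vec![0u8; 10_000_000]; // ~10 MB, lives on the heap

    let receiver = thread::spawn(move || {
        // recv() takes ownership: only the (pointer, len, capacity)
        // triple travelled, not 10 MB of bytes.
        let buf = rx.recv().unwrap();
        println!("received {} bytes", buf.len());
    });

    tx.send(big).unwrap(); // ownership moves to the receiver
    // println!("{}", big.len()); // compile error: `big` was moved,
    //                            // the sender can no longer access it

    receiver.join().unwrap();
}
```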
The Singularity guys seem to think that shared data is a big problem and should be avoided.
They give the example of two processes sharing some data and one process failing.
The second process usually cannot know if the data are corrupted and must therefore terminate, as well.
On the other hand, if the communication between these processes had relied on message passing _only_, the other process would know that its data are not corrupted.
Of course it would have to save some more state in order to resume operation if the other process fails.
Otherwise it could not know which message it had just sent…
Yes, they cannot pass every pointer and they cannot do unsafe casts, of course.
What they _can_ do is take a data structure from one SIP and transfer it to another by just passing the pointer to it. This means that the one process loses ownership of this data structure and can no longer access it, while ownership is transferred to the other process.
That’s what I thought too. However, a pointer will never reference memory in another SIP’s address space, so you can’t technically “pass pointers” in the sense of letting another process access YOUR memory by handing it YOUR pointer – which prevents any way of corrupting another process.
The whole idea seems cool. Of course, we should check whether it works in the real world, and how it performs.
The Singularity approach relies on a fantasy world where programmers rigorously define the behavior of their programs and how they may interact with other programs. It would be great if this were true, and many top developers do approach this level of rigor in their design process. But the vast majority of software development is governed by deadlines, features, and economics.
You definitely have a point here, but it should also be noted that a migration to a contract-based infrastructure is already under way, and it’s not doing badly.
The whole world of web services is contract-based and is expanding. Many well-written web applications rely on web services for efficiency and scalability, and they’re doing well.
I agree that it makes life harder for developers, but while a few years ago this was very unlikely, right now there are thousands of websites and applications connecting to contract-based services with no trouble at all.
System-wide services in Singularity will be endpoints, and it’s not that difficult to migrate API calls made through statically linked headers to self-describing contracts. Once you have them for system calls, it’s easy to add more of them to perform IPC. It doesn’t look that difficult to me.
I agree some programmers will need to change their mindset but it looks to me they would need to do that anyway.
Plus, I think contract-based communication will help people to “think async”, something which is and will be more and more important, given that in a few years we will only have multi-core CPUs.
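As a rough illustration of that “think async” style, here’s a Rust sketch of a contract-style endpoint (entirely made up by me – the ReadReq type and its fields are hypothetical, not anything from Singularity): the request carries the channel the reply must come back on, so the caller is free to do other work in between:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical "contract": a request bundled with the channel the
// reply must come back on. Nothing here is Singularity's real API.
struct ReadReq {
    path: String,
    reply: mpsc::Sender<Result<Vec<u8>, String>>,
}

fn main() {
    let (endpoint, requests) = mpsc::channel::<ReadReq>();

    // The service handles requests on its own thread; callers never
    // block on shared state.
    thread::spawn(move || {
        for req in requests {
            let data = format!("contents of {}", req.path).into_bytes();
            let _ = req.reply.send(Ok(data));
        }
    });

    // Caller: fire the request, do other work, collect the reply later.
    let (reply_tx, reply_rx) = mpsc::channel();
    endpoint
        .send(ReadReq { path: "/etc/motd".into(), reply: reply_tx })
        .unwrap();
    // ...other work could happen here...
    println!("{:?}", reply_rx.recv().unwrap().map(|b| b.len()));
}
```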
I agree we will hear lots of complaints when/IF Singularity ever comes to production, but I’m sure it won’t last that long 🙂
Something struck me with all talk about static verification of security and such.
If an OS depends on static verification, wouldn’t that require software to be distributed in a high-level representation for the system to be able to trust it? In other words, refuse to run anything which wasn’t supplied as source code.
A technological dependence on open source… I’d like that.
“If an OS depends on static verification, wouldn’t that require software to be distributed in a high-level representation for the system to be able to trust it?”
It depends on the type of code it’s expecting to run. In the case of managed code (a la .NET), managed IL is easily translatable back to high-level source on the fly, so I’m assuming that type of compiled code wouldn’t specifically need source code files per se (remove the actual .NET implementation from my statement above; you should get the gist of it). Given that Singularity is largely written in managed code, I feel this would be a safe assumption.
But then again I could be misunderstanding you as I’m really not a systems guy.
I’m reading the Manifest-Based Programs bit now. And you are correct, it depends on MSIL to analyze program behavior.
In any case, I think it would be an interesting project to build a system around the idea that system components are deployed as source. If application configuration is a compile-time decision, a much higher degree of optimization should be achievable.
Edit: OTOH it seems doable at runtime http://www.cs.berkeley.edu/~bodik/research/oopsla05-specialization….
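To illustrate the compile-time configuration point, here’s a made-up Rust sketch (the constant and function are hypothetical, nothing Singularity-specific): when the configuration is a constant the compiler can see, whole branches simply disappear from the binary:

```rust
// The constant and function below are invented to show the idea:
// when configuration is fixed at compile time, the optimizer can
// delete every branch the configuration rules out.
const ENABLE_LOGGING: bool = false; // imagine baked in at install time

fn handle_request(payload: &[u8]) -> usize {
    if ENABLE_LOGGING {
        // With the constant false, this branch – and anything it
        // pulls in – is eliminated as dead code.
        eprintln!("handling {} bytes", payload.len());
    }
    payload.len()
}

fn main() {
    println!("{}", handle_request(b"hello"));
}
```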
I’ve bookmarked the pdf for reading later on, hence why I was merely speculating. Interesting to see that my initial thoughts are on the right track though. I’m looking forward to sifting through the nitty gritty as I’ve had my eye on singularity for a while now and it seems to be coming together nicely. It’s even cooler that after all these years MSR is still funding research on it.
For all the MS haters on this thread: outside of academia (heck, even if you include some of academia), MS Research is one of the most well-funded, brightest think tanks in the software realm. MS spends something to the tune of 5 billion annually just on research… if you think they’re just throwing that kind of money around for the heck of it, you’re sorely mistaken.
For all the MS haters on this thread: outside of academia (heck, even if you include some of academia), MS Research is one of the most well-funded, brightest think tanks in the software realm. MS spends something to the tune of 5 billion annually just on research… if you think they’re just throwing that kind of money around for the heck of it, you’re sorely mistaken.
The non-research divisions of Microsoft are some of the most well funded, brightest software companies in the software realm…And they brought us Vista (to say nothing of other Windows releases).
Unfortunately, the axiom that you can’t solve a problem just by throwing money at it is as true in business as it is in government (and if Microsoft really does hold some of the brightest minds in software development, one can only conclude that the corporate culture stifles that brilliance). Also unfortunately, some people don’t seem to get that.
“Unfortunately, the axiom that you can’t solve a problem just by throwing money at it is as true in business as it is in government”
Now that’s the Twenex I used to know and love… I agree with you there. MS is getting a little fat around the middle [management]; the problem is that once you’re a Microsoft employee it’s incredibly hard for them to get rid of you, hence why they’ve ballooned from ~50k employees to ~75k in just a few years… they are literally creating jobs rather than trimming off the excess. Unfortunate but true, and it’s a tough issue to solve diplomatically.
From what I’ve heard, MS Research doesn’t suffer from this issue, so I’m really hoping Singularity stays over there for a while, and if/when they do bring it over to the core business units, they form a new type of management structure for the team that takes over Singularity. A new kernel means a new type of business unit, IMO.
I know some MSR projects eventually go into commercial production. Others are released non-commercially (MSPL or MSRL or public domain), and still others just never see the light of day again.
I’d like to see the numbers on that…
Practically, yes. The representation doesn’t have to be source-level, just high-level enough to express all the basic source-level constructs in a type-safe, memory-safe way.
Interesting. Though the ideas are hardly new, it’s a different way to look at an operating system – instead of layering static checks, etc. onto a kernel via userspace and runtime, it embeds the paradigms and advantages into the kernel itself. Unfortunately, if you see this kind of OS released, I doubt it will be from MS, as they have far too much invested in the old APIs and ABIs to make such a radical switch. In any case, anyone managing to create such a system without a 10% (kernel) performance penalty will have me impressed.
RE: Plan9: It never occurred to me, but it does seem somewhat like it, though more like Inferno (with the Limbo language). As to the embedded comment, however, I’m not so sure, as I can see this sort of OS having little to no advantage on embedded machines, where suitability is usually based on cost and speed of development, hence the tendency to stick to embedded operating systems where the necessary drivers already exist.
RE: Trolls: The article’s about a research OS, thus the parent company really doesn’t matter. In any case, Microsoft Research is hardly like the marketing or software divisions.
By the time a system like Singularity could be released by Microsoft (if it’s ever released), it shouldn’t be hard to create a user-mode runtime environment to run old Windows applications (similar to WINE), and I don’t think performance should be a problem, because computers will be much faster than the current generation.
Time to market probably isn’t the issue, even if in other respects Singularity mirrors Itanium: No matter what it has going for it, Singularity has the momentum of Windows and Linux going against it – and the latter can be recreated almost at will by any Tom, Dick, or Harry Hacker.
Have you heard of Go! OS? It was a similar idea to this: no protection, compile-time verification. It was based on simply changing the x86 segment registers to swap between processes, or modules.
I personally think it won’t work if you force people to use only your programming language, or expect all users to write verifiable code. What if someone wants to write in assembler or Perl or C? A real OS must survive out in the wild and cover every possible use.
The rest (ipc etc.) resembles microkernel design.
I hope it will get somewhere, but software engineers often think too high-level (CS theory of execution, memory, etc.), which is OK, but it is always overlooked that everything runs on hardware (a world of oscilloscopes, waveforms, nasty devices). An OS kernel *always* carries some of the dirt (unverifiable, untheoretical, unrelated hardware issues) in the name of hiding certain problems with workarounds. The paper mentions they neglect ioctl() and similar uncertainty factors, but a realistic system will always have those.
I think a novel OS would require novel hardware. For example, lots of devices (Ethernet, ATA, USB) could have been designed with much simpler register interfaces, hiding away lots of details. USB especially is a complete mess. Only a complete idiot could write a bus spec that requires 3 different enormous stacks and a hub daemon constantly running.
ASM aside, .NET provides an abstraction over the specific language being used by the coder. You could write in C#, I could write in VB, and it all ends up as IL to the system.
Singularity can use hardware memory protection.
It just doesn’t have to.
What this means is that you could run all verified code in one address space and, in addition, give all unverified and potentially dangerous code its own address space.
This way, you can avoid the cost of hardware based memory protection where you don’t need it but still use it if it makes sense. I think there was a software engineer mentioning this fine grained control in the third or fourth video…
Maybe I’m missing something, but I don’t understand the design reason for using flat memory and software-based isolation between processes, aside from performance. I assume system calls would be faster in a flat memory configuration because the CPU no longer needs a privilege-level switch into ring 0 on every system call. There may also be some CPU time saved by eliminating the need to convert virtual to physical addresses.
The problem I see, and it’s a major problem, is that there is nothing but the MSIL Runtime preventing me from grabbing memory anywhere I want it. The PDF even states that this is possible given a bug in the Sing# runtime. From what it looks like to me, this includes inside the kernel.
If I can write anywhere in the system without triggering hardware exceptions, that is IMHO a very bad thing. At least with hardware-level protection, user-level programs trying to write into system-level memory are at worst going to trigger exceptions and get themselves killed. Here, in theory, it’s possible for a user-level program to patch a running kernel.
I know it’s just a research project, but I find it funny that Microsoft researchers put so much faith in software-based protection of the ENTIRE system when they can’t even write secure software to begin with. The rest of the industry is moving to MORE hardware-based protections (trusted computing chips, anyone?) and here we’re stepping backwards.
The rest of the industry is moving to MORE hardware-based protections (trusted computing chips, anyone?) and here we’re stepping backwards.
That’s probably what they’re counting on in the future – preferably on a system that will only run the IL runtime and Microsoft’s new OS.
I find it funny that Microsoft researchers put so much faith in software-based protection of the ENTIRE system
You can run several isolated OSes under a hypervisor.
The OS itself doesn’t need to be hardware-protected.
So, the protection offered by verifiable/safe languages is enough.
They rely on static verification. In other words security constraints are a compile time thing (compile time being install time in this case).
> The problem I see, and it’s a major problem, is that there is nothing but the MSIL Runtime preventing me from grabbing memory anywhere I want it. The PDF even states that this is possible given a bug in the Sing# runtime. From what it looks like to me, this includes inside the kernel.
With traditional OSes, the problem, and it’s a major problem, is that there is nothing but the hardware protection preventing me from grabbing memory anywhere I want it. Bugs in the hardware protection are going to make the OS vulnerable just as bugs in the MSIL do.
> If I can write anywhere in the system without triggering hardware exceptions, that is IMHO a very bad thing.
Which is why this situation cannot occur, bugs in the MSIL or hardware protection aside. With “managed code” as it is called, there is no sequence of instructions that could write to arbitrary memory locations (due to the unforgeability of arbitrary pointers).
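A rough analogy in Rust (my own sketch – this is not how MSIL or Singularity literally work): safe code has no way to turn an arbitrary integer into something it can dereference, and the verifier’s job amounts to rejecting anything that tries:

```rust
fn main() {
    let addr: usize = 0xdead_beef;
    let p = addr as *const u32; // creating a raw pointer is allowed...

    // ...but dereferencing it is not expressible in safe code:
    // println!("{}", *p); // error: dereferencing a raw pointer
    //                     // requires an `unsafe` block

    // A verifier that rejects `unsafe` (as Singularity's verifier
    // rejects unverifiable code) thus guarantees no program can
    // fabricate a reference into memory it was never handed.
    let _ = p;
}
```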
“rest of the industry is moving to MORE hardware-based protections”
Ya… uh huh… right. I’ve seen maybe 1 or 2 computers out of the thousands that have TPM enabled. And things like XD will help regardless.
Singularity aims to use software for protection because then it protects EVERYONE, REGARDLESS. If it relied on a hardware chip for protection, or any other hardware change, it would narrow the field of what it protects. By developing a strong set of software protections, it will work regardless of hardware.
Microsoft isn’t the Apple of the 1980s–90s. They’re not gonna develop something that has to run on their own hardware to be effective. Microsoft has a wide view, not a narrow one: they target the world – old and new computers from various manufacturers – and as such can’t depend on a hardware resource being present.
Yes, there’s a bug in Sing#, but then again, this isn’t a production OS, and due to its inherent design it shouldn’t take them too long to find and fix it. The good thing is they know about the bug and will resolve it, along with others that might rear their ugly heads. Seems like a pretty secure development cycle to me, especially for a non-production OS.
What about software? It looks like it will require special compilers, manifests, security hashing algorithms, etc.
Whereas when I program in C, for example, I can write a program that takes up maybe 100 bytes in the process image. That’s exactly why we still have C. What you see is what you get.
Well, for things like this to work on old computers, it must be very rigorous and compact (hint: C). It should not add more requirements than existing ones (manifests, security hashes, just to compile a few bytes of assembler). Anyway, I like the idea, but maybe as a sandbox, not a whole OS. And by the way, I’ve heard that Vista requires 1 GB of RAM to work well? Is that how they support old hardware?
I wonder why nobody has mentioned JNode yet, as it’s a similar thing done in Java. The IPC seems to be a bit different (shared objects are allowed), but the basic idea about protection due to managed code is the same.
http://www.jnode.org