MSDN’s Channel 9 has two videos in their ‘Going Deep’ series which dive, well, deeper into Singularity, the operating system in development at Microsoft’s research department. The first of the two is about, among other things, Software Isolated Processes (SIPs). The second of the two actually shows Singularity in action.
Whatever, it’s still C in the background no matter how much you manage it. HAHAHAHAHa
It’s just Vista+. It’s probably best for them to go completely .NET, of course, but Java is bettttter. HAHAHAHAHAHAHAHA.
If you’re doing a hardcore simulation, it won’t really matter what they write the OS in, I don’t think, as you can access the chip through SDL and the like, or hopefully the drivers would have to be in C. It also helps that Intel has opened some things up for that, because naturally MS would just lock things in even more with closed-source .NET drivers, and people would be forced into SDL, which would be weird. Coding blind.
There is also an OS written totally in Java, but then Java is open source. The problem I might have with this is the portability factor: with Vista it’s already not that great, and with Singularity it won’t be great at all, though it will probably still be a nice OS for certain things. Open .NET up and see how it competes, MS.
Singularity is technically a black hole.
In the video, the kernel uses 17,048 KB. 17 MB? Is this normal?
Modern kernels will use as much RAM as they can, trying to make efficient use of whatever is available. So while a kernel might only need 2 MB, it might actually use 50.
Wrong. The buffers aren’t part of kernel memory, generally, any more than mapped pages from the filesystem are. They’re usually managed by the VM system in the kernel for other applications’ use. Also, if it were buffering inside the kernel, I’d expect the kernel to actually show up as several hundred megs in size.
However, if all the code running is .NET IL, then there would have to be an entire VM in the kernel, as the alternative would be to have the VM statically compiled into each process as native code. This is probably the source of the relatively large memory footprint.
Isn’t every bit of code managed?
I don’t consider pointers managed.
I don’t consider pointers managed.
You mean a null pointer? :-)
It’s more complicated than that on other OSes. The default memory split in NT is that 2GB of the address space is reserved for the kernel (forgive me if I botch this). This is largely meaningless until you want to have a 2.1GB process of your own. To my knowledge there is a way to set NT down to using less, but very few people are concerned with using 2GB of RAM in a single process.
I don’t really understand calling the kernel a process anyway.
Unix. Did anyone see him type in ‘dir /dev’? Very funny.
Maybe it’s Microsoft’s secret linux distro.
I believe MS had a little thing called Xenix, so I’m not sure what you define here as “cloning”. Xenix was an extremely popular Unix variant (indeed, it was a true certified Unix, IIRC) that MS sold to SCO very early on. Linux came long after Xenix, therefore calling the use of a Unix/Linux-like directory structure a “clone” of Linux is really inaccurate.
Sorry, this was meant as a reply to the next comment below…
Its shell is a Unix/Linux clone, obscured by some modifications to look unique, e.g. “dir” instead of “ls”.
http://www.vitanuova.com has another interesting OS which is based on running “managed code”, i.e. on a VM.
One usually runs Inferno hosted, but there is a native version too.
It is nice to see that MS is doing some real research. But the OS didn’t impress me too much; they didn’t come up with anything new or exciting, just an ordinary microkernel. And their language of choice... well... horrid... plain old C (and yes, I know C# and what managed code is). What would really be interesting is if they did an OS not in an imperative language but in a functional one, e.g. by incorporating MS’s other R&D projects, like F#.
No, Singularity is not just a microkernel in a different language, and yes, it isn’t totally new (but then, what is?); you can find more information under >>Language-Based Protection<< in this article [1]. I don’t think you could write an OS in F# (or at least it wouldn’t be wise), since F# is basically an OCaml dialect, and the OCaml people say they’ve created an application language, not a systems language.
Similarities to UNIX shouldn’t surprise anybody either, if you look at OS books they are very UNIX heavy.
[1]: http://www.computer.org/portal/site/computer/menuitem.5d61c1d591162…
But the OS didn’t impress me too much; they didn’t come up with anything new or exciting, just an ordinary microkernel.
First, plain old microkernel? Show me a viable microkernel operating system that really functions like a microkernel OS should–with all device drivers in userspace and only the VM service in the kernel. Sure, MINIX, L4, Mach, Plan9, and others are headed down that path, but none have been successful. If Singularity really was a true microkernel operating system, that would be impressive.
But it isn’t. Although Singularity has a really small surface area of trusted code, lots of managed code lives in kernelspace to avoid the overhead of IPC and context switches. Singularity, believe it or not, is closer in design to a mainframe operating system like IBM’s VM. Isolated processes are nothing new in general, but they are revolutionary on “new-school” architectures like x86, PPC, and SPARC. The classic mainframes in the 1970s and 80s all had isolated processes, although they were at least partially isolated in hardware. The idea of a strictly software-isolated process model might be uncharted territory (not sure).
What would really be interesting is if they did an OS not in an imperative language but in a functional one, e.g. by incorporating MS’s other R&D projects, like F#.
I’m sure that’s possible in theory, but it wouldn’t be easy, and it certainly wouldn’t be pretty. Operating systems are all about side-effects, and functional programming (strictly speaking) doesn’t allow them.
C is the language of choice for systems programmers everywhere. The C language is the greatest triumph of computer science. It’s the wheel, it’s fire, it’s sliced bread. As they say, choose the right tool for the right job. But C will continue to be right for many jobs well after the newer languages fall out of favor.
I’ve read about ‘side-effects’ in imperative languages before (possibly in your posts), but what exactly is meant by them?
I’ve read about ‘side-effects’ in imperative languages before (possibly in your posts), but what exactly is meant by them?
A side-effect is something that changes the state of the system. Simple example (JavaScript):
pure_functional = function(a){ return a + 2; }   // computes a new value, changes nothing
side_effect = function(a){ a += 2; return a; }   // updates a before returning it
In the first case, you get back a+2 but a remains the same; in the second, you get back the same value, but a gets changed along the way.
This is a rather trivial example, but more significant things like I/O are also side effects. Wikipedia has a rather extensive treatment of functional programming, side effects, etc. if you’re interested.
That’s not so much a side-effect of imperative languages as it is how a language handles passing data. How about another example?
How about another example?
The point wasn’t to distinguish between pass-by-reference and pass-by-value; in fact, the entire reason that distinction is necessary is that the language has side effects.
Consider the snippet:
x = 1
x + 2
print x
x = 2
print x
The output of the first print statement is 1. Even though it comes after an addition operation on x, the value is still 1, because addition is an operation with no side effects.
Now consider the second assignment operation. Whatever value was previously in x is now gone; the value of x is now 2. This is called a destructive update. The state of x is modified so that future references to x (the print statement) will now get the value 2.
In a language without side effects, any past or future reference to x will always retrieve the same value. Similarly, functions will always return the same value given the same inputs; this is the concept of referential transparency.
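To make the referential transparency point concrete, here is a minimal JavaScript sketch along the same lines as the earlier example (the names are purely illustrative):
x = 1;                                       // mutable state
bump = function(){ x = x + 1; return x; }    // destructive update: result depends on, and changes, x
add_two = function(n){ return n + 2; }       // referentially transparent: same input, same output
bump();       // 2
bump();       // 3 -- the same call now gives a different result
add_two(1);   // 3
add_two(1);   // 3 -- always 3 for input 1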
First, plain old microkernel? Show me a viable microkernel operating system that really functions like a microkernel OS should–with all device drivers in userspace and only the VM service in the kernel. Sure, MINIX, L4, Mach, Plan9, and others are headed down that path, but none have been successful. If Singularity really was a true microkernel operating system, that would be impressive.
As an aside: being a microkernel has never been a goal. And with managed code, there should be little point in being a microkernel, except if you wanted all services to be restartable.
But it isn’t [a microkernel]. Although Singularity has a really small surface area of trusted code, lots of managed code lives in kernelspace to avoid the overhead of IPC and context switches.
Well technically, all the code in the system lives in kernel space and runs at Ring 0. That they can do this and still claim security is, by my understanding, the primary finding of the project.
If you watch the older videos, IPC between processes is basically passing a pointer to a page in memory. There is no context switch unless you deliberately set one up in hardware (what a good chunk of this video is about). My impression is that the processes in Singularity are closer to the weight of threads that don’t share memory (erlang processes would be a good example).
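A rough way to picture that ownership-transfer style of message passing (purely an illustrative JavaScript sketch, not Singularity’s actual API; every name in it is made up):
make_channel = function(){
  var mailbox = [];                      // messages waiting to be received
  return {
    send: function(holder){
      var msg = holder.msg;              // hand over the reference ("pointer"), no copy
      holder.msg = null;                 // sender gives up its handle -- nothing is shared afterwards
      mailbox.push(msg);
    },
    receive: function(){
      return mailbox.shift();            // receiver now owns the very same object
    }
  };
}
ch = make_channel();
producer = { msg: { data: "a large buffer" } };
ch.send(producer);                       // transfers ownership instead of copying
// producer.msg is now null; ch.receive() returns the original object, never a copy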
I’m pretty sure this research leans more towards the applied than the theoretical. It was started when MS began their big security push and the goal seems to be making the most robust OS possible and then showing it can be made into a workable production system. They occasionally do less applied stuff, but I always get the impression that “windows longhorn/vista+2” is the target for the project.
First, plain old microkernel? Show me a viable microkernel operating system that really functions like a microkernel OS should–with all device drivers in userspace and only the VM service in the kernel.
Well, quite. The problem is that a microkernel seems like a great idea, and then inevitable compromises are made over things like performance, and you end up coming to the same conclusions everyone else has come to for the past twenty or thirty years.
Why just one, and not several microkernels for critical areas?
With multicore, why not just have one core run the kernel and then use the other 1, 3, or 31 for user-level threads? That should prevent latency, since there’s no switching, right?
C is the language of choice for systems programmers everywhere
Not everywhere. IBM’s VM, which you mention, isn’t written in C.
True, I know a flight control centre that almost solely uses Ada for their mission-critical systems.
So, if I understood correctly, what they are trying to do is run as much of the operating system as possible in a VM – including device drivers and most of the kernel. Employ some static analysis techniques that would improve code robustness. Take advantage of multithreading and multicore CPUs as much as possible, etc.
To those who complained about the use of C… it’s only a few hundred lines of C. Even operating systems written primarily in C include some assembly, for example. You may be able to check and recheck by hand a few hundred lines of C and make sure it’s right, but not millions of them!
I am not exactly “sold” on the approach they are taking, and I am a bit worried about MS’s usual messiness. But to those who say this isn’t exactly groundbreaking: although I agree, let’s not forget that mediocre programmers “borrow” good ideas; excellent programmers steal them!
Only watched the second video with the sound down (wife reasons). It didn’t half remind me of LTSP booting off the net card. Just an observation.
All the best
Alie
I’d like to see somebody implement a browser in managed code.
Seems like a natural fit.
Whatever. Most interestingly, he tries to kill a process around 5:00. Seems like they forgot to copy that.
“kill is not a command”…
you can also watch it on google video: http://video.google.com/videoplay?docid=3063325760356449100
So? Is it still there around 5:00?
It’s not the kernel that MS failed with in the past; it’s the higher processes and applications on top of it, like the horrible Internet Explorer, Windows Explorer, destruction-unlimited root/user permissions, plus many, many more.
The MS kernel now is excellent; instead of writing a new kernel from the ground up, which even its own team won’t know how to interact with and which will only be used in a future when MS will have fired those guys, it’s better to concentrate on planning a rewrite that takes into account every part of that future design and will be very well documented and functional, unlike the current cmd.exe tools, which never work well and always change from version to version (9x to 2000/XP/2003 to Vista). I couldn’t find a simple command in any Windows version so far to shut down an Ethernet interface and then start it up again, like “ifdown” and “ifup” in Linux/Unix.
Besides, this project is not going to be implemented until the year 2030, when judgement day arrives, judging from the eternal time MS took to develop Vista: 6 years, for what is a few simple modifications of Windows Server 2003, I mean, not a from-scratch OS.
And they will probably kill this project when they feel it’s unachievable, much like the early Vista interface project that had the room, the arrows, the navigation man, desks and doors.
Good Luck though!
I couldn’t find a simple command in any Windows version so far to shut down an Ethernet interface and then start it up again, like “ifdown” and “ifup” in Linux/Unix.
Isn’t that what ipconfig is for?
ipconfig /renew id_of_connection
I don’t know if it works in 95-Me, but I’m sure it works since Win2000.
“ipconfig /renew” is like an “ifdown” followed by “ifup”, or like “ifconfig restart”; it is nothing like ifdown by itself at all.
The MS kernel now is excellent; instead of writing a new kernel from the ground up, which even its own team won’t know how to interact with and which will only be used in a future when MS will have fired those guys, it’s better to concentrate on planning a rewrite that takes into account every part of that future design and will be very well documented and functional, unlike the current cmd.exe tools, which never work well and always change from version to version (9x to 2000/XP/2003 to Vista).
cmd is well-documented. There are several sources for documentation of cmd.exe and its commands:
1) Type help at the prompt for details, or append /? to the command you want to use to see usage details.
2) Go to Windows Help and search for cmd.
If you’re running XP (may work on other versions), this link should go directly to the relevant documentation:
%SystemRoot%/Help/ntcmds.chm
3) There’s also a reference on Technet:
http://technet2.microsoft.com/WindowsServer/en/library/fdc12a63-df4…
Besides, this project is not going to be implemented until the year 2030, when judgement day arrives, judging from the eternal time MS took to develop Vista: 6 years, for what is a few simple modifications of Windows Server 2003, I mean, not a from-scratch OS.
And they will probably kill this project when they feel it’s unachievable, much like the early Vista interface project that had the room, the arrows, the navigation man, desks and doors.
This, like Task Gallery, Data Mountain, et al., is a research project. These projects are exploratory, and may not necessarily be feasible in their research form for commercial products. Microsoft takes the algorithms and technologies developed and incorporates them into future products where possible. There have been several technology transfers from MSR to MS over the years, and many research technologies have made it into current and past products. Just because you aren’t familiar with this happening doesn’t mean it hasn’t been done.
Task Gallery, for instance, explored redirection of rendering to the GPU, as well as the feasibility of the 3D UI you’ve mentioned. These technologies have had several implementations, from GDI2k and Chrome/ChromeEffects up to Vista. Vista implements the rendering redirection and a 3D desktop (as well as other technologies from this and other MSR projects). It just doesn’t use the same UI as Task Gallery, likely because the current paradigm proved more efficient. As I said, however, though it looks 2D, Vista’s desktop is 3D, and you will see applications and the OS use 3D in ways that make sense for data visualization, increased screen real estate, and productivity gains. It won’t just be used because it’s there. You can see examples of its use already if you check around:
http://channel9.msdn.com/tags/WPF
http://channel9.msdn.com/tags/Windows+Vista
http://www.seewindowsvista.com/
The people that did the Task Gallery research were not fired when that project ended. Many of them are in the VIBE group at MSR, working on new research projects. You don’t automatically implement everything that comes out of research, and you don’t expect every project to result in products. Some of MSR’s work isn’t even related to markets in which MS participates.
Back to Vista…
judging from the eternal time MS took to develop Vista: 6 years, for what is a few simple modifications of Windows Server 2003, I mean, not a from-scratch OS.
1) How many current versions of OSes in existence today were written from scratch? OS X, Linux, Unix? Certainly not.
2) Vista is a lot more than just “a few simple modifications to Windows Server 2003”. There’s more new stuff than I can name without checking documentation, but the list includes rewrites for almost all of the major subsystems people would consider part of the OS. A partial list includes:
Windows Presentation Foundation/WPF (codenamed “Avalon”) – API + New presentation system (UI, Window Manager (DWM), driver architecture [WDDM – mostly user mode, supports shared resources, scheduling and virtual memory for GPU], color system (WCS), media systems (Media Foundation), et al.)
Windows Communications Foundation/WCF (codenamed “Indigo”) – API + New networking stack (implements new RFCs and WS-* standards, new performance technologies from MSR, native WiFi support, unified IPv4/v6 support, hardware acceleration support, a new filtering/monitoring platform, network access protection, and more).
Local and distributed ACID Transactions support in the kernel (Kernel Transaction Manager/KTM), file-system (Transactional NTFS/TxF), Registry (TxR) and via the Distributed Transactions Coordinator (DTC).
New I/O (Prioritization, Completion/Cancelation), logging (CLFS), power management, memory management, scheduling, and tracing technologies in the kernel.
Inclusion of security technologies like Session 0 isolation, ASLR, and (for 64-bit systems) Kernel PatchGuard.
New PnP architecture for locally connected and wireless devices. Support coming for UWB devices like WUSB.
New Workflow (Windows Workflow Foundation/WF) and Automation engines, codec interfaces, filtering, and indexing infrastructure.
New Audio subsystem (user mode, better discoverability and presentation of device endpoints, higher-quality mixing, lower latency, more flexible for ISV additions).
New image-based installation and boot architecture for the OS, restart manager for applications, and installer architecture for device drivers.
Out-of-band additions:
New command shell (PowerShell) and PowerShell-based MMC.
All the features you mentioned above could be done by Apple in just 2 years, and they would probably be done better. Microsoft, with its huge resources, could build an OS from scratch in 2 years if they wanted to, but they don’t. Anyway, the MS design really sucks in comparison with OSes like Apple’s, Linux and Unix; who said that transparency and effects are more important to me than a decent shortcut system, which is even missing in Vista!
Apple and Linux are easier to work with because everything works with just one set of shortcuts that doesn’t change from app to app. And the graphics dead zones in Microsoft’s UI show no respect for small screens, which get eaten up by empty space.
Internet Explorer and Windows Explorer are both underdeveloped and much older technology than what is already available in Linux. Check Konqueror and its view profiles and tell me whether Windows Explorer has anything like that to reduce the clutter of 6 open windows.
MS’s effort is unfortunately too little and taking too long! I doubt it will be as interesting as the old days of 1995. And now I am working on moving all my networks to OS X and OS X Server for peace of mind from viruses and instabilities.
All the features you mentioned above could be done by Apple in just 2 years, and they would probably be done better. Microsoft, with its huge resources, could build an OS from scratch in 2 years if they wanted to, but they don’t. Anyway, the MS design really sucks in comparison with OSes like Apple’s, Linux and Unix; who said that transparency and effects are more important to me than a decent shortcut system, which is even missing in Vista!
Development doesn’t work like that for any project. You can’t just throw more people at a problem (especially when they wouldn’t be familiar with that area) and shorten time to market.
Apple tried to build their own OS and failed (see Copland). They then used technology from NeXT, Unix, and open-source projects to put together OS X. So no, they couldn’t do it in 2 years. MS actually did most of what you wanted: they ripped out and replaced most of the core functionality with new code in Vista, and did so while also soliciting feedback from their developer community about the APIs, standardizing many of the technologies, and dealing with governments monitoring their moves. This effort didn’t take much longer than it did when they originally created NT, but now you have to find something else to argue about.
Apple and Linux are easier to work with because everything works with just one set of shortcuts that doesn’t change from app to app. And the graphics dead zones in Microsoft’s UI show no respect for small screens, which get eaten up by empty space.
I can only guess you’re talking about apps like Word or PhotoShop that have a background space in their document window. If you care about space, resize the app and move its toolbars outside the main window for a more Mac-like experience. If you follow this paradigm, however, you’re actually taking up a lot more space, as you now have separate toolbars and windows on-screen rather than everything being contained in its own window.
Internet Explorer and Windows Explorer are both underdeveloped and much older technology than what is already available in Linux. Check Konqueror and its view profiles and tell me whether Windows Explorer has anything like that to reduce the clutter of 6 open windows.
From what I could find on “view profiles” online, it seems that this functionality is available in IE 7 and Explorer. IE 7 allows you to save the tabs of a window. The window’s position is automatically saved on exit, but can also be customized via a shortcut. File Previews have been supported in Explorer for years. Their default exposed functionality was scaled back in XP, but you can get it back in various ways, including via WLD Search. Vista brings back their full functionality by default.
MS’s effort is unfortunately too little and taking too long! I doubt it will be as interesting as the old days of 1995. And now I am working on moving all my networks to OS X and OS X Server for peace of mind from viruses and instabilities.
If you believe Vista has less to offer than Windows 95, you clearly know very little about the OS. As I said before, Vista’s development schedule isn’t much different from that of the original NT, and more new functionality has gone into Vista than NT. Most instability in Windows is due to bad hardware or drivers. This isn’t a big issue today, but with more code moving to user mode, it’ll be even less of an issue starting with Vista. OS X has its share of instabilities as well, as do other OSes. It’s gained more exploits via dependence on code used in other OSes and its switch to x86. Viruses are only a matter of time. Good luck with the switch, but it’s not a panacea for security or stability.
“From what I could find on “view profiles” online, it seems that this functionality is available in IE 7 and Explorer.”
No, it’s not available; I’m running the latest Vista build, 5472.
“I can only guess you’re talking about apps like Word or PhotoShop that have a background space in their document window.”
No, I am talking about the whole OS GUI: windows, wizards, popups, and then applications.
“Most instability in Windows is due to bad hardware or drivers.”
If we exclude bad hardware, the #1 factor for instability is Windows’ reduced immunity, much like an AIDS patient; the poor stability of drivers is due to Windows’ poor handling of them. Then comes #2: viruses.
“Viruses are only a matter of time.”
Tell a Mac user this and he’ll probably end the discussion with you. Even on Linux, I’ve used Fedora Core 5 for 8 months without a single unexplained behavior.
“Apple tried to build their own OS and failed (see Copland). They then used technology from NeXT, Unix, and open-source projects to put together OS X.”
Let me tell you of a worse situation, done by MS: they actually bought MS-DOS for a couple of thousand dollars and merely labeled it Microsoft, and the next day they implemented it on IBM PCs. What a great job. And let’s remember that the MS backup utility was stolen in the first place.
If we exclude bad hardware, the #1 factor for instability is Windows’ reduced immunity, much like an AIDS patient; the poor stability of drivers is due to Windows’ poor handling of them.
This is ridiculous. Any platform can be brought down by bad kernel-mode code. On most OSes, most drivers run in kernel mode.
Then comes #2: viruses. … Tell a Mac user this and he’ll probably end the discussion with you. Even on Linux, I’ve used Fedora Core 5 for 8 months without a single unexplained behavior.
So he ends the conversation. I don’t care if he can’t take the truth. Many Mac users are known for their denials of virtually anything that doesn’t fit what Steve Jobs tells them. Viruses are an inevitability. When virus writers deem the Mac to be worthy of their time and money, they’ll write viruses for it. Apple has lowered the barriers to entry significantly by using shared code and switching to x86. If there are people hacking around Apple’s code to run MacOS on their PCs, there are probably people out there who will eventually start creating viruses. In some cases, they wouldn’t even have to change their code and they’d be able to exploit both MacOS and *n*x. Anyone thinking they’re safe just because they’re running a Mac is sorely mistaken. Both *n*x and Apple computers have had viruses written for them in the past.
Let me tell you of a worse situation, done by MS: they actually bought MS-DOS for a couple of thousand dollars and merely labeled it Microsoft, and the next day they implemented it on IBM PCs. What a great job. And let’s remember that the MS backup utility was stolen in the first place.
More like $50+ thousand, and they didn’t just stick the MS-DOS name on it and give it to IBM. On top of paying SCP/Tim Paterson (DOS’ developer) for his work, they also hired him, and it took them nine months to develop the OS to meet IBM’s specifications. Their backup utility wasn’t stolen. You are thinking of their compression utility, which also wasn’t stolen. It did, however, infringe on patents held by Stac Electronics. None of this has anything to do with NT and the fact that MS has provided a from-scratch OS before, has replaced and upgraded several systems since, and with Vista, has come a lot closer to doing a from-scratch OS again than Apple has.
It would be nice if they could bring a second team to start a GUI project on top of Singularity and rethink the GUI paradigm from the ground up.
Another wheel from MS.