“Ok, that headline may be a bit overblown – but Microsoft Research has released part of a report on the Singularity kernel they’ve been working on as part of their planned shift to network computing. The report includes some performance comparisons that show Singularity beating everything else on a 1.8Ghz AMD Athlon-based machine. What’s noteworthy about it is that Microsoft compared Singularity to FreeBSD and Linux as well as Windows/XP – and almost every result shows Windows losing to the two Unix variants.”
Something doesn’t strike me right about that blogger’s commentary. I understand his point, but I don’t think it applies to benchmarks.
Microsoft went out of their way to do things exactly the same on all four platforms. In fact, it sounds like they did the opposite of what NVidia and ATi did with the 3DMark tests, where they modified their drivers to detect when 3DMark was loaded and then started dropping polygons to inflate their scores.
So what I’m saying is, the blogger’s point applies to real-world code, but it does not apply to benchmark code.
I halfway agree.
Assuming the blogger is correct, then a reverse test doing things the 'Unix' way would show Unix ahead of Windows, but what would that prove? Not a lot (until you compared them).
However, why would a Windows programmer care at all about making Windows work better the Unix way? It's not going to benefit a single Windows user. Why not enhance, tune, and improve the things Windows does the Windows way, so that next time the benchmarks will be more even?
Of course this goes the other way too… why would a Unix programmer make Unix better at things Windows does when Unix does them a whole different way?
The blog starts off OK, only to degenerate fast when he decides to pick on the parts where Windows did better than FreeBSD and Linux, for no reason.
So maybe there are better ways to do that thing in Linux, but what if there are also better ways to do it in Windows?
The benchmarking was simply done to compare one specific case that was similar across all the platforms… See it as comparing different kinds of apples, rather than comparing apples and oranges, which would be the case if each was implemented in the optimal way on each platform.
Actually, it looks like the blog author just couldn't accept any one case where Windows did better than FreeBSD or Linux, even if it did worse in most of the other tests.
He is hosted on ZDNet…
Moreover he’s Paul Murphy, who’s basically ZDNet’s official zealot. Basically, give him any piece of news, and he’ll turn it into a poorly researched, poorly written article on open source vs. evil, where evil is usually Microsoft and usually ends up being on the losing side. He’s little more than another hack that fancies himself a hacker.
Moreover he’s Paul Murphy, who’s basically ZDNet’s official zealot.
You have got him a little wrong. He is an old-time Unix person, and he really doesn't like Linux, though he dislikes Windows more. He doesn't really like FOSS that much either. Fundamentally he is a Sun apologist.
Independently of this article: Unix kernels are generally the kind of kernels that Windows ones don't even dream of standing against. And this is a fact recognized by major experts – not me or you. The current windoze kernel is bloated crap. And I really can't see how these guys over at MS will do any better with the new kernel than with the old one. I mean, think about it: the BSD and Solaris kernels and Linux have been out there for more than a decade and have proven that the maturing of a quality kernel is bound to take a long time. How the guys over at Redmond are going to work around this and deliver a decent kernel is something beyond the understanding of most people in projects like BSD and Linux and at companies like IBM and Sun. No matter how much $$$ the tetra-colore has to spare on its project, it won't help much, and the MS devs are very well aware that nine pregnant women won't deliver a kid in one month.
"How the guys over at Redmond are going to work around this and deliver a decent kernel is something beyond the understanding of most people in projects like BSD and Linux and at companies like IBM and Sun"
Because the guys at Redmond have the BSDs, Linux, Solaris, and NT to study; they already know the lessons learned from them in the decade or more they've been around.
Although that's not really the point of Singularity; it's not in competition with them, it's just a test bed for an idea.
Just FYI…
The Windows NT kernel shipped in NT 3.1 back in 1993, with 3.5 and 4.0 following in 1994 and 1996. Which was, survey says… over a decade ago.
You say that Windows kernels don't "… even dream of standing against [Unix kernels], and this is a fact recognized by major experts…" – Where are these experts, what are their credentials, and what, pray tell, makes them "major"?
Aside from your fanboy misspellings, your bad grammar, and your complete lack of supporting evidence, clear logical thought, or even a point, this is a fairly good post.
(For the sarcasm impaired, I’m saying it’s a bad, pointless post that does nothing to further FOSS.)
It's not about speed; we've been telling you that for a decade, Microsoft. We thought you were listening with .NET and NT, but obviously not.
We all knew NT was slow to create processes (run an autoconf configure script on Windows, wow), but that was by design. It makes multi-process work more expensive, but that only mattered 5 years ago anyway: I think machines are now quick enough to laugh at process creation overhead on NT.
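For anyone who wants a feel for that overhead, here is a minimal sketch (assuming a POSIX system; the iteration count is arbitrary, and this is just an illustration, not the benchmark from the paper) that times bare fork/exit/wait cycles:

    /* Minimal sketch: time N fork()/_exit()/waitpid() round trips on a
     * POSIX system. Illustrative only; the NT side (CreateProcess) is
     * not measured here. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        const int n = 1000;            /* arbitrary iteration count */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);              /* child exits immediately */
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            waitpid(pid, NULL, 0);     /* parent reaps the child */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6
                  + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("%.1f us per fork/exit/wait cycle\n", us / n);
        return 0;
    }

Run it on a Unix box and the per-cycle cost is small enough that, as said above, modern machines can laugh at it.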
Benchmarks are fun, but when you get "pwned" you will lose a lot more time than you gained from the faster software. Leave the benchmarks to researchers and gamers: how about something secure, reliable, and powerful for the rest of us?
Anyway, I think they’re just having fun with their new toy. Looks like they’ve got some extremely lightweight processes though, nifty.
>Leave the benchmarks for researchers…
Those benchmarks were done by researchers
Researchers using software, not researchers researching software.
People who use stuff like root/paw/cern/matlab/mpi/etc.
I didn't read the whole paper, but it appears that Singularity is in large part written in C# (even parts of the kernel). Yet it still beats Windows XP, Linux, and FreeBSD in terms of performance. Nice to see a modern language with garbage collection beating the old low-level languages.
And another Windows fanboy speaks out. Gee, that’s new and original. This ain’t Neowin.
What’s the difference between a Linux zealot and a Windows user?
Two pints.
*Arf*
Come on, give us a break!
The reason they chose sockets in Unix was because they compare FUNCTIONALLY to Windows named pipes and Singularity channels. If you read the article on the Singularity kernel, you'd know that they do not permit shared-memory models, and hence wanted to compare the performance of Singularity's normal IPC (channels) with the equivalent functionality on Windows and Unix.
There’s no story here, please!
Umm, Unix does named pipes as well. Sockets have a *lot* more overhead than named pipes. NT can do sockets too. Why not compare apples to apples? Oh, so you can obfuscate the results for those who don't know any better.
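To underline the point that Unix has named pipes too, here is a minimal sketch using a FIFO (the path and message are made up for the example; error checking is omitted for brevity):

    /* Minimal sketch: IPC over a Unix named pipe (FIFO). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/demo_fifo";  /* hypothetical path */
        mkfifo(path, 0600);                   /* create the named pipe */

        if (fork() == 0) {                    /* child: writer */
            int wfd = open(path, O_WRONLY);   /* blocks until a reader opens */
            write(wfd, "ping", 4);
            close(wfd);
            _exit(0);
        }

        char buf[16] = {0};                   /* parent: reader */
        int rfd = open(path, O_RDONLY);
        read(rfd, buf, sizeof buf - 1);
        close(rfd);
        wait(NULL);
        unlink(path);                         /* remove the FIFO node */

        printf("received: %s\n", buf);
        return 0;
    }

Nothing exotic: the same open/read/write calls as a socket, minus the protocol machinery, which is exactly why the apples-to-apples question matters.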
I’d rather shuffle a deck of cards in my rectum than read about Microsoft.
“I’d rather shuffle a deck of cards in my rectum than read about Microsoft.”
Hey, neat trick. Do you do other stuff with your rectum?
(just kidding, couldn’t resist 😀 (your rectum)…damnit..stop it…:-D)
I guess that’s the prose equivalent of linking to goatse.
The real cool thing about the article is not UNIX vs Windows, but Singularity! It’s an OS designed for high-level languages, without many of the hacks (hardware protection domains) that are necessary for safely running programs written in unsafe languages. The microbenchmarks show Singularity beating two very mature and highly optimized systems, just by virtue of its superior architecture. That’s a very impressive result.
I agree completely! Reading the rest of the paper is a lot more interesting than reading about the benchmarks. If Mark Twain were alive today, he'd say there were four kinds of lies instead of three. The differences aren't humongous anyway, and sometimes Windows does beat the Unices. And hell, CPU cost per thread creation doesn't 100% determine the load-bearing capacity of a machine, nor the responsiveness of a GUI, now does it?
Looking at a description of Singularity’s internals, kernel hackers could pick up a thing or two here to improve on their fave hobby OS
It’s an OS designed for high-level languages, without many of the hacks (hardware protection domains) that are necessary for safely running programs written in unsafe languages.
And how exactly do you enforce that? The first program written in an "unsafe language" is going to bypass all that protection, and your fancy "high-level language" OS is now owned.
Right now, Singularity owes its performance to promises to play nice, not any REAL security. The only way this would work is if all programs had to be digitally signed by MS so that the hardware DRM on the mobo would allow them to run… which is where Singularity is REALLY headed.
And how exactly do you enforce that?
The OS is the thing that loads the programs, and the OS can easily prevent the execution of any code it doesn't trust. Programs written in unsafe languages can either be compiled with a trusted compiler (Cyclone is mentioned for C — the LLVM folks are doing work in this area as well) or run in a virtual machine.
Right now, Singularity owes it performance to promises to play nice, not any REAL security.
It's just as "REAL" as the protections that keep users from deleting each other's files. The OS is the ultimate arbiter of what gets executed, just as the OS is the ultimate arbiter of what gets written to disk. It doesn't have to trust any promises — it can choose to execute only what it can verify.
The only way this would work is if all programs had to be digitally signed by MS so that the hardware DRM on the mobo would allow it to run…
That is one way to do it, yes, but there are many others. Programs could be digitally signed by any party, and the user could choose to allow them to run if they trusted that said party compiled the program with a verifying compiler. A more likely scenario is that programs, instead of being distributed as typeless machine code, are distributed in some sort of intermediate form that is compiled to machine code, by a trusted compiler, at install time. For open source programs, this could just be source code, and for closed source programs, it could be some low-level typed bytecode. Either way there are again no promises involved, because the OS can verify that the program cannot corrupt memory.
A more likely scenario is that programs, instead of being distributed as typeless machine code, are distributed in some sort of intermediate form that is compiled to machine code, by a trusted compiler, at install time. For open source programs, this could just be source code, and for closed source programs, it could be some low-level typed bytecode.
That would work, but I don’t see proprietary closed-source companies going for that. More likely, they’d pay a trusted service to sign their programs. As long as there were more than just one source for signed programs, that would be okay.
> And how exactly do you enforce that? The first program
> written in an "unsafe language" is going to bypass all
> that protection, and your fancy "high-level language"
> OS is now owned.
But you can't write a program in an "unsafe language" unless it's "trusted", or so I understood.
You're joking, right? I could write an optimized loop in assembly that does nothing, which is no less than Singularity does at this time, and it would be *much* faster.
Take head in hands, shake vigorously.
You're joking, right? I could write an optimized loop in assembly that does nothing, which is no less than Singularity does at this time, and it would be *much* faster.
And what would your point be?
I'd like to see you try! Remember what these benchmarks are — they test the performance of invoking system services. If you use assembly, you'll be forced to put the OS services and the application in different memory protection domains (nobody wants to go back to the era where applications could crash the kernel). That means that even if you wrote your loop in assembly, you'd invoke a very expensive userspace/kernelspace transition on every call. That's hundreds of clock cycles per transition, dwarfing the cost of your assembly code. Hell, you could write the benchmark in Python and not notice the difference!
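To put a rough number on that transition cost, here is a minimal sketch (Linux-specific, since it uses the raw syscall() entry point so libc can't serve getpid from a cache; the iteration count is arbitrary):

    /* Minimal sketch: estimate the per-call cost of a user/kernel
     * round trip by timing a trivial system call in a tight loop. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        const long n = 1000000;        /* arbitrary iteration count */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < n; i++)
            syscall(SYS_getpid);       /* cheap call that still crosses */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per user/kernel round trip\n", ns / n);
        return 0;
    }

Divide the result by your clock period and you land in the hundreds-of-cycles range mentioned above — the fixed tax that Singularity's design, without hardware protection domains, avoids.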
Honestly, this is one of the very few times when I ask (rhetorically…): why is such an "article" linked on OSNews? No information, not even good trolling.
Some days ago I read the papers about Singularity — about its concepts, realization, and benchmarks. The MS people didn't benchmark different OSes; they just estimated the possible performance of Singularity's concept. Unlike MS marketing, their R&D people seem to have a much more open-minded and unbiased attitude towards various OSes — their papers are usually a good read.
Obviously they chose for comparison just some widely used information-exchange channels (and operations) in different OSes and found that Singularity's experimental code performs roughly the same as the other choices, which proved their concept is viable. Whether such an OS will ever be created [by MS], nobody knows, but the concept is good (and yes, I know that this is not an MS invention, which doesn't decrease its value in any way).
They mention ‘Exokernel’ as a microkernel.
"Singularity is a microkernel operating system that differs in a number of respects from previous microkernel systems, such as Mach, L4, SPIN, Vino, and Exokernel"
An exokernel is a type of kernel, not a microkernel 🙂
An exokernel is a type of kernel, not a microkernel
No, they are referring to this [ http://cliki.tunes.org/Exokernel ]. MIT has a project called Exokernel. So they are referring to that, and not the kernel design principle we call ‘exokernel’.
Not a fair comparison, since Singularity isn't meant for release but solely for research.
So it doesn’t make much sense to call it “Windows”
Singularity != Windows
(BTW: Linux user writing)
Well, I think this Singularity OS is going to have some effect on MS's next-gen OS after Vista. Vista is simply a patched-up XP with a lot of new features and eye candy, imho. Having written this in C# and some other derivative of C# whose name escapes me at the moment, this is quite a feat. Very well done, MS! Now if only they could open-source it and release it!
The focus of Singularity is to build a system with ironclad reliability. Granted, they note the CPU cycles for reference, but it’s far from the point of Singularity.
Pointing out that Singularity is slower than UNIX is about as dumb as pointing out that OpenBSD is slower than FreeBSD. In each pair, the faster system makes some safety compromises for the sake of speed.
UNIX–as everyone knows–is a Platonic Form of Operating Systems. Why does Microsoft Research even try to improve upon something that was perfected once and for all time in the 1970s? It’s as preposterous as those who tried to make something better than the perfect clothing ensemble invented in the 1970s–the leisure suit.
It’s as preposterous as those who tried to make something better than the perfect clothing ensemble invented in the 1970s–the leisure suit.
Quite preposterous, I must agree. The attempt at a leisure-suit successor is really quite futile. I would venture to say it is on par with the search for the meaning of life. You just never get it. One must simply accept that they will never know the meaning of life and that the leisure suit is unsurpassable.
1000 cool points to 67.185.77.—
Wow, you are an idiot.
The NT kernel isn't bloated; the kernel actually works quite well. It's all the other stuff that messes up.
It is great to see new innovations in OS design…
Please spare us the comments about how great Linux is. It is a Unix clone, a monolithic OS filled with antiquated code. I mean, Linux fans think combining commands on the shell using a pipe is state of the art, for god's sake!
"Beats"? On that small a set of benchmarks?
Pointing out thread creation cost as THE definitive measure of XP inefficiency compared to the *nixes?
In the same table as that single negative data point there are five tests, one showing XP 3 times faster than the *nixes and another showing XP more than 2 times faster!
From so few tests, anyone who wants to troll can extract whatever they like!
Easy to prove: linux (5 letters), freebsd (7 letters), windows (7 letters)… 5 is less than 7, so it should be faster, right? :p
Well, seriously, the point was to explain the concept behind Singularity; it looks interesting to me, even if I need more time to fully understand it.
The problem with performance is not "I use 3 cycles less than you to say "hello", so I'm faster". It was just a thought that the ideas behind Singularity (contracts, pre/post conditions, dependent, …) would make things slow. They show us that's not the case…
It's not an application benchmark, just a microbenchmark… and like every microbenchmark, it proves nothing in real life: a process on WinXP takes a very long time to create according to the test… and now compare the startup time of various applications on different OSes. My quick conclusion: it has almost nothing to do with OS process creation time. (Think about the startup time of OpenOffice versus Office; the problem is NOT the thousands of cycles used to create the process, it's all the stuff that is done afterwards and how well it overlaps.)
It's always the same with perf: profile and find where you can get a real perf boost.
… and now I go back to my slow java crap (troll inside :p)
Well, the article is full of apples-and-oranges comparisons, and is utterly flawed (like comparing sockets on Linux/BSD with pipes on Windows and channels on Sing, even though Linux/BSD have pipes too).
All the Windows zealots here won't believe me if I say that this OS performs poorly, given that:
– It uses a poor round-robin scheduler, compared with the powerful schedulers of Linux/BSD
– Linux/BSD are crippled in all tests (for example, they statically link their "Hello World" only on Linux/BSD, and then compare binary sizes with Windows/Sing, which are of course not statically linked)
– The Singularity OS tested is unusable and has nowhere near the functionality of Linux/BSD
But the best evidence is in the report itself; look at section 6.3. Here's an enlightening excerpt:
"To quantify the overhead of Singularity's extension mechanism in a more realistic scenario, we measured the performance of the SPECweb99 benchmark". A note says that they are not even fully IPv4 compliant in the bench, the warm-up was reduced, the length of execution reduced, server logging absent, …, which gives you an idea of why the previous benchmarks look better on Sing.
Here is an excerpt of the results (keep in mind that Linux runs circles around Windows 2003 on this bench): "Singularity achieves 91 ops/s with a weighted average throughput of 362 Kb/s. MS Windows 2003 running the IIS server … achieves 761 ops/s with a weighted average throughput of 336 Kb/s."
Don't go yet, here comes the worst: "System instability under heavy load and file system performance bottlenecks … consequently reduced Singularity's overall score."
And even worse: "Singularity's network stack … can sustain a transmission throughput of 48 Mb/sec"; remember that it is not even IPv4 compliant yet.
So when I hear a Windows zealot say OMG Sing roxorz, it's funny at best.
Congratulations, you win the moron award for this thread.
Have you completely missed the point of Singularity? It's a research OS, not a replacement for UNIX or Windows. There is no point in writing a fancy scheduler or a fully compliant network stack for Singularity. They aren't researching schedulers or network stacks. What they are researching is a fundamentally different protection model for the operating system. As such, the microbenchmarks are interesting, because they show how Singularity dramatically reduces the basic cost of accessing OS services, even compared to very optimized implementations of traditional designs.
Windows is so buggy and full of holes because of its foundation. Wouldn't a Unix-based Windows be great? If MS wants to compete with Apple's Mactel systems, it seems like a great move to make.
ZDNET biased against Microsoft? No way could that ever happen.
Sorry guys, who cares if they put this test out for their new OS? I mean, with different drivers and versions of the OS, these numbers change. Big whoop.
Only ZDNET (Linux NET) would even care about this.
I guess when you cover the Linux world there is nothing else to talk about, so you have to make stuff up.