Microsoft’s Professional Developers Conference is currently under way, and as usual, Microsoft’s technical fellows gave talks about the deep architecture of Windows – in this case, Windows 7, of course. As it turns out, some seriously impressive changes have been made to the very core of Windows – all without breaking a single application. Thanks to BetaNews for summarising this technical talk so well.
The focus seemed to have been on improving performance – not a strange goal after the arrival of Windows Vista, which simply didn’t perform very well, especially in its early days. Mark Russinovich, Windows NT kernel guru, didn’t even try to defend Windows Vista in this regard during his talk.
“One of the things we had decided to do with Windows 7 was, we got a message loud and clear, especially with the trend of netbooks, on top of [other] things,” Russinovich said, “People wanted small, efficient, fast, battery-efficient operating systems. So we made a tremendous effort from the start to the finish, from the design to the implementation, measurements, tuning, all the way through the process to make sure that Windows 7 was fast and nimble, even though it provided more features. So this is actually the first release of Windows that has a smaller memory footprint than a previous release of Windows, and that’s despite adding all [these] features.”
For Windows 7, Microsoft removed several locks that seriously hindered performance – all without breaking a single application. The global dispatcher lock, for instance, is gone completely, replaced by fine-grained locking: 11 types of more specific locks, together with rules on the order in which they may be acquired, so that the finer granularity doesn’t introduce deadlocks.
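To make the idea concrete, here is a minimal sketch in pthread-flavoured C – purely illustrative, not actual Windows kernel code, and every name and structure in it is invented – of what moving from one global dispatcher lock to per-object locks looks like:

#include <pthread.h>

enum thread_state { THREAD_READY, THREAD_WAITING, THREAD_RUNNING };

struct thread {
    pthread_mutex_t   lock;    /* protects only this thread's state */
    enum thread_state state;
    struct thread    *next;
};

struct wait_queue {
    pthread_mutex_t lock;      /* protects only this queue */
    struct thread  *waiters;   /* threads blocked on this queue */
};

/* Pre-Windows-7 style: every wait, wake and priority change takes this one
   system-wide lock, so unrelated operations serialise against each other. */
static pthread_mutex_t global_dispatcher_lock = PTHREAD_MUTEX_INITIALIZER;

static void wake_one_global(struct wait_queue *q)
{
    pthread_mutex_lock(&global_dispatcher_lock);
    struct thread *t = q->waiters;
    if (t) {
        q->waiters = t->next;
        t->state = THREAD_READY;
    }
    pthread_mutex_unlock(&global_dispatcher_lock);
}

/* Windows-7 style: only the queue and the woken thread are locked, always in
   the same order (queue before thread), so the finer granularity cannot
   introduce lock-ordering deadlocks. */
static void wake_one_fine_grained(struct wait_queue *q)
{
    pthread_mutex_lock(&q->lock);
    struct thread *t = q->waiters;
    if (t) {
        q->waiters = t->next;
        pthread_mutex_lock(&t->lock);
        t->state = THREAD_READY;
        pthread_mutex_unlock(&t->lock);
    }
    pthread_mutex_unlock(&q->lock);
}

The gain is that two processors waking threads on unrelated wait queues no longer fight over a single lock – which is exactly the contention Kishan quantifies below.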
The pre-7 dispatcher spent 15% of the CPU time waiting to acquire contended locks. “If you think about it, 15% of the time on a 128-processor system is, more than 15 of these CPUs are pretty much full-time just waiting to acquire contended locks. So we’re not getting the most out of this hardware,” kernel engineer Arun Kishan explained.
In Windows 7, synchronisation on a global scale is gone, making many operations lock-free. “In its place is a kind of parallel wait path made possible by transactional semantics – a complex way for threads, and the LPs that execute them, to be negotiated symbolically.”
Threads don’t have to care about this stuff at all, and as such, it’s all completely backwards compatible. “Everything works exactly as it did before,” Kishan said, “and this is a totally under-the-covers transparent change to applications, except for the fact that things scale better now.”
Another notion, hotly contended in the OSNews comments sections as well, is that of “free” memory. As most of you know, Windows tries to page as much of the data you might need into memory ahead of time, so as to speed up overall performance of the system. By keeping that data in memory, the system doesn’t have to wait for slow hard drives to deliver it. Microsoft Distinguished Engineer Landy Wang opened the Task Manager of a regular Windows 7 system with 8GB of RAM, and showed that only 97MB was truly free.
“A lot of people might think, ‘Wow, 97 megabytes doesn’t seem like a lot of free memory on a machine of that size’,” said Wang, “And we like to see this row actually be very small, 97 MB, because we figure that free and zero pages won’t generally have a lot of use in this system. What we would rather do, if we have free and zero pages, is populate them with speculated disk or network reads, such that if you need the data later, you won’t have to wait for a very slow disk or a very slow network to respond.”
Another lock that’s gone in Windows 7 is the page frame number (PFN) lock, which has been replaced with a “more complex symbolic system of semantics that lets threads execute in a more parallel, efficient fashion”. The PFN lock has been used for a long time, but with the advent of multicore systems it became clear that it no longer served its function very well – its negative effect popped up in Vista especially.
A page frame number entry indicates the state of a page in memory – is it free, on standby, or active; is it shared; how many processes are referencing it concurrently. All this information is needed to perform state transitions on pages.
“The problem with the PFN lock is that the huge majority of all virtual memory operations were synchronized by a single, system-wide PFN lock,” Wang explained, “We had one lock that covered this entire array, and this worked… Okay 20 years ago, where a four-processor system was a big system, 64 MB was almost unheard of in a single machine, and so your PFN database was fairly small – several thousand entries at most – and you didn’t have very many cores contending for it.”
That has obviously changed in modern times, and in Vista, this architecture simply gave out. The statistic Wang gave during the talk was pretty… disconcerting. “As you went to 128 processors, SQL Server itself had an 88% PFN lock contention rate. Meaning, nearly one out of every two times it tried to get a lock, it had to spin to wait for it… Which is pretty high, and would only get worse as time went on.”
The more fine-grained approach in Windows 7 and Windows Server 2008 R2 yields some serious performance improvements: on 32-processor configurations, some operations in SQL Server and other applications run 15 times faster than on its predecessor, Windows Server 2008. And remember, the new fine-grained method has been implemented without any application breakage.
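Putting Wang’s description together, here is a hedged sketch of the change – pthread mutexes standing in for kernel spinlocks, and all definitions invented rather than taken from Windows source – contrasting one system-wide PFN lock with the per-page locking used in Windows 7 and Server 2008 R2:

#include <pthread.h>

enum pfn_state { PFN_FREE, PFN_ZEROED, PFN_STANDBY, PFN_MODIFIED, PFN_ACTIVE };

struct pfn_entry {
    pthread_mutex_t lock;        /* Windows 7 style: one lock per page */
    enum pfn_state  state;       /* free, zeroed, standby, active, ... */
    unsigned int    share_count; /* how many processes map this page */
};

/* One entry per physical page; with 8GB of RAM and 4KB pages that is roughly
   two million entries (a tiny array stands in for it here, and its per-entry
   locks are assumed to be initialised with pthread_mutex_init() at startup). */
static struct pfn_entry pfn_database[1024];

/* Pre-Windows-7 style: a single lock covers the entire array. */
static pthread_mutex_t pfn_global_lock = PTHREAD_MUTEX_INITIALIZER;

static void transition_old(struct pfn_entry *p, enum pfn_state new_state)
{
    pthread_mutex_lock(&pfn_global_lock);  /* contended by every VM operation */
    p->state = new_state;
    pthread_mutex_unlock(&pfn_global_lock);
}

static void transition_new(struct pfn_entry *p, enum pfn_state new_state)
{
    pthread_mutex_lock(&p->lock);          /* contended only for this one page */
    p->state = new_state;
    pthread_mutex_unlock(&p->lock);
}

With a lock per entry, two processors performing state transitions on unrelated pages no longer contend at all – which is the contention behind the 88% figure above.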
We often snipe at Microsoft for their focus on backwards compatibility, but that doesn’t negate the fact that this is some very impressive work they’ve done on the kernel. Sure, they won’t get the rockstar headlines and train station logos, but it is every bit as impressive – probably a lot more so.
This is what the Linux scheduler did before Ingo wrote the O(1) scheduler – there was a central list of processes, and on machines with lots of CPUs there were many cases where multiple CPUs needed to context-switch at the same time, and all of them had to wait while one of them searched the process list for the best candidate. Microsoft has replaced that with 11 types of specific locks – Ingo turned the global list of processes into per-CPU lists, where each CPU can context-switch without needing to hold any global lock.
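For comparison, a rough illustration – not actual Linux source, with made-up names and pthread mutexes instead of spinlocks – of the global-runqueue-versus-per-CPU-runqueue difference described above:

#include <pthread.h>

#define NR_CPUS 128

struct task {
    struct task *next;
    int          prio;
};

/* Old style: every CPU picking its next task fights over this one lock. */
static pthread_mutex_t global_rq_lock = PTHREAD_MUTEX_INITIALIZER;
static struct task    *global_runqueue;

/* O(1)-scheduler style: one runqueue, and one lock, per CPU (the per-CPU
   locks are assumed to be initialised with pthread_mutex_init() at boot). */
struct runqueue {
    pthread_mutex_t lock;
    struct task    *head;
};
static struct runqueue per_cpu_rq[NR_CPUS];

struct task *pick_next_global(void)
{
    pthread_mutex_lock(&global_rq_lock);   /* serialises all CPUs */
    struct task *t = global_runqueue;      /* best-candidate search elided */
    if (t)
        global_runqueue = t->next;
    pthread_mutex_unlock(&global_rq_lock);
    return t;
}

struct task *pick_next_per_cpu(int cpu)
{
    struct runqueue *rq = &per_cpu_rq[cpu];
    pthread_mutex_lock(&rq->lock);         /* only this CPU's own lock */
    struct task *t = rq->head;
    if (t)
        rq->head = t->next;
    pthread_mutex_unlock(&rq->lock);
    return t;
}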
Microsoft, welcome to 2002… http://groups.google.com/group/mlist.linux.kernel/browse_thread/thr…
The part of the scheduler you’re talking about has been per-cpu and scalable since Windows Server 2003. The Dispatcher Lock was for synchronizing other APIs which have no equivalent on Linux.
Massive knowledge fail. Freetards continue their streak of FUD.
Just for the record, previous OSes did not suffer from deadlocks in the scheduler like the article implies. When the number of locks increases, though, more discipline is required to ensure that no lock-ordering deadlocks can arise. A careful new hierarchy had to be instituted to avoid this (and the code has a lot of self-checks to enforce correct ordering).
http://en.wikipedia.org/wiki/Deadlock#Circular_wait_prevention
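A minimal sketch of such an ordering discipline – the levels and names here are made up, and real kernels enforce this with their own debug machinery rather than a plain assert() – looks roughly like this:

#include <assert.h>
#include <pthread.h>

/* Every lock is assigned a level; a thread may only acquire locks in strictly
   increasing level order, which rules out the circular waits behind deadlock. */
enum lock_level { LVL_SCHEDULER = 1, LVL_TIMER = 2, LVL_MEMORY = 3 };

struct ordered_lock {
    pthread_mutex_t mutex;
    enum lock_level level;
};

/* Highest level currently held by this thread (0 = nothing held);
   __thread is the GCC/Clang thread-local storage keyword. */
static __thread int current_level;

void ordered_lock_acquire(struct ordered_lock *l)
{
    /* Self-check: acquiring out of order would permit a circular wait. */
    assert((int)l->level > current_level);
    pthread_mutex_lock(&l->mutex);
    current_level = l->level;
}

void ordered_lock_release(struct ordered_lock *l)
{
    pthread_mutex_unlock(&l->mutex);
    current_level = 0;   /* simplified: assumes all held locks are released together */
}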
So they have removed their BKL years after Linux, and now use unused RAM to cache the disk, like most Unixes have for years (decades?). It’s still Windows, i.e. drive letters, a registry, a faked single file hierarchy (Explorer), device files (and OS objects) are still in a separate filesystem world, the Win16/32/64 APIs, its own standards rather than common standards – and it’s still closed.
Different doesn’t always equate to better. Windows has performed well enough up to date, and now performs better. AND I can watch my Netflix movies online with Windows…
That Windows Vista performed well enough is arguable, but that Windows 7 performs better seems pretty much true.
It’s nice to see that they actually addressed shortcomings, rather than just adding more features blindly.
The tech press just didn’t bother reporting on it; bashing MS with the Vista stick was more appealing.
I personally think that drive letters are easier to use than the way Linux does it, “E:\” vs “/media/usb-disk1/”
The mount points aren’t really user visible in Linux.
You plug in an external USB drive, and it shows up, usually labelled only as “4.0GB Disk” or something like that. If you’ve given the drive a label, that label shows up instead. This is consistent across most applications, and is far better than “E:”.
Besides, most users don’t notice drive letters in Windows either. I certainly don’t – I stopped caring as soon as Windows stopped forcing me to remember what drive letter was which.
Sometimes you still have to revert to the command line, maybe to do something with your automatically mounted NTFS drive, and then…? /media/somethingsomething
Personally I prefer drive letters – a forest instead of a single file system tree – and in Windows I can use either one.
Drive letters can be a nightmare in a company with any number of machines when handling network shares. Having arbitrary limits (26 letters, because of the English alphabet) is just dumb.
If you really want, on Unix you can mount stuff into /drive/a, /drive/b, etc., or hook an automounter into /drive.
Yes, when complexity grows – i.e. when you feel that 26 drive letters are too few – I can understand that it feels limiting.
I still don’t dislike the concept of having a forest, though. Letters are simple and good enough for most users. For more complex scenarios, names sort of like the Amiga had would be nice: Drivename:\dir\file
No no, the Amiga had the slashes the correct way! Partition:Directory/File
I think the Amiga way is many times better than the Windows way: you could name a partition DH1: but also give it a name, say Games:, and then access it by typing either DH1: or Games:.
It also had assigns, which were very nice – sort of like soft links but with volumes. You could assign Games: to DH1:Cool stuff/More cool stuff/Games/ and then access that deep directory by just typing Games:, very nifty. I know you could do this in DOS as well, but it’s not the same thing. Sys: on the Amiga is always the disk you have booted from, whatever its name is. And so on…
But I still think the Unix system is the best and most flexible – maybe not for home users, but as long as it’s hidden (as on the Mac and in GNOME/KDE etc.) it works really nicely.
Unless everything you do is on C:, Windows still makes you keep track of that.
Really? I can drag and drop files to my USB pen drive icon on my Ubuntu desktop… on Windows only geeks know which drive letters correspond to which pieces of hardware.
And the context menu ‘send to’ function exacerbates this problem: send to F:\, which has a hard drive icon… Windows always presumes prior knowledge. It is NOT user-friendly and never has been.
…what the underlying changes were, how 7 is different from Vista. It certainly felt different to use (altho’ I only used Vista briefly on my in-laws’ computer).
And it seems to work well on my netbook. Post-Google-OS announcement (a beta might be released next week) I was considering giving it a shot on my netbook, but now I am wondering just how flexible that OS will be (enough for my needs? I don’t think it will be), and I am probably going to stick with Win 7. Yeah, off-topic, sorry, but the point was that it performs sufficiently well on my netbook to keep using it (instead of XP or something else).
So Vista is dog slow because it can’t cope with 64 cores or even 4. Well I wish my box had 64 cores, which it doesn’t – why then is it dog slow with two cores or even one – in fact why is it dog slow?
This feels like an explanation of the form: if you can’t baffle them with science, baffle them with bullshit.
Oh, just installed Windows 7, and other than drivers being a big problem it is quite nice, quite fast, quite responsive – it’s OK.
Look, Vista isn’t the best OS that MS has ever produced, but it does function OK. Saying that “it can’t cope with … even 4” cores is a little over the top. The point of the article is that concurrency was a bigger problem with Vista because of lock contention. That contention has been reduced in Win7 by making locking more fine-grained, thus making each of the cores more efficient, since they spend more time doing productive work and less time waiting around for locks to clear. If anything, your Vista box will be more efficient with one or two cores than with 64, because of the lower lock contention; as you add cores, you increase contention and, in turn, reduce throughput through the global lock. These changes will primarily make the kernel more scalable (as the article points out).
I was just quoting the article. However, my point is that I don’t think it’s just multiprocessor support that makes Vista dog slow. I use it on a dual core and it’s awful.
Folk complained about ME; compared to Vista, ME looks like a good deed in a naughty world. Admittedly the underlying technology in Vista might have been a step forward, but the user experience is miserable.
MS apologists might like to say Vista isn’t that bad – that if you run it on a quad core with 8 gigs of RAM it’s OK. It isn’t; it’s awful – something that MS will soon wish to forget. If MS had any decency they would make upgrades from Vista to Windows 7 almost free.
Or those that aren’t vulnerable to tech group think.
Vista vs XP on an EEEPC
http://www.youtube.com/watch?v=EXw7v1bxpSs
I ran this test several times off camera, and in some cases XP beat Vista and in other cases Vista beat XP by a small margin. There was no consistent difference in score. In this particular video XP just happens to beat Vista by a few points. Overall I’m surprised at how well the 900 handles Vista’s overheads compared to XP.
I’m sorry, but just “functioning OK” isn’t good enough when the company in question is the most profitable software house on Earth, Vista is (was) your flagship OS and you’re the runaway market leader for desktop OSs.
In situations like that, I’d expect the OS to function brilliantly rather than just “OK” compared to some free OSs.
But then maybe I expect too much from my market leaders?
THANK YOU!!
I’ve been saying this to people I know for a long time and they all just respond with something similar to: “well, if you don’t like it, why don’t you code an OS yourself and show how it’s done?” or “You just don’t like it because it’s trendy to not like it” or “yeah, your Linux works soooo much better, try playing [insert game] on it” or “it’s not Microsoft’s fault [insert application or hardware] doesn’t work, it’s the other company’s fault!”.
People always contradict themselves, blaming other OSes because xyz doesn’t work despite being a Windows app, but then excusing Microsoft for the exact same thing, instead blaming the manufacturer of said app or hardware. And making excuses that it’s up to everyone to keep up with the times and upgrade their computers, not Microsoft’s responsibility to make sure their software runs on grandmother’s toaster. Instead of admitting that an OS doesn’t have to be slow and bloated and require the absolute latest hardware to run. It can be small, nimble, backwards compatible, and beautiful looking and STILL run on grandmother’s toaster. It all depends on the coders, and from the market leader, I expect nothing less.
Watch AROS warm boot… then you won’t think it is OK anymore. And yes, 3D support is coming to hobby OSes, and with that… commercial games.
Just like all those commercial games that show up on OS X and Linux?
The bigger problem seems to be most game studios using DirectX instead of OpenGL. It’s a shame because they are missing a lot of customers.
I doubt they see it that way, with over 90% of the market running Windows, they feel they don’t need to bother.
What customers are they missing out on? Until OpenGL gets more competitive against DirectX, switching to OpenGL for games means worse games for your Windows-based customers, who are 98% of your expected user base anyway… supporting both APIs means higher development costs, so Windows + DirectX only is the wisest choice.
Even though I modded BluenoseJake and Panajev up, I’d love to agree with you. However, they are right. Microsoft had announced in the past that they were going to merge Direct3D with OpenGL.
Not exactly the same, but they had a proprietary protocol (WebDAV) to read mail from Hotmail directly in Outlook Express. This protocol had been reverse-engineered and integrated into Linux (hotway). Microsoft then decided to change its WebDAV protocol for some other closed protocol. When WebDAV was about to be replaced, due to pressure (Gmail?), they came back to standard POP3 over SSL.
I doubt the same will happen for Direct3D because it’s already THE standard for 3D games on Windows, OpenGL only being a follower.
But Windows is not the only gaming platform. How about consoles? They are missing the consoles market, not only the OS X or Linux market.
How? Nobody cares if your console is running OpenGL or DirectX or something else. You don’t write games for OpenGL or DirectX, you write the game for the Xbox 360 or the PlayStation 3. I don’t know about the PlayStation, but with the Xbox, the development tools also allow you to publish the exact same game for Zune and Windows as well as the console.
I don’t see them missing anything.
AROS doesn’t even have memory protection. I may as well compare it to the boot-time of my Commodore 64.
I see the windows fanboys don’t like my comment… sheesh
Is it not a fact that every OS except perhaps AROS and Haiku is ridiculously slow to boot?
I have only installed Haiku myself, but those videos of AROS were mind-boggling.
//which it doesn’t – why then is it dog slow with two cores or even one – in fact why is it dog slow? //
Umm… because you f–ked it up somehow, or your hardware is cheap-ass shit?
My old Pentium D box runs Vista plenty fast.
That is the ideal situation, in which a user’s applications do not use much memory except that which is used for loading objects off disk. That is not my typical usage of an operating system. Why can’t I just control the amount of memory it uses for cache? Do what you want for the default, but let me tweak it, for heaven’s sake. If you’re right that the default behaviour of caching everything is so correct, then I’ll admit I’m wrong after trying to improve upon it.
The nice thing about caches, generally speaking, is that if you need the space for something else you can just discard the cache, so when your programs need that space the cache will be used for the programs.
The chances that you can manage the memory better than the kernel are slim.
There is an argument about how aggressively very old pages should get swapped out to make room for cache, though, which Linux exposes through its ‘swappiness’ parameter (great name, that).
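For the curious, that knob is just a number under /proc – here is a tiny Linux-specific sketch that reads it (normally you would simply use sysctl vm.swappiness or /etc/sysctl.conf; the program is only an illustration):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f) {
        perror("swappiness");
        return 1;
    }
    int value = -1;
    if (fscanf(f, "%d", &value) != 1)   /* 0 = avoid swapping, 100 = swap eagerly */
        value = -1;
    fclose(f);
    printf("vm.swappiness = %d\n", value);
    return 0;
}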
I can set the size of the virtual memory page file in Windows, or leave it at the default setting. Why can’t I also tune this behaviour, which essentially controls something similar?
The pagefile size has no effect unless it is set to be too small. The OS will not use the pagefile to store information that could just as well be stored in memory (old modified pages do eventually get written out, but they won’t be discarded and read back in unless there’s other demand for memory).
Edit: I’m tired, I didn’t understand that I had posted the previous message
Response:
Yeah, obviously the page file setting only comes into play when Windows would like to allocate more memory (for cache) than you have. It used to grow the page file by loading things from the hard disk into memory (which was the page file, located… on the same stupid slow hard disk). Read on for why I used to set this to be pretty small.
Additional page file Comment:
But I can set the size of the virtual memory pagefile. I used to have to do this on weaker systems to prevent them from running out of space while the computer was running.
I’d rather have the option to control it. My memory uses are non-conventional; I understand why they did it this way. If you gave me a choice, I’d prefer to have the option. The fact is that you have to explain this over and over again to everyone – you could have avoided all of that with just an option. Let people play with it and discover how awesome the Windows engineers were in their wonderful default setting! Sometimes people need to see the before picture in order to comprehend the beauty of the after.
Actually I’m pretty sure that exokernel designers would differ with your opinion on the kernel being the best at memory management. See here: http://pdos.csail.mit.edu/exo.html
The drawback is exokernels have an inherent bloaty tendency that could be avoided though…
Deciding what pages to prefetch into unused memory is a difficult problem that requires a lot of high level code that doesn’t really belong in the kernel.
And this is precisely why Windows doesn’t do this in the kernel. Prefetching decisions are made by the Superfetch service, which runs entirely in user mode. The kernel provides some basic interfaces that Superfetch relies on, but all the logic is in user space.
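As an illustration of the general idea only – this is not the Superfetch code and not the Windows interfaces, just a POSIX analogue with made-up file names – a user-space service can ask the kernel to pre-populate the cache like this:

#include <fcntl.h>
#include <unistd.h>

/* Hypothetical list; a real prefetcher would derive it from observed
   access history rather than hard-coding it. */
static const char *likely_needed[] = {
    "/usr/lib/libfoo.so",
    "/home/user/big.db",
};

int main(void)
{
    for (unsigned i = 0; i < sizeof likely_needed / sizeof likely_needed[0]; i++) {
        int fd = open(likely_needed[i], O_RDONLY);
        if (fd < 0)
            continue;
        /* Hint only: the kernel may start reading the file into otherwise-free
           memory, and can discard those pages whenever the memory is needed. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
        close(fd);
    }
    return 0;
}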
I don’t understand this. Any memory that is not being used right now for programs can be used for caching. And if programs suddenly have a need for more memory, that memory can simply be freed. Instantly. No disk activity required. Why would you *want* to limit what the kernel does with otherwise unused memory, which can be put to good use?
In the Linux world, we do have a bit of a conflict between those folks who think that programs’ seldom-used pages should get swapped out to make more room for disk cache, and those who feel that program’s pages are sacred, and should not be swapped out unless absolutely necessary. But that is a different issue.
We have enough memory in today’s computers that swapping is mostly unneeded. If you have more than 1GiB of memory and make fair use of your computer, you’ll see that most of the time the swap isn’t needed at all. I have 1GiB and my KDE desktop uses no more than half of it most of the time, with a browser, mail client, chat client and torrent client running. Any program that’s using more than that is probably misbehaving and ought to be terminated. When you have swap, say double your RAM, and a program goes rogue, it will start consuming all available memory, and then the kernel will allocate all available swap to it. That’s gigabytes of swapping until the OOM killer kicks in, producing a lot of disk thrashing and the usual slowdown. The disk IO is what produces the feeling that you need a new computer. Hopefully, in a few years swap will be totally unnecessary.
I doubt it will become unnecessary. We will need a fair paradigm shift, before that occurs, I think. Having more memory begets uses of more memory.
However, for general use cases, more RAM and no swap works great, on both Windows and Linux. Any process that tries to use up all my RAM deserves to be forcefully killed.
Well, when you change the code without changing interfaces then yes – you better not have application breakage.
Except, in this case they improved performance and there are likely applications out there that may have relied on the timing to keep from breaking – so likely, they have broken some applications.
Though anyone writing an application that is that dependent on timing deserves to have it break when the timing changes like that.
An extreme and real example of how the Linux kernel is superior to the Windows kernel – and of how it does not matter a tad whether you have a badly designed kernel, provided that you have a more user-friendly and compatible interface.
…?
There, fixed it for you.
Tomcat, the old OSNews user, why do you get so angry? There is a point to what I am saying. I used a single sentence; I guess that’s why it wasn’t addressed correctly. Sometimes summaries don’t work.
Here’s an excerpt from the post:
By PFN he is perhaps talking about the struct page lists of all physical pages. I know the stuff, having written one myself.
Now, having a single lock for this? Up until 2009, in Windows Vista? That doesn’t sound right to me.
Projects have different priorities, and I understand that. The Windows project is about customer-oriented usability and compatibility. It is not about performance or the best technical architecture.
The Linux kernel is there for technical superiority. People with high technical skills (not saying this isn’t true for the Windows kernel, but it’s more true for the Linux kernel) contribute to the project because of their technical interests. Also, Linux has an evolutionary aspect to it that strengthens it, which is not true for many other kernels out there.
The result? In many ways the Linux kernel is technically superior. But Windows is used a lot more. Before Ubuntu I wasn’t able to use Linux for everyday use myself. That’s what I wanted to summarize in a single sentence.
// That’s what I wanted to summarize in a single sentence//
And, like most freetards, it came out f–k-all because you’re an idiot.
Yes, very true. Windows just feels right when it comes to a friendly desktop environment. Under Linux, GNOME and KDE will always remain works in progress. This is sad, because the Linux kernel IS indeed superior to the Windows kernel. It just isn’t designed for a desktop computer, mainly because kernel developers don’t care about desktop users.
How, exactly, is the Linux kernel superior to the Windows kernel?
While I would not be inclined to make any sweeping statement such as “X is superior to Y”, a willingness to open up the code to the wider world seems a good long term investment. Despite any ups and downs, my investment of confidence, professionally, has performed well over the years. I see no reason to abandon it. I would provisionally call that superior.
An open system can be better adapted to your needs.
Just look what the HPC guys are using.
New Top500
http://www.top500.org/stats/list/34/osfam
Maybe because Linux is free, while for a commercial OS they need to pay per machine, or even per core?
Right because 99% of computer users are working on high-end supercomputing clusters. Therefore, linux is better.
Typical freetard bullshit.
http://www.serverwatch.com/trends/article.php/3848831/Lack-of-Innov…
As far as what computer users are running lately:
Only 500 machines, but approximately 2 million copies of Linux:
http://www.top500.org/stats/list/34/procclass
http://www.top500.org/stats/list/34/osfam
At the other end of the grunt scale, 11 million netbook machines with one copy of Linux each:
http://www.computerworld.com/s/article/9140343/Linux_s_share_of_net…
A bit like squeezing from both ends towards the soft middle.
Um, way to completely miss the point! Go freetardians and servers-are-the-only-computers mentality!
And? The NT kernel never “let me down” either; that’s hardly a meaningful metric now, is it?
Very good article btw, though as always on this subject it’ll just reap trolls again and again.
Is the point of this article that if you have 128 processors, Windows will work well?
Maybe I’d better stay with Ubuntu on a duocore then.
I hope they deploy all these high-performance features on the 64-core box that is processing the free Windows 7 upgrades. Because I sure as hell haven’t received mine yet.
Until that happens (and I find out I can still disable StupidFetch), they can geek out all they want. I’ll reserve judgment.
In the linked article it says 15 TIMES faster, not percent. That is a big difference
Either way it is good to see Microsoft putting effort into performance. It will keep the other operating systems on their toes, as we can no longer rely on Windows being as crappy and slow as Vista.
This is also making me wonder about some of the locking semantics in the Haiku kernel. I assume we have more fine-grained locking sort of like what Win7 now has.
It’s not 15x faster. That would imply that it was untenably bad before.
This is the important point Leavengood was referring to:
“While spinlocks comprised 15% of CPU time on systems with about 16 cores, that number rose terribly, especially with SQL Server. “As you went to 128 processors, SQL Server itself had an 88% PFN lock contention rate. Meaning, nearly one out of every two times it tried to get a lock, it had to spin to wait for it…which is pretty high, and would only get worse as time went on.”
So this global lock, too, is gone in Windows 7, replaced with a more complex, fine-grained system where each page is given its own lock. As a result, Wang reported, 32-processor configurations running some operations in SQL Server and other applications, ended up running them 15 times faster on Windows Server 2008 R2 than in its WS2K8 predecessor — all by means of a new lock methodology that is binary-compatible with the old system. The applications’ code does not have to change.”
Yes, Ryan, I think it’d be nice to address such scalability issues in the Haiku kernel, but… not worry about them for R1, because that’s too sensitive an area to mutate in such a major way without rather high risk.
I’m going to get downvoted for this (it is off topic) but…
On OSNews, generally only very, very few posts get downvoted to the point where they’re not readable. This thread all of a sudden has tons, and a lot of the “killed” posts bring up points that could lead to valid discussions, which are now partly ruined because the original post isn’t visible for context (many are not bad-taste trolls).
What’s going on here?
What comment are you referring to? All the downvoted comments are made by morons and trolls, on the first page at least.
There’s the one thinking “giant locks” were removed DECADES AGO (lol) in Unices (they’re still there, bro; they’re being gradually removed AFAIK, just like MS is doing).
There’s the one claiming MS has got an inferior kernel, though he wouldn’t be able to tell why or even define what a kernel is for the life of him if asked.
Some random linux troll.
And the other one comparing vista boot time to AROS.
Hardly anything useful there.
Couldn’t agree more. Sometimes I do miss the early OSNews days, without Slashdot-like trolling.
I don’t see the problem. The comments are properly modded down, and no longer in sight. Yet – you still complain about them. Isn’t the fact that they’re no longer visible kind of an indication in and of itself?
Why look them up specifically? I just don’t get it.
The kernel’s performance enhancements are negligible on current systems. You need at least a 32-core system to take advantage of these kernel enhancements. Win7 is just ready for the future…
The reason for the new approach, maybe following FreeBSD, is actually much simpler than the given explanations, and is the natural way to deal with the problems pinpointed for years in the architectural change to multiple CPUs.
The problem is simply the bottleneck between multiple CPUs and multiple RAM pages.
Therefore:
- Putting non-relevant CPUs to sleep reduces attempts to get at in-memory data.
- Dividing memory (apparently non-free) eases memory administration.
Microsoft is apparently adopting strategies already used successfully by AMD… and FreeBSD, with a touch of novelty that isn’t. But progress nonetheless for the white elephant that Windows has always been. Finally it is changing into a mouse to become agile and fast.
May we say… at last?
And at last we may also say the promises of Windows 95 are being implemented.
One thing seems to be missing, though: security!
Because the semantic complexity (if it is as declared) may open paths to insecurity (this is a comment in the abstract, in a loose context). Unless efficiency and security are at different levels, and I believe they are (but we are talking about Microsoft, so trashing external insight is always highly probable, as is Microsoft obtaining patents from those external insights, obvious or not).
Cheers.