Microsoft executive Ty Carlson spoke about the future of Windows recently during a panel discussion at the Future in Review 2007 conference held in San Diego, California. Carlson said that future versions of Windows would have to be ‘fundamentally different’ in order to take full advantage of future CPUs that will contain many processing cores. “You’re going to see in excess of eight, 16, 64 and beyond processors on your client computer,” said Carlson, whose job title is director of technical strategy at Microsoft. Windows Vista, he said, was “designed to run on one, two, maybe four processors.”
Would?
They had better be.
Interesting topic. The article says that the NT kernel can already handle 32 concurrent threads on a 32-bit system, and 64 on a 64-bit system.
MS will probably expand the thread limit, but I’m thinking the biggest changes will have to happen in applications and in the VS APIs, to make it even easier to write multi-threaded apps. Still, there’s nothing like efficient code to make an app run faster, and the jury is still out on managed code.
No no no.. the way to better performance is to write slow elegant code, and just up the speed and numbers on the processors. *sigh*
By the way, I agree with you.
Is concurrent thread support in NT really that low?
The OS/2 Warp 4 kernel can be configured to handle 4095 concurrent threads, and it’s 10+ years old now.
Careful reading is in order: that’s 32 cores/processors, NOT threads. Everything built on NT has been running many more threads than that from the start; I assure you, I’ve written many applications that could optionally use far more threads than that on this platform.
Take a look at Philipp Haller’s work on “Actors that Unify Threads and Events”
http://lamp.epfl.ch/~phaller/
Sweet stuff
The WIN32 API is like 80x86 assembly: there is so much software written for it, it cannot go away. And for Microsoft to deploy a massively parallelized O/S, its applications must be written in a massively parallelized programming language, which rules out C/C++/C# (the concurrency model of threads/mutexes/semaphores is flawed and does not scale well)…so it means that Microsoft must abandon WIN32!
And future computers may not contain 64 cores, but 640 cores, or even thousands of cores. I would like to see Microsoft applications written for those architectures.
Of course the unix world, with C being so tightly integrated with unix, is in even bigger trouble…
I see what you are saying, but doesn’t UNIX already run on cluster farms, and thus already have to run well on systems with dozens, even hundreds of cores?
Couldn’t these systems be used as a benchmark to build multicore-efficient desktop OSs?
The core OS might be, but applications are not automatically multi-threaded. For instance, Oracle Application Server’s Webcache is single-threaded and not capable of operating more efficiently over multiple threads, to my knowledge. I’d love for someone to show me otherwise, however.
I can’t speak for the webcache itself, but the processes it spawns off are multi-threaded.
Multi-threaded applications on a multicore machine are very different from running hundreds of processes on a cluster.
I disagree with you. The only part of the Win32 API that is fundamentally serial is the graphics library, which must be called from a single thread for the most part. This is because it uses fundamentally thread-linked mechanisms, like Thread Local Storage and Asynchronous Procedure Calls, for delivering Windows Messages. Thus, I think GDI and Windows Messaging might need to be updated to allow the graphics functions to be easily manipulated from multiple cores.
On the other hand, the interesting uses of multicore are for doing more massive amounts of processing. Win32 is only tangentially involved in doing calculations. When all you’re doing is math, the OS APIs really do not matter. I bet Ty Carlson is just posturing, because there’s really very little of consequence Windows can do to solve the hard programming problems of multicore. Most of that work would have to be done by the compiler and language folks. Any changes to Windows would maybe be a redesign of internal components for scalability, but not a huge change to the application-level APIs.
In my experience, the problem with multi-threaded applications is not necessarily the development language, but the developer mindset and experience.
I would be of the mind that a language can evolve to be “easier” for multi-threaded application development, but if the developer cannot grasp concurrency and flow control then the provisions of the language will be of limited help.
Part of this can come down to an idea mentioned in 1984 (which I’m sure comes from a real psychological study): if you don’t have words for a concept then you can’t think of it. Alternatively, you can control people by limiting their choice of words.
The idea here is that we want to make developers who naturally think serially work concurrently, by preventing them from falling back to serial tools. Instead of doing it through repression, like in 1984, we should do it by sweetening the concurrency-friendly tools so much that people want to use them. An example here is LINQ in C# and VB.NET: it is an architecture that allows queries to be parallelized without the application developer thinking too much about correctness and what’s going on under the hood. People don’t care about that, but they do care about a friendlier syntax for doing complicated searches on a data set, which is what will sell LINQ if it indeed takes off.
We need more tools like this, where the dev can get a complicated, multi-threaded task done without having to think too much about locks. Will it scale as well at first as a hand-coded locking system? No. But that was also true of high-level programming languages at first. And perhaps, like HLLs, concurrency-friendly languages will let people write more efficient overall algorithms, because they can put more logical complexity in without having to worry about the lowest levels of the code being correct (i.e. sure, you can make assembly fast, but it may be better to code a more complicated, “tricky” algorithm in C#, where you don’t have to worry so much about annoying details).
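To make that concrete, here is a rough sketch of the same idea in C++ (not LINQ itself, just an illustration; the parallel_count_if name and the chunk count are invented for the example, and std::async is used here where boost::futures, mentioned below, would give you much the same building block): a query-style helper that splits the work across tasks, so the caller asks a question about the data set and never writes a lock or starts a thread.

#include <algorithm>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Toy "parallel query" helper: counts elements matching a predicate by
// splitting the range into chunks and running each chunk as its own task.
// The caller never sees a lock or a thread -- that is the LINQ-like idea.
template <typename T, typename Pred>
std::size_t parallel_count_if(const std::vector<T>& data, Pred pred,
                              std::size_t chunks = 4) {
    std::vector<std::future<std::size_t>> parts;
    const std::size_t chunk_size = (data.size() + chunks - 1) / chunks;

    for (std::size_t begin = 0; begin < data.size(); begin += chunk_size) {
        const std::size_t end = std::min(begin + chunk_size, data.size());
        parts.push_back(std::async(std::launch::async, [&data, begin, end, pred] {
            return static_cast<std::size_t>(
                std::count_if(data.begin() + begin, data.begin() + end, pred));
        }));
    }

    std::size_t total = 0;
    for (auto& part : parts)
        total += part.get();  // partial results are combined serially
    return total;
}

int main() {
    std::vector<int> values(1000000);
    std::iota(values.begin(), values.end(), 0);
    std::cout << parallel_count_if(values, [](int v) { return v % 7 == 0; })
              << " multiples of 7\n";
}

The point is the shape of the call site, not the plumbing: the “query” reads like a serial one-liner, and whether it runs on one core or eight is the library’s problem.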
” Most of that work would have to be done by the compiler and language folks. ”
How about this language?
http://en.wikipedia.org/wiki/E_programming_language
I quote: “A concurrency model based on event loops and promises ensures that deadlock can never occur.”
Perhaps you don’t understand the nature of programming for a multicore architecture.
Suppose we have a program that does the following calculations:
x = a + b
y = c + d
In a traditional programming language like C/C++/C#/Java/etc, this code is executed sequentially, one instruction after the other (even if the CPU executes these instructions out of order).
But this code is inherently parallelizable. Both lines can be executed in parallel.
Out-of-order execution will do that for the specific example, but what if you had the following code?
for (int i = 0; i < 80; ++i) {
    x[i] += y[i];
}
The above code will be executed sequentially on any current CPU…but if we had an 80-core CPU, each iteration could be done in parallel, speeding this code up by as much as 80 times!
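To make the point concrete, here is roughly what that loop looks like with an OpenMP annotation (GCC 4.2’s -fopenmp accepts this; without the flag the pragma is simply ignored and the loop runs serially). It is only a sketch: on an 80-element array the threading overhead would swallow any gain, and the real speedup is bounded by the number of cores and memory bandwidth, not a flat 80x.

#include <iostream>

int main() {
    const int N = 80;
    int x[N], y[N];

    for (int i = 0; i < N; ++i) {   // some sample data
        x[i] = i;
        y[i] = 2 * i;
    }

    // Each iteration touches only its own x[i] and y[i] -- no interdependencies --
    // so the runtime is free to hand different ranges of i to different cores.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i) {
        x[i] += y[i];
    }

    std::cout << "x[79] = " << x[79] << "\n";   // 79 + 158 = 237
    return 0;
}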
As you can see, in order to be able to parallelize processing, data should not have interdependencies.
But you cannot do that in an optimal way in a C program, because tracking all possible parallelizable paths is an intractable problem.
But there is a solution: the Actor model. If each variable is an Actor (i.e. object + thread + job queue), then whatever parallelism exists in a program automatically surfaces.
See Erlang for a commercial implementation of the Actor model, with programs with thousands of concurrent objects either in the same computer or in a distributed environment.
Also see boost::futures for a C++ implementation.
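For anyone curious what that “object + thread + job queue” shape looks like when hand-rolled, here is a bare-bones C++ sketch (written against the std::thread/std::mutex interface, which was modeled on boost::thread). It also shows how much locking plumbing the language makes you write yourself, which is exactly the part Erlang hides from you:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Bare-bones actor: an object that owns a worker thread and a job queue.
// Callers never touch the actor's state directly; they post messages
// (plain closures here) and the actor processes them one at a time.
class Actor {
public:
    Actor() : done_(false), worker_([this] { run(); }) {}

    ~Actor() {                       // drain remaining messages, then stop
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    void send(std::function<void()> message) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(message));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> message;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (queue_.empty()) return;   // only reached once done_ is set
                message = std::move(queue_.front());
                queue_.pop();
            }
            message();                        // run the job outside the lock
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool done_;
    std::thread worker_;
};

int main() {
    Actor counter;
    int total = 0;  // only ever touched from the actor's own thread
    for (int i = 1; i <= 10; ++i)
        counter.send([&total, i] { total += i; });
    counter.send([&total] { std::cout << "total = " << total << "\n"; });
}   // Actor's destructor drains the queue and joins the worker

Because only the actor’s thread ever touches its state, callers just send() messages and never lock anything themselves.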
So, in order for WIN32 to take advantage of multicore architectures, it should be rewritten in a language that allows for it, and C is not it.
Of course the unix world, with C being so tightly integrated to unix, is in even bigger trouble..
First, unix systems have been 64-bit for 10 years (not a year or two like windows).
Second, unix has been running on all the biggest SMP systems in the world for a decade now. Does windows run on a P5 595 with 64 CPUs? Does windows run on a Sun E25K with 72 CPUs?
As for 64-bit, it’s perhaps new in your windows world, but it’s really old in the unix one.
First, unix systems have been 64-bit for 10 years
January 19, 2038.
“January 19, 2038.”
Good thing that’s not a completely different problem or something…
Here we are talking about the number of cores, and therefore threads, an OS can utilize; who cares about the bitness of the processor? It really has nothing to do with the number of cores on a chip, or the number of processors in a system. It doesn’t matter if it is 64-bit or 16-bit (except for memory limitations, which are irrelevant here, because even 32-bit OSes can address more than the 4 GB limit).
Here we are talking about the number of cores, and therefore threads, an OS can utilize; who cares about the bitness of the processor?
We care because the more processors you have, the more threads you can run in parallel. You seem to be missing something here: a dual core IS a dual processor.
“A multi-core microprocessor ( or chip-level multiprocessor, CMP) is one that combines two or more independent processors into a single package, often a single integrated circuit (IC).”
http://en.wikipedia.org/wiki/Multi-core_%28computing%29
We care because, as the news item says, Vista was designed to run on one, two, maybe four processors (a Mac Pro already has 8 cores, for example). Unix was designed for dozens, with fully multithreaded kernels/network stacks/etc. for years now (and so already uses all the CPUs in a system).
We care because Unix has a long history with SMP/multicore systems. Almost all of them run Unix OSes, which thus possess a lot of programming APIs for parallelism.
We care because the first multicore systems all came from Sun and IBM, for their Unix OSes. What do you think? That they created these CPUs and then didn’t upgrade their OSes over the years to get the most out of them?
You quoted from Wikipedia: “A multi-core microprocessor ( or chip-level multiprocessor, CMP) is one that combines two or more independent processors into a single package, often a single integrated circuit (IC).”
If I refer to E. C. Zoller’s “Einfuehrung in die Grossrechnerwelt”, pages 3 and 4, I find a conflicting explanation where a distinction is made between a processor core and a “real” processor. A multiprocessor environment is characterized by two or more full-featured processors that can each be seen as a uniprocessor. If they cannot be seen this way, the term is dyadic processor (or triadic processor), i.e. a processor setting containing two or more processors that cannot be seen as uniprocessors. Anyway, a full-featured processor contains its own I/O and channels. It’s even possible to dedicate its own RAM to it.
Of course, there’s no problem with building more than one full-featured processor into a single container unit.
“We care because the first multicore systems all came from Sun and IBM, for their Unix OSes. What do you think? That they created these CPUs and then didn’t upgrade their OSes over the years to get the most out of them?”
Don’t forget to mention SGI here. They built desktop machines with multiprocessors when the PC was still sitting in its cradle going beep beep. :)
Second, unix has been running on all the biggest SMP systems in the world for a decade now. Does windows run on a P5 595 with 64 CPUs? Does windows run on a Sun E25K with 72 CPUs?
No, but Windows will run on 64-CPU Unisys ES7000s.
But if you’re going to brag about the impressive hardware unix can run on (and I certainly think one should), why mention such puny machines as the E25K? Unix runs on 2048-CPU SGI Origin machines. Now that’s hardware.
I am speaking of what I saw in production.
And with all the problems we’ve had with Windows on dual-processor servers, I highly doubt that Windows really runs on 64 CPUs outside of Microsoft’s laboratory.
Windows runs fine on dual-CPU or dual-core systems – in fact I highly recommend that everyone (if they can afford it) run any version of NT-based Windows (XP, Vista, etc.) on a system with at least two cores/CPUs, as this helps get around some of the unexplained response-time lags most everyone has experienced.
I have seen WinNT run on a 64 CPU Unisys box with my own eyes, but I believe that UNISYS had to make some code modifications in order to get that to work.
As for *NIXes, many of these have run, and continue to run, on very large systems with hundreds or more cores/CPUs, and scaling works very well. Stock Windows could theoretically do the same, and so could stock LINUX, but they are not there yet.
I would expect that MS would eventually bring out a non-backwards compatible OS for systems with large numbers of cores. I recall seeing some news a while ago about a MS research project along those lines.
Finally, some of us old fogies can remember systems built back in the ’60s that supported multiple CPUs, with OSs to make them work. A bit older than BeOS, I would say.
I only trust facts, and the fact is: how many people are running windows on hosts with more than 2 CPUs?
Now the question: why do people not run windows on more than 2 CPUs?
Because it’s more scalable than Unix?
Because it can handle large amounts of memory efficiently?
Because it runs very well under heavy load?
Actually, there are a fair number of WIN-based servers running on 4 or 8 cores – I know because I have sold many of them. But beyond that we hit a situation of diminishing returns. WIN does not work all that well on more than 8 cores right now. I do expect that to change, but when I do not know.
Proprietary *NIXs still rule the roost when it comes to very big iron with lots and lots of CPUs/cores. But for the desktop, a WinTel-based system with 2 or 4 cores still offers the best bang for the buck. Everyone should have at least 2 cores on their WinTel desktops if they can afford it, and if one can afford a PC at all these days, they can afford two cores.
From what I have read so far, I conclude IRIX on Origin, with its NUMA architecture, was far better than anything else available from other vendors in terms of scalability, maybe with the exception of Digital Alpha systems, and everyone knows how that ended.
Too bad SGI has always only been about engineering and lost against the marketing of inferior alternatives by the likes of Sun.
Nowadays GNU/Linux can also run on huge NUMA systems as shown by the Altix line with Itanium2 processors. Last time there were 4096 in a single system but I don’t know if a single system image runs all of them.
BTW I am typing this on a MIPS Loongson processor and am wondering why SGI chose those power hungry Itanium processors over low power and very elegant MIPS processors that consume up to 50 times less power at half the speed.
GNU/Linux runs on both, so why choose the inferior one over the other? Loongson 3 will be multicore with up to 16 cores. Cavium Networks also has a 16-core chip and it consumes 40W max at 1 GHz.
Nowadays GNU/Linux can also run on huge NUMA systems as shown by the Altix line with Itanium2 processors. Last time there were 4096 in a single system but I don’t know if a single system image runs all of them.
The biggest single image linux system that sgi sells is 512 sockets.
BTW I am typing this on a MIPS Loongson processor and am wondering why SGI chose those power hungry Itanium processors over low power and very elegant MIPS processors that consume up to 50 times less power at half the speed.
Largely because developing new processors that can compete with Intel takes a lot of time and money, time and money that sgi simply didn’t have. Sure, the MIPS was in many ways a much nicer processor, but the simple truth is that the Itanium and even the Pentiums were running circles around it when it came to number crunching. In 99-2000 or thereabouts, when the decision was made, the Itanium was by far the most promising CPU available. The Loongson design was still several years away from commercial availability, and still today even the fastest MIPS is far behind in processing speed. There are no MIPS machines on the top500 supercomputer list, and supercomputers are what sgi does.
Obviously their customers were willing to accept twice as large an electricity bill in exchange for their calculations being done 5-10 times as fast.
Well, that’s not true.
There is at least one C-based library (one I studied) that allows programming on a 48-CPU box.
I am not an expert programmer, so I can’t speak to the scalability of C languages on these new processors. However, regarding the Win32 APIs, haven’t they already begun moving on? For compatibility’s sake, Win32 must be supported for some time, but .NET’s managed-code runtime is Microsoft’s replacement (although it isn’t quite there yet, and Win32 development is certainly not deceased).
It will be some time, but you will see Win32 disappear like we have seen Win16 disappear from 64-bit Windows. This will have more to do with future customer demand than the processor core count.
Windows will evolve gradually, not radically. Microsoft is dependent on backwards compatibility. If that wasn’t the case, there should be no reason why Linux is behind at this point.
Wasn’t Vista supposed to be fundamentally different?
If it has Windows in the name, it might be fundamentally different on the inside, but the user interface and supported applications better be identical or nearly so to the enduser.
There’s nothing wrong with a continual shift towards what is a very big trend change for the industry (that is, end user affordable multi-core [and thus true multi-threaded] systems).
Wasn’t Vista supposed to be fundamentally different?
Indeed. More of the same old bullshit ever since I’ve been unable to avoid them (I wonder whether it was ever any different?) And people wonder why I wouldn’t believe Microsoft if they posted a message here saying “Twenex posts on OSnews”.
Whatever an OS has to do is perfectly doable without it having to take advantage of all those cores.
The only thing the OS should have to do with respect to multi-core processors is providing a way for applications to use them by scheduling the threads efficiently.
While it is quite obvious that if they’re going to make a new windows, they’ll need to redesign it from the ground up, I don’t really see what it could bring to the table regarding performance on multi-core systems.
I think it’s just them starting to plant the seeds of a “you NEED this new version of windows because vista and XP are woefully inadequate on today’s hardware” marketing campaign so they can manage to sell the goddamn thing once they release it (unlike vista).
Although I think you have a point with your planting-a-seed idea, it’s not that simple. Since the OS mediates access to all the I/O, thread scheduling, window management and memory management, these all need to be designed to hold up when a large number of threads may be accessing them at the same time. This is something recently improved in OS X: increasing the number of locks for different parts of the kernel, so you don’t have so many threads waiting for each other inside the kernel.
The APIs need to be designed to take advantage of the threading too. So things like CoreImage, which give you tools that are fully threaded, but behind the scenes so you don’t need to know about it. As I see it, with the languages that are common today, this is the only way for an OS provider to do it: make the APIs seem serial even when they are not.
I only mention OS X because I know more about its APIs than about Windows’ APIs.
In reply to someone mentioning clusters: clusters are very different from SMP machines. With clusters there is a separate OS image for each node. Most of the time each node is running the same job, just with different data, and with little communication between the nodes, as bandwidth is so limited. SMP machines have many jobs which share the same resources, and it’s this which makes the coding difficult: making sure that resources don’t get messed up because different processes are accessing them at the same time.
“I don’t really see what it could bring to the table regarding performance on multi-core systems. ”
Each core will be smaller and smaller, so if you are running the OS on one core it could be slower than today’s chips, especially if you have a lot of old programs that can only use the first core.
The whole problem with chips today is that they can’t get any faster because they are too large: the larger they make them, the longer the pipeline has to be, and a long pipeline can kill performance.
Intel and AMD have dumped a ton of money into research and development to figure out what would make computers faster, and the answer is multicore chips. Microsoft is just going to have to get used to that and make their OS work with it.
Instead of jumping on fixing Windows, Microsoft is going to try to get Intel and AMD to make processors with at most 4 cores, since it would save them money on development. It’s not going to work. There will be 8-core and 16-core, even 64-core processors; it’s only a matter of time, and Microsoft had better start making Windows work with as many processors as possible.
Second, unix has been running on all the biggest SMP systems in the world for a decade now.
So what? It was achieved through pain and kludges.
Do you know how much programming effort has been spent on this problem? Do these people know more now than they did 20 years ago? NO.
The thread model is fundamentally flawed. CELL was supposed to employ an ULTRA FAST async task-based model on top of bare metal, but so far CELL Linux suffers from bloat and thick layers.
That, however, has nothing to do with flawed thread models but a lot to do with Sony not wanting to give direct access to the hardware.
However, that doesn’t rule out the possibility of thread models being flawed. Trying to work in a serial way on parallel systems is bound to give weird results.
Sony not wanting to give direct access to the hardware.
Wrong. CELL is directly accessible.
Linux is just not the best OS for CELL.
Trying to work in a serial way on parallel systems is bound to give weird results
No comments =)
You seem to think that IBM hasn’t already released any product like the Linux QS20 blade. I don’t think that any Cell issues you are thinking of are because Linux can’t do it. It may be that you are thinking only of the Sony Playstations.
Links: http://www.itjungle.com/tlb/tlb091906-story02.html
http://www-03.ibm.com/technology/splash/qs20/
You seem to think that IBM hasn’t already released any product like the Linux QS20 blade.
Check my profile =) It would be surprising if i was not aware of any CELL based product.
I don’t think that any Cell issues you are thinking of are because Linux can’t do it
Well, this is the state of SPU support right now in Linux.
Slow communications and costly context switches.
Seems to be the only OS whose inherent design is based around multiple-CPU systems.
Can’t wait for HaikuOS. BeOS was the only OS that scaled well when throwing more CPUs into the mix.
Yup, they pretty much predicted the future to a tee with the BeOS. It’s a shame most people thought they were full of it, but I suppose that comes with the whole “ahead of its time” territory.
“It’s a shame most people thought they were full of it, but I suppose that comes with the whole “ahead of its time” territory.”
Well, since you’re talking about people that are full of it, don’t forget the profound lack of any business sense of JLG et al.
*cough* Consumer OSes, perhaps. *cough*
Solaris and other operating systems have scaled well for a long time now, even to many processors.
Does this mean we’re going to see a nano- or exokernel with various virtual machines running like eggs in a carton as the next Windows? Will the next Windows come with DOS, Win 3.11, 95/98/ME, and NT/2000/XP/Vista as install options, with provisions to install some FOSS OSes? Will the end user just see one coherent desktop with the various apps running and not know they are all different operating systems, like what Parallels does on Mac OS?
Probably not, but that’s what I’d do.
Now, that would be sweet.
Likely the biggest shift in CPU scheduling is going to be the change from a single run queue with multiple entries feeding into a CPU, to a CPU with multiple resources waiting to chew on tasks.
Yes, it requires a paradigm shift, but not as destructive as people might think.
and yes, it is interesting how Microsoft totally rewrote Vista and the rewrite basically did nothing to address the future.
Linux and BSD are still very heavily in the game for the newer paradigm, and actually already work very well for it.
Vista was not a rewrite. They rewrote several subsystems, but the whole architecture of the kernel and the OS remains the same. Well, except for graphics which has a totally new architecture based around milcore.dll.
Since Server2K3 and maybe XP (don’t have my Russinovich book handy), WinNT has had per-cpu runqueues with a global load-balancer task that’s periodically executed. You are not going to see any general-purpose OS that is more advanced on the Kernel level for multithreading than Windows. BeOS was cool with all of its pervasive multithreading, but that seems to me like more of a buzzword marketing endeavor than an actually useful feature. On a practical level, NT has fine-grained locking in the Core OS and the underlying design is fit for 32-64 cpus (the limit here is determined by the sizes of the thread cpu affinity masks). With Vista, the kernel devs have eliminated serialization in the driver port layers for networking and storage, so the Vista/2k8 kernel does address the present and near future quite well.
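To make the affinity-mask point concrete: a Win32 thread’s affinity really is one bit per logical processor packed into a DWORD_PTR, which is exactly where the 32/64-CPU ceiling mentioned above comes from. A minimal sketch, with the CPU index picked arbitrarily for the example:

#include <windows.h>
#include <iostream>

int main() {
    // Pin the current thread to logical CPU 2 (counting from 0). The mask is
    // one bit per processor, so a DWORD_PTR (32 or 64 bits wide) caps the
    // number of CPUs a single thread can be scheduled across.
    DWORD_PTR mask = DWORD_PTR(1) << 2;
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), mask);
    if (previous == 0) {
        std::cerr << "SetThreadAffinityMask failed: " << GetLastError() << "\n";
        return 1;
    }
    std::cout << "previous affinity mask: 0x" << std::hex << previous << "\n";

    // ... CPU-2-only work would go here ...

    SetThreadAffinityMask(GetCurrentThread(), previous);   // restore
    return 0;
}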
But don’t Linux and Solaris already run cluster farms with thousands of processors?
As for multi-threading, GCC 4.2 has OpenMP support.
I think in the future, we’ll see more and more apps in the FOSS and commercial world written to utilize multiple CPU threads.
Cluster farms and SMP are two totally different things. Windows could also run on cluster farms with thousands of processors, so that’s not really a big deal. The tricky part is writing an OS that can run on a single machine with hundreds or thousands of processors. There Windows has a bit of catching up to do (Windows maxes out at 64 CPUs) compared to for example Irix and even Linux.
Now I understand why Vista is so-o-o-o damn slow. It needs four cores at a time.
Microsoft needs to wipe the slate clean and start from scratch. Windows (including Vista) has too much crud supporting backwards compatibility. Sure, millions of apps won’t work, but do we really care if 99% of the crappy shareware, viruses, trojans and spybots don’t work??
I think Microsoft could easily “start over” while still maintaining a good degree of backwards compatibility.
They could implement something similar to “Classic” on the Mac, using a minimal version of Windows Vista instead of OS 9. However, if they do this, they had better make the code for it very much independent from the rest of the OS, so it will be easy to “lift” when it comes time. Better yet, they should make their Classic clone optional–or not even installed by default.
They could implement something similar to “Classic” on the Mac, using a minimal version of Windows Vista instead of OS 9. However, if they do this, they had better make the code for it very much independent from the rest of the OS, so it will be easy to “lift” when it comes time. Better yet, they should make their Classic clone optional–or not even installed by default.
Microsoft have already done something similar to this at least twice before – first with WoW (Windows on Windows) to provide the Win16 interface on 32-bit versions of Windows, and now with WoW64 to provide the Win32 interface on 64-bit versions of Windows.
AFAIK these systems are installed by default, but not started until/unless they’re actually needed.
Future EULA for new windows
Windows XXX Children Ed – max dual core + 3 concurrent windows
Windows XXX Students/Teacher Ed – max dual core only
Windows XXX Basic – max quad core only
Windows XXX Premium – max 8 cores
Windows XXX Business – max 16 cores
Windows XXX Enterprise – max 32 cores
Windows XXX Ultimate – max 64 cores
Windows XXX Super – unlimited cores
Windows XXX Pirated Ed – unlimited cores
As someone else pointed out, Vista was supposed to be the big rewrite. Add to that the comments from Microsoft that Vista would be the last time they did such a major change, and that they would instead do more incremental updates. I’m not seeing how they can do “fundamentally different” if they are only going to expand upon what they have in Vista.
A number of people have raised the issue of backward compatibility. What would be nice would be to see a brand new, built-from-the-ground-up OS from Microsoft without any legacy holding them back. Then bundle a virtual machine with Vista, XP, Windows 98, 95 and MS-DOS already included, to run inside a VM instance.
Or make those add-ons that can be downloaded for legacy support. If you wanted to run Linux, Solaris, etc. in a VM instance, you could pay for a version with support for 3rd-party OSes.
I agree.
Even better, it should be possible to develop a hybrid desktop so that applications running in each VM could all appear in the same desktop. From the user’s POV, she would be using a single OS. Under the hood, multiple OSs would be running in multiple VMs.
And Be Inc. implemented multiprocessor support more than 10 years ago… I think that modern OSes like Windows, Linux and Mac OS X cannot really be called modern.
Sperry UNIVAC mainframes were multiprocessor in the late 1960s. Be was *generations* late to the party. :)
Here’s an example of a program which took advantage of such features in 1972…
http://www.fourmilab.ch/documents/univac/fang/
Sorry to burst your bubble, but Windows NT 3.1 was multiprocessor-enabled from the start: however, you did have to install the multiprocessor-enabled kernel when you installed the OS. Yes, this was back in 1993.
….I quickly read the headline as “fundamentally FLAWED”; I only did a double-take because I didn’t see this as news!
Anyway, back to downloading Fedora 7 :)
If Microsoft couldn’t redo the file system for Vista, how are they gonna redo the whole core of Windows to take advantage of multiple processors? Historically, NT-based OSes take a huge performance hit with multiple processors.
I think Microsoft will fail at this, and open source won’t.
As a number of people have noted, if a radical re-write is in the cards, virtual machines will be the answer for backward compatibility.
By the time this thing comes around, 8-16GB will be the norm, and it will be no problem to launch independent VMs for each legacy OS. The host OS will of course be heavily optimized for these legacy OSes, and will do all it can to give the apps a native look and feel, but they will be running in their own little sand box. As another poster suggested, they could even bundle entire XP and 95 VMs for even better application compatibility than Vista.
I think the “time slice” paradigm will not scale very well on a computer with a hundred cores. Cutting the execution of programs into short intervals and dispatching them is not very efficient: scheduler stress, context switches, …
Why not have multiple specialized schedulers (with dedicated cores), with the OS dispatching programs to the right scheduler?
The whole article strikes me as pure hot air. Either before, or after, a major release of Windows we always get some marketing guy telling us how Windows is being fundamentally redesigned – yadda, yadda, yadda.
No one is going to be able to truly make use of multiple cores and processors on a system, unless you have applications that truly already need them – in which case you’ll have multiple processors anyway. The whole concurrency issue always gets in the way as well, because from a logical point of view there are just some things you find out you can’t do concurrently no matter how hard you try.
You are always bound by the slowest task to be finished, so I would advise anyone touting multiple cores and re-writing all our applications to take advantage of them to look up Critical Path some time. The only things you can truly run concurrently are those things which are independent, i.e. processes and different applications on different cores.
There’s really nothing that any operating system, including Windows, can do to make all this any better and there’s just nothing more that can be done right now.
In nearly all multi-threaded programming languages you must lock. Locking brings overhead, so spawning multiple threads might actually slow everything down. And that is assuming you do it perfectly; the bugs which can occur because of dirty reads are almost undetectable and unreproducible.
Locking must happen manually since it’s about an intention: when is the developer finished modifying the object, and is it ready to be released into the wild? The compiler can guess, but never be sure.
For this reason I really don’t like to do multithreading in Java/C/C++/C#. In SQL the locking is annoying, but I at least get the feeling the environment is trying to help me.
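A tiny C++ illustration of that burden (spelled with std::thread/std::mutex here; boost::thread at the time looked much the same): the lock_guard line below is the “intention” being talked about, and nothing in the language forces you to write it. Leave it out and the program still compiles, still usually prints something plausible, and is silently wrong.

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long counter = 0;
    std::mutex counter_mutex;   // the intention has to be spelled out by hand

    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&counter, &counter_mutex] {
            for (int i = 0; i < 100000; ++i) {
                std::lock_guard<std::mutex> lock(counter_mutex);  // forget this and
                ++counter;      // you get the dirty-read style bugs described above
            }
        });
    }
    for (auto& worker : workers)
        worker.join();

    std::cout << counter << "\n";   // 400000 with the lock; usually less without it
    return 0;
}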
Microsoft did some research on transactional memory (look it up on MSDN Channel 9). Now THAT looked really cool and useful. It’s like a transaction on your objects (including rollback, your own view, etc.). But it’s cool, so it will most likely never be included in Windows.
Speed and number of processors are not the problem with Microsoft Windows. Their development methods, and questioning every action the user asks the PC to do (in some cases more than once!), are. Who cares if adware, spybots, viruses, etc., can take over your machine faster and in a multi-threaded fashion? I know I do not.