Herb Sutter, a software architect from Microsoft, gave a speech yesterday at In-Stat/MDR’s Fall Processor Forum. Addressing a crowd mostly consisting of hardware engineers, he talked about how the software world was ill-prepared to make use of the new multicore CPUs coming from Intel and AMD.
From the pathetic look of the 360 games, I would say Microsoft is the one ill-prepared for multi-core engineering.
The rest of us are doing just fine. But thanks for the concern MS…
Says the company whose desktop OS does not even support SMP.
I think most client-side programs don’t benefit much from multiple cores. Common tasks in a word processor, email client, chat client, and web browser are quite light and won’t be much faster with multiple threads. Of course, there is image and video processing software that may benefit greatly from multiprocessing, but I suppose the average user mainly uses it for resizing his/her photos…
s/software world/Windows world/
But even on Windows, nobody is running one application at a time. The OS has things to do periodically, most people run real-time virus scanners, spyware filters, etc. They have their IM clients going, email, etc.
As soon as you’ve got more than one application trying to do something at the same time, you’re benefiting from multicore, just like you would from old-style multi-CPU SMP. Assuming your OS can schedule threads across cores, which the NT series (NT, 2000, XP) can.
Most non-trivial applications use threading internally, too… instant multicore boost.
Next question… does XP handle 4 or more cores sanely, without having to buy a “server” edition? Apple’s just proven that OS X handles 2x CPUs with 2x cores each, without changes or drivers or a “server” license. I eagerly await their x86 boxes.
– chrish
“I eagerly await their x86 boxes.”
Uh, why?
Slower and more expensive.
Multithreading is much more pervasive in the Windows world than it is in desktop Unix software, but most nontrivial programs are still not multithreaded, and definitely not in a way that exploits parallelism for high performance. And this is what matters as clock-speed increases stop delivering performance gains. This isn’t about seeing some nonzero performance increase across your overall computing experience through SMP; it’s about no longer getting the explosive growth in any single task’s performance that you’ve come to expect.
Next question… does XP handle 4 or more cores sanely, without having to buy a “server” edition?
Yes it does. Windows is licensed based upon the physical CPU (not taking into account their VM licensing). Windows can tell the difference between virtual CPUs (i.e., hyperthreading or multiple cores) and the physical CPU that sits in the socket. XP Home is licensed for one physical CPU, but supports any number of virtual CPUs. Likewise, XP Pro is licensed for 2 physical CPUs with any number of virtual CPUs. This extends up through their server editions as well, Datacenter currently topping out at 128 physical CPUs (32 on x86).
“This extends up through their server editions as well, Datacenter currently topping out at 128 physical CPUs (32 on x86).”
Nope, as I stated in an earlier post, Windows Server can only enumerate a maximum of 64 CPU cores in a single OS instance, due to the use of a simple 64-bit CPU mask. This can and likely will be changed in the next release.
I don’t have one of these new multicores, but I do have a P4-HT w/EM64T, and the difference in multi-tasking is simply amazing, to say the least. I believe this proves that the future for multi-core/SMP is wide open.
Not support SMP? SMP has worked in Windows AT LEAST since Windows NT 3.51. And even at that, it is one of the few operating systems that can migrate threads (not just whole processes) between processors.
As for the benefits, while applications can take specific advantage of multiple cores, the benefit will be that I can do more and still retain good responsiveness of the system.
Maybe people are starting to see why forcing pervasive multithreading on developers was a good (but far too early) move from Be Inc.
As BeOS shows, when wholly multithreaded, a system gets better responsiveness. When running on an SMP machine, it gains performance too.
Which bit of ‘whose desktop OS’ did you miss? You know: 95, 98, ME, XP Home, none of which supported SMP. For further reference see Linux, BeOS and Mac OS X.
Now why do you think most Windows desktop apps are single-threaded?
On Linux, you can SERIOUSLY exploit multi-core processors already. What Microsoft is really saying is that their OS and development model doesn’t.
🙂
Yeah, Microsoft isn’t ready for multi-core… they never have been and never will be. As far back as NT 3.51, their SMP has… blown chunks… perhaps if they switched to a Linux microkernel they might have a chance at survival.
As much as I love making fun of MSFT, this is an outright lie.
>>perhaps if they switched to a Linux microkernel<<
Linux isn’t a microkernel.
As for SMP support, Windows and its software have been behind the curve for years. OS X on Intel multi-cores will rock: Mac workstations normally have multiple processors, so the software is already coded for it.
NT has the most mature implementation of SMP out of Linux, XNU, and NT. XNU would be in the least-mature slot, despite any hardware configuration decisions on the part of Apple.
NT’s implementation is the most mature, true, but I’d wager Linux’s is slightly to significantly more sophisticated and more scalable. While Linux’s is much newer, they’ve also got some code/design in there from AIX’s very mature SMP implementation. They also seem to have NUMA covered more thoroughly than NT, at least from what I gather reading tech notes regarding NUMA for Windows software.
BeOS was fully prepared for SMP, years ago. Even the programming APIs were designed for SMP. Years ago…
… and still we are living in this single core world. It’s a shame.
… Um… I can tell you are unqualified to make such a statement, simply because you wouldn’t have said it otherwise.
Windows NT is a great kernel and its SMP is really good. Just because you hate how your Windows XP runs doesn’t have anything to do with the topic at hand.
Shall I come over and beat you with a clue stick? Maybe just a stick?
I have been running a Dual CPU Athlon MP system for several years on Win XP Pro. Not a single problem, great responsiveness, and if I launch 7-8 programs, everything is still very fluid. Task manager shows processes divided between processors.
Anyone who claims windows does not do SMP obviously has not tried it.
I am so sick of reading the “MS Can’t” bullshit all the time. I have tried Linux and Mac OS X. I find Mac OS X above Windows in features and usability, especially the “little things”. However, it’s not worth the premium, and the incompatibility with things I use daily. But seriously, why all the hate?
Some of you act like Microsoft pulled you from your cribs as young children and molested you. It’s sickening! Get over it: they are a corporate giant, and act like one. Show me one rosy corporate giant that has never pulled underhanded things. Do you hate them all like you hate Microsoft?
While it’s evident that there is a lot of MS backlash in this thread, he is telling the truth about programming across the board. If you examine most software, you won’t see threading libraries very often, because most programmers have not been educated on how to develop software in a multithreaded environment. In fact, the only time most developers go the thread route is when they have to retrieve data and it becomes unavoidable that the app will hang until that retrieval completes; then, after being scolded by users, they tend to say, hey, let me check out this threading concept I’ve heard about that allows me to perform multiple tasks simultaneously. Multithreaded programming is not intuitive, and developing an entire application using a multithreaded design requires lots of training, thought, and discussion, because you can’t avoid prioritizing every action the application performs. Sequential (linear) programming has been embedded into developers for the past 50 years, so switching to an entirely different paradigm won’t be easy, and he is absolutely being truthful when he says that as of right now, regardless of which operating system you’re running, most of the developers writing the software you’re using have little to no clue about how to maximize the benefits of a multicore environment.
It’s not just a matter of education, but a matter of tools and practicality. That is to say that it is easier (especially in imperative languages without inherent concurrency primitives) to write single-threaded code and the most practical decision has been to do so in terms of performance and development time given the niche multiprocessor systems have been. When threading has been utilized it has often been so naively, as one might with BeOS or Java wherein lots of threads are spawned for conceptually distinct tasks as a means of providing “fire and forget” asynchronous behavior.
Lots of libraries are not thread-safe, and the naive manner in which one is to make use of such libraries is either to perform locking when calling into them, or to delegate all interaction with the given library to a single thread.
There are a lot of other things involved here, but I can’t really address them now.
I agree strongly with this and with the statements of the software architect (I did not RTTA; I don’t use the F word). It was something I was thinking about a few days back. Multi-core systems fundamentally introduce a new programming paradigm.
It introduces parallelism into the soup: which tasks, algorithms, and functions can parallelise? Will they parallelise well? Will they scale well to more processors? How much should I handle myself, and how much should I let the OS figure out, when developing on a parallel platform?
The horizons that parallelism opens up are exciting, but they will be largely unexplored for a long while yet…
You mean all those multi-threaded VB6 applications that I wrote, won’t take advantage of a multi-core system????
Java’s ready and will greatly benefit.
>Java’s ready and will greatly benefit.
HAHAHAHAHAHAHAA
thanks for the laugh
I think what the original author meant to say is that Java has thread support; and, as someone who’s used the language I can tell you that it does, plus it’s not hard to learn how to program with threads in Java.
This is the classic example of the hardware guys pointing the finger at the software guys. As a software guy, I object. Plus, of course there’s going to be a lag time between the widespread adoption of a technology and the software taking full advantage of it; that’s by design.
I second that. People writing software need time to adjust to significant changes in how software is to be written in order to take advantage of new hardware features, or features which are all of a sudden becoming commonplace. Also, I think C++, like many older languages, doesn’t have multi-threading support in the standard library, although I may be mistaken about that since I’m still only learning that particular language.
“As BeOS shows, when wholly multithreaded, a system gets better responsiveness. When running on an SMP machine, it gains performance too.”
True, one quote amongst many. Multithreaded/multiprocessor ready applications should show performance gains in SMP/multicore machines unless extremely poorly designed. This being said, I’d fully place the blame on MS for not pushing awareness of performance gains of SMP/multicore aware software. After all it has been fairly cheap to build SMP machines for some time now, however windows software, generally, never really took advantage of it as compared to other OSes and applications developed for those OSes. As an end user example, I would have expected games to long since have taken advantage of SMP when available, alas this has never happened as most developers eventually trot out the “…it’s more difficult to debug, so just buy a ‘bigger’ CPU…” excuse.
“windows software, generally, never really took advantage of it as compared to other OSes and applications developed for those OSes.”
Threading has been supported since Win32 (around 1994–95, IIRC), whereas Linux and *BSD thread support came a little later, not to mention MacOS Classic. Anyway, most if not all *live* OSes support threading at least, so they are (more or less) ready for SMP.
What’s important here is which programming model mainstream programmers are used to. And, as a matter of fact, most of them are used to the classic sequential and event-driven models.
Win32, Cocoa, Qt, GTK, Java AWT & Swing all provide foundations for both models, and make threading your software neither mandatory nor recommended. They all provide threading support, but it’s up to developers to choose whether to use it.
This doesn’t mean developers never use threads, just that they’re not yet fluent in real asynchronous software design.
Which is not *that* easy, BTW, compared to the previous programming models. And it can quickly become really messy too, thanks to deadlocks.
Just like many IT methods, doing multithreading just for the sake of it is not always the right way to do things. Sometimes a “naive” threaded model keeps your design and code (and debugging) simpler; sometimes sequential programming is better. It all depends on what you’re after and the compromises you’re ready to accept.
But just for performance on SMP systems, threading is a good way to do it today, sure.
Microsoft Windows had full support for multiple threads and SMP with Windows NT 3.1, released in 1993.
Windows NT and derivatives (depending on the version) have supported from 2-32 processors (that number might have gotten larger); there is/was a single processor kernel and an SMP kernel, and one was installed/booted depending on the machine configuration it was installed on, at least at first.
Where SMP gets fun and suffers a general system performance degradation is during low-level kernel operations in interrupts that require spinlocks. This is why there is/was the single processor kernel and the SMP kernel.
Whether you have more than one processor per package or not makes no difference to the programmer for execution correctness; indeed, it doesn’t even matter if there’s only one processor. An application that uses more than one thread and is written correctly for synchronization will execute correctly on any number of processors, whereas one written incorrectly may appear to work on a single-processor machine, or on one particular machine, but will be notably less stable when run on a true SMP system, where more than one thread can actually be modifying the same data at the same time with higher probability. Technically, an incorrectly written multithreaded application can suffer the same problem on one processor if there are race conditions, but the user may get lucky more often than not, while that luck degrades greatly as the number of processors and threads touching the same data at the same time increases.
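To make that concrete, here is a minimal sketch of exactly that kind of bug (using C++ and std::thread for brevity; the failure mode is the same with Win32 or pthreads):

    // Two threads increment a shared counter with no synchronization.
    // On a single CPU this often appears to work; on a real SMP machine
    // the lost updates show up almost immediately.
    #include <iostream>
    #include <thread>

    int counter = 0;                      // shared and unsynchronized: the bug

    void hammer() {
        for (int i = 0; i < 1000000; ++i)
            ++counter;                    // read-modify-write, not atomic
    }

    int main() {
        std::thread a(hammer), b(hammer);
        a.join();
        b.join();
        std::cout << counter << '\n';     // correct code would print 2000000
    }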
Current 64-bit versions (x64 and IA64) of NT (2k3, XP) theoretically support 64 CPUs, though only Datacenter Edition is licensed for the maximum supported number. 32-bit versions support 32 CPUs. This is because CPU enumeration uses a simple 32-bit or 64-bit mask, depending on the native word length of the system. This is an arbitrary data structure and will likely be changed in the next release (Longhorn Server / NT 6.0).
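The limit falls straight out of the representation; a toy stand-in for that kind of mask (hypothetical, not the actual NT structure) makes it obvious:

    // One bit per logical CPU: a word-sized mask caps the CPU count at the
    // word length. (Illustrative only, not the real kernel type.)
    #include <cstdint>

    using cpu_mask_t = std::uint64_t;      // 64-bit word -> at most 64 CPUs

    constexpr cpu_mask_t cpu_bit(unsigned cpu) {
        return cpu_mask_t{1} << cpu;       // CPU number 64 has no bit to set
    }

    bool cpu_in_mask(cpu_mask_t mask, unsigned cpu) {
        return (mask & cpu_bit(cpu)) != 0;
    }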
Currently the largest NT-based SMP system available is the HP Integrity Superdome 64p, which (running 2003) supports 64 Itanium 2 1.6 GHz CPUs and 1 TB of RAM. It currently holds the number 3 spot in the TPC-C non-clustered results, running Windows Server 2003 Datacenter Edition and SQL 2005 Enterprise Edition 64-bit.
http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=105060604
I’d fully place the blame on MS for not pushing awareness of performance gains of SMP/multicore aware software.
It’s sort of futile to blame Microsoft. Microsoft can’t change the nature of software engineering. Writing parallel code is, well, hard. It’s easier in some languages (Erlang) than others (C), but it’s still not as easy as writing single-threaded code. Engineers don’t like to do things that are hard, not until tools evolve to make them easy. Current software is fragile enough as it is; can you imagine how much more it would suck if it were aggressively multithreaded too?
Programmers will switch en masse to writing parallel code when tools are developed that make it nearly as easy as writing single-threaded code. These tools will take the form of language extensions and new languages, explicit compiler support, explicit OS support, well-designed OS-level APIs, and new algorithms and advances in computer-science theory. The current “an N-way parallel program is just N single C threads” paradigm just won’t cut it.
Unprepared for multicore CPUs? How so? The OS doesn’t matter; anything contemporary will handle them just fine. Perhaps some operating systems scale better at higher processor counts (IRIX > Linux > NT/XP/W2K), but that’s about it.
Multithreaded programming is widely understood and used. I write all sorts of tools and things as a biologist, and most of my apps and lots of libraries I use are multithreaded already. Many of my scripts operate multiple concurrent threads (or, very often, multiple processes).
I think that scientific programmers have been doing this sort of thing for 20 years in one way or another. Maybe PC application programmers haven’t been at it as long, but they’ve had a number of years to get into it.
Windows developers have many choices and Microsoft provides many ways NOT to use multithreading in Apps.
Some of these are a hangover from early non-preemptive versions of Windows that used to run cooperative multitasking around the message loop.
There are async socket calls you can use instead of forking processes. There are Windows timers which can fire a regular event into the message loop.
No doubt there are other alternatives to threads.
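For the timer alternative, a minimal sketch against the real Win32 SetTimer API (window creation omitted; the tick body is a placeholder):

    // A periodic event delivered through the message loop: no threads, no
    // locks. The GUI stays responsive as long as each tick does little work.
    #include <windows.h>

    VOID CALLBACK OnTick(HWND hwnd, UINT msg, UINT_PTR id, DWORD time) {
        // do one small slice of work here, all on the UI thread
    }

    void RunLoop(HWND hwnd) {
        SetTimer(hwnd, 1, 100, OnTick);    // WM_TIMER roughly every 100 ms
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);         // routes WM_TIMER to OnTick
        }
        KillTimer(hwnd, 1);
    }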
There is plenty of support for threads in Windows, but they may not be the natural thing to do. In BeOS, not spawning a thread would generally be a very perverse choice.
I bet most modern apps on Windows are multithreaded; it helps to stop the GUI from locking up, for a start.
Listen to Herb Sutter. He’s been saying this for at least 6 months. He was the chair of the C++ Standard committee before he was at Microsoft, and has worked hard to push MS to comply better with the ISO C++ standard. He is a major contributor to comp.lang.c++.moderated and comp.std.c++, and also the C/C++ Users Journal (and C++ Report before it went defunct). He has shown severe problems in the Double-Checked Lock idiom, and polymorphism in the face of multithreading.
Don’t write him off as an MS cheerleader. He has done a lot of work making his name in C++ and software in general. See his website http://www.gotw.ca and his article in DDJ called “The Free Lunch is Over”: http://www.gotw.ca/publications/concurrency-ddj.htm
Seriously, if we as developers ignore this, we’ll be just like people 20 years ago ignoring OO.
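For those who haven’t read those papers, the Double-Checked Locking problem he documented is easy to sketch (a minimal illustration; the mutex here is C++11’s std::mutex, but the flaw is the same with any lock):

    // Double-Checked Locking: looks like a clever fast path, but the
    // unsynchronized first check can see a non-null pointer before the
    // object's fields are constructed, since neither the compiler nor
    // the CPU must order the writes inside "new" with the pointer store.
    #include <mutex>

    class Singleton {
    public:
        static Singleton* instance() {
            if (ptr == 0) {                        // 1st check, no lock: the flaw
                std::lock_guard<std::mutex> lock(mu);
                if (ptr == 0)                      // 2nd check, under the lock
                    ptr = new Singleton;           // stores may be reordered
            }
            return ptr;                            // may point at a half-built object
        }
    private:
        static Singleton* ptr;
        static std::mutex mu;
    };

    Singleton* Singleton::ptr = 0;
    std::mutex Singleton::mu;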
I don’t know if a C++ guy is the best one to spearhead this whole movement, though. The ISO C++ folks go to comical lengths to avoid overhead, and like it or not, doing parallel code properly is going to impose significant overhead in the single-threaded case. When the C++ folks attempted to handle automatic memory management, their “no overhead” philosophy resulted in a technique (smart pointers) that manages to be much slower than GC on some benchmarks: http://www.hpl.hp.com/personal/Hans_Boehm/gc/04tutorial.pdf (pgs. 53-55).
C++ won’t cut it for writing good parallel code (Lisp likely won’t either, so there). Languages will need to be redesigned, at the level of their basic computational calculus, to do parallel code support properly. Microsoft is doing some research in this area (Polyphonic C# applies the Pi calculus to C#). These efforts may produce the necessary results.
Funny, those are the issues Sutter (and others) are bringing to the C++ committee–that the language will have to address threads explicitly, even if it harms the single-threaded case.
Additionally, his arguments finally convinced me that GC in C++ is a good thing. He has also been one of the people working on a GC model for Managed C++ which separates object lifetime (which can be deterministic) and memory reclamation.
And I’m confident he’s familiar with the C# efforts. Since C# has some problems in the MT arena (all languages do) I don’t see why it has a better chance at addressing MT issues than C++ does.
Funny, those are the issues Sutter (and others) are bringing to the C++ committee–that the language will have to address threads explicitly, even if it harms the single-threaded case.
I’m not talking about threads. Native thread support in C++ won’t make parallel code any easier to write than it is in Java. I’m talking about basing the language on an explicitly concurrent model of computing. The precise computing model to use is still an area of intense research, but Pi calculus (as used in Polyphonic C#), seems to be a popular one.
And excuse me if I don’t trust the same people who came up with auto_ptr (and encouraged shared_ptr) to come up with a proper parallel C++.
Additionally, his arguments finally convinced me that GC in C++ is a good thing.
But you’ll never be able to have a good GC in C++! As long as you can convert an integer into a pointer, you’ll need a conservative GC, and that makes a lot of the really high-performance GC algorithms unusable.
And I’m confident he’s familiar with the C# efforts.
Polyphonic C# is a separate research project being funded by Microsoft, not a part of their main C# effort.
Since C# has some problems in the MT arena (all languages do) I don’t see why it has a better chance at addressing MT issues than C++ does.
The standards committee is likely to be the primary reason why any concurrency concepts in C++ will suck (just as the metaprogramming concepts suck). They refuse to break compatibility with “classical” C++ and C, and are thus stuck using inferior solutions. Moreover, pointers and guarantees of byte-level object layout will likely become another issue. For the overhead of a truly parallel language to be tolerable, you’ll likely need powerful optimizers, which are exceedingly difficult to build for a language that makes as many guarantees about memory layout as C++ does. Hell, C++ can’t even support a compacting GC!
I’m talking about basing the language on an explicitly concurrent model of computing.
Fair enough. I disagree on this point.
And excuse me if I don’t trust the same people who came up with auto_ptr (and encouraged shared_ptr) to come up with a proper parallel C++.
Both of those have their place. If you’re just knee-jerk objecting to them, then I’m afraid your credibility drops to near zero.
But you’ll never be able to have a good GC in C++! As long as you can convert an integer into a pointer, you’ll need a conservative GC, and that makes a lot of the really high-performance GC algorithms unusable.
Hell, C++ can’t even support a compacting GC!
Now you’re just exposing your ignorance. Read up on what has been done in Managed C++. Part of it includes the ability to include or exclude objects from the GC heap. Different rules apply to the GC heap objects. The CLI GC is compacting, and the (just released) VS2005 includes Managed C++ with GC.
Both of those have their place. If you’re just knee-jerk objecting to them, then I’m afraid your credibility drops to near zero.
It’s hardly knee-jerk opposition. I speak as someone who’s been programming in C++ for years. I think the whole “modern C++” thing is great, but smart pointers are just plain dumb. Even the Boost guys admit they are slow. They don’t fill a niche that needs to be filled; they are slower than GC, and more cumbersome to use.
Now you’re just exposing your ignorance. Read up on what has been done in Managed C++.
Managed C++ isn’t C++. It’s C# dolled up to look like C++.
More ignorance. auto_ptr was made for a specific purpose. It works well for that purpose, and has a deterministic lifetime. That deterministic lifetime is important, and the reason I’ve been won over to the GC model in Managed C++.
Calling Managed C++ “dolled up C#” is ridiculous. It is a collection of conforming extensions which in VS2005 bring true GC to C++. Dinkumware worked with MS to even make the STL work with GC objects.
The C++ committee has paid close attention to the C++/CLI work because it will likely address GC in the future, and the CLI work is a great start.
I must respectfully disagree about the difficulty of writing concurrent code; it is more difficult than writing linear code, but it is mostly a matter of learning a few additional concepts and adjusting one’s perceptions slightly.
In contrast, it is far more difficult to introduce concurrency into existing linear code without dramatically curtailing the concurrency. In many cases this cannot be done without significant rearchitecture.
With a few exceptions, C++ is also not a bad language to write concurrent code in. Neither garbage collection nor smart pointers are needed. If it is approached with the idea that no more than one thread “owns” a piece of memory (generally exclusive right to update or delete it), the management is relatively straightforward and far more efficient than the two alternatives given. The greatest problem I have found is managing thread exit; threads must not exit while owning any memory.
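A toy sketch of that ownership discipline (illustrative, not from any particular codebase): the producer owns each object until it is queued; after that, only the consumer may touch or delete it.

    // Single-owner handoff through a locked queue: no shared mutation,
    // no GC, no smart pointers. Ownership moves with the pointer.
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct Job { int payload; };

    std::queue<Job*> jobs;
    std::mutex m;
    std::condition_variable cv;

    void producer() {
        for (int i = 0; i < 100; ++i) {
            Job* j = new Job{i};               // producer owns j...
            std::lock_guard<std::mutex> lk(m);
            jobs.push(j);                      // ...until here: ownership moves
            cv.notify_one();
        }
    }

    void consumer() {
        for (int i = 0; i < 100; ++i) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !jobs.empty(); });
            Job* j = jobs.front(); jobs.pop();
            lk.unlock();
            delete j;                          // sole owner: safe to free
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join(); c.join();
    }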
If a reasonable threading model is started with, most problems yield relatively easily. Tracking down problems is more difficult, but the basic debugging tools are getting far better and the techniques, although nonlinear, are reasonably straightforward.
I have worked extensively with highly concurrent software for over a decade on Unix platforms, using shared-address concurrency in interprocess and intraprocess contexts simultaneously, much of that effort as part of a DBMS engine, and I learned to do it without any specialized training other than reading and experimenting.
With some training, any reasonable developer ought to be able to learn and use the techniques in six months, sufficient for most application work.
I must respectfully disagree about the difficulty of writing concurrent code; it is more difficult than writing linear code
I would say it’s much more difficult than writing linear code. Take a look at the effort that has been put into making Linux, BSD, etc., highly scalable on SMP machines. You need sophisticated locking strategies even to do relatively simple things. Moreover, the resulting code is not very amenable to closed-form analysis, since the theory regarding concurrent computations is lacking.
With a few exceptions, C++ is also not a bad language to write concurrent code in.
It’s no worse for writing concurrent code than most existing languages. I don’t think that’s good enough.
If it is approached with the idea that no more than one thread “owns” a piece of memory (generally exclusive right to update or delete it), the management is relatively straightforward and far more efficient than the two alternatives given. The greatest problem I have found is managing thread exit; threads must not exit while owning any memory.
How easy is it to impose that discipline on an entire program without language support? Conceptually, manual memory management is simple too: remember to free every object you’ve allocated. The fact that almost all major software has memory leaks suggests that it’s not as easy as it sounds.
If a reasonable threading model is started with, most problems yield relatively easily.
Some problems yield more easily than others. Our simulation software at work is highly concurrent, and enforcing concurrency in it isn’t particularly complicated. However, that’s because the problem domain is naturally concurrent (i.e., simulate 10,000 portable radios on a battlefield). Even then, I’ve still spent a few weekends debugging race conditions. With other things, compiler algorithms for example, there is just so much interaction between threads that enforcing any kind of discipline is complicated.
I have worked extensively with highly concurrent software for over a decade on Unix platforms, using shared-address concurrency in interprocess and intraprocess contexts simultaneously, much of that effort as part of a DBMS engine, and I learned to do it without any specialized training other than reading and experimenting.
With some training, any reasonable developer ought to be able to learn and use the techniques in six months, sufficient for most application work.
I think the gaming industry is interesting to consider in this context. They’ve clearly got some good programmers, and they certainly have a need to make maximum possible use of people’s hardware. Yet even Carmack didn’t think it was worth the trouble to make Doom III multithreaded, even though he clearly foresaw the need and tried supporting SMP in Quake III. If six months were enough for someone to write good multithreaded code, don’t you think developers would be less frightened about the multicore future than they seem to be?
“…Industry Is Ill-Prepared for Multicore”
It looks like 3-Prepared here…
Herb is spot on. You’d expect him to be, as he is an engineer of such proven standing.
Most apps are single-threaded. But then again, the whole notion of an ‘app’ is about user interaction.
Demanding computing (scientific, graphics, e.g. rendering movie special effects) has gone parallel wherever possible. But that is hardly your desktop ‘app’.
So many apps are annoyingly single-threaded. Microsoft Outlook and Lotus Notes (from personal experience) both ‘hang’ while doing searches, but interestingly not when doing ‘advanced searches’. I have yet to understand why the developers took this shortcut, as any reasonable beta-tester would run into it and complain.
In Java, many basic containers (e.g. Vector) are thread-safe by default. People don’t realise there is a performance cost they have been bearing. It will all be worth it. Let’s hope that Herb takes the frameworks a lot further, so that a ‘modern’ program can be more easily parallelised (preferably by the compiler, but compilers cannot easily refactor your basic algorithm choices, more’s the pity).
Penguins are cute,
Linux is free,
I’ll never use Windows,
nor support a convicted monopoly
Hey, can someone mod down this twitty troll? (I expect this comment will be modded down as well, being mostly about pointing out a troll.)
Hey, 82.165.235.—, if you can’t say anything that even hints that you read more than “MS” from the headline, why don’t you go back and read so you can at least make a more pointed troll? If you had read the article at all, you’d realize that it was only circumstance that someone employed by Microsoft said a few important things that also relates to how your favorite pet (Linux) is no more immune from reality than what you hate (Microsoft and their products).
But no, being a typical mindless Linux troll that doesn’t bother to recognize anything beyond Microsoft=ATTACK WITHOUT REASON as a response, I guess that’s too much to ask.
…that a company whose software is ill-prepared for the internet is warning third-party developers that their software is ill-prepared for the new processors coming.
Not one person has mentioned CSP or the occam family of parallel languages. Ironically, the father of CSP, Sir Tony Hoare, actually works for Microsoft Research in Cambridge, and I guess he isn’t making much impact on his employer.
Those who work with these languages have few or no problems with concurrency at the language level, since CSP is a mathematical model for concurrency. It was also well demonstrated on the Transputer some 20 years ago on larger clusters, and it continues to work well enough on x86 and others, but AFAIK mostly on single CPUs today.
Ironically, as a HW engineer I see occam as a hardware-like language, since processes describe entities that could be hardware or software. Either way, processes are wired up together in a hierarchy just the same as hardware modules are. But these par processes can just as well include sequential processes, i.e. par & seq are treated as equals, although par blocks obviously add some overhead and could use some hardware help, which is unlikely in current processor designs. That help includes built-in message-passing support, and an event/process scheduler in hardware.
Also, hardware EEs use concurrent languages all the time. Verilog is based on a mishmash of previous language syntaxes, but has C/APL expressions and Pascal block structure. VHDL draws more from Ada and Pascal.
Since I need a new compiler for a new Transputer and occam syntax is very unlike C, I propose to use C++ classes with processes.
As struct => class by adding methods to data, the obvious step is to add concurrency to classes by adding ports that are live. Actually it looks a lot like Verilog, but with fully C syntax. Such a language could then easily do what occam could do, as well as run massively concurrent programs. In order to write such programs, one would have to wear a hardware hat in decomposing larger processes into smaller ones and then wiring them up. But if the processes really are hardware-like, they can also be translated back into pure Verilog and synthesized into real hardware. So classes with live port lists can be used for both parallel software and hardware design. A sketch:
    struct sname { int a, b, c… };                         // data only
    class cname { int a, b, c… cname(); … void fn(); … };  // data + methods
    process pname1(in i, out x, event e, int a, …)         // data + methods + live ports
    {
        int a, b, c… cname(); … void fn(); …
        // usual C++ stuff: declarations, methods
        wire w, x;              // hardware wires, from Verilog
        chan ch1, ch2…;         // software channels, from occam
        l2: pname2(p, q, , foo);    // instance of 1st process
        l3: pname2(foo, y, , );     // instance of 2nd process
        // instances can’t be recursive though
    }
As for Java and C#, their thread models are totally broken, and Java 1.5 only recently attempted to reduce the damage by looking at CSP for more inspiration. Ironically, the Java designers knew damn well all about CSP when designing the Java thread model, but went for something much older and far less capable. They thought CSP was too complicated for software engineers, yet when it’s implemented properly, it’s far easier to fathom the decomposition of processes than these thread objects.
sorry for the long post
transputer guy
aka John Jakson
Well, yes, most apps on the MS platform are not written to be multi-threaded, but you still gain obvious benefits running multiple CPUs/cores on Windows, especially in CPU-intensive applications and multitasking. Try working in a digital print environment without SMP or multi-core CPUs; it’s a bitch. Running spool jobs down to print servers while trying to do graphic design at the same time is futile.
That being said, I have also witnessed what a properly SMP-aware OS and platform are about in BeOS, and you cannot compare that experience to anything else on the desktop/workstation. Nothing comes close. At this point in time it is still the pinnacle of desktop OS functioning. Nothing has come close to reproducing it (Haiku and Zeta excepted, as they are of BeOS parentage). It will take the likes of Apple and MS years to fully ramp up to SMP-aware operating environments where developers write their code multithreaded, not single-threaded as is the majority case now.
Next.
“how the software world”: read Microsoft developers.
Most of the Linux world has been working successfully with SMP since the early kernels.
REF: http://www.uruk.org/~erich/mps.html
“Linux has an ongoing SMP project called Linux-SMP, headed by Alan Cox.” I would bet money on Alan any day.
BeOS has had SMP since day ONE!
C++ does not necessarily support message passing or co-routines; it needs to be handled in libraries and OS support (specifically the C lib). And just for laughs… I have a testbed (Compaq 5500) with quad processors. Both NT 3.51 and W2K run on it. Stick a floppy in and the machine crawls.
Most artistic use of quad processors: distcc. Wow!
Gentoo flies with this setup.
Odd, I seem to remember OS X being designed for multi-processor/multi-core systems. Maybe Bill just means HIS software.
Tiger, yes, but 10.3 and earlier had big kernel locks that were problematic on 2-processor systems with some workloads, let alone 4. OS X is the most immature commercial OS, judging by how much it changes with each major release. But Apple are taking the right steps in getting rid of the BKLs and providing a KPI. It’s just going to take time…
I’d rather juggle pineapples inside my rectum than use any software from microsoft
“I’d rather juggle pineapples inside my rectum than use any software from microsoft”
Grow up.
“Microsoft Warns Software Industry That It Is Ill-Prepared for Multicore“.
Microsoft may be ill-prepared for multicore, but I know for sure that the industry isn’t ill-prepared for multicore.
Microsoft yet again has to catch up with the industry.
I think what he is trying to say is: software developers should start to focus on inherently parallel algorithms instead of the largely linear ones used now (for example, there are sorting algorithms whose running time drops dramatically as parallel processing elements are added, but they are different from what we are used to…).
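To illustrate the kind of restructuring that means, here is a toy fork-join merge sort (illustrative only; a real implementation would use a thread pool and fall back to a sequential sort for small ranges):

    // Merge sort's two halves touch disjoint data, so each recursion
    // level can fan out to another core until the depth budget runs out.
    #include <algorithm>
    #include <thread>
    #include <vector>

    void psort(std::vector<int>& v, size_t lo, size_t hi, int depth) {
        if (hi - lo < 2) return;
        size_t mid = lo + (hi - lo) / 2;
        if (depth > 0) {
            // the halves share no elements: safe to run truly in parallel
            std::thread left(psort, std::ref(v), lo, mid, depth - 1);
            psort(v, mid, hi, depth - 1);
            left.join();
        } else {
            psort(v, lo, mid, 0);
            psort(v, mid, hi, 0);
        }
        std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
    }

    int main() {
        std::vector<int> v = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
        psort(v, 0, v.size(), 2);   // depth 2 -> up to 4 threads
    }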