Kurt Hudson explains tricks to optimize applications on multiprocessor computers, even if those applications were developed without such systems in mind.
…but according to what I know about RAID 0, unless both cores can split one big thread evenly all the time, with each core running its half, there really isn't going to be a major performance benefit with dual cores.
One core will be the master, and the other may run some things occasionally if the finished result can be sent over the bus to graphics cards, hard drives or whatever.
Heck, even the human body and brain are split just about evenly, with just a few shared processes (heart, stomach, etc.)
(ok I’ll go to bed now)
What does RAID 0 have to do with multi-core processors/multi-threading?
As was said by another poster, RAID 0 has absolutely nothing to do with multi-proc/cores.
In Windows you can actually say "OK, this application should run on this processor, and this app should run on the other processor". So you really can effectively do something such as encode a DVD on one processor and play a game on another without losing performance on either end (of course, this type of use is for apps that don't actually support multi-proc/core).
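For anyone curious what that looks like outside of Task Manager's "Set Affinity" dialog, here is a rough sketch of pinning a process to specific CPUs. Python is used purely for illustration; the Windows branch assumes the documented Win32 SetProcessAffinityMask call, and `set_affinity` is just a name made up for this example:

```python
import os
import sys

def set_affinity(pid, cpus):
    """Pin a process to the given set of 0-based CPU indices.

    Linux exposes this directly via os.sched_setaffinity; on Windows
    the equivalent is the Win32 SetProcessAffinityMask call, reached
    here through ctypes (roughly what Task Manager's "Set Affinity"
    dialog does for you).
    """
    if hasattr(os, "sched_setaffinity"):      # Linux (and some BSDs)
        os.sched_setaffinity(pid, cpus)       # pid 0 = current process
    elif sys.platform == "win32":             # Windows fallback
        import ctypes
        PROCESS_SET_INFORMATION = 0x0200
        mask = sum(1 << c for c in cpus)      # one bit per CPU index
        k32 = ctypes.windll.kernel32
        handle = k32.OpenProcess(PROCESS_SET_INFORMATION, False, pid)
        k32.SetProcessAffinityMask(handle, mask)
        k32.CloseHandle(handle)

# Pin the current process to a single CPU (the first one it may use),
# e.g. so a game keeps one core while an encoder gets the other.
if hasattr(os, "sched_getaffinity"):
    first_cpu = min(os.sched_getaffinity(0))
else:
    first_cpu = 0
set_affinity(0, {first_cpu})
```

Note that affinity set this way applies only to the running process; it is not remembered by Windows after the app exits.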
It's worth mentioning that the article is Windows-specific, and written from the end-user point of view (i.e. not a "developer" article, as the title might suggest).
Perhaps the Windows-only nature of the article should be added to the summary?
I agree. What makes this article unusual is that Linux along with *nix have embraced 64-bit and dual-core technology at a higher rate than Windows.
How have Linux along with *nix “embraced” dual-core technology at a higher rate than Windows? Explain that.
Windows NT has supported SMP from very close to the beginning, if not THE beginning.
Because Linux and UNIX have been running on dual-core (and 64-bit) platforms for far longer than NT, ergo they "embraced" it much earlier.
The first 64-bit general-purpose CPU, I believe, was the MIPS, which ran IRIX before NT was even released.
IBM's POWER4 and POWER5, HP's PA-RISC, and Sun's UltraSPARC were all dual-core and running Linux and UNIX before any dual-core CPUs came out that NT could run on.
Linux and UNIX now run on 8-core CPUs (Niagara), and there are 16-core MIPS devices out there.
Does that clear things up for you?
Umm… No.
You can not say that something has embraced a tech better just because it supported the tech before the other product was even released.
The fact is that NT has always had multi-proc support. NT can scale up to as much as 64 processors.
You can not say that something has embraced a tech better just because it supported the tech before the other product was even released.
I didn’t say “embraced a tech better” I said earlier.
The fact is that NT has always had multi-proc support.
Not relevant to the question, which was about dual-core and 64-bitness.
NT can scale up to as much as 64 processors.
Is that all? Linux runs on 1024 CPU machines.
http://www.sgi.com/company_info/newsroom/press_releases/2004/july/s…
dual-core and 64bit are two completely separate technologies.
BTW, remember, NT was on the Alpha (a 64bit proc) in ’96.
Also, dual-core and multi-proc support are one and the same, all in the package called SMP.
Because Linux and UNIX have been running on dual-core (and 64-bit) platforms for far longer than NT, ergo they "embraced" it much earlier.
Oh please, NT ran on the 64-bit Alpha AXP ever since the first release in '93. Moreover, it was developed on an i860 (N10) simulator in '89, which was marketed as 64-bit (although it wasn't truly; that's Intel's marketing crap).
And more important thing – it was SMP-ready from the start.
When did Linux go fully preemptive and get rid of lock_kernel() after almost every syscall? In 2002? LOL
And when did FreeBSD get rid of the "Giant" mutex?
Oh please, NT ran on the 64-bit Alpha AXP ever since the first release in '93.
IRIX was running on 64-bit MIPS in '91.
I didn't say NT was behind ALL unixes in both dual-core and 64-bitness. That would be stupid, considering there are unixes that will never be 64-bit or SMP capable.
Moreover, it was developed on an i860 (N10) simulator in '89, which was marketed as 64-bit (although it wasn't truly; that's Intel's marketing crap).
And more important thing – it was SMP-ready from the start.
More important? Who said anything about importance?
More important than what? Fixing security holes, or…?
Linux did not support SMP from the start. It would have been a stupid design choice, given its target market and resources to begin with.
When did Linux go fully preemptive and get rid of lock_kernel() after almost every syscall? In 2002? LOL
Yeah, I imagine it would have been at least 2 years before it was running on 1024-CPU systems in '04.
And when did FreeBSD get rid of the "Giant" mutex?
And when did WNT get rid of its instability and security holes?
IRIX was running on 64-bit MIPS in '91.
And the System/38 (and later the AS/400) had a 64b/128b non-VNA architecture more than 10 years before IRIX and NT (or any other OS)… your point is what?
You claimed that Linux and *nix in general embraced SMP much earlier than NT, which supported it as an original design goal? That sounds like trolling to me.
Linux did not support SMP from the start. It would have been a stupid design choice, given its target market and resources to begin with.
Yes, having a big f…. kernel lock for 11 years has nothing to do with the incompetence of its developers… yeah right.
Yeah, I imagine it would have been at least 2 years before it was running on 1024-CPU systems in '04.
On penny-stock almost-bankrupt SGI systems? LOL
And WNT of its instability and security holes?
I presume the last time you used Windows was back in the mid-1990s…right?
As for security: not so long ago, almost everyone could trade Linux 0days on Undernet…
And the System/38 (and later the AS/400) had a 64b/128b non-VNA architecture more than 10 years before IRIX and NT (or any other OS)… your point is what?
That unix was 64-bit before NT. Are you incapable of reading?
You claimed that Linux and *nix in general embraced SMP much earlier than NT, which supported it as an original design goal? That sounds like trolling to me.
No, you troll. I said it was running on *dual core* hardware earlier. Dual core, not SMP, was the original question.
Yes, having a big f…. kernel lock for 11 years has nothing to do with the incompetence of its developers… yeah right.
It did not have a big kernel lock for 11 years. I think you are the incompetent liar and troll.
On penny-stock almost-bankrupt SGI systems? LOL
Yeah, and multi-billion-dollar companies like HP and IBM are only able to run NT on 64-CPU systems. LOL
I presume the last time you used Windows was back in the mid-1990s…right?
Is that the last time you read about an NT remote code execution security bug? Oh yeah, I forgot: you can't read properly.
As for security: not so long ago, almost everyone could trade Linux 0days on Undernet…
The Linux kernel? Remote code execution? Very rare. Far less often than NT.
Back then, Linux was a hobby OS that was hacked on by a bunch of people in their spare time and that you could download for free. What's NT's excuse?
“What’s NT’s excuse?”
People actually use it as a desktop??!
People actually use it as a desktop
That’s their excuse to have terrible security??!
Can’t say that surprises me too much.
You claimed that Linux and *nix in general embraced SMP much earlier than NT, which supported it as an original design goal? That sounds like trolling to me.
I think you're mistaken; the people I read here talked about Linux embracing 64-bit dual-core CPUs more.
They don't talk about SMP but about dual-core CPUs. Your kernel working on a 64-bit CPU, or supporting SMP, does not mean it works on a dual-core 64-bit CPU.
Yes, having a big f…. kernel lock for 11 years has nothing to do with the incompetence of its developers… yeah right.
You should not talk about the incompetence of Linux developers when, even with the big kernel lock, the Linux kernel was running circles around other kernels that supposedly do not have one big lock, some of which you know very well.
The big kernel lock is not the bad thing you imply it is, though it can hurt badly on SMP.
On penny-stock almost-bankrupt SGI systems? LOL
Which does not remove the fact that Linux actually runs on 1024+ CPUs…
I presume the last time you used Windows was back in the mid-1990s…right?
I think it was less than 2 years ago that IBM published the results of several OSes/kernels under their stress tests, and NT could not finish some of them, while Linux worked flawlessly on all of them. The stress tests involved things like running at 100% CPU under various high loads for one month.
As for the security: not so long ago, almost everyone could trade linux 0days on undernet..
Trade, perhaps; working 0day exploits I doubt far more, unless "not so long ago" means at least 6 years ago to you.
Don't come bragging about security when talking about the NT kernel, please. Given that Windows is the only OS running on it, and that the kernel's security features are not even exercised by this OS, we can't know the actual security quality of the NT kernel, and can only assume it is very bad. Why? Because the only OS on top of it is actually allowed to do things considered highly dangerous, security-wise, in a Linux kernel (for example).
People often cite numerous NT security features, but given the quality of MS products and how these features are not used by Windows, they are just a list of features; we don't know if they work as advertised, or if they even work or are usable.
“What makes this article unusual is that Linux along with *nix have embraced 64-bit and dual-core technology at a higher rate than Windows.”
True…
http://www.win64apps.com
I thought the Windows icon gave it away…
When I first glanced at the beginning of the sentence I thought I saw "KATE HUDSON explains tricks to optimize applications on multiprocessor computers…"
Remember, kids: if you're an IT journalist, always put Intel before AMD.
“Both Intel and AMD dual-core processors are showing up in consumer-level servers”
I hate that. If AMD hadn't gone dual-core, would Intel have gone dual-core this early?
Agreed… and it is not even alphabetical order.
I think it has to do with branding once again.
People always write Coke, then Pepsi, or Adidas then Nike.
Are affinity settings saved after an app has been closed or after a reboot?
http://www.informit.com/content/images/art_hudson5_dualcore/element…
Why can’t you *nix people just go ONE article without mentioning Linux/UNIX/IRIX/BSD? The article is about apps on dual-core/SMP for WINDOWS. Discuss that, and stop turning every thread into a flamewar because you feel the need to mention “well nix did this first/better/more”
XP 64-bit is way more robust in the multithreading department, AFAIK. I think the Microsoft servers that run their website are all 64-bit. I think they even have a full paper on TechNet with all the nitty-gritty details and all the performance increases they have achieved. I honestly think 64-bit XP is the best OS Microsoft has written to date. Now if only the driver writers would step up and release x64 versions of their drivers, I could totally switch to 64-bit.
It never ends, does it? Have an article about Windows on multi-CPU systems and get a flame war on who was on 64-bit/multiple CPUs first AND OS security? WTF?