Faster clock speeds, smaller die sizes, and more cache are what we’ve come to expect each year from the major desktop CPU vendors—and 2003 didn’t disappoint. As always, AMD and Intel led the charge in the high-end CPU war, with the Intel Pentium 4, based on the Northwood core, and the AMD Athlon 64. Apple partnered with IBM to come up with a new architecture for its PowerPC processors, and smaller manufacturer VIA, better known in Europe and Asia, spent last year revving its C3 CPU’s clock speed and trying to gain more ground in the U.S.
This year, AMD and Intel have ratcheted up the battle by doggedly promoting gaming-centric chips, which push the performance curve to the extreme.
Apple and IBM are playing their dual G5 plans close to the vest. Speculation is, however, that they will move to a 90-nanometer (90-nm) process and possibly a dual-core design.
Read the article at PCMagazine.
“As always, AMD and Intel led the charge in the high-end CPU war”
I stopped reading there for good reason.
“I stopped reading there for good reason.”
I should have too, since it included some misinformation, and their “benchmarks” weren’t very good (besides excluding Apple from them entirely).
“The Athlon 64 launched with the promise of 64-bit OSs from Microsoft, Red Hat, SUSE, and Turbolinux. But only SUSE has a 64-bit version of Linux 9.0 for AMD64 processors.”
As I remember it, TurboLinux actually was FIRST with an AMD64 version. SUSE came in second with SUSE 9.0, Red Hat’s AS and WS both include support for AMD64, Fedora has AMD64 support in Core 1, and many others have ports in the works.
They also barely mentioned that AMD uses HyperTransport links instead of a conventional parallel FSB, which increases throughput over the Pentium. They also said that it is single-channel only, which is not true, since the FX is dual-channel.
They did do somewhat of a good job detailing why Celerons suck so much and all the different options for mobile Pentiums (7 different chips, wow!).
Apple and IBM are playing their dual G5 plans close to the vest. Speculation is, however, that they will move to a 90-nanometer (90-nm) process and possibly a dual-core design.
They’ve already arrived:
http://www.apple.com/server/pdfs/L301323A_XserveG5_TO.pdf
What’s New?
• Single or dual 2GHz PowerPC G5 processors using 90-nanometer process technology
90nm yes, dual core, not yet.
(And ;p to you Bascule, for posting that link to Dual G5 XServe technoporn. I’m salivating for an XServe even though I have no place in my house to put it.)
I’m salivating for an XServe even though I have no place in my house to put it.
It’s a 1U rackmount, you can just tuck it under your bed or something.
Is it just me or did the whole “blade server” hype blow over already? I remember hearing about someone trying to put mobile Athlon chips, and then mobile AMD64 chips, into blade servers. The Pentium M is finally working its way into desktop boxes, but it’d be even cooler in a blade. Plus I’m wondering if IBM will put their 970s into blades.
It’s mostly games and some databases that really make use of this extra cache and these CPU speed improvements. Going from 3GHz to the next level, 3.2GHz, is only about a 7% increase in speed. That will be close to unnoticeable, unless of course your apps are optimized to stay in the CPU’s L2 or L3 cache. If the CPU has to go over the motherboard’s bus to get at RAM, even DDR2 or DDR3, it’s still very slow. This is the reason cache is increasingly being used to gain performance; the Athlon 64 is the prime example. The 3000+ and 3200+ both run at the same core clock speed; only the cache is different. So if you had some very good small piece of code that fit in 512KB of cache, it would run at the same speed on both CPUs! The 3200+ is almost a scam to me. But no real apps fit well in 512KB, so it is useful to have more of it. Still, you can only keep adding cache for so long before it just becomes a waste of die space and faster motherboard subsystems are required.
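The arithmetic behind this can be sketched quickly. The figures below are hypothetical, just restating the argument: a clock bump buys almost nothing once a workload spills out of cache and spends most of its time waiting on RAM (this is Amdahl’s law applied to the memory-bound fraction).

```python
# Hypothetical figures illustrating the clock-bump-vs-cache argument.
old_clock = 3.0e9  # 3.0 GHz
new_clock = 3.2e9  # 3.2 GHz
speedup = new_clock / old_clock - 1
print(f"Raw clock bump: {speedup:.1%}")  # ~6.7% -- barely noticeable

# If, say, only 10% of runtime is bound by the core clock and the rest
# is stalled on RAM, Amdahl's law caps the overall gain:
cpu_fraction = 0.10
effective = 1 / ((1 - cpu_fraction) + cpu_fraction / (1 + speedup))
print(f"Effective speedup when 90% memory-bound: {effective - 1:.1%}")
```

The second number comes out under 1%, which is the point: for code that doesn’t fit in cache, the clock bump is noise.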
So what really needs to be developed are faster subsystems on the motherboard and in storage devices (HD/DVD/etc.). You can easily see that the motherboard is a massive bottleneck; just look at the amount of RAM on most graphics cards, 128MB+ for any of the good ones and 64MB on the budget models. I’d happily see stagnation in the CPU market if motherboards would advance at a faster pace. Motherboards are becoming more parallel: AGP, HyperTransport, dual-channel access to RAM. These are what’s really needed.
That article is from the future!
When will someone realize that running the RAM at a fraction of the CPU speed is a stupid thing?
We need progress on RAM and bus speeds!! Not just the CPU.
Not sure what you mean. Athlons up to Barton core had at most a 266 MHz DDR FSB while out on the street we have DDR 333 and 400 RAM. Only now with AMD64 does the FSB catch up or even pass the RAM speeds.
And as for Intel chips, the P4 was designed to be coupled with RDRAM, then Intel decided to go DDR SDRAM anyways and yes, for a while the RAM was running slower than FSB and system bus.
PPC chips like the G3/G4 in Macs were never meant to use DDR RAM yet they put it in there anyways. Then all of a sudden the G5s come out and now it makes sense to put DDR RAM in there.
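The FSB-vs-RAM mismatch above is easy to put in rough numbers. A minimal sketch, assuming 64-bit data paths and quoting effective (DDR) transfer rates:

```python
# Rough peak-bandwidth arithmetic; 64-bit bus assumed, rates are
# effective DDR transfer rates in MT/s.
def peak_bandwidth_gbs(transfers_mts, bus_bits=64):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return transfers_mts * 1e6 * (bus_bits // 8) / 1e9

print(peak_bandwidth_gbs(266))  # Barton-era 266 MT/s FSB: ~2.1 GB/s
print(peak_bandwidth_gbs(400))  # a DDR400 DIMM: 3.2 GB/s -- the RAM outruns the FSB
```

So on a pre-AMD64 Athlon the bottleneck really was the front-side bus, not the DIMMs, which is why moving the memory controller on-die mattered.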
A few incomplete points:
– with the same surface area, we’ll see smaller processes (sub-90nm), meaning that in the first generation we’ll get simple multi-core/dual-processor approaches like Intel HT, since this is the easiest thing to do with an existing design -> just duplicate it on the die and cheaply glue the copies together
– the next step beyond this is to achieve multi-core more intelligently, with better co-operation between cores, and to push other low-level O/S features into the core (e.g. continued improvements to context/thread switching, virtual machines and memory protection), so there’s more intelligent co-operative behaviour
– the other factor is novel market-specialisation techniques, e.g. VIA’s AES encryption on core; soon someone will offer firewall and sub-network offloading on core, etc. This buys some niche market areas
– also asynch clocking to come into effect, etc
it’s not about speed any more, it’s about making better use of the fabbing technologies.
I suspect that in the next generation of operating systems, we’ll see very thin microkernels like L4 (e.g. Pistachio) running across a couple of tightly coupled processors, with the operating system sitting on top of that. L4 provides very low-level OS mechanisms that some processors will handle in the core and others won’t (in which case L4 emulates them). We’ll need a standard for this lowest level of the co-operative multiprocessor environment. It also provides virtual machines, btw, which means it can host single or multiple operating systems, all of them given strong, safe virtual machines by L4 or the core. That helps with the security problem as well: we really need very low-level, or in-core, support for VMs.
Looking forward to the day when I walk into work, boot my machine up, and it clusters (over gigabit) with a gigaflop pool of processors in the server room. To my machine, it just looks like a large pool of available cycles: in the same way that there are distributed file systems and distributed file quotas, there are also distributed proc systems and distributed proc quotas.
Then basically any software you write is automatically distributed, because the OS provides it inherently for you: you just need to engineer your software to make use of it through concurrency, job scheduling and so on.
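A hypothetical sketch of that idea: code written against a generic executor interface doesn’t care whether the pool of cycles is local or cluster-wide, so the programmer’s only job is to express the work as independent tasks. (Here the pool is an ordinary local process pool; in the imagined OS, the same interface would be backed by the distributed “proc quota”.)

```python
# Sketch: express work as independent jobs submitted to an executor.
# Locally this is a process pool; a distributed OS could back the same
# interface with remote cycles transparently.
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Stand-in for any CPU-heavy job."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [100_000, 200_000, 300_000]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, jobs))
    print(results)
```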
> Plus I’m wondering if IBM will but their 970s into blades.
Done. JS20. http://www-1.ibm.com/servers/eserver/bladecenter/js20/more_info.htm…
“When will someone realize running the RAM at a fraction of the CPU speed is a stupid thing ?” – Anonymous
Everyone knows that. You can have RAM that runs at CPU speed. It’s called cache RAM. It’s one reason why the high-end Xeon and Itanium 2 are so enormously expensive. With even more money you could make all the RAM out of cache memory. No one could afford it.
> Everyone knows that. You can have RAM that runs at CPU speed. It’s called cache RAM.
Exactly. But why is it this way? What I’d like to see is more research in this area, rather than pushing for higher CPU GHz; then maybe it wouldn’t be so expensive after a while, and our computers would be a lot faster.