“Intel’s experimental 48-core ‘single-chip cloud computer’ is the latest step in multicore processing. Intel officials say such a chip will greatly improve performance and efficiency in future data centers, and will lead to new applications and interfaces between users and computers. Intel plans to have 100 or more chips in the hands of IT companies and research institutes next year to foster research into new software. Intel, AMD and others are rapidly growing the number of cores on a single silicon chip, with both companies looking to put eight or more cores on a chip in 2010.”
…and flash video will still be slow as hell…
Only on Linux.
Intel can’t fix a problem created and prolonged by Adobe.
He wasn't blaming Intel. Read the damn post; it was pretty obvious he was talking about the fact that no matter how much power is thrown at Flash, it still sucks.
He said Flash video, which works fine for nVidia users, and Adobe could make it work fine for everyone using Flash, if they so desired.
If it were about throwing power at it, that still wouldn't make sense, as these 48-core CPUs would be pitiful at that kind of work even if Flash didn't suck.
Note that each core is roughly comparable to a Pentium III, and that the whole thing has about twice as many transistors as Nehalem.
It's flexible, but not all that powerful. It's about parallel processing rather than brute force.
I’ve read that it’s based on P55C architecture, so that would make it more like a massively parallel Pentium MMX.
Isn't that P5 architecture essentially what the Atom is based on?
I’ve read that it’s based on P55C architecture,
You couldn't have read about this chip, because there is no public info apart from this announcement.
It is not Larrabee, but a different CPU. Although they could reuse the core.
Isn't that P5 architecture essentially what the Atom is based on?
Atom has nothing to do with P5. It is totally different.
Well, the German online magazine heise.de says these are similar to P55C cores. They do not name sources, though.
http://www.heise.de/newsticker/meldung/Intel-stellt-Single-Chip-Clo…
The major problem now is the software. Most software out there today does not take multi-core CPUs into account, so it runs as if on a single core anyway (mostly single-threaded applications).
Yes, the OS can do some scheduling, but it's down to the applications themselves to take charge of multiple cores and multi-threading.
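Something like the rough C/pthreads sketch below has to be written into the application itself; the OS scheduler can't conjure parallelism out of a single-threaded program. (The thread count and the work split here are arbitrary numbers picked just for illustration.)

/* Rough illustration: splitting a loop across POSIX threads so the work
   can actually land on more than one core. Thread count and chunking are
   arbitrary choices for this sketch. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4          /* pretend we only bother with 4 of the 48 cores */
#define N        1000000

static double data[N];
static double partial[NTHREADS];

static void *sum_chunk(void *arg)
{
    long id = (long)arg;
    long begin = id * (N / NTHREADS);
    long end   = (id == NTHREADS - 1) ? N : begin + (N / NTHREADS);
    double s = 0.0;
    for (long i = begin; i < end; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* The application, not the OS, decides to fan the work out. */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }

    printf("sum = %f\n", total);
    return 0;
}

Compile with -lpthread. The point isn't the summing; it's that the split across cores had to be designed in by the programmer.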
Erlang @ http://www.erlang.org/ does just that kind of stuff: massive multi-core programming, and with ease and elegance.
Kochise
I’ve just not been able to get used to the syntax yet.
This won't be targeted at general consumers (for whom, for now, most new computers are sufficiently powerful).
It will be targeted at mainframe users who already do complicated distributed processing, or at server admins who want to consolidate several hardware resources into one host running several virtual machines (each assigned two or more cores).
For these users, this type of CPU is the future.
Depends on what software you are talking about. I guess a 48-core CPU isn't intended for the Flash-playing/web-browsing user, but for servers and hard-core desktops (designers, CAD, gaming). Most desktop and single-threaded applications won't ever need so many cores.
Except when they're loaded with malware – in which case the more cores there are to run the malware in the background, the better the user experience will be.
I have a feeling more malware does not make for a better user experience.
I take it Intel branding this as a “research chip for cloud computing” was not enough of a hint that this was not intended to be a desktop chip?
Clojure
http://clojure.org
… and that is why this is a research vehicle. Jesus, do some of you bother to even read the article before becoming Capt. Obvious?
…if they took a year off from their current chip designs and tried to innovate a little, maybe they'd come up with a chip that is superconductive, or perhaps optical or chemical in nature, instead of using electricity as the driving force.
As someone who’s writing C code that compiles to CUDA, you’ll excuse me if the idea of a mere 48 cores on one die isn’t exactly blowing my skirt up… The ‘crappy’ little Ge8800GTS driving my secondary displays has more cores than that – My primary GTX260 has 216 stream cores, and that new ATI 5970 everyone’s raging about has what, 1600 stream processor cores per die? Sure, they are “stream processor cores”, but that’s still basically a processor unto itself (albeit a very data-oriented one) as evidenced by how much can be done in them from CUDA.
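To give a sense of what that data-oriented work looks like from CUDA C, here's a rough sketch (the kernel name, sizes and launch geometry are all invented for the example, not lifted from any real project): one lightweight thread per element, which is why hundreds of stream cores are easy to keep fed.

/* Rough CUDA C sketch (names and sizes invented for the example). */
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(float *out, const float *in, float factor, int n)
{
    /* One thread per array element. */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    /* 256 threads per block; enough blocks to cover all n elements. */
    scale<<<(n + 255) / 256, 256>>>(d_out, d_in, 2.0f, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("h_out[123] = %f\n", h_out[123]);
    cudaFree(d_in);
    cudaFree(d_out);
    free(h_in);
    free(h_out);
    return 0;
}

Each of those threads is trivial on its own; the throughput comes purely from how many of them the hardware can run at once, which is a very different trade-off from a few dozen general-purpose x86 cores.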
I really see this as Intel's response to losing share in the high-performance arena to things like CUDA – or, more specifically, nVidia's Tesla – much like ATI's “Stream”, if that ever becomes anything more than stillborn (since I don't know ANYONE actually writing code to support ATI Stream).
Hell, look at the C2050 Tesla, 512 thread processors on one die – GPGPUs are threatening Intel's relevance. Increasingly I suspect that as more and more computing is offloaded via technologies like CUDA, we are likely to see x86 or even normal RISC CPUs relegated to being little more than legacy support and glorified traffic cops and I/O handlers, with parallel-processing GPGPUs handling the real grunt work.
We have historical precedent for this approach too – look at the move from 8-bit to 16-bit with the Z80 and 68K. The Trash-80 Model 16 used a Z80 to handle the keyboard/floppy/ports as well as running older Model II software, while the included 68K handled the grunt work of running actual userspace programs. The Lisa and original Mac were similarly divided, though they didn't use a Z80 for legacy… Or look a few years later in the game console world, where the Sega Genesis was a 68K and a Z80: the Z80 drove the two sound chips and provided legacy support back to the SMS/SG-1000, while new games used the 68K for their userland.
As such, Intel is late to the party on using lots of simpler processors on one die – time will tell if that's going to be too little, too late. If they work it out so you can mount it alongside x86 and give it a compatibility layer for CUDA code (which nVidia more than approves of other companies doing), they might make a showing.
If not, it's going to end up just as stillborn as ATI's “Stream”.