The first laptops to make use of the SpursEngine, a multimedia co-processor derived from the Cell chip that powers the PlayStation 3, will go on sale in Japan in July. Toshiba will launch its Qosmio G50 and F40 machines with the chip, which contains four of the “Synergistic Processing Elements” from the Cell Broadband Engine processor. The Cell chip used in the PlayStation 3 has eight of the SPE cores plus a PowerPC main processor. The SPE cores perform the heavy number-crunching that makes the console’s graphics so stunning. The SpursEngine SE1000 will work in much the same way in the laptops. The operating system will run on an Intel Core 2 Duo chip and the SpursEngine will be called on to handle processor-intensive tasks, such as processing high-definition video. This arrangement means the laptop should be capable of some tricks that haven’t been seen on machines until now.
The “SpursEngine”? Does it just sit there not doing anything, pining for the “good old days” and complaining about the “cheats” at Arsenal?
[with apologies to the 99% of you who have no idea what I’m on about…]
If it does work, maybe the Tottenham management should embed these chips in its players.
Right, so pr0n will render quicker?
My current laptop supports “roll over” and “play dead”.
That was a poor article that didn’t ask/answer the important questions.
1) How does it compare to current GPUs? These are highly programmable, which means they can be used to do all the things the Cell is able to do.
2) Do you need a special API to work with SpursEngine? Or will DirectX and OpenGL work with it? This is important as it allows potential users to know whether there will be widespread software support.
You are right; there should be a more detailed look at this.
To answer your question #2: the Cell processing units are not comparable to the rendering pipeline found on a GPU, and they can’t simply be driven through an API like OpenGL. Also, the PlayStation 3 has a dedicated GPU from Nvidia.
You need to program these units directly for their specific tasks. For example, there could be a driver for DirectVideo which uses these units for processing the video signal (filters, stitching). But they are better used for decoding the video stream than for bringing it onto the screen (which is already accelerated by modern GPUs, or could also be done with OpenGL).
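Roughly, the offload pattern looks like this. A toy Python sketch, not real SPE code — the only real detail is that an SPE’s local store is 256 KB, so data has to be cut into tiles that fit; the filter, tile handling, and function names here are all made up for illustration:

```python
# Illustrative offload pattern: split a frame into tiles small enough for
# an SPE's 256 KB local store, run a kernel per tile, gather the results.
# The kernel is a toy brightness filter; names and flow are hypothetical.

TILE_BYTES = 256 * 1024  # an SPE's local store is 256 KB


def brighten_tile(tile, gain):
    """The per-unit 'kernel': a pure function of its local data."""
    return [min(255, int(v * gain)) for v in tile]


def process_frame(frame, gain=1.25, tile_len=TILE_BYTES):
    """Host side: chop the frame, 'DMA' each tile to a unit, collect output."""
    out = []
    for start in range(0, len(frame), tile_len):
        tile = frame[start:start + tile_len]    # DMA the tile in
        out.extend(brighten_tile(tile, gain))   # compute on the unit
    return out                                  # DMA the result out


frame = [100, 200, 50, 240] * 4
print(process_frame(frame)[:4])  # → [125, 250, 62, 255]
```

The point is that nothing happens automatically: the host code has to know the task, cut the data to fit the local store, and feed the units explicitly.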
I’ve read somewhere that it’s supposed to greatly improve H.264 playback acceleration, but I don’t know if the various media players/codecs are capable of taking advantage of it.
Have to wait and see, I guess. Wonder if it’s Linux compatible?
There are already Linux drivers for the Cell; they should also work with the SPUs without the Power core, I presume.
But there are already efforts being made to use the GPU as a vector unit (see Apple’s OpenCL, for example). Add to that the fact that the GPU’s shader language is so complex (and Turing complete?), and this suggests to me that it should be possible to use the current crop of GPUs in a Cell-like fashion.
I don’t know exactly how the current crop of GPUs differs from the Cell, but from glancing at the tech specs of the latest Nvidia and ATI chips (wth, 128 cores? I remember having 4 pixel pipelines and being impressed) I can’t see how they would be different from a programming point of view.
Granted, I haven’t done any graphics programming since DirectX 7 in ’00, so I could be just talking out of my @rse. But I think I remember your nick from the old Gamedev and Flipcode forums, so you’d probably be able to enlighten me.
Hah, and it seems like yesterday that the big debate regarding 3d accelerators was “is there any real benefit in getting a card with 8MB of RAM, or should you stick with 4MB?”
OK, I was talking about it the other way round (the specific question from the OP).
Sure, you can do many processing tasks with today’s high-end, and perhaps even mid-range, GPUs. There are already several APIs for that.
But the question is how efficient that is. If you have a powerful GPU sitting in your PC doing nothing special anyway, it is a great idea to utilize it. If you don’t have one to begin with, which is especially the case for laptops, it’s a different story. Then it could be worth it, for several reasons, to use only an integrated GPU and instead try these nifty Cell SPUs.
So basically you’re saying that the only silicon seeing any action will be the Core 2. Everything else onboard is just heavily marketed, unused fluff.
I wonder where you are drawing that conclusion from.
I see three parts in action there:
– the Core 2, doing all the stuff the others don’t
– the Cell parts, which can be used for specific tasks like video decoding, physics calculations, …
– the GPU, which brings everything onto the screen, like blitting the video data and OpenGL rendering
The advantage of the Cell part is that you don’t need a powerful GPU to accelerate the video decoding. In most laptops such a GPU is not available, for example because of power consumption. Perhaps the Cell units are better suited in that regard.
Also, you can have a cheaper, less powerful, less power-hungry CPU and still watch HD video.
Whether this is marketing fluff or really worth it depends on how well the system is balanced hardware-wise and how well the Cell parts are actually utilized.
My thinking is that in order for the Cell to be properly utilized, the application has to make specific calls to it. From what you’re saying, the Cell is today’s version of the math coprocessor.
I’m equating it to multi-core processors: unless the app handles multithreading, the rest of the cores just sit there.
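That analogy fits in a few lines: the extra silicon only helps when the application explicitly carves the work into pieces. A hypothetical Python sketch — the thread pool just stands in for “the extra cores”; for CPU-bound Python it won’t actually run faster, the structure is the point:

```python
# A spare core (or SPE) only helps if the app explicitly splits the work.
# ThreadPoolExecutor stands in for the extra silicon; the function names
# and the toy checksum are made up for illustration.
from concurrent.futures import ThreadPoolExecutor


def checksum(chunk):
    # stand-in for some per-chunk number crunching
    return sum(chunk) % 251


def serial(data, n_chunks=4):
    """Single-threaded: all the other cores sit idle."""
    size = len(data) // n_chunks
    return [checksum(data[i * size:(i + 1) * size]) for i in range(n_chunks)]


def parallel(data, n_chunks=4):
    """The app itself must decompose the work before the cores see any of it."""
    size = len(data) // n_chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return list(pool.map(checksum, chunks))


data = list(range(1000))
assert serial(data) == parallel(data)
```

Same answer either way — the difference is only that the second version hands each chunk to another worker, which is exactly the step most apps never bother to write.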
I wouldn’t be surprised if this was just an attempt to get this technology in the hands of developers who just want to play with cool technology. The more devs they can have working with such technology, the more some of them might bring it up with management in the companies they work for.
The consumer benefits mentioned in the summary were pretty weak – but getting this into the hands of the technically savvy, so they can do “tricks”, seems like a pretty good way to promote the architecture – to get it into the hands of the development mavens, if you will, without requiring those users to give up all their x86 compatibility (these can be frugal people).
Then again, maybe I’m reading too much into this.
“The SPE cores perform the heavy number-crunching that makes the console’s graphics so stunning.”
Obviously whoever wrote this summary knows very little about processors… lol. The SPEs do very little for the system’s graphics; video decompression at best, probably. Nvidia’s RSX GPU is to thank for the PS3’s graphics. The SPEs only do vertex math, so they may not even be useful for heavy number crunching if we’re just talking ints or floats. The Cell and its SPEs have proven to do a very poor job of living up to the hype they were marketed with years back. The Cell is a joke.
This guy’s been using CUDA and Cell for real-time raytracing:
http://eric_rollins.home.mindspring.com/ray/ray.html
http://eric_rollins.home.mindspring.com/ray/cuda.html
You obviously know very little about Cell and the SPEs yourself. And SIMD. And fast local RAM. And RAM access through the PCI/PCI-e bus. I guess googling for “Cell benchmarks” might give you some more ideas about Cell performance.
Every developer review/interview I’ve read discussing the PS3 has made it abundantly clear people are disappointed with the Cell’s performance. Most devs say the 360’s CPU and GPU can both outpace the PS3’s.
As for the real time raytracing…that’s cool, but still years away…and yes, in that case the SPEs would be mighty handy.
Could you post links please? Sorry, but I happen to know both architectures, and it just so happens that the SPEs are much more powerful by themselves than both the Xbox 360 CPU and GPU. Of course, if you mean that the Cell PPU (the PowerPC core in the Cell) is a very low-performance CPU, I agree 100%. That’s because the PPU is intended to be used as a controlling CPU for the SPUs, not as the sole calculating engine. Of course it’s difficult to code for the platform and take advantage of the SPUs, but being difficult to program is quite different from being slow.
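To give a feel for where that power comes from: each SPE works on 128-bit registers, i.e. four 32-bit floats per instruction. A rough sketch, emulated in plain Python, so it only shows the shape of SIMD code, not the speed — real SPE code would use vector intrinsics on actual registers:

```python
# Emulate the SPE's 4-wide (128-bit) SIMD: one "instruction" touches four
# floats at once. Pure illustration; the function names are made up.

VEC = 4  # four 32-bit floats fit in one 128-bit register


def scalar_saxpy(a, x, y):
    """One element per operation: what a plain scalar CPU loop does."""
    return [a * xi + yi for xi, yi in zip(x, y)]


def simd_saxpy(a, x, y):
    """Four elements per 'instruction': the SPE-style vectorized loop."""
    out = []
    for i in range(0, len(x), VEC):          # one iteration per vector register
        xv, yv = x[i:i + VEC], y[i:i + VEC]
        out.extend(a * xj + yj for xj, yj in zip(xv, yv))  # one fused op
    return out


x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [0.5] * 8
assert simd_saxpy(2.0, x, y) == scalar_saxpy(2.0, x, y)
```

Same result, a quarter of the “instructions” — which is why well-vectorized SPE code can crunch numbers so much faster than a comparable scalar core, and why it does so badly when code can’t be vectorized.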
I used to fix laptops for Toshiba. The Qosmio series are among the hardest to fix. So hard in fact, that they do not trust any authorised repair agent to repair them (in Australia and New Zealand at least), they must be sent in to Toshiba directly.
Cables galore, massive motherboards etc. Nice machines, but not nice to pull apart!
The Qosmios have hands down the best speakers I have ever heard on a laptop, and are truly gorgeous machines.
The XPS 1730 won me over though, with its big beefy RAID-0 250 GB 7,200 rpm SATA drives. The funny thing is that I am not even a gamer.
Actually, this “SpursEngine” is only enabled when the laptop is plugged into AC power instead of running on battery… it consumes WAY TOO MUCH (caps intended) energy [*]. They said that they’ll probably release the next models with common GPUs… maybe with this SpursEngine as an extra, but mainly common GPUs.
This SpursEngine will probably find a better home in TV sets than in these Qosmio laptops…
(The PS3 does use more power than the other current-gen consoles too, but I have no idea if that’s because of the Cell design or because neither is optimized enough…)
[*] http://www.reghardware.co.uk/2008/06/17/toshiba_rebrands_spursengib…
Can anyone say “way to offload dud PS3 chips”? Only four cores? Sounds like Toshiba needed a way to get rid of their excess PS3 chips which had busted processing cores. I wonder if anyone can/will shoehorn this tech into running PS3 games. How many PS3 titles actually bother using all 8 cores for vector maths?
No, the SpursEngine is a different design from the Cell.
“The Cell chip used in the PlayStation 3 has eight of the SPE cores plus a PowerPC main processor.”
The PS3 actually uses 7 cores. All Cell processors come with 8 SPE cores, but one is always disabled. That doesn’t mean it’s necessarily non-functioning, but they do this because the yield for 8 working cores is something like 20%.