El Reg has one of the first reviews of the Ageia PhysX accelerator chip. The four-page review concludes: “The limited number of titles and their disappointing use of the PhysX PPU means that, currently, there’s no reason to spend the GBP 200+ to acquire a PhysX card. The current effects in the supported games aren’t worth the price and potential performance drop. Cell Factor and awesome Unreal Engine 3.0 games, where art thou? Without them, the PhysX hardware is merely a curiosity. But one to watch.”
This kind of PPU could be useful for things like the kinetics and procedural animation present in Spore. Though support for the card isn’t here yet, physics and procedural techniques are definitely where the world is heading as far as gaming is concerned.
For example, in a beat-em-up, physics processing would mean that no punch would ever be the same, with all the kinetics based on highly accurate collision detection and reaction. Imagine an MMO where sword strikes reacted to armour and to another sword based specifically on the angles and motion involved, giving realistic fights (rather than just two pre-rigged animations colliding with each other).
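To give a feel for what reacting “based on the angles and motion” boils down to, here is a minimal sketch (plain Python; the masses, velocities, and restitution are made-up illustrative values, not anything from the PhysX SDK) of the standard impulse-based collision response that this kind of per-hit physics rests on:

    # Minimal impulse-based collision response between two bodies in 2D.
    # Masses, velocities, and restitution below are illustrative values only.
    def collision_impulse(v1, v2, m1, m2, normal, restitution=0.3):
        """v1, v2: (x, y) velocities; normal: unit contact normal from body 1 to body 2."""
        nx, ny = normal
        # Relative velocity of body 2 with respect to body 1, along the contact normal.
        rel_vn = (v2[0] - v1[0]) * nx + (v2[1] - v1[1]) * ny
        if rel_vn > 0:                      # already separating: no impulse needed
            return v1, v2
        j = -(1 + restitution) * rel_vn / (1 / m1 + 1 / m2)
        v1 = (v1[0] - j * nx / m1, v1[1] - j * ny / m1)
        v2 = (v2[0] + j * nx / m2, v2[1] + j * ny / m2)
        return v1, v2

    # Example: a fast, light sword edge striking a slow, heavy shield at an angle.
    print(collision_impulse(v1=(5.0, -1.0), v2=(0.0, 0.0),
                            m1=1.5, m2=8.0, normal=(0.707, 0.707)))

Doing that for every contact point, every frame, across hundreds of objects is where the per-hit cost adds up.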
I’m of the opinion that there’s plenty of CPU performance to spare for game physics. For graphics/animation-only physics effects, the GPU is the place to go, of course. An add-in card just seems like overkill, not just now (as the data points show), but even for the foreseeable future.
Edited 2006-05-04 16:44
If you analyse benchmarks, you will find that systems with high-end graphics hardware are CPU-limited. When I have more time I will edit this post with some links (lunch break over!)
… And they said *exactly* the same thing about 3D accelerators when those started to pop up.
PPUs will catch on, in one form or another. I have a feeling that they’ll simply become a part of the GPU, as both NVIDIA and ATI have shown great interest in physics acceleration as of late.
3D accelerators are useful outside of gaming. They’re useful for movies & animation, for engineers doing CAD work, and for scientific researchers. 3D acceleration was developed for those needs, and trickled down to games.
Physics acceleration isn’t useful outside of games. It isn’t going to be useful in scientific work, as the equations it uses are approximations optimized for performance rather than accuracy.
The other factor to remember is that dual-core CPUs are quickly becoming the norm. Quad cores will start being released next year. Why spend $300 on a card that can only do physics when you can spend $100 more to get a 2nd core on your processor that can do anything?
“Why spend $300 on a card that can only do physics when you can spend $100 more to get a 2nd core on your processor that can do anything?”
Because Windows XP Home is not SMP capable. I know it’s a lame reason but it is a fact.
[QUOTE]Because Windows XP Home is not SMP capable. I know it’s a lame reason but it is a fact.[/QUOTE]
Windows XP Home and Media Center do work with dual core. They just have support for multiple physical processors disabled. I’m sure when the quad cores come out they will support those as well.
http://www.microsoft.com/licensing/highlights/multicore.mspx
XP Home is SMP capable because it uses the same NT kernel that XP Pro does. It does not, however, support using more than one physical processor, but that is an artificial limitation.
You can use multi-core chips with XP Home just fine.
I can’t believe that you would have such a narrow perspective of things.
Physics acceleration has a place in movies, animation, engineering simulation, and scientific research. A general-purpose x86 processor can only simulate so much physics, but logic designed specifically for physics simulation can do it a lot better.
This is the same reason we don’t use general-purpose CPUs for graphics, sound, Ethernet, inside iPods for decoding MP3 streams, and so on. It’s an issue of performance-per-watt/transistor.
If you had actually read the technical docs about the PhysX chip, you’d know this.
“Physics acceleration has a place in movies, animation, engineering simulation, and scientific research. A general-purpose x86 processor can only simulate so much physics, but logic designed specifically for physics simulation can do it a lot better.”
If you read the specs of the PhysX chip, you’d know it only does 32-bit floats. Real engineers and scientists wouldn’t give it the time of day.
I’m not specifically talking about the PhysX chip when I say that physics acceleration has a place in those things.
The parent is denying that physics acceleration is useful to anyone but a few odd gamers. I’m merely opening up my mind here.
I’m not specifically talking about the PhysX chip when I say that physics acceleration has a place in those things.
The parent is denying that physics acceleration is useful to anyone but a few odd gamers. I’m merely opening up my mind here.
OK, so put aside the actual PhysX product and just focus on the concept of accelerating physical computations: an accelerator such as the PhysX might solve the problem of latency, being able to solve equations quickly. But real scientific simulations compute the same problems over and over.
Take for example the distributed computing projects such as protein folding. That’s a kind of simulation that runs many times over (even if it’s the same work unit), and they solve their problem by distributing the work across many CPUs. Regardless of how quickly each unit is completed, they have effectively unlimited computing power; it’s just a matter of time.
Video games however, don’t have the luxury of waiting a few days to get the final result. Calculations have to happen immediately to provide real-time feedback to the user.
Suppose you work in a non-profit organization or an academic institution and apply for a grant to conduct a research project: you need to make every dollar stretch. A physics accelerator is just as much a commodity as a general-purpose CPU; however, the CPU can be put to more jobs than the accelerator.
While researchers are working on ways to accelerate math in the silicon, there are just as many researchers working on improving the algorithms for faster solutions.
I know I’m just rambling at this point and I apologize. But this product really has the attention of the hardware geek, and the programmer in me. Two conflicting camps.
Your explanation makes sense, but I think you sort of left out an important point …
A PPU is designed for JUST physics, making it an order of magnitude or two faster than a general-purpose CPU for calculating physics. Even though the physics done in engineering simulations vs. games is of a different style, as long as the algorithms make use of the PPU, they will always be many, many times faster than if run on a GP CPU.
It’s exactly like a GPU. A CPU can calculate the same graphics that a GPU can, it will just take exponentially longer because of the design of the CPU vs. a piece of silicon intended for just graphics rendering.
“Physics” doesn’t imply an algorithm. Games and engineering apps use completely different algorithms for computing physics, and just because the PhysX chip accelerates one set doesn’t mean it’ll be useful in accelerating the other set. Game physics isn’t physics; it’s a set of approximations designed to look realistic, not behave realistically. It’s not just a matter of precision, it’s a matter of reproducing things like boundary cases accurately.
Let me give you an example. It’s easy to get a thin wire to look like it’s moving realistically. You see it on JavaScript pages all the time: a string seems to follow your mouse cursor around as you wave it. That’s game physics at work. In reality, something as simple as waving a string around is a fricking complicated dynamic elasticity problem. The PPU is designed for accelerating the first sort of algorithm. Scientists need the second sort of algorithm. Not only that, but they need to be able to implement their own versions of the second algorithm.
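To make the contrast concrete, here is a minimal sketch (plain Python, illustrative numbers only) of that “string follows the cursor” trick: each point is simply relaxed toward its neighbour so the chain trails behind a moving target. It only has to look right; there is no material model at all, which is exactly what separates it from a real elasticity solver.

    # "Game physics" rope: each point is relaxed toward its neighbour so the
    # chain appears to trail behind a moving target (e.g. the mouse cursor).
    # There is no mass, stiffness, or damping model; it only has to look right.
    def update_rope(points, target, segment_length=10.0, stiffness=0.5):
        points[0] = target
        for i in range(1, len(points)):
            px, py = points[i - 1]
            dx, dy = points[i][0] - px, points[i][1] - py
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            # Nudge this point toward the ideal distance from its neighbour.
            scale = 1.0 + stiffness * (segment_length / dist - 1.0)
            points[i] = (px + dx * scale, py + dy * scale)
        return points

    # Example: drag the head of the rope along a path, one step per frame.
    rope = [(i * 10.0, 0.0) for i in range(20)]
    for frame in range(60):
        rope = update_rope(rope, target=(frame * 3.0, 40.0))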
I understand that perfectly. I’ll reiterate my points yet again:
– I’m not specifically talking about the gaming-oriented PhysX chip or the PhysX API
– We are embarking on an era of physics acceleration. Just like 3D acceleration only really showed up with a bang once 3dfx released their Voodoo cards commercially, physics acceleration might not make it into real-world/production environments until the gaming world has had a chance to adopt it.
– Just like there are now 3D accelerators available for high-precision/engineering environments, why wouldn’t there be general-purpose “physics accelerators” available for the same environments within a few years?
It’s not people like you that are responsible for great leaps in technological development — you are too boxed in by your own limited thinking. 😛
– We are embarking on an era of physics acceleration. Just like 3D acceleration only really showed up with a bang once 3dfx released their Voodoo cards commercially
3D accelerators were around long before 3Dfx. The technology trickled down from the engineering/science markets into the gaming markets, not the other way around.
Just like there are now 3D accelerators available for high-precision/engineering environments, why wouldn’t there be general-purpose “physics accelerators” available for the same environments within a few years?
The difference is the nature of the output. A 3D accelerator, even a high-end one, doesn’t show you exactly what your engine would look like if you built it. It shows you a very crude approximation. That’s okay; you don’t need your 3D modeler to show you exactly how things would look, you need it to help you visualize how they will look. That’s why most engineering 3D packages still render with no texturing and minimal lighting. They’re used for visualization, not to get answers. The algorithms behind even the most complex of accelerators are fundamentally inaccurate; vision in real life is nothing like the way a scanline renderer treats it.
A physics code, on the other hand, is different. You don’t want your finite element code to just give you an approximate answer; you want the right answer (to within an arbitrary degree of precision). This is the key difference. Physics has to be precise, graphics does not.
The main question here is: what makes a task suitable for acceleration via dedicated hardware? The answer: a simple, efficient, and widespread algorithm for the process. That’s why 3D hardware has worked. From the first Voodoo to the latest GeForce, almost all GPUs are based on the same algorithm: z-buffered scanline rendering of polygonal shapes (almost always triangles or quads). For many uses of 3D, this has proven to be “good enough”.
The thing is, there is no such universal algorithm for “physics”. Within a given finite element package there are literally dozens of different possible element types, each of which embodies a different model of the physics. And that’s just finite elements; there are different CFD algorithms, different circuit simulation algorithms, etc. It is not profitable, from a performance/complexity standpoint, to accelerate all these things in hardware, especially considering that a lot of people come up with their own algorithms.
Of course, you could say that there are some things common to all finite element codes, such as matrix inversion and numeric integration, and you could have a processor that just accelerates those. However, once you factor in support for the functions used heavily for other types of codes, plus make it fully programmable to support custom algorithms, you no longer have a “dedicated physics chip”. You’ve got a general purpose stream processor, and you might as well just use Cell or something of that nature. And you still haven’t addressed the precision issue. If you make it 64-bit on top of everything else, you really haven’t saved much over just having built a regular CPU!
It’s not people like you that are responsible for great leaps in technological development — you are too boxed in by your own limited thinking. 😛
It has nothing to do with being boxed in. It has to do with actually knowing what engineering physics programs do, and the properties that make them a poor fit for dedicated hardware.
Edited 2006-05-06 05:57
The biggest issue with PhysX is that it’s not programmable. That, IMHO, makes it completely useless for engineering or science. Unless the PhysX folks happened to include the algorithms you’re using, the chip is useless.
Physics acceleration has a place in movies, animation, engineering simulation, and scientific research. A general-purpose x86 processor can only simulate so much physics, but logic designed specifically for physics simulation can do it a lot better.
I already addressed engineers and scientists, although I did leave out one part of it: those people are already doing simulations that require gigabytes of RAM, if not tens of gigabytes. You’re not going to be able to get enough RAM on these cards to be useful to them. And before you say that will improve with time, CPUs will grow in memory capability much faster.
As to movies and animations, the physics accelerator won’t be very useful. During the animation process, you’re not working in real time. You don’t need 24 or 60 frames per second, because the animator can’t work that fast. Even with a single core CPU, the processor will be idling the vast majority of the time. There’s no reason to offload work.
Things like Pixar movies are done entirely on CPUs as it is. It’s necessary to get the level of control and accuracy required. They also don’t care if it takes a CPU an entire day to render a frame, because they’ll gladly buy more computers.
*sigh*
I’m not going to continue. You guys just have much too limited thinking.
I don’t think that there is enough physical simulation in most current games to even match the volume of calculation which a modern CPU can manage without breaking a sweat.
Physics acceleration isn’t useful outside of games
It could be very useful in animation if the software was written to make good use of it.
3D accelerators weren’t really popular until they got onto video cards.
So yes, making it part of the GPU makes sense.
I agree. Even old 8-bit and 16-bit computer games had elements of simple physical simulation. If explosions with perhaps 30 particles were possible in the 16-bit era, where is the bottleneck on a modern CPU?
What exactly do people want to simulate?
> Imagine in an MMO, where sword strikes would react to
> armour and another sword specifically based on the
> angles and motion, giving realistic fights (rather than
> just two pre-rigged animations colliding with each other).
So you mean now we have to really learn how to swordfight as well as learn how to play the game?
Ugh!
Some day these “PPUs” could be as standard as GPUs are nowadays.
But with multi-core CPUs becoming the de facto standard in the near future, there’s more spare CPU power for physics. And the cost of reimplementing the physics engine on top of this PhysX API in current game engines could be pretty high, I guess!
Let’s see what happens …
If only physics weren’t so difficult, operating systems could use it to animate windows and stuff. Instead of repositioning windows, you could have them float about like repulsive charged particles. You could redo X-designer with springs on forms and make buttons move around under animation, providing lovely eye-candy for those who get off on that sort of thing.
You know you’re a little sarcastic, but I do wonder if one could implement at least some of the layout of a modern browser using the APIs of this card. I mean, it’s crazy to say that you need a ‘physics accelerator’ for a modern browser, but if you’ve got the horsepower, why not?
Forgive my ignorance/laziness, but what exactly can this card do? It reads to me that it is following the GPU principle of vectorising a certain set of useful operations in hardware.
If that is the case, can anyone tell me how this might compare to using generic stream processing for this purpose, thinking in particular of the Cell? Isn’t it possible that just having several vector units available all the time would be a more viable way of achieving better physics? After all, unlike graphics there isn’t a point where you could say you had enough hardware for a particular level of performance, as there can be an infinite number of physical elements represented in a system, but there’s only ever so much screen to draw.
I’ve always wondered why there are no text accelerators beyond those that are included with normal graphics cards. I know it’s not sexy to be able to render a million glyphs a second to your screen, but it’s got to be much more useful. You can’t tell me that rendering text is as smooth as possible on today’s hardware; just resizing a window shows that. In XP, windows tear, and on OS X they stutter when they redraw. Why is there not more acceleration for PDF built in?
XP tears because it does its drawing way too fast.
It’s not really about speed; it’s just that it draws while there is a vertical refresh going on in your monitor. Most games wait for the vsync signal before drawing to the screen to avoid tearing. This is what XP should do if you want it to avoid such tearing.
I am just going to wait until the price drops to $50. It will take a few years, but by then there will be plenty of games and plenty of uses outside of games.
Physics acceleration isn’t useful outside of games. It isn’t going to be useful in scientific work, as the equations it uses are approximations optimized for performance rather than accuracy.
If you read the specs of the PhysX chip, you’d know it only does 32-bit floats. Real engineers and scientists wouldn’t give it the time of day.
Real engineers and scientists were and still are using GPUs despite them also being optimised for speed not accuracy. (see http://www.gppu.org )
It’s something of an illusion that everyone doing serious work needs 64-bit floating point.
Real engineers and scientists were and still are using GPUs despite them also being optimised for speed not accuracy. (see http://www.gppu.org )
It’s something of an illusion that everyone doing serious work needs 64-bit floating point.
I used to work on software for Computational Fluid Dynamics simulations. Primarily on the software for viewing the results in 3D, but I did dabble a little in the computational code.
The 3D rendering was used to gain an understanding of what was going on, but was not used to find specific data points. Both the chemists and the programmers knew the graphical rendering would not be accurate enough to take measurements from.
Also, we used Wildcat graphics cards rather than ATI & NVidia stuff. The Wildcats are optimized for accuracy and polygon count, and are less concerned with features like shaders and antialiasing. We tried an NVidia Quadro, but it just couldn’t come close to a Wildcat for our needs. Probably would’ve run circles around the Wildcat in gaming though.
As for your 64-bit comment, you’re crazy if you think scientists and engineers don’t do that. I ran tests using different matrix-solving engines on small CFD matrices (~200MB). Even at that size, there were noticeable errors in the results when using 32-bit floats. Most of the CFD work done there required several gigabytes of RAM for the matrix. If you tried doing that with 32-bit floats you’d get meaningless results.
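As a rough illustration of that effect (a toy sketch in Python/NumPy, not the CFD code being described; the matrix here is a small random system, not a real stiffness matrix), solving the same linear system in single and double precision shows how quickly the error gap opens up:

    import numpy as np

    # Toy comparison: solve the same linear system in single and double
    # precision and measure the error against the known solution. The size
    # and matrix here are made up; real CFD matrices are far larger.
    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n))
    x_true = np.ones(n)
    b = A @ x_true

    for dtype in (np.float32, np.float64):
        x = np.linalg.solve(A.astype(dtype), b.astype(dtype))
        print(np.dtype(dtype).name, "max error:", np.max(np.abs(x - x_true)))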
He’s referring to the practice of GPGPU. It should be noted that while current GPUs are limited to 32-bit precision, that does not mean the results are strictly limited to 32-bit precision.
Yep, I remember an article on GPGPU where they used a successive iteration algorithm on the GPU to get results as precise as they had on the CPU, and it was 2.5 times faster than the CPU-only version, which is quite good.
Of course, the problem is that you must be able to cast your computation in a certain form to do this, which is not always possible, plus it requires additional effort.
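For anyone curious what that kind of trick looks like, here is a minimal sketch of mixed-precision iterative refinement, written in plain NumPy rather than on a GPU (the article being remembered isn’t identified, so this is just the general technique): do the expensive solve in float32, then compute residuals in float64 and correct until the answer reaches double-precision accuracy.

    import numpy as np

    def refined_solve(A, b, iters=5):
        """Solve Ax = b with cheap float32 solves plus float64 residual corrections."""
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                                    # residual in double precision
            dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in single precision
            x += dx.astype(np.float64)
        return x

    rng = np.random.default_rng(1)
    n = 300
    A = rng.standard_normal((n, n))
    x_true = np.ones(n)
    b = A @ x_true
    print("refined max error:", np.max(np.abs(refined_solve(A, b) - x_true)))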
You get the same thing in finite element codes. Even with relatively small stiffness matrices, you do enough operations over the course of a run that a little bit of lost precision can significantly change your results.
A very few real scientists and engineers are using GPUs. The vast vast vast majority of them are not. Limited floating-point precision can easily kill your accuracy if you’re not careful, and 32-bit floats just make life that much more difficult. Not to mention the fact that GPUs don’t handle full-IEEE FP semantics, which were invented for a reason!
Doing a job poorly but fast isn’t a good alternative to doing it more slowly but correctly!
So that now I can have physically correct reactions added to my 3D-accelerated desktop.
This card is connected through the PCI bus.
132 megabytes per second.
132 MB/s ÷ 60 fps ≈ 2,200,000 bytes per frame.
2,200,000 bytes ÷ 16 bytes per vertex ≈ 137,500 vertices per frame.
This is the maximum amount of data it can return to the CPU, and 137 thousand particles per frame is almost nothing.
Of course, in the case where hundreds of high-poly objects are colliding in mid-air, the PPU will be useful.
But it takes about a second just to fill the device’s memory, so I think dynamic upload of big objects will suffer from noticeable latency.
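For anyone who wants to play with the assumptions, the back-of-the-envelope budget above is just the following (figures taken from this post; adjust bus speed, frame rate, and vertex size as you like):

    # Back-of-the-envelope PCI readback budget, using the figures from this post.
    pci_bytes_per_second = 132_000_000   # ~132 MB/s for classic 33 MHz PCI
    fps = 60
    bytes_per_vertex = 16

    bytes_per_frame = pci_bytes_per_second / fps
    vertices_per_frame = bytes_per_frame / bytes_per_vertex
    print(f"{bytes_per_frame:,.0f} bytes/frame, {vertices_per_frame:,.0f} vertices/frame")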
—
sorry for bad english =]
I absolutely disagree with your “verdict”, starting with its gross oversimplification. Just the same way all the 2D graphics vendors treated 3D cards, this article overly simplifies the effect physics (and physical interactions) can have on gameplay, rigid- and soft-body dynamics, and the overall improvement in realism. The author lacks a complete understanding of the limitations of CPUs (including present multi-core versions) and graphics processors, and therefore grossly oversimplifies the capabilities of the PPU. Regrettably, the original article also overestimates the capabilities of software, which has hitherto never been able to reach such levels. It is easy to slander and difficult to appreciate anything new.