“3D acceleration is a long-established, standard part of today’s systems, yet it started life as an exotic, expensive add-on. Last year Ageia announced a new kind of add-on: their PhysX chip is a new technology specifically designed for accelerating physics processing in games. Why should a physics chip interest gamers? What does it do? How does it work?”
Can’t believe it’s already a year since this thing was announced. I’m still very much wondering whether it will catch on, or whether it will simply be overtaken by general-purpose second cores.
I remember reading about hardware disk compression. One product could compress and decompress on the fly, simultaneously doubling your effective disk space and doubling your effective disk speed. In all respects an awesome product… which failed miserably in the marketplace, because right at that time Microsoft started shipping DoubleSpace with DOS, and hard disks were becoming a lot more spacious and cheaper at the same time.
In other words: is this the right product at the wrong time, destined to be obsoleted by ever more advanced game engines that get distributed across processor cores and licensed by Valve and id Software? Or will it genuinely advance games to another level, in the same way Quake 2 was a major advance on Doom 2 (thanks to 3D graphics acceleration)?
Screw gamers. This kind of thing (alongside hardware renderers… like GPUs, except on crack) could make Hollywood’s CG rendering farms obsolete.
Frames that would take 10-15 minutes on a dual-core Athlon 64 could take just 1 or 2 minutes on a Quadro chip. This is called GPU rendering. It uses pixel shaders to do most of its work.
Hardware-based physics processors could do the same thing. Computing accurate cloth, soft-body and rigid-body dynamics (physics) on multiple types of 3D geometry can consume a lot of time, and it is all done before the render takes place. If hardware physics could do for this what hardware GPU rendering is doing now, there could be a large market for this stuff.
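To get a sense of where that time goes, here is a minimal sketch in plain C++ (nothing to do with Ageia’s actual SDK; all names are invented) of one ingredient of a cloth solver: a relaxation pass over distance constraints between neighbouring cloth vertices. Offline tools run passes like this many times per step, over thousands of points, before anything is rendered.

```cpp
#include <cmath>
#include <vector>

// Minimal sketch of a position-based distance-constraint pass for cloth.
struct Point      { float x, y, z; float invMass; };     // invMass 0 = pinned
struct Constraint { int a, b; float restLength; };       // keep a and b L apart

void relaxConstraints(std::vector<Point>& pts,
                      const std::vector<Constraint>& cons,
                      int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        for (const Constraint& c : cons) {
            Point& p = pts[c.a];
            Point& q = pts[c.b];
            float dx = q.x - p.x, dy = q.y - p.y, dz = q.z - p.z;
            float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            float w = p.invMass + q.invMass;
            if (len < 1e-6f || w == 0.0f) continue;       // degenerate or both pinned
            // Move each endpoint so the distance relaxes back to restLength.
            float corr = (len - c.restLength) / (len * w);
            p.x += dx * corr * p.invMass;  p.y += dy * corr * p.invMass;  p.z += dz * corr * p.invMass;
            q.x -= dx * corr * q.invMass;  q.y -= dy * corr * q.invMass;  q.z -= dz * corr * q.invMass;
        }
    }
}
```

Add collision tests and soft/rigid-body integration on top and the pre-render time adds up fast, which is exactly the kind of regular, data-parallel work a dedicated chip could take off the CPU.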
The downside is that it’s mostly non-programmable. Pixel shaders have their limits, and so do the on-chip renderers. You can’t fix bugs after the fact. Most software renderers can be changed easily and adding new rendering features isn’t difficult, but with hardware-based renderers a lot of what goes into them is hardcoded into the chip.
Worse, not all chips will render the same image, just as not all physics processors will process the geometry the same way. That’s fine for games, but not for production environments.
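The determinism point is easy to demonstrate on any machine: floating-point addition is not associative, so hardware that evaluates the same expression in a different order (or with fused operations) can legitimately return different bits. A tiny illustration:

```cpp
#include <cstdio>

// Floating-point addition is not associative, so two pipelines that merely
// reorder the same operations can disagree in the last bits of the result.
int main()
{
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;

    float left  = (a + b) + c;   // 1 -- the two large terms cancel first
    float right = a + (b + c);   // 0 -- c is lost in b's rounding before a is added

    std::printf("(a+b)+c = %g, a+(b+c) = %g\n", left, right);
    return 0;
}
```

Across two different physics chips, small differences like that accumulate over thousands of timesteps, which is why bit-exact reproducibility matters for production rendering but rarely for games.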
It’s difficult to tell where this will go: it’ll either die in the womb, or shortly thereafter… or take off.
‘The PCE is a conventional RISC processor, which processor is completely unknown….’
Does he/she know what he/she is talking about with that statement? He/she may know how the PhysX works, but…?!
But will it support Linux and other open-source OSes? Or is it just one more thing helping Microsoft keep its desktop empire?
Most people I know have at least one problematic piece of hardware. It’s not the OSS community’s fault; they are the cure. The problem is manufacturers that just outright don’t give a damn…
Microsoft provides a stable API (DirectX) for the creation of games. I don’t care whether they provide source code; if the API is well documented and supported, people will use it. I don’t think commercial game developers care about openness of the API.
Speak for yourself, my friend. Here at N.T.U.A. we have very big problems with closed APIs. We wanted to buy a network PTZ camera and rejected about 10 candidate cameras until we ended up with the AXIS 213 PTZ. Open and useful. Closed source? Useless. How about your printer not being supported in Vista because it has closed specs and APIs in a proprietary XP driver?
It’s up to us consumers to support those that care. And it wouldn’t be a bad idea to show our appreciation in a more public way.
“If you scratch my back, I’ll scratch yours” would probably help.
The impression I got from the article is that the writer is less interested in what the chip does than in what someone could use the chip to do.
I suspect this processor is just (kinda like the GRAPE processor) a custom processor specifically designed to handle the kinds of calculations physics engines would need to do, as fast as possible.
On the other hand, the article made it sound like the processor would handle the actual physics itself: someone would put in the parameters of an object and the chip would spit out its new shape/position (kind of hard-wired in). To me, that sounds easier to use, but less useful in the long run.
GRAPE: http://uits.iu.edu/scripts/ose.cgi?anaf.def.help
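To make that distinction concrete, here are two purely hypothetical interface styles (neither is Ageia’s actual SDK, and every name is made up): a GRAPE-like box you feed raw numbers and kernels, versus a hard-wired physics box that takes object parameters and hands back new poses.

```cpp
#include <cstddef>

// Style A: a GRAPE-like custom vector processor -- the host uploads small
// kernels plus raw data, and the chip just crunches numbers as fast as it can.
struct VectorCoprocessor {
    virtual void uploadKernel(const void* code, std::size_t bytes) = 0;
    virtual void run(const float* input, float* output, std::size_t count) = 0;
    virtual ~VectorCoprocessor() {}
};

// Style B: a hard-wired physics box -- you hand it object parameters and it
// hands back new positions/orientations; easier to use, harder to extend.
struct FixedFunctionPhysics {
    virtual int  addRigidBody(float mass, const float* shapeVerts, std::size_t count) = 0;
    virtual void step(float dt) = 0;                              // advance the whole world
    virtual void getTransform(int body, float outMatrix[16]) = 0; // read back a pose
    virtual ~FixedFunctionPhysics() {}
};
```

The first style stays useful as algorithms change; the second is easier to drop into a game but only ever does what the silicon was designed to do.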
I think you’re correct. This sounds a lot like it was written by someone who wants it to happen, but isn’t helping it happen.
Of course, I have been wondering why this hasn’t been done before. Back when the average GPU came with 32 MB of RAM, I wondered why we didn’t use more dedicated co-processors.
Why not a PPU (physics processing unit), or a BPPU (a bio-protein modeller), as well as the old standby, the math co-processor? We have the technology to create them. Heck, they could even have a standard driver interface so that programming for them is simple. Things like this would increase the rate at which we do things now, simply because a CPU isn’t designed to handle data that way. You plug in a board, load up the drivers, and you can crunch certain data types multiple times faster than before.
The idea that you could do it all on one chip came to me when I heard of the Cell. Think two GPU cores, two vector cores, one physics core, and one additional CPU core.
It would require lots of work to make software work well on it, but software written for it would fly.
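As a rough illustration of the “standard driver interface” idea (none of these names correspond to a real driver API; this is just a sketch of how small the application-facing side could be):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical common interface: every co-processor class (physics,
// bio-protein modelling, plain math) exposes the same few entry points,
// so applications only have to learn one submit/wait model.

typedef int copro_handle;

enum copro_class { COPRO_PHYSICS, COPRO_BIOCHEM, COPRO_MATH };

// Find and open any installed accelerator of the requested class.
copro_handle copro_open(copro_class cls);

// Hand the chip a batch of work: an opcode it understands plus raw buffers.
int copro_submit(copro_handle h, std::uint32_t opcode,
                 const void* in, std::size_t in_bytes,
                 void* out, std::size_t out_bytes);

// Block until the submitted batch has finished.
int copro_wait(copro_handle h, std::uint32_t timeout_ms);

void copro_close(copro_handle h);
```

The point is that a game or modelling tool would learn one model, and whichever accelerator happened to be installed would do the crunching.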
That has what to do with writing games using DirectX?
OK, this all looks great on paper/computer screen, but as the article suggests, the hardware is worth nothing without the software to back it up. There is also one other thing the company is underestimating: the complexity a new chip will add to the code of games. As if multithreading and all the other jazz weren’t enough to deal with, now you actually have to write code for this new physics processor. I am pretty sure this will be a flop, because many users will simply choose not to buy it (it’s extra money for pretty much nothing), and developers will choose to ignore it because of the added complexity. Hell, they ignore multithreading for the most part, and that actually makes sense.
But I’ll hold my final judgement until the product is launched or some third-party tests come out.
Current 3D engines are limited by how fast the CPU can process the shapes and make them do things. GPUs are great at throwing polygons onto the screen, but they do nothing for deciding WHERE those polygons should be. Sounds like a fun card to get if the prices aren’t too high. Several shipping games supposedly already use the API, so once the cards ship, those games should automatically start using them.
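In other words, the frame loop splits into a “decide where things are” half and a “draw them there” half. A sketch of that split (hypothetical names, not any real engine’s API), where the first half is exactly what a PPU would offload:

```cpp
#include <vector>

// The CPU-side update decides WHERE every object is; the GPU only rasterises
// whatever transforms it is handed.  A PPU would take over updatePositions().
struct Object { float x, y, z, vx, vy, vz; };

// CPU-bound today: integrate motion, bounce off the floor, etc.
void updatePositions(std::vector<Object>& objects, float dt)
{
    for (Object& o : objects) {
        o.vy -= 9.81f * dt;                                   // gravity
        o.x += o.vx * dt;  o.y += o.vy * dt;  o.z += o.vz * dt;
        if (o.y < 0.0f) { o.y = 0.0f; o.vy = -o.vy * 0.5f; }  // crude bounce
    }
}

// GPU-bound: in a real engine this pushes the resulting transforms to the
// card, which draws the polygons wherever the CPU said they should be.
void renderFrame(const std::vector<Object>& objects) { (void)objects; }

void runFrames(std::vector<Object>& objects, int frames)
{
    const float dt = 1.0f / 60.0f;        // fixed 60 Hz simulation step
    for (int i = 0; i < frames; ++i) {
        updatePositions(objects, dt);     // the "where" -- candidate PPU work
        renderFrame(objects);             // the "draw" -- GPU work
    }
}
```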
If you’re looking for a real-time physics program on a general-purpose machine, here’s where you can find one I wrote: http://www.justrighteous.org/page2.html I just uploaded the source code in case anyone’s interested. It uses a Runge-Kutta 45 adaptive step-size integrator routine from Numerical Recipes in C.
It simulates masses, springs, thrusters, guns, and foils, with gravity, electrostatic charges, wind resistance, and water with viscosity: pretty general, but not accurate enough for engineering design in some cases. Collisions were a bitch!
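Not the linked code, but for readers who have never seen one, here is the shape of such an integrator: a fixed-step RK4 (the simpler, non-adaptive cousin of the RK45 routine mentioned) applied to a single damped spring-mass system.

```cpp
#include <cstdio>

struct State { double x, v; };   // position and velocity of the mass

// dx/dt = v,  dv/dt = (-k*x - c*v) / m   (spring constant k, damping c, mass m)
State deriv(const State& s, double k, double c, double m)
{
    State d;
    d.x = s.v;
    d.v = (-k * s.x - c * s.v) / m;
    return d;
}

// Classic fourth-order Runge-Kutta step with a fixed timestep dt.
State rk4Step(State s, double dt, double k, double c, double m)
{
    State k1 = deriv(s, k, c, m);
    State s2 = { s.x + 0.5 * dt * k1.x, s.v + 0.5 * dt * k1.v };
    State k2 = deriv(s2, k, c, m);
    State s3 = { s.x + 0.5 * dt * k2.x, s.v + 0.5 * dt * k2.v };
    State k3 = deriv(s3, k, c, m);
    State s4 = { s.x + dt * k3.x, s.v + dt * k3.v };
    State k4 = deriv(s4, k, c, m);
    s.x += dt / 6.0 * (k1.x + 2.0 * k2.x + 2.0 * k3.x + k4.x);
    s.v += dt / 6.0 * (k1.v + 2.0 * k2.v + 2.0 * k3.v + k4.v);
    return s;
}

int main()
{
    State s = { 1.0, 0.0 };                        // start stretched, at rest
    for (int i = 0; i < 1000; ++i)
        s = rk4Step(s, 0.01, 10.0, 0.2, 1.0);      // k=10, damping=0.2, m=1
    std::printf("x = %f, v = %f\n", s.x, s.v);     // oscillation decaying toward 0
    return 0;
}
```

An adaptive RK45 routine adds an error estimate per step and shrinks or grows dt accordingly, which is what makes stiff mixes of springs, foils, and collisions tractable.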
Don’t think this is revolutionary.
If it’s affordable, then why not?
If it costs more than a specialized vector chip, then why have it?
http://www.blachford.info/computer/articles/PhysX2.html