A new company called AIseek announced what it describes as the world’s first dedicated processor for artificial intelligence. Called the Intia Processor, the AI chip would work in conjunction with optimized titles to improve nonplayer character AI. Similar to the way in which physics accelerators can make a game’s environment look much more realistic, Intia would make the NPCs act more true to life. There goes yet another PCI slot.
I think the most relevant paragraph from that story would be this:
“If you’re hoping that the NPCs in your favorite title will start acting smarter anytime soon, you’ll be disappointed. Right now, it’s unclear whether Intia is anything more than a name associated with a web site designed to attract venture capital funding. There is nothing indicating when Intia will be shipping, when AIseek’s SDK will be available to developers, or what type of hardware will be necessary to use it.”
Anyway, if they expect this to have any chance of wide adoption by game developers, they should provide the SDK for free, with a software fallback.
It sounds interesting enough… I read the stuff on their own site as well. I did not realize that pathfinding, terrain mapping, etc. were all standardized. If it could work, it would take some load off the CPU, but how much? How much control would you lose? How would it feed back to the software? For instance, what if you had a pitfall at some point in the hidden terrain and the character discovered it during automated movement handled by the chip… how is the software going to know? The map would have to be preloaded down to the chip or in shared memory or something…
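Just to make that feedback question concrete, here's a completely hypothetical sketch (AIseek never published an SDK, so every name and mechanism below is invented): the game preloads the map onto the chip, issues movement requests, and drains an event queue each frame to learn what the hardware decided, including any hazards a unit ran into.

```python
from collections import deque

# Hypothetical interface only -- all names are invented to illustrate how a
# game might learn what a dedicated AI chip did with the preloaded terrain.
class FakeAICoprocessor:
    def __init__(self):
        self._grid = None          # terrain "preloaded down to the chip"
        self._events = deque()     # results flow back through an event queue

    def upload_terrain(self, grid):
        # One-time upload, analogous to copying the map into shared memory.
        self._grid = [row[:] for row in grid]

    def request_move(self, unit_id, start, goal):
        # Real hardware would do this asynchronously; here the walk is
        # simulated immediately and any discovery is queued back for the game.
        x, y = start
        gx, gy = goal
        while (x, y) != (gx, gy):
            x += (gx > x) - (gx < x)
            y += (gy > y) - (gy < y)
            if self._grid[y][x] == 'P':              # hidden pitfall
                self._events.append((unit_id, 'pitfall', (x, y)))
                return
        self._events.append((unit_id, 'arrived', (gx, gy)))

    def poll_events(self):
        # The game engine drains this each frame to find out what happened.
        while self._events:
            yield self._events.popleft()


grid = [list("...."), list("..P."), list("....")]   # 'P' = hidden pitfall
chip = FakeAICoprocessor()
chip.upload_terrain(grid)
chip.request_move(unit_id=1, start=(0, 1), goal=(3, 1))
for event in chip.poll_events():
    print(event)   # (1, 'pitfall', (2, 1))
```

However the real hardware would have answered it, some preload-plus-event-queue split like this is one plausible way the software could stay in the loop.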
Just like the physics card before it, the technology will either be swallowed up and put into the GPU or into DirectX (or some other software layer).
That's not to say the technology is boring; modern games are becoming very complex, and certain features are now becoming standard, to the point that games without them are frowned upon. For example, any modern FPS without ragdoll physics is seen as missing a large, important feature.
Just like the physics card before it, the technology will either be swallowed up and put into the GPU or into DirectX (or some other software layer).
Except the physics chip has not been swallowed up.
GPUs can so far only do much simpler physics, and doing it in software is far too slow.
Exactly.
With GPUs (currently) it's a one-direction modification (a shader program doing it), so the CPU can't find out what changes the physics algorithm made to the scene, and game logic can't interact with the results of the physics interaction, making it just a nice vertex-morphing GFX effect that the game engine doesn't know much about.
OTOH the PhysX device returns the changed data to the CPU, so it can, for example, make a bullet hole in a flag which suddenly waved in the opposite direction (with the wind dynamics completely simulated by PhysX) and intersected the bullet's trajectory. That's not yet possible if you want to offload that job to the GPU (either round trips are required, or more game/engine logic has to be moved onto the GPU), and such dynamics can be much slower on basic CPUs, so PhysX fills that niche.
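Here's a toy illustration of why that readback matters (plain Python, not any real PhysX or GPU API; the flag and bullet are made up): once the simulated vertices come back to the CPU, game logic can test the bullet against the flag's current, wind-blown shape instead of its original one.

```python
# Toy example only -- no real PhysX or GPU calls; the point is just that game
# logic can react to simulated geometry once the results are read back.

def simulate_flag(rest_positions, wind):
    # Stand-in for the offloaded cloth simulation: the wind displaces the
    # flag sideways and the *displaced* vertices are returned to the CPU.
    return [(x + wind, y) for (x, y) in rest_positions]

def bullet_hits_flag(bullet_x, flag_vertices, tolerance=0.5):
    # Game logic working on whatever geometry it actually has access to.
    return any(abs(x - bullet_x) < tolerance for (x, _) in flag_vertices)

rest = [(0.0, float(y)) for y in range(5)]     # flag hanging straight down
simulated = simulate_flag(rest, wind=2.0)      # a gust pushes it to x ~= 2

bullet_x = 2.2
print(bullet_hits_flag(bullet_x, simulated))   # True: hole in the waving flag
print(bullet_hits_flag(bullet_x, rest))        # False: the un-read-back shape misses
```

If the displacement only ever exists as a vertex shader effect on the GPU, the CPU is stuck with the second case.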
OK, from the FAQ at AIseek website:
“Does the pathfinding algorithm of AIseek’s Intia processor use heuristics?
No. Unlike today’s software-based approaches (e.g., A*), the Intia’s pathfinding uses no heuristics, thereby guaranteeing that the optimal path will always be found. This optimality also means that the Intia processor avoids the common pitfalls of A*, including failures to find a path when one exists,”
I'm sorry, but that is just plain wrong. A* is complete and optimal, provided the heuristic is admissible (i.e., it never overestimates the remaining cost), which standard pathfinding heuristics are. That's why it's called A*. Complete means it will find a path if one exists, and optimal means it will find the one with the lowest cost (shortest, "less dangerous", etc., depending on what info you provide/want to take into account).
http://en.wikipedia.org/wiki/A%2A
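For reference, here's a minimal grid A* (a generic textbook sketch in Python, not AIseek's or anyone else's implementation): with an admissible heuristic like Manhattan distance on a 4-connected grid with unit step costs, it returns a path whenever one exists, and that path is cost-optimal.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.

    Manhattan distance never overestimates the true cost here (it is
    admissible), so the first time the goal is popped the path is optimal.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]            # entries are (f = g + h, g, cell)
    g_cost = {start: 0}
    came_from = {}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:                          # goal popped => optimal path found
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return list(reversed(path))
        if g > g_cost.get(cell, float("inf")):    # stale heap entry, skip it
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                   # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

A heuristic-free exhaustive search (which is what the FAQ seems to describe) is also complete and optimal; it just isn't any *more* correct than A*.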
This will be swallowed up by the military quietly, because this sort of thing (if it works reliably, and faster than anything they can come up with) is worth lives and a huge amount of money in reducing loss of hardware assets, if they can figure out a way to feed it the required data (which I suspect they can do readily enough, with the aid of drones, though it still won’t be perfect, due to the fog of war).
I'll slot this card in right next to Ageia's PhysX card (if I ever decide to actually fork over some money for these cards), for a truly immersive gaming experience.
Am I the only one who thought of The Phantom console when I read this?
Is this another PhysX-like approach?
C’mon there is PLENTY of hardware power today to achieve good FX without an add-on like this: multiple cores, programmable and fast GPUs, tons of RAM.
There are two main problems: a) no software packages to exploit hardware parallelism and advanced features, that is, to squeeze every core/CPU cycle;
b) lazy and BAD developers who never saw the things we did in the '80s on 64K RAM machines (with a 1541 drive, ouch). I don't mean a return to 80% assembly programs, but OPTIMIZE, OPTIMIZE and OPTIMIZE BY HAND. Don't rely on your compiler alone…
C’mon there is PLENTY of hardware power today to achieve good FX without an add-on like this: multiple cores, programmable and fast GPUs, tons of RAM.
Yeah, lots of hardware power if you've got the money to buy it! But for the rest of us, a fast dedicated add-on card would most likely be a more cost-effective choice. With the price of a high-end processor, motherboard, GPU and RAM, I could get an average system, the add-on card, and most likely have some extra money left, too.
C’mon there is PLENTY of hardware power today to achieve good FX without an add-on like this: multiple cores, programmable and fast GPUs, tons of RAM.
Then you simply don't know how much computational power is required for GOOD AI. The available algorithms are pretty optimised as is; the problem is just that calculating optimum paths for a hundred units at a time (quite possible in games like Total Annihilation, and soon Supreme Commander) simply takes too much CPU time.
You could basically say the same about 3D acceleration. You could say that 'Doom 3 should run on my P4 without a graphics card just fine, if only it were optimised by hand', but you'd be dead wrong.
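A rough back-of-envelope estimate of that cost (every number below is an assumption picked for illustration, not a measurement from any real game):

```python
# Made-up ballpark figures, only to show why per-frame pathfinding for many
# units gets expensive; none of these numbers come from a real engine.
map_cells = 512 * 512            # a 512x512 navigation grid
expanded_fraction = 0.25         # assume a search expands ~25% of the cells
ops_per_expansion = 200          # assumed CPU operations per node expansion
units = 100                      # units requesting paths
replans_per_second = 2           # each unit replans twice per second

ops_per_search = map_cells * expanded_fraction * ops_per_expansion
ops_per_second = ops_per_search * units * replans_per_second
print(f"~{ops_per_second / 1e9:.1f} billion ops/s just for pathfinding")
# ~2.6 billion ops/s -- a huge slice of a 2006-era CPU before any other game logic runs
```

Tweak the assumptions however you like; the product still lands in "most of a core" territory, which is exactly why the temptation to offload it exists.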
Dedicated hardware to accelerate specific functions was the principal reason why Amigas were so very capable/fast when compared to supposedly much 'faster' IBM PCs.
I'm all for it.
I have accelerated networking on my nForce board,
accelerated graphics on my nVidia card,
and I can have accelerated audio and physics,
plus accelerated video encoding/decoding for PVR functions.
Why not accelerated AI?
It will ultimately come down to market acceptance, however, and that's something Ageia is currently battling with.
Great, so now we can expect an nVidia card that takes up 6 slots and requires an 800W power supply? Pray tell, what happened to the miniaturization that was promised in the 1970s?
This is a bad idea. Even if this were a great recursion-optimized chip for AI, I doubt it would do as good a job as running AI on the 2nd (3rd?) core of a multi-core CPU. And everyone will have those; no one will go out today and buy a single-core desktop processor. Quad-core is around the corner (Q4 of this year).
Marketing perspective:
A good developer and publisher will create a game that runs well on most people's computers in order to get a lot of sales. By implementing features that require a third-party chip, even if the game runs OK without it, the company risks fewer sales, especially if its competitor is coming out with a similar game that uses the second core of the processor already in everyone's machine to get the same or even better results.
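For what it's worth, the "use the second core" approach described above is easy to sketch with nothing but the standard library (placeholder names and a fake workload; for real CPU-bound work in Python you'd reach for multiprocessing, since this only illustrates the request/result pattern): path requests go to a worker, so the main game loop never blocks on them.

```python
import queue
import threading
import time

# Sketch of offloading AI to a worker so the main loop never blocks on it.
# The "search" is a placeholder sleep and all names are invented.
requests = queue.Queue()
results = queue.Queue()

def ai_worker():
    while True:
        unit_id, start, goal = requests.get()
        time.sleep(0.01)                       # stand-in for an expensive search
        results.put((unit_id, [start, goal]))  # pretend path: start -> goal

threading.Thread(target=ai_worker, daemon=True).start()

# Main "game loop": queue requests, keep rendering, pick up answers later.
for unit_id in range(3):
    requests.put((unit_id, (0, 0), (10, unit_id)))

frames, done = 0, 0
while done < 3:
    frames += 1                                # rendering/game logic would go here
    try:
        unit_id, path = results.get_nowait()
        print(f"unit {unit_id} got path {path} after {frames} frames")
        done += 1
    except queue.Empty:
        time.sleep(0.001)                      # pretend to wait for the next frame
```

No extra card required, which is the marketing argument in a nutshell.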
Slightly off topic, but it seems like PhysX has been getting bad feedback…
http://www.tomshardware.com/2006/07/19/is_ageias_physx_failing/inde…
I like page 3 especially…
I do believe a card with multiple pipelines for simple calculations like tree-recursion could do a much better job than a CPU struggling to do multiple calculations in parallel.
Besides, a card for AI is a better fit, for more games, than a card for physics.
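If you want a software stand-in for those "multiple pipelines" today, fanning independent path queries out across CPU cores is straightforward; here's a rough sketch with a placeholder per-unit workload (not a real search):

```python
from concurrent.futures import ProcessPoolExecutor

def fake_path_query(unit_id):
    # Placeholder CPU-bound work standing in for one unit's path search.
    return unit_id, sum(i * i for i in range(200_000))

if __name__ == "__main__":
    unit_ids = range(100)                      # a hundred units wanting paths
    with ProcessPoolExecutor() as pool:        # one worker per core by default
        results = dict(pool.map(fake_path_query, unit_ids))
    print(f"computed {len(results)} path queries across all cores")
```

Dedicated hardware would still win on raw throughput, but the queries themselves parallelise just fine.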
“Besides, a card for AI is a better fit, for more games, than a card for physics.”
Unfortunately, the programmers would still have to know how to make the AI behave intelligently, and judging from the sad state of today's game AI, that's not likely to happen even with this kind of accelerator.
Fair enough. I think part of today's problem is that people don't bother, since it's too hard to do right with current computational resources. If I tried to make my opponents smart and found out that most things I did made the game crawl, I wouldn't bother after a while either.
I’m pretty sure Nintendo is introducing this technology already with the Nintendo Wii. But I could be wrong!