Dave Kirk, Nvidia’s chief scientist, was in London recently as part of a European tour. ZDNet UK caught up with him to talk about the future of PC and console graphics, whether they will ever really match mainstream movie quality, and how the company will maintain the performance curve. The interview is an interesting read and can be found at ZDNews.
“Did Nvidia’s philosophy change with the purchase of 3DFX?
Not too much–we still want to be profitable and we still want to stay in business–so they haven’t influenced us in that.”
Heh, well at least the 3dfx people can say that they aren’t dicks. He’s basically making fun of the fall of a company, one which put high-quality gaming graphics on the map. What’s more, a lot of the people from that company now work for him!
Apparently, to remain profitable is to make your product lines very confusing and to re-use old technology under a new product name so consumers think they’re buying new technology (GeForce4 MX).
3dfx with Voodoo will always be the company that really pioneered the 3D-accelerated graphics industry and gave consumers a better alternative than the onboard Intel and Trident crap.
“So we will be able to render movies like ‘Final Fantasy,’ ‘Shrek’ and ‘Toy Story’ in real time on a PC next year, but of course the movie studios will raise the barrier.”
That sounds like bullsh*t to me. How will they be able to render a frame that takes ages (from minutes to hours) to render with the CPU on a regular PC? Of course you will be able to render them, but not at that quality.
Shouldn’t Quadro handle the movie rendering, not GF4?
Well, even if the graphics programmers cheat a LOT, modern PC systems suddenly all dump PCI/AGP/whatever for some fast new ultra-high-bandwidth technology, and you throw in a good sound card to take load off the CPU, I say… still no.
Now, what we will see are interesting methods of FAKING the effects seen in movies at “normal” resolutions. Talented programmers, artists, and game designers will continue to surprise us graphically.
Movies are rendered at disgustingly high resolutions, and accelerating everything that Pixar’s RenderMan does would require moving up to 48 bpp or higher graphics (that’s 16 bits per color channel) and morphing the GPU into a reprogrammable high-performance DSP with multiprocessing capability (so you could make render farms of them). This is why it takes render FARMS to make movies like Toy Story and Shrek. The I/O bandwidth, disk space, and memory requirements are nuts. A company actually developed a box with dedicated processors for ray tracing at insane (read: movie) resolutions. It costs thousands of dollars. It will be quite some time before you can plug this into your PC or Mac.
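To put rough numbers on the render-farm point, here is a quick back-of-envelope sketch in Python. Every figure in it is an assumption picked for round numbers (frame rate, running length, minutes per frame), not actual studio data; the only point is how the totals scale.

# Back-of-envelope sketch: why film rendering needs a farm.
# All figures below are assumptions chosen for illustration, not studio data.
fps = 24                      # film frame rate
minutes = 90                  # assumed running length
frames = fps * 60 * minutes   # total frames to render

cpu_minutes_per_frame = 30    # assumed average ("minutes to hours" per frame)
total_cpu_hours = frames * cpu_minutes_per_frame / 60

print(f"{frames} frames, about {total_cpu_hours:,.0f} CPU-hours total")
for machines in (1, 100, 1000):
    days = total_cpu_hours / machines / 24
    print(f"{machines:>5} machine(s): roughly {days:,.1f} days of wall-clock time")

Even with generous assumptions a single box needs years, which is exactly why the work gets split frame-by-frame across hundreds of machines.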
–JM
How come only ATI and Matrox really care about 2D acceleration and visual quality? Good color correction and video playback acceleration are nice and fun, too! 3D is not EVERYTHING!!!
I’ve been hearing rumors that Matrox is working on a killer 3D card. Anybody else hear anything like that? Is it true? Any details anywhere?
Notice it says they dumbed it down a bit.
They said that last year they dumbed down Final Fantasy and rendered a scene at 14 to 15 fps; this year on the GF4 they can render that same scene at a full 30 fps, and if they un-dumbed it down (for lack of a better term) it would run at 14 to 15 fps on the GF4.
And of course, you forget that most of the work for rendering is done on the GPU, not the CPU.
Final Fantasy is easier to render, as they don’t have to worry about that pesky storyline that all the other CG movies included but FF forgot somewhere along the way.
Hmm… real-time, movie-quality video rendering by next year is unfeasible, huh? You think your CPU is faster at rendering 3D images than your graphics card is. I suggest you go and look up the specs on a run-of-the-mill five-year-old TI DSP chip. Any of them will probably do. Check out how many FLOPS it can do. Then check out your P4. I think you might be mildly surprised. A CPU is designed to do everything, so many performance compromises have to be made. When a processor only has to do one thing, it is MUCH easier to optimize. His goal might be possible. Perhaps it will take a little longer, perhaps not.
Anybody can build a CG-quality video card based on FPGAs and/or DSPs.
The real problem is not even the cost… it’s getting developers to use it.
CPUman and Randal Clark,
The ray tracing used in movies is much more advanced than the rendering used for games. There is no way that kind of rendering can be done in real time anytime soon.
With ray tracing you have to calculate optics, motion blur, depth of field, reflections, basically everything that you can see in the real world. Such things are just faked in games; heck, even shadows are mostly faked in games to gain performance. Not to mention that the scenes used in movies are so large and complex that you probably wouldn’t be able to fit them into your RAM.
Yes, the GPU is more optimized for rendering graphics than the CPU, but the techniques used in real ray tracing are so different from what’s used in hardware 3D acceleration.
I don’t doubt that the quality of hardware-accelerated 3D graphics will improve a lot over the years, but it will take ages until it reaches the quality of real ray tracing.
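To make the gap concrete, here is a toy ray-tracing sketch in Python: one sphere, one point light, one primary ray per pixel plus a lighting test at each hit. Everything in it (the scene, the image size, the single light) is a made-up assumption for illustration only; a production tracer layers reflections, refraction, motion blur, depth of field and enormous scene databases on top of exactly this per-pixel loop, millions of times per frame.

# Toy ray tracer: one sphere, one point light, ASCII output.
# Purely illustrative; real film renderers add many bounces, motion blur,
# depth of field and huge scene databases on top of this per-pixel work.
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def norm(v):
    length = math.sqrt(dot(v, v))
    return (v[0]/length, v[1]/length, v[2]/length)

def hit_sphere(origin, direction, center, radius):
    """Distance along a unit-length ray to the nearest hit, or None."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # the 'a' term is 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
light = (2.0, 2.0, 0.0)
size = 48

for y in range(size):
    row = ""
    for x in range(size):
        # One primary ray through the middle of this pixel.
        u = (x + 0.5) / size * 2 - 1
        v = 1 - (y + 0.5) / size * 2
        ray = norm((u, v, -1.0))
        t = hit_sphere((0.0, 0.0, 0.0), ray, sphere_center, sphere_radius)
        if t is None:
            row += " "
            continue
        # Lighting test at the hit point. With a single convex sphere,
        # "does the surface face the light?" stands in for a shadow ray;
        # a full tracer would trace toward every light to test occlusion,
        # then recurse for reflections and refractions.
        point = (ray[0] * t, ray[1] * t, ray[2] * t)
        normal = norm(sub(point, sphere_center))
        to_light = norm(sub(light, point))
        row += "#" if dot(normal, to_light) > 0 else "."
    print(row)

A rasterizing GPU never shoots those rays at all; it projects triangles and typically looks shadows up from maps, which is why it is so much faster and so much more approximate.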
Practically worshipped the ground they walked on…
And then they pulled a GeForce4 MX stunt on the public. Oh, getting in bed with Microsoft has corrupted them too much for my tastes.
It was bad enough that they had a bazillion variations of the same hardware, so much so that you had trouble knowing which was “dumbed down” hardware and which wasn’t. It all started back with the TNT2 M64. Baaad!
Well, first off, you will only hear wild guesses: Matrox is not listed on the stock market, hence they need not tell anybody what they are up to. What they have said up until now is this:
Although everybody is hyped up about the GF3/4 etc., that is a rather marginal piece of the market. MOST graphics hardware is sold in large quantities to OEMs, and that is where Matrox has a comfortable position.
It is not even certain whether it is desirable at all to take part in the 3D race. You don’t simply kick out a new GPU; it is pretty expensive. As long as they don’t need to enter high-end 3D, they won’t. They have always said that and, so far, have acted accordingly.
It is a common misconception that all of the CG movies are rendered using ray tracing. Most of these movies (incl. Final Fantasy and all the Pixar ones) are rendered with Pixar’s PhotoRealistic RenderMan (PRMan). This renderer does not support ray tracing, at least not until v.11 comes out, which will happen in June and will support ray tracing and global illumination for the first time. So for things like shadows, reflections and refractions this renderer uses more traditional approaches like shadow and environment maps, which are available on today’s hardware. Artists fake global illumination by placing lights around the scene. The OpenGL 2.0 shading language is nearly as powerful as the RenderMan shading language. Future hardware will support high FP precision; very soon we will have 64-bit color. Sure, there are some things that cannot be done without ray tracing, like caustics and uber-realistic refractions, but these are not used in many CG movies anyway. So movie-quality realtime animation may come sooner than most of you think.
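For anyone who hasn’t run into the shadow-map trick mentioned above, here is a stripped-down sketch of the idea in Python. The one-dimensional “map” and all the numbers in it are invented purely for illustration; a real renderer’s first pass rasterizes a full depth image from the light’s point of view.

# Shadow-map idea in miniature: store the nearest depth the light "sees" in
# each direction (pass 1), then shade a point by comparing its own distance
# to the light against that stored depth (pass 2). No rays are traced.
# The 1-D map and every number below are made up purely for illustration.
shadow_map = {0: 2.0, 1: 5.0, 2: 3.5}   # direction index -> nearest depth seen

def lit(direction_index, distance_to_light, bias=0.05):
    """A point is lit if nothing closer to the light blocks its direction."""
    return distance_to_light <= shadow_map[direction_index] + bias

# Points to shade: (light-space direction they fall into, distance to the light)
points = [(0, 1.9), (0, 4.0), (1, 5.0), (2, 6.0)]
for idx, dist in points:
    print(f"direction {idx}, distance {dist}: {'lit' if lit(idx, dist) else 'in shadow'}")

The bias term is the classic fudge factor that keeps surfaces from shadowing themselves, and it is part of why shadow maps count as “faking it” next to tracing an actual occlusion ray.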
How can you complain that there are too many variations of nVidia hardware? Do you know how many model types an average car has? Consumers seem to handle that fine. It takes about ten minutes to go to SharkyExtreme or AnandTech and find out all the specs of a card. If you can’t spend that much time before plunking down two or three hundred bucks on a graphics card, then you deserve to buy the wrong product by accident.
BTW, some people appreciate the “dumbed down” product. It allows them to get great performance without spending $400 on a top-of-the-line model. nVidia has to pay its development teams, after all; otherwise they couldn’t keep up the pace of development they have been sustaining. If they have to sell their top-of-the-line cards at $400 to do that, then fine. Just be glad that they make a dumbed-down model instead of forcing everyone who can’t afford the high-end stuff to buy last year’s products.
“A company actually developed a box with dedicated processors for ray tracing at insane (read: movie) resolutions. It costs thousands of dollars. It will be quite some time before you can plug this into your PC or Mac”
You’re partially correct. You’re talking about Advanced Rendering Technology and their AR350 ray-tracing processor. They make stand-alone boxes called the RenderDrive with arrays of these chips. Several months ago they released a PCI card called Pure that has a few of them. You’re right about the cost, though.
I see someone else already mentioned RenderMan, but there’s another point to make. What’s important for film isn’t resolution, it’s color depth. Pixar’s “Toy Story” was rendered at 1536×922 in 48-bit color. (Read http://www.sun.com/951201/cover/cover.html for their blurb on this.)
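Running those cited figures through, just to see the data volume involved (the resolution and color depth come from that page; the frame count below is my own assumption):

# Per-frame size from the cited figures: 1536 x 922 pixels at 48-bit color.
# The running length / frame count is an assumption, not from the article.
width, height, bits_per_pixel = 1536, 922, 48
frame_bytes = width * height * bits_per_pixel // 8
print(f"one frame: {frame_bytes / 2**20:.1f} MiB")      # about 8.1 MiB

frames = 80 * 60 * 24                                    # assumed ~80 min at 24 fps
print(f"uncompressed film: {frames * frame_bytes / 2**30:.0f} GiB")

Call it close to a terabyte of final pixels alone, before any textures, geometry or intermediate passes.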
Realistically, even if “Toy Story on the desktop in under a year” is optimistic, being able to do it in 18-24 months certainly isn’t. Being able to use that technology is another issue. How do you put power like that into a game in a way that really shows it off? Another first-person shooter, with blood from wounds rendered by a fluid particle simulator, realistic down to the highlights on each droplet? Woo, hold us back. Getting the technology on the desktop is only a first step.
I remember reading that the latest fastest supercomputer (the Japanese NEC vector-based one) could ray-trace Toy Story in 32 hours. Of course they haven’t tried (AFAIK), so this figure is pretty arbitrary, but I highly doubt that Nvidia can beat this on one graphics card. They are almost certainly talking about rendering it in a different way, and probably not with anywhere near as much detail, though if you’re only going off a tiny computer screen, not a huge digital projector, maybe it will still be good enough.