“Nvidia is making a CPU, but the only questions are what kind of CPU, and how the heck is it going to do it. Making an X86 based CPU is not a trivial venture, and there are enough problems to make even a company with the engineering bandwidth of Nvidia cringe. Those problems are mainly called lawyers.”
x86 is an obsolete technology. Today's processors could embed a simple RISC core, a GPU and a vector unit, and would run fine, cooler and cheaper. There's room for this kind of processor. The future is not only Windows, so there's no need to run only x86. The console industry would be a good customer.
But is the console industry Nvidia’s target?
“But is the console industry Nvidia’s target?”
They make GPUs, so it would make sense. There are two major and a couple of minor x86 providers. I don't think it's really wise to start in the x86 field. I would rather create a cheap & fast processor and target the embedded and console markets.
If you look at it that way, it would make sense indeed.
But which: yet another console, or trying to have their product used in consoles by existing console brands?
But still… having a good reputation for GPUs, they are known to lots of people. So, they would maybe have a chance.
I don’t think so. I think they’re going for the “3d rendering appliance” direction. Make cheap boxen you plug into your 3d workstation for near-realtime, final render quality previews. It’s the dream of 3d artists and studios everywhere.
It's a gold mine if they do it right, and a sand trap if they don't.
Mmmm… they won’t make a CPU for less than a million units.
"But is the console industry Nvidia's target?"
Maybe, but then again, these days you can sometimes blur the difference between PCs and consoles for general home use, if you manage to put some OS on it. In the future it should not matter what architecture you have under the hood – at least for average home use, not talking developers here. New CPU, new architecture: it doesn't matter, because one day or another Linux will run on it. So Nvidia, please surprise us.
"x86 is an obsolete technology. Today's processors could embed a simple RISC core, a GPU and a vector unit, and would run fine, cooler and cheaper. There's room for this kind of processor. The future is not only Windows, so there's no need to run only x86. The console industry would be a good customer."
Nvidia's problem is AMD and ATI merging: they will be making a CPU + GPU hybrid. Intel is going to do the same.
This essentially removes Nvidia's market; they need to react by building the same thing – or something better.
With x86 being their market, the chip by definition needs to be capable of running x86 binaries. With binary translation technology they don't need to build a processor as complex as a full x86 in order to do this. It also doesn't need to be as fast as existing processors on single-threaded code, as everyone is doing multicore now and "minicores" are next.
By the time this chip appears you can expect to see 8+ core CPUs on the market, so if Nvidia can do a series of simple cores, a binary translator and a bunch of shaders, they could have a decent competitor…
If NVIDIA embraces "Just Say No to Wintel" they could make a MIPS64 desktop with NVIDIA graphics running everything except Win$$$. The dream machine for every developer. And do not forget PA Semi.
There is nothing obsolete about x86. Some of the most high-tech CPU cores in existence are x86 chips. Core 2 is right now the most advanced general-purpose CPU core on the market, and it’ll still run all your old DOS apps.
The real question is: why on earth would a company that got rich making GPUs for the Windows/x86 market suddenly decide to ignore it? In NVIDIA’s market, there really is nothing but x86, and there really is no reason for there to be.
There is nothing obsolete about x86.
Yes, if you look at the market. But if you look at the architecture itself, that’s a totally different question.
It has always been promised that non-x86 CPUs would become much faster than x86. The contrary has happened. The x86 architecture has not been a serious performance bottleneck and is here to stay.
On the contrary, so-called "clean" architectures like MIPS also lack power (no barrel shifter, index operands, rotate instructions, …). These things are exactly the things that keep x86 in the race.
> MIPS also lack in power, (no barrel shifter, index operands, rotate instructions, …).
The first MIPS ISA was a bit lacking, yes, but since then there have been quite a few additions (the same thing happened more or less to each RISC ISA); there's even a MIPS16 instruction set now!
Plus not every RISC is the same: the ARM did have a barrel shifter from the beginning.
As for the x86 being here to stay, yes, I agree: installed base and software compatibility trump everything else. I even expect x86 to become dominant in the server space and well used in the embedded space if Intel realizes what it has planned: an x86 with 1 W power usage.
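As a footnote on what the missing rotate costs in practice: on an ISA without a native rotate instruction, the compiler has to synthesize it from two shifts and an OR, whereas x86 compiles the same expression down to a single ROL. A minimal C sketch (the function name is just for illustration):

```c
#include <stdint.h>

/* Rotate a 32-bit value left by n bits.
 * x86 compiles this idiom to a single ROL instruction; an ISA
 * without a rotate has to emit two shifts and an OR instead. */
uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;                            /* keep the shift count in 0..31 */
    return (x << n) | (x >> ((32 - n) & 31));
}
```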
Yes, it's true it's the most advanced in the consumer market, but the world isn't only a consumer paradise 😉
And "general purpose" isn't quite true either: there are many high-end CPUs out there in the workstation market, and they would be perfect for consumers too.
Intel is starving the GPU guys out… after all, CPUs are about as fast as they need to be for a while. Intel's been buying up graphics and increasingly going its own way. Note that ATI, Intel's preferred partner, even got bitten after signing up for "Viiv": Intel was supposed to co-market with ATI graphics, then started shipping lots of GMA950. That's why they sold to AMD. Nvidia's out in the cold, and Intel is trying to steal their mainstream business; worse, shipping lots of kit WITHOUT graphics-capable upgrade slots!!! That will really wreck the game market in 2-3 years when no Dell owners can play them!!!
X86 is an obsolete technology.
Yes. That’s why virtually every desktop processor sold today is AMD64-compatible.
(Although whether or not many of them will run in 64-bit mode during Vista’s lifetime is a matter for debate…)
A more interesting option would be a RISC chip, and dynamic translation of x86 instructions for Microsoft Windows, while everyone else uses the native ISA…
That's the way Intel's recent x86 chips already work (where "recent" means the Pentium 4, and in fact goes back to the Pentium Pro).
The x86 instruction set acts as a sort of compressed intermediate format, reducing bus bandwidth, but as soon as the instructions reach the chip they are converted to micro-ops (usually 3 or 4 of them per x86 instruction) and executed from there.
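To make the micro-op idea concrete, here is a toy C sketch of cracking one read-modify-write x86 add into load/add/store micro-ops. Every structure and name here is invented for illustration; a real decoder is hardware and vastly more involved.

```c
#include <stdio.h>

/* Toy micro-op representation, invented for illustration. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind    kind;
    const char *dst, *src;
} uop;

/* Crack "add [mem], reg" into three micro-ops:
 * load the memory operand into a temporary, add, store it back. */
static int crack_add_mem_reg(const char *mem, const char *reg, uop out[3])
{
    out[0] = (uop){ UOP_LOAD,  "tmp", mem   };
    out[1] = (uop){ UOP_ADD,   "tmp", reg   };
    out[2] = (uop){ UOP_STORE, mem,   "tmp" };
    return 3;
}

int main(void)
{
    static const char *names[] = { "load", "add", "store" };
    uop uops[3];
    int n = crack_add_mem_reg("[ebx+8]", "eax", uops);
    for (int i = 0; i < n; i++)
        printf("%-5s %s, %s\n", names[uops[i].kind], uops[i].dst, uops[i].src);
    return 0;
}
```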
An entire PC held within a single chip?
I'm surprised we haven't already seen something similar from Intel; I doubt it will be long until we do.
Been there, done that. Ever hear of the "MediaGX" CPU, by Cyrix I think? It had video/audio functions and a few other things, like a PCI controller I think, built into a Socket 7 style CPU.
Cyrix’s MediaGX line still exists; it was eventually purchased by AMD, and renamed to “Geode.”
see: http://www.amd.com/us-en/ConnectivitySolutions/ProductInformation/0…
Oh, cool. I did not know that's what the Geode was. Thanks for the info.
Hmm…current video processors have become more and more like specialized massively parallel processors.
I honestly wouldn't be surprised to see Nvidia come up with something like the Cell processor, where the SPEs are actual programmable shader-type units, of course more powerful (or more specialized?) than the SPEs on Cell.
Ahh speculation is fun….
Seems a lot more likely to me that they’d (for instance) make a chip geared towards consoles, or high-end compact devices (such as phones, portable consoles, palmtops).
If for a portable, licensing an ARM core would make sense – Windows CE (and other handheld-oriented OSes) already runs happily on that.
Side note: imagine accelerated 3D on your iPod.
How does Via do it? AFAIK only IBM, AMD and National Semi have the license.
>How does Via do it? AFAIK only IBM, AMD and National Semi have the license.
VIA bought Cyrix from National Semi, and the x86 license came with it.
>How does Via do it? AFAIK only IBM, AMD and National Semi have the license.
Note that SiS has an x86 license too, i.e. refer to the RISE mP6.
To get the whole story, one should read in the following order:
http://www.theinquirer.net/default.aspx?article=33796
http://www.theinquirer.net/default.aspx?article=34461
http://www.theinquirer.net/default.aspx?article=35216
http://www.theinquirer.net/default.aspx?article=35280
Then the linked article.
At least, that’s what I did after traversing the in-article links.
IBM doesn't have an x86 license. The Blue Lightning chips were the last x86 CPUs IBM manufactured. IIRC they didn't renew because, in their plans, PPC was going to be the future.
But just why the hell is Nvidia so doomed? Why do they "have" to do something so desperate and so different from what they do now? Why aren't SiS, ALi and/or VIA also doomed?
They already design x86 mainboards and integrate their GPUs [nForce].
So AMD bought ATI – now they do the same thing nVidia does . . .
So Intel is making better GPU’s . . .
I fail to see why the articles seem to point to nVidia as being boxed into a corner. Sure, with Intel stepping up their GPUs and the ATI buyout I'm sure sales will go down a notch, but it certainly doesn't mean an automagic end for nVidia. Am I missing something?
But in light of all that, they don’t want to lose money so it would make sense that they may look to expand . . .
So, if there is any truth to the rumors of nVidia producing a CPU, I'd say it's IBM and their console clients (Sony, Microsoft, Nintendo) that should be worried. And as for everyone else: get ready for the best damn gaming system ever! I doubt they would try to take on Intel AND AMD – that'd be suicide. Just my 2 bits.
Long term the dedicated GPU market is probably a dead end.
GPUs are becoming more general purpose, CPUs have specialized GPU-like extensions. Eventually more and more functionality will be grouped together into the core processor.
Added to this are "Moore's law" limitations whereby performance won't always double at the same rate; hence video cards will get cheaper, margins will be squeezed and dedicated companies will have problems.
Given the current power/heat requirements of next/current generation video cards I’d probably say that a rethink isn’t very far away.
That said, this CPU could have a number of uses. It could be part of a more generic graphics/physics/scientific coprocessor for rendering, science, consoles, embedded, PC gaming, supercomputing, render farms, etc.
I would certainly think along this line. While the dedicated video card is a dead end, the IP involved, and future R&D, are not, and nVidia is already a fabless company. That transition should be fairly smooth.
However, now we’ve got the PhysX chip, a special video processor, and fancy shader units, with those sometimes doubling for non-shader games.
So, why not have very complex shaders and either make a real CPU out of them, or add more general-use stuff in the pipeline (some extra logic units/pipelines, and a management front-end to convert old pixel-pushing stuff, various previous instruction sets, the video processor instructions, etc. into code for these new fancy shaders), and open up a generic instruction set, so it's a CPU that happens to excel at graphics applications?
There have already been uses of video cards for math and logic purposes, where the latency-over-throughput design of our CPUs is not needed so much as just lots of number crunching over arrays. If this were enabled, at a level that did not require special IDEs, drivers, or anything of the sort, I'm sure snobbish designers, eggheaded professors, and raggedy hackers alike would show great innovation.
Meanwhile, this would open up the doors for letting the parts of your main CPU be more memory-managing and logic, letting heavy maths be taken care of by the super-number-cruncher that also gives you some eye candy.
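For a feel of the kind of array crunching meant here, a minimal C sketch: SAXPY (y = a*x + y), where every element is independent, which is exactly the sort of loop that maps onto many parallel shader-style ALUs rather than one latency-optimized core. (The function is invented for illustration.)

```c
#include <stddef.h>

/* SAXPY: y[i] = a * x[i] + y[i]. Every iteration is independent
 * of the others, so the work could be spread across thousands of
 * shader-style ALUs instead of running serially on one core. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```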
But, of course, only time will tell, and we could be way the hell off. They might be aiming at a chip for future set-top media players or something.
Because in the future the mid-range PC will have the CPU and the GPU integrated into the same chip.
So, as NVidia doesn't produce a CPU, there is a risk that their product line becomes split, with NVidia selling high-end GPUs and chipsets but nothing mid-range.
With no way to sell today's high-end GPU as tomorrow's mid-range GPU, the pay-off on the research investment is reduced. Being squeezed into the high end is usually not comfortable: see SGI as a good example of what may happen.
That said, it remains to be seen if CPU+GPU in the same chip will sell; this is a big unknown.
who cares what architecture they use?
Also, MS is developing a new CPU for the next Xbox; could there be a connection? Anyway, I hope they use a different architecture from x86. (I like PPC, but MIPS and SPARC are welcome.)
With all that is going on in the industry via mergers and acquisitions, we will all have some difficult choices to make. It seems that the big three (AMD/ATI, Intel, and NVidia) will all be racing to manufacture and market the first CGPU. My prediction is that, whatever CPU architecture NVIDIA chooses, all first-generation CGPUs will be targeted towards mobile computing, cell-phone PDAs, and ITX-type form factor computers. I'm just curious what the price of CGPUs will be, what kind of performance gains merging the CPU and GPU on a single chip will bring, and of course how much $$ it will cost me. I'm just surprised that VIA has not announced a CGPU project. The next great race will be System on a Chip.
Many folks speculating about this seem to think that NVidia needs to have direct binary x86 compatibility.
Personally I don’t think so. What they really need for mass acceptance of their new CPU is Windows, and the ability to run almost all common Windows software and games. That does *not* have to involve an x86 compatible processor in the box. There’s a couple of ways this could be done.
The key is to use something like Apple’s Rosetta – a technology Apple bought in…
NVidia could persuade Microsoft to build a version of Windows for their new processor, and build in Rosetta-like x86 compatibility. This is quite possible, since Windows is portable, and Apple have shown that a compatibility layer can work pretty damn well. Microsoft already produces versions of Windows on 4 different platforms today (x86, PPC, ARM and MIPS) – adding better binary compatibility between them would make sense.
In fact Apple have shown us this approach working twice – first with the transition from 68k to PPC, and more recently from PPC to x86. Both times the transition was largely transparent to users and software compatibility was excellent.
Alternatively NVidia could build computers with their CPU in them and have that computer’s BIOS run a Rosetta-like emulator, so that the CPU appears to be x86 compatible to Windows.
It may not actually be too difficult to persuade MS to produce the required version of Windows… There are versions of Windows out there running on 4 different processor architectures in common use today (x86 for desktop Windows, PPC for the Xbox 360, ARM and MIPS for CE/Mobile).
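For what the core of such a Rosetta-like translator might look like, here is a hedged C sketch; every name and structure is invented for illustration. It shows a dispatch loop that looks up each guest block in a translation cache, translates it on a miss, and then runs the native code.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy dynamic binary translator core, for illustration only:
 * guest (x86) code blocks are translated to native code on first
 * use, cached, and re-executed directly after that. */

#define CACHE_SIZE 4096

typedef uint32_t guest_addr;

/* A translated block: native code that executes the guest block
 * and returns the guest address of the next block to run. */
typedef guest_addr (*native_block)(void);

/* Provided by the (hypothetical) translator back end. */
extern native_block translate_block(guest_addr pc);

static native_block cache[CACHE_SIZE];   /* direct-mapped code cache */
static guest_addr   tags[CACHE_SIZE];

static native_block lookup_or_translate(guest_addr pc)
{
    size_t slot = pc % CACHE_SIZE;
    if (cache[slot] == NULL || tags[slot] != pc) {
        cache[slot] = translate_block(pc);  /* miss: translate the block */
        tags[slot]  = pc;
    }
    return cache[slot];
}

void run_guest(guest_addr entry)
{
    guest_addr pc = entry;
    for (;;)                                /* dispatch loop */
        pc = lookup_or_translate(pc)();
}
```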
Microsoft already produces versions of Windows on 4 different platforms today (x86, PPC, ARM and MIPS) – adding better binary compatibility between them would make sense.
I hate to break the news to you but Windows XP doesn’t support any of those processors (other than x86) today. NT was the only version that supported those architectures.
It may not actually be too difficult to persuade MS to produce the required version of Windows… There are versions of Windows out there running on 4 different processor architectures in common use today (x86 for desktop Windows, PPC for the Xbox 360, ARM and MIPS for CE/Mobile).
These versions of Windows are another beast altogether and are not binary compatible with XP, so your point is moot. Who wants to run Windows if you can't run all the applications that go with it?
"Microsoft already produces versions of Windows on 4 different platforms today (x86, PPC, ARM and MIPS) – adding better binary compatibility between them would make sense."
“I hate to break the news to you but Windows XP doesn’t support any of those processors (other than x86) today. NT was the only version that supported those architectures.”
No, but a version of Windows NT (same kernel as XP) does run on PPC. They're used as Xbox 360 development kits.
And GP is probably referring to Windows CE in regards to ARM.
I’d wait a bit before I started speculating on this.
BTW, Intel beat out MIPS and the other RISC players not because of architectural superiority but because of money. Ten years ago, the last time I played in general-purpose CPU design, it cost roughly $250 million in R&D to do a new CPU. Very few companies can afford to be in that business. Well, OK, Intel and IBM can afford to be in that business. Everybody else had to cut corners, meaning that they slowly fell behind because they couldn't manage the cash flow necessary to keep competing. Even IBM fell behind.
It would be slightly short of insane for Nvidia to get into the CPU business for desktops or workstations at this point in the market.
Of course, there’s always mobile devices…
On the contrary, so-called "clean" architectures like MIPS also lack power (no barrel shifter, index operands, rotate instructions, …). These things are exactly the things that keep x86 in the race.
Well, Windows is the main reason that x86 is still in the race. MS has never made a wholehearted attempt at portability.
Forget about CISC vs. RISC. That battle is over and, as always in wars, nobody won. On the other hand, x86 could use a good clean-up: if not for the beauty of it, then for the cores/die ratio.
So let’s get rid of Windows first. Then CPU designers will get the freedom they need to design without marketeers shouting: “will it be x86 compatible… will it be x86 compatible…”.
>Well, Windows is the main reason that x86 is still in the race. MS has never made a wholehearted attempt at portability.
The MS Windows NT family is portable enough, i.e. there were MIPS, PPC and Alpha editions.
>So let’s get rid of Windows first.
Highly unlikely, i.e. refer to ReactOS (a FOSS Windows NT 5.x clone).
>Then CPU designers will get the freedom they need to design without marketeers shouting: "will it be x86 compatible… will it be x86 compatible…".
Itanium (aka "Itanic") has IA-32 compatibility.
My guess is that they expect to be bought by Intel; that's why they "have balls".
Or maybe bought out by MS to push ahead their current plans?
Which plans are you talking about? I was thinking about Intel because of the x86 instruction set 🙂
My guess he was saying, “We have a ball grid array”, and was suddenly cut off.
I'll play the dark horse and say Apple will be running on Nvidia CGPUs in a few years.
Forget about x86 already; it is doomed. Even Intel doesn't believe in it anymore. Just look at the Itanium! In fact, I heard a while ago that the next-generation CPU from Intel will be based upon Itanium, and that Intel will be releasing a compiler that can simply translate x86 assembly to the CPU's native architecture. So why couldn't Nvidia make it? It requires MS to release a Windows version for it? Did Apple need MS to release a Windows version to work on their new Macs? No, they released Boot Camp! Oh, you're going to say that's x86 too? Well, just consider that Apple PCs use EFI, and this breaks the conventional x86 platform.

So NVidia could release a new CPU capable of dynamic translation of x86 instructions, using either hardware or software, able to execute the converted code on their new architecture (Intel is already doing this with micro-code anyway). And don't say this can't be done; it has been done. And once you run Windows and install a special NVidia driver for the GPU, you could unleash much more graphics power than now. Imagine having an emulator-hardware mix on top of this. Anyway, games are mostly GPU-intensive, not CPU-intensive, so gamers wouldn't see a lot of difference. In fact, GPU performance will surely be better. The only obstacle would be general x86 computing, but thanks to MS and .NET, it wouldn't take long for that to become just a bad dream. Just remember that .NET IS portable.
I see pretty much of a saviour in this. If NVidia can bring a new architecture to power and make x86 a thing of the past, this could be the beginning of much more powerful processors. I say go Nvidia, kick Intel's butt, and make it regret it wasn't you who bought AMD (shame!).