On Monday at CES 2025, Nvidia unveiled a desktop computer called Project DIGITS. The machine uses Nvidia’s latest “Blackwell” AI chip and will cost $3,000. It contains a new central processor, or CPU, which Nvidia and MediaTek worked to create.
Responding to an analyst’s question during an investor presentation, Huang said Nvidia tapped MediaTek to co-design an energy-efficient CPU that could be sold more widely.
“Now they could provide that to us, and they could keep that for themselves and serve the market. And so it was a great win-win,” Huang said.
Previously, Reuters reported that Nvidia was working on a CPU for personal computers to challenge the consumer and business computer market dominance of Intel, Advanced Micro Devices and Qualcomm.
↫ Stephen Nellis at Reuters
I’ve long wondered why NVIDIA wasn’t entering the general purpose processor market in a more substantial way than it did a few years ago with the Tegra, especially now that ARM has cemented itself as an architecture choice for more than just mobile devices. Much like Intel, AMD, and now Qualcomm, NVIDIA could easily deliver the whole package to laptop, tablet, and desktop makers: processor, chipset, and GPU, glued together, of course, with special NVIDIA magic that the other companies opting to use NVIDIA GPUs won’t get.
There’s a lot of money to be made there, and it’s the move that could help NVIDIA survive the inevitable crash of the “AI” wave it’s currently riding, which has pushed the company to become one of the most valuable companies in the world. I’m also sure OEMs would love nothing more than to have more than just Qualcomm to choose from for ARM laptops and desktops, if only to aid in bringing costs down through competition, and to potentially offer ARM devices with the same kind of powerful GPUs currently mostly reserved for x86 machines.
I’m personally always for more competition, but this time with the asterisk that NVIDIA really doesn’t need to get any bigger than it already is. The company has a long history of screwing over consumers, and I doubt that would change if they also conquered a chunky slice of the general purpose processor market.
All the crappiness of non-standardized bootloaders from the smartphone space, now on your desktop and laptop. Experience… ARM PC™
Server and desktop tier ARM hardware uses UEFI if it wants to comply with ARM’s own specifications.
Doesn’t seem to apply to laptops though. Neither Snapdragon nor Apple Silicon laptops “just work” with a generic Linux ARM image.
Not sure they are targeting much of the “desktop” market at US$3,000 apiece, other than the space it takes up on some dev’s desk!
I get it that if you are developing with a rack full of Jetsons, it might be of interest. But never as the alternative consumer desktop that some are trying to spin it as.
NVIDIA’s CEO made a comment about the 5090’s $2,000 price tag being affordable because desktop computers cost $10k anyway. Unfortunately I didn’t think to save the link at the time, but his swing was a hard miss, and it shows just how disconnected he is from consumers. Not that it matters much, since normal consumers aren’t where the big money is.
>”it’s the move that could help NVIDIA survive the inevitable crash of the “AI” wave it’s currently riding”
No company is going to make remotely enough money selling desktop CPUs in a rapidly declining desktop market to offset the tsunami of losses that is going to overwhelm the tech industry when the AI bubble pops.
Thom Holwerda,
It’s clear that there are applications of AI that will fail. And there will always be AI gizmos that we can all point to and laugh at together. But at the same time I worry that way too many people and employees are underestimating AI’s potential to render human employees redundant in the coming years. They are not seeing AI the way employers do. Employers are looking at the exorbitantly high costs of labor, including all the sick time, family leave, health insurance, social security taxes, and so on, and these employers are very eager to cut costs to increase their profits, which is all shareholders care about. AI has room to improve, but once jobs go to AI, humans looking for jobs are going to find it extremely difficult to compete against the AI. I don’t think employees are ready to accept the harsh reality that employers don’t really care about keeping them on when AI can do the job cheaper.
The real problem with AI as implemented today (LLMs) is that it makes mistakes in a way that cannot be diagnosed and debugged (in the way rules-engines can) and cannot be held accountable like humans can:
https://yro.slashdot.org/story/23/05/30/2049253/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented
Better keep those humans who can be held accountable employed if you want to avoid fines and other consequences.
kurkosdr
Well, I don’t think that using a neural network as a source of legal data demonstrates the technology at its best. Storing factual information in an LLM is neither efficient nor an LLM’s strong point. Where LLMs can do well is interacting with humans and processing data. LLMs trained on internet sources like ChatGPT were never going to be optimal for the applications we’re talking about. However, LLMs developed to be an interface for a legal database (or any other database) could prove to be extremely useful, and they could provide verifiable citations.
I also expect LLMs to become much more specialized, like an assembly line of AI specialists that work together, each trained to optimize some task. These AIs will prove valuable because they can analyze millions or billions of case permutations to measure the statistical outcomes, even optimizing down to the judge and possibly even the jury members. It’s almost scary what can be optimized for when you have enough computing power, but regardless, both defendants and prosecutors will find that AI can not only be cheaper than human labor, but may ultimately be more effective too, because it can process far more data than any human could. I’m not saying we’re there today, but I do think law firms that don’t employ AI technology will become increasingly disadvantaged over time.
I’m afraid you are exactly right about this. Just look at what happened to IS/IT/CS jobs long before AI was a thing: It all got outsourced to workers in countries where the companies could get away with lower salaries, no healthcare, no taxes, etc. On the one hand I’m glad there are jobs at all for people in those countries, but it’s not out of any altruism on the part of the corporate world, it’s pure greed driving such decisions. And of course, now AI is poised to take those jobs from the lower paid overseas workers, so they will suffer the effects as well.
I know my job is secure because AI can’t do the physical and social aspects of being a sysadmin for a small business, in fact I could probably leverage AI for the more mundane parts of my job that are too much for a script but not important enough for another human hire. But give it time and even hands-on jobs might be at stake.
Morgan,
Yeah, very similar dynamics are at play with AI. Employees are a means to an end for corporations. Once they find a way to replace employees with something cheaper, they will. And even though some people will make the argument that offshoring yielded worse results, which is a view I sympathize with, it didn’t matter. The quality argument didn’t put a stop to the widescale offshoring. And those jobs that have been offshored are extremely difficult to bring back once they are lost. If we do nothing but deny AI’s potential to displace employees, then by the time we realize it’s happening it will be too late to do anything about it; we’ll be past the tipping point.
I haven’t seen jobs in my field go to AI yet, knock on wood… however quite a lot, perhaps even a majority, have been offshored.
So, it’s as overpriced as NVIDIA and as crappy as MediaTek. Great combo.
Fairly sure I made a comment recently that they only needed to add a raspberry pi. They listened fast :p
Windows on ARM is so awful.