“We’ve seen IBM and ARM team up before, but this week both companies announced a new joint initiative to develop 14nm chip processing technology. That’s significantly smaller than the 20nm SoC technology ARM hopes to create in partnership with TSMC, and makes the company’s previous work with IBM on 32nm semiconductors look like a cake walk.”
To offset the huge overhead of an x86 instruction decoder, Intel's only real chance against ARM was its manufacturing lead. I have to hand it to ARM: in the past few years they've made important, decisive business decisions. They shortened their release cycle, enabled today's dominant smartphones, and now have an excellent fab partner.
Look at
http://appdeveloper.intel.com/en-us/
Intel has introduced a developer program specifically for its Atom processors. The program has a million-dollar fund to pay^H^H^Hpersuade developers to develop for Atom.
The consumer side looks like this:
http://www.appup.com/applications/index
pica
PS I prefer ARM
Remember the IBM logo that was 17 nanometers long?
… 35 xenon atoms on nickel.
Now you know why IBM has invested so much in nanotechnology research.
Less power, more power
… what exactly?
I’m curious to know what applications such small sizes enable that the current technologies don’t.
I hope such smaller chips will provide a better experience than the ludicrous two hours of battery life under use we get out of smartphones these days.
I also wonder why the "limit" on minimum feature size that I heard about almost two decades ago still hasn't been reached. It's like crude oil: in the '80s, people said there wasn't much left, yet the latest projections I've heard still say there's enough for more than two decades. So, clueless experts or miraculous discoveries?
The latter.
Advances in physics and CPU design have made it possible to work around previously perceived theoretical limits.
However, that doesn’t mean that there isn’t a limit. Eventually we will have to switch to an entirely new type of CPU. But for the immediate future, there’s still life left in silicon-based transistor processors.
If memory serves me well, the main problems people used to point to were:
– leakage;
– heat dissipation;
– quantum effects.
The first two were tackled by new materials with better insulating properties or lower heat output. Processors today also tend to operate at lower switching frequencies.
The last problem is not solved yet. It comes down to statistics and quantum behavior: we want an operation on the same input to produce the same result every time. As it is a "physical barrier" (even though it was greatly exaggerated at the time, I guess), we may need a new approach or computing model, or simply learn to live with it (as we do with heat engines now).
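To make that concrete, here's a toy sketch in Python (my own illustration, with made-up error rates, not figures from the thread): if each switching event has a small independent chance of going wrong, the odds of a whole cycle coming out right collapse quickly once the per-event error rate grows.

# Toy model (illustrative numbers only): probability that all n independent
# switching events in a cycle succeed, given a per-event error rate p.
def p_all_correct(p_error, n_events):
    return (1.0 - p_error) ** n_events

print(p_all_correct(1e-15, 10**9))  # ~0.999999: effectively reliable
print(p_all_correct(1e-9, 10**9))   # ~0.37: most cycles contain an error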
Well, it's cool to make transistors that are only a few dozen atoms wide, and I find it fun to watch the last years of Moore's law before it finally hits the wall. Apart from that, same thing as before…
There will be a slight increase in processing power for some time, which sadly will quickly be offset by a big increase in software bloat.
Probably not. Hardware evolves, but the biggest killer in terms of power consumption is software and engineering decisions. When phones have 4″ screens with a capacitive touch layer and run Java and .NET software on top of feature-bloated kernels and computationally heavy UI layers, there's not much that power-efficient hardware and better batteries can do for you.
As someone else said, we’re reaching a limit, but slower than we initially thought.
After that, we'll be able to make even tinier transistors through a qualitative change in chip processing technology, such as carbon nanotubes or even single-molecule transistors, but at some point we'll reach the quantum limit, where our transistors won't even be able to compute things with a good probability of being right.
Then there are several paths:
-> Quantum computers (a very, very long path; recently, an optical quantum computer managed to factor 15 into 3×5, and there was much rejoicing).
-> More CPU cores, and bigger chips since we can't make them smaller, to the point where we're forced to ditch the concept of central RAM altogether and have several independent mini-computers inside our computer, with distributed operating systems taking over the world.
-> Optimizing some operations even further by exploiting physical phenomena we don't usually use. As an example, it's possible to compute large-scale 2D Fourier transforms at the speed of light (literally) using analog optical components; see the sketch after this list.
-> Writing lean and efficient software… Okay, I’m dreaming there.
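For the curious, here is roughly the operation such an optical setup performs, done digitally with NumPy for comparison. The array size and contents are my own stand-ins; a lens-based (4f) arrangement would produce the same transform in a single analog pass.

import numpy as np

# Digital version of the transform an optical setup computes in one pass.
image = np.random.rand(4096, 4096)   # stand-in for a coherent input image
spectrum = np.fft.fft2(image)        # O(N^2 log N) on a CPU; "instant" in optics
print(spectrum.shape)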
We forgot to add that processors also operate at lower voltages now, which helps reduce the heat generated.
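A quick sketch of why voltage matters so much (the capacitance and frequency below are illustrative assumptions, not measured figures): dynamic switching power in CMOS scales roughly with C·V²·f, so the voltage term counts twice.

# Classic CMOS dynamic-power approximation: P ≈ C * V^2 * f.
# All numbers below are hypothetical, for illustration only.
def dynamic_power(c_farads, volts, hz):
    return c_farads * volts**2 * hz

p_old = dynamic_power(1e-9, 1.2, 2e9)  # hypothetical part at 1.2 V, 2 GHz
p_new = dynamic_power(1e-9, 0.9, 2e9)  # same chip dropped to 0.9 V
print(f"{p_new / p_old:.2f}x the power")  # ~0.56x: a 25% voltage cut nearly halves power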
Nice post; I'd like to mod you up, but unfortunately that silly "cannot give feedback once you post" rule still applies here. Thorn, could you please change that? If you want a safeguard against abuse, use a "one moderation per post per user" rule. I think everyone would be more than happy with that.
Cost.
Shrinking the die can cut per-chip cost by more than half: you get more dies per wafer, which means more processors per hour of fab time.
In addition to performance, power, etc.
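A back-of-the-envelope sketch of the dies-per-wafer effect (the wafer and die sizes below are my own illustrative assumptions, using a common first-order estimate with an edge-loss correction), which is consistent with the "more than half" cost claim above:

import math

# First-order dies-per-wafer estimate: pi*d^2/(4A) - pi*d/sqrt(2A),
# where d is the wafer diameter and A the die area (edge-loss correction).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    return int(math.pi * wafer_diameter_mm**2 / (4 * die_area_mm2)
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# An ideal linear shrink scales die area by the square of the feature-size
# ratio, so the same design occupies roughly half the area.
print(dies_per_wafer(300, 100))  # hypothetical 100 mm^2 die: ~640 dies
print(dies_per_wafer(300, 50))   # same design at half the area: ~1319 dies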