In dueling announcements, Intel and IBM separately say they have solved a puzzle that has perplexed the semiconductor industry: how to reduce energy loss in microchip transistors as the technology shrinks toward the atomic scale. Each company says it has devised a way to replace problematic but vital materials in the transistors of computer chips, which have begun leaking too much electric current as the circuitry on those chips gets smaller. Technology experts call it the most dramatic overhaul of transistor technology since the 1960s, and one that is crucial to allowing semiconductor companies to keep making ever-smaller devices that are also energy-efficient.
The New York Times was first to break the news:
http://www.nytimes.com/2007/01/27/technology/27chip.html?_r=1&ref=t…
(I think free registration is required; not sure though!?!)
For those who prefer streaming media news, here's one more link:
http://vstream1.vlsiresearch.com/public/lisa_su_061128/lisa_su_inde…
I enjoyed the stream, watched on my 466 MHz Celeron with Vector Linux 5.8 and the MPlayer plugin for SeaMonkey (BTW, it's a *.wmv, Windows Media Video, playing on my Linux PC).
So you might think I couldn't care less about Moore's Law, but no, I like the progress in technology: I get to skip the whole Pentium III, Pentium 4, Viiv, Pentium D, Core 2, and AMD processor lineup and wait for the new 45 nm parts with my trusty Celeron.
Just kidding!
Moore's Law, Moore's Law, Moore's Law… You know, the more I see the oft-quoted law, and how fervently chipmakers are trying to "keep up with it," the more I wish Moore's Law would fail, so that my computer won't be slow-by-half within eighteen months*. Maybe 36 months.
If nothing else, it bugs me that it's called a 'law', since it's less of a law than most theories in physics. But that's the scientist in me talking.
No, I'm not saying progress should stop; I just suspect software efficiency would suddenly become important again if computers didn't gain capacity so quickly.
*Yes, I know Mr. Moore was talking about transistors, not speeds…
"You know, the more I see the oft-quoted law, and how fervently chipmakers are trying to "keep up with it," the more I wish Moore's Law would fail, so that my computer won't be slow-by-half within eighteen months"
Your computer does not get any slower with time. It's as fast as the day you bought it. Whatever programs it used to run back then, it still runs them just as well today, and it will still run them just as well ten years from now.
It's new computers that are constantly getting faster, and personally I think it's wonderful. Computers could never be too fast. Hypothetically, I'd love to be able to download the entire project from worldcommunitygrid.org and crunch it in 2 seconds 😛
"Your computer does not get any slower with time. It's as fast as the day you bought it. Whatever programs it used to run back then, it still runs them just as well today, and it will still run them just as well ten years from now."
That’s true, but as the rest of the world upgrades to the “next big thing”, it can be hard to avoid having to do the same.
I was happy enough with Office 97, which ran fine on my 166 MHz Pentium with 64 MB of RAM. For word processing I was always very happy with Ami Pro, which ran brilliantly on a 486. In fact, a lot of the work I do today could be achieved on my old RISC OS system, with 4 MB of RAM and a 25 MHz CPU. Unfortunately none of them are much use when my colleagues use the latest version of Office and expect me to work on files in its formats.
Try web browsing with 10-year-old software and see how many sites display correctly. Opera is one of the most efficient browsers, but even it needs far more resources today than it did back then. When I first started using Opera I was running it from a single floppy disk, with plenty of space left over for cache and saved web pages. I tried Opera because the 386 my office provided for internet access was painfully slow running the latest version of Netscape…
Of course, a lot of the software around today requires a fairly recent OS, and there's no guarantee that the new OS will run your old software. Once you're forced to upgrade anything, you can end up having to upgrade just about everything. Pretty soon your once high-end system is barely functional and you have to start looking at hardware upgrades as well.
Don’t get me wrong, in a lot of ways things have improved, and for a lot of tasks I really do need the speed provided by modern hardware. I just think it’s a shame how much of that speed seems to be squandered these days.
This is awesome news and a testament to human ingenuity, but I have to say that some part of me was looking forward to transistor tech maxing out and forcing a counter-revolution in the software industry, where small, efficient, well-written code becomes the standard again instead of the over-bloated high-level languages that have come to the forefront over the last decade.
I may be a child of the eighties, but I appreciate the computer science of the '60s more than I do the sad excuse for 'computer science' of the 2000s.
I know what you mean. When I think of what computers like the Amiga could accomplish with 1 or 2 MB of RAM, while nowadays even the simplest programs seem to take tens of megabytes, it makes me sad.
Software systems of yore were small because they punted on correctness. It's easy to do something fast when you do it wrong. Take something very simple like drawing a line. The correct way to draw a line on any finite-resolution device is to filter it during rasterization. Systems back then couldn't afford to do things the correct way, so they punted and did things incorrectly but quickly. They also ignored things like memory protection, not because it was any less The Right Thing than it is today, but because they couldn't afford it. And proper internationalization? Please! Processors were far too slow to let non-English speakers read in their native language! Don't even get me started on security: it wasn't until the 1990s that consumer-level systems even tried to get it right.
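To make the line-drawing point concrete, here is a minimal C sketch of the "filter during rasterization" idea, essentially the inner loop of Xiaolin Wu's anti-aliased line algorithm with endpoint handling left out; plot() is a hypothetical stand-in for a framebuffer write, not anything from the systems discussed above.

/* Minimal sketch: instead of snapping the line to whole pixels
 * (Bresenham-style, fast but aliased), split each column's coverage
 * between the two nearest pixels. Core of Wu's algorithm, no endpoints.
 * plot() is a hypothetical framebuffer write taking intensity in [0,1]. */
#include <math.h>
#include <stdio.h>

static void plot(int x, int y, double intensity)
{
    /* Stand-in for a real framebuffer write; just print the coverage. */
    printf("pixel (%d,%d) coverage %.2f\n", x, y, intensity);
}

/* Draw an x-major line (|slope| <= 1, x0 < x1) with per-pixel coverage. */
static void aa_line(int x0, double y0, int x1, double y1)
{
    double gradient = (y1 - y0) / (double)(x1 - x0);
    double y = y0;
    for (int x = x0; x <= x1; x++) {
        int yi = (int)floor(y);
        double frac = y - yi;          /* how far the ideal line sits between rows */
        plot(x, yi,     1.0 - frac);   /* nearer pixel gets most of the coverage */
        plot(x, yi + 1, frac);         /* the pixel below gets the remainder */
        y += gradient;
    }
}

int main(void)
{
    aa_line(0, 0.0, 5, 2.0);
    return 0;
}

The incorrect-but-fast version simply rounds y to the nearest row and writes one fully-on pixel per column, which is exactly the shortcut being described.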
There is a lag in computer science, sometimes a very long one, between when the Right Answer is known and when processors become fast enough to actually use it. For example, consider C's "int" type. What does "int" stand for? "Integer", allegedly. But "int" doesn't denote the integer type, does it? It denotes the finite ring of the integers modulo 2^(processor-bit-width). Which isn't wrong so much as weird; modular arithmetic is nice and all, but you'd think that ordinary arithmetic would make more sense as the basis of a programming language. Scheme did the Right Thing with numbers decades ago, yet we're still using languages that think "10 / 3 = 3"! Luckily, the trend has been towards doing things correctly as the CPU power becomes available, although it's been proceeding at a glacial pace.
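A small illustration of that point, as a sketch rather than anything from the post above: in C, unsigned arithmetic really is arithmetic modulo 2^N, and integer division truncates, whereas Scheme's exact numbers would keep 10/3 as the rational 10/3.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int big = UINT_MAX;              /* 2^N - 1 for an N-bit unsigned int */
    printf("%u + 1 = %u\n", big, big + 1u);   /* wraps around to 0: arithmetic mod 2^N */

    printf("10 / 3 = %d\n", 10 / 3);          /* prints 3: integer division truncates */
    printf("10.0 / 3 = %f\n", 10.0 / 3);      /* floating point is only an approximation too */
    return 0;
}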
And of course, I can't let the "small" and "well-written" thing slide. Systems written in low-level languages can be small, but it's usually because they do so little, or take shortcuts and do things incorrectly. If you actually try to implement a useful feature set correctly (i.e., handling security properly, being graceful with user errors, not corrupting user data when something goes wrong, preserving large inputs, etc.), you're going to find that a program written in a high-level language is both shorter and better written than one in a low-level language. I mean, seriously, have you ever looked at a FORTRAN program from the 1970s? It might well have been fast, but the interface was positively user-abusive and the code was completely unreadable. I have the joy of using some of these programs from The Good Old Days, and I'd trade them for a nice new Cocoa app in half a second.
These articles all seem to reveal a lot about nothing (go go tech reporting). What exactly are we talking about here? At times this sounds like a description of high-k dielectrics, which aren't exactly news…
Ah, we ARE talking about high-K dielectrics here… I guess tech reporting for the masses is just now picking up on this rather old development. I think the real news is that Intel is feeling confident enough about the technology to give it a production run in the near future.
For a slightly more understandable take on this, see Hannibal’s post on Ars and some of the related forum posts:
http://arstechnica.com/news.ars/post/20070127-8716.html
IBM only has its low-K advancement, which is NOTHING compared to Intel's achievement with high-K and metal gate.
"IBM only has its low-K advancement, which is NOTHING compared to Intel's achievement with high-K and metal gate."
Both announced high-K advances (originally IBM had planned to hold this back until the 32 nm node, but that's apparently changed).
Intel’s version sits on top of the silicon and has been tested in a working processor.
IBM's is inside the silicon, but it's still in research.
It's a pretty important advance, though, because otherwise we were looking at some serious heat problems.