“AMD and Apple will likely tout that they can deliver 64-bit computing to desktops this year, but Intel is in no hurry. Two of the company’s top researchers said that a lack of applications, existing circumstances in the memory market, and the inherent challenges in getting the industry and consumers to migrate to new chips will likely keep Intel from coming out with a 64-bit chip–similar to those found in high-end servers and workstations–for PCs for years.” Read the article at ZDNet.
The link is broken.
Did Intel also fail to mention that their 64-bit part isn’t compatible with existing 32-bit software? AMD’s solution is.
The 386 is for high-end servers. We don’t anticipate anyone would ever need a 486 on the desktop. Our target market for the Pentium is professional workstations. The Pentium Pro is optimized to run the 32-bit code in Windows NT for the server and the workstation…
Once they get their fabs up and there’s consumer demand, these will start going into desktop machines. It’s just a matter of time. The biggest thing holding back 64-bit adoption is the lack of native apps for the Itanic.
AMD’s offerings will, for a while, be like the original 386: used primarily as fast 286s. The first 386s appeared in 1986. The first MS operating system to fully utilize it didn’t come out until 1993 (Windows NT).
And, yes, the 4GB RAM limit might be an issue in a few years. But, by the same token, I have a machine that I run Win2k on with only 128MB, and am reasonably happy with it. It only came with 128 when I purchased it in 1999. My PC notebook, which I bought a year ago, came with 256, and that was plenty. I upped it to 512 to save a bit on shipping when I bought more RAM for my iBook.
Intel’s 32-bit processors can already address more than 4GB of physical memory via PAE.
I used to work at Intel when the Itanium first came out. Right now not too many businesses care about Itanium or Itanium 2. Everyone thinks that Intel will come up with a good 64-bit processor, but they won’t until 64-bit chips are widely accepted in the marketplace. The problem with Itanium is that it’s slower than a Pentium 4, or at least a Xeon processor. In addition, Itanium does not run 32-bit code very well. Personally, I’m waiting to see AMD’s 64-bit offering.
AMD’s offerings will, for a while, be like the original 386: used primarily as fast 286s. The first 386s appeared in 1986. The first MS operating system to fully utilize it didn’t come out until 1993 (Windows NT).
Microsoft will release an x86-64 version of Windows, and all .NET applications will be able to take immediate advantage of the new ISA, since their intermediate language is JIT-compiled for whatever processor they actually run on.
Other applications may ship both 32-bit and 64-bit builds, much as applications shipped 16-bit and 32-bit versions in the early days of Windows 95.
There is a very interesting page outlining the history of microprocessors out there. Its estimation of the x86 isn’t very high, and Intel probably aren’t very fond of it either. But due to IBM, they’re trapped: the market demands more x86 processors, so they will have to continue spending money on further development of an antiquated platform.
The same page notes that Intel aren’t all that incompetent when it comes to processor design, citing the i860/i960 series as examples. But people want XT compatibility, not Intel RISC processors.
Apart from the “my CPU has more bits than yours” issue (pun intended), what is there to gain from a 64-bit CPU? I’ve seen some tests and comparisons, but they all seem to speak against 64-bit platforms.
Also, this reminds me too much of the IPv4 vs. IPv6 battle.
AMD’s offerings will, for a while, be like the original 386: used primarily as fast 286s. The first 386s appeared in 1986. The first MS operating system to fully utilize it didn’t come out until 1993 (Windows NT).
I think you forget:
Memory managers: these were key to getting people a little beyond the 640K limit. All sorts of device drivers were chewing up base memory, and getting a proper configuration was quite difficult. With 386 memory managers that stopped being a problem. More importantly, you could access extended memory, so you got things like large disk caches, which had a massive impact on performance, easily more than doubling the speed of your computer.
DESQview: multitasking DOS apps was a key feature of the 386. Windows did a so-so job of it, but DESQview did an excellent job.
OS/2: finally, regarding Windows NT, when the 386 came out Microsoft was still on board with OS/2. It was IBM that stupidly decided to hold back a 386-only version, not Microsoft. OS/2 2.0 most certainly took full advantage of the 386 vs. the 286.
_______________
I think the above is more than a nitpick. As you can see from the list (which is in historical order), things went:
a) slight advantages
b) virtual environment
c) total break / new environment
On high-end machines (zSeries, iSeries, high-end Suns…) people are already partitioning their environment. I wouldn’t be surprised to see this become the killer app for 64-bit servers. The ability to run separate servers in fully separate virtual environments is a major stability and security factor that Wintel servers and Linux servers don’t have right now. The Linux 2.6 kernel will support this, so I wouldn’t be surprised if by 2004 the major server-oriented distributions do as well. Microsoft is more opaque, but they’ve probably noticed it too.
On the desktop, video seems to me to be the killer 64-bit app.
Sounded like the article was a little bit of FUD stirred up by Intel.
The memory issue was a red herring. Like someone else mentioned, you can put more than 4GB on existing processors; accessing it all at the same time is the real issue. For an OS developer, the practical limit is about 1-2GB, because beyond around 1GB you have to change the way you reference physical memory in the kernel, since virtual address space is limited. Similar issues face the application developer. In Windows OSes, for example, you are limited to only 2GB of user virtual address space. It doesn’t take long to chew that up if you are working on large data sets (e.g. audio/video editing).
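If you want to see that ceiling for yourself, a quick probe (a generic C sketch, not tied to any particular OS) is to keep asking for 256MB blocks until the allocator runs out of virtual addresses. On a 32-bit system it stops around 2-3GB no matter how much physical memory is installed:

```c
/* Sketch: probe how much virtual address space a process can get.
   On a 32-bit OS this tops out around 2-3GB regardless of installed
   RAM; on a 64-bit OS it sails past 4GB (capped here at 16GB). */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (256UL * 1024 * 1024)   /* ask in 256MB steps */

int main(void)
{
    unsigned long total_mb = 0;

    /* malloc typically only reserves address space here; no pages
       are touched, so this measures addressability, not RAM. */
    while (malloc(CHUNK) != NULL) {
        total_mb += 256;
        if (total_mb >= 16UL * 1024)  /* stop at 16GB to be polite */
            break;
    }
    printf("got %lu MB of virtual address space\n", total_mb);
    return 0;   /* deliberately leaked; the OS reclaims it at exit */
}
```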
I wonder what the 40-bit address space extension alluded to in the article was. My guess is that it was some horrible kludge, because the natural word size for the machine is fixed at 32 bits. The extra bits have to come from somewhere, and I bet it would be really ugly. Want to go back to the segmented architecture, anyone? BTW, you could access more than 4GB in a user program (provided that you also have PAE) through the use of selectors, if you *really* wanted to.
Of course, they could just be referring to an upgrade of the Physical Address Extension (PAE) that current processors already have in a limited form. This doesn’t really solve much for anyone.
My money’s on a true 64-bit processor. Much cleaner all round.
The problem isn’t that people don’t want 64 bits; I think it’s more that Intel botched it a little with IA64. They took years to develop it, and when it finally arrived it was far too expensive and slower than the latest 32-bit processors. They just couldn’t surf the wave right.
In contrast, the x86-64 chip, being just an extension of the 32-bit chip, has more going for it because it’s designed to run 32-bit or 64-bit code equally well. It may not be a clean start aesthetically, but the 64-bit mode is a cleaner and more orthogonal one from the compiler’s point of view.
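One small, concrete illustration of that orthogonality (just a sketch): plain 64-bit arithmetic. On IA-32 the compiler has to synthesize each 64-bit multiply from three 32-bit multiplies plus carry-chained adds, all squeezed into eight registers; in long mode each operation below is a single instruction, with sixteen general-purpose registers to work with.

```c
/* Sketch: 64-bit arithmetic that is native in long mode. On IA-32
   this multiply expands to three 32-bit multiplies plus adds with
   carries; on x86-64 it is one instruction. The constants are
   Knuth's MMIX LCG parameters. */
#include <stdint.h>
#include <stdio.h>

static uint64_t lcg_step(uint64_t state)
{
    return state * 6364136223846793005ULL + 1442695040888963407ULL;
}

int main(void)
{
    uint64_t s = 42;                 /* arbitrary seed */
    for (int i = 0; i < 4; i++) {
        s = lcg_step(s);
        printf("%llu\n", (unsigned long long)s);
    }
    return 0;
}
```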
I can testify as to how fast it was to redevelop for the x86-64 chip. It took me less than six months to port my kernel to x86-64, and this included converting my compiler to x86-64 and enhancing bochs to support the new chip. The point is that converting existing IA32 tools to x86-64 is far less costly than retooling completely for IA64, which has been the major slowdown if you ask me.
Having said that, there is some legacy stuff that will finally have to go with the move to any 64-bit Windows, and that is the V86 stuff. The x86-64 doesn’t support V86 mode when running in long mode (64-bit/32-bit compat), so it will likely need to be emulated. The IA64 has even greater problems switching modes and will likely have similar issues.
P
Never heard of it.
The 386+ processors have 3 runtime modes:
1) Real Mode – Pretend I’m an 8086
2) Protected Mode – I’m a 32-bit processor
3) Virtual 8086 – I can pretend to be multiple 8086s.
The 3rd mode is what was used to handle the DOS prompt under Windows 95 & 98. It was also used by Quarterdeck to support program swapping.
It’s basically the same as Real Mode, except that I/O port operations and several other OS-level instructions became privileged and caused an interrupt to let the OS handle them. Thus, some instructions were taken away from user space and reserved for kernel space. This was the only way to let multiple programs think they were running on a dedicated machine while still keeping them from causing problems by giving them unlimited access to the hardware.
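For the curious, here is a tiny self-contained sketch of that trap-and-emulate idea (the structures and handler are invented for illustration; no real monitor looks exactly like this): when the V86 task executes a privileged instruction such as OUT, the CPU faults, and the monitor decodes the instruction, performs or virtualizes it, and advances the saved instruction pointer.

```c
/* Hypothetical sketch of V86-style trap-and-emulate. All names and
   structures are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

struct v86_regs {
    uint16_t cs, ip;    /* where the V86 task faulted            */
    uint8_t  al;        /* value an OUT instruction would write  */
};

static uint8_t virtual_ports[65536];  /* the task's pretend I/O space */

/* Conceptually called from the general-protection-fault handler,
   with a pointer to the instruction bytes that faulted. */
static int emulate_privileged(struct v86_regs *r, const uint8_t *code)
{
    switch (code[0]) {
    case 0xE6:  /* OUT imm8, AL: virtualize instead of touching hw */
        virtual_ports[code[1]] = r->al;
        r->ip += 2;                   /* step past the instruction */
        return 0;
    case 0xFA:  /* CLI: let the task *think* interrupts are off   */
        r->ip += 1;
        return 0;
    default:
        return -1;                    /* unknown: terminate the task */
    }
}

int main(void)
{
    /* Simulate a V86 task that executed "out 0x42, al" with AL = 7. */
    struct v86_regs regs = { 0x1000, 0x0100, 7 };
    const uint8_t code[] = { 0xE6, 0x42 };

    if (emulate_privileged(&regs, code) == 0)
        printf("port 0x42 holds %u; ip advanced to 0x%04X\n",
               (unsigned)virtual_ports[0x42], (unsigned)regs.ip);
    return 0;
}
```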
The initial killer apps for x86-64 are huge CAD documents and video rendering (both for cheap industrial clusters and for home hobbyists like me). There will also be a huge market in the financial analysis sector, and a very rapid follow-on in cheap “commodity” Oracle and DB2 database servers.
What unifies all of these environments is a need to load enormous datasets directly into RAM to achieve optimal performance, and an existing initial penetration of Linux into these datacenters and development groups.
What x86-64 offers is a >4GB flat memory address space at a commodity price. Right now, 4GB+ flat addressing is something you can only realistically get by purchasing $350,000 or more worth of enterprise Unix host from Sun, HP, or IBM. And ongoing support for boxes like that costs like a mo-fo, too. (I know, because that’s what I do for a living.)
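To make the flat-addressing point concrete, here is a minimal sketch (the dataset path is made up; any multi-gigabyte file would do) of the one-call pattern these workloads rely on, which a 32-bit process simply has no virtual addresses for:

```c
/* Sketch: map a dataset bigger than 4GB straight into memory.
   The path is hypothetical. A 32-bit process cannot do this at
   all: its address space has no room for the mapping. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/tickdb.bin", O_RDONLY);   /* ~6GB dataset */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

    /* One flat mapping of the whole file: no windowing, no selectors,
       no PAE tricks. The entire dataset becomes ordinary pointers. */
    void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("mapped %lld bytes at %p\n", (long long)st.st_size, base);
    munmap(base, st.st_size);
    close(fd);
    return 0;
}
```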
In six or eight months, or a year, when I can pitch implementing a handful of $8,500 4-way Opteron Linux hosts with 16GB of RAM each instead of $350,000 PA-RISC or SPARC boxes, or even the equivalent $95,000 Itanium 2 servers, and hire some brilliant Linux-geek college student to admin them at $65k instead of a ten-year-experienced HP-UX person at $120k, I’m going to win a lot more Oracle, SAS, and PeopleSoft implementation contracts. There are also going to be a lot of Sun and HP CAD workstations being “upgraded” at a third of their original cost, too.
This is exactly the same sort of thing that Sun used to kill the minicomputers that did all of the above types of work 15-or-so years ago. All I can say is, thanks, Intel. Keep up the good work.
Joe
I, for one, would go looking for a 64-bit chip on my next round of upgrades (probably early/mid 2004).
Intel’s 64-bit offerings are going to take a while before they’re useful, whereas AMD’s and IBM’s will be out in strength this year. Intel is trying to make it look as if 64 bits isn’t so important. Personally, I think AMD will be surprised how many people start buying the ‘server’ 64-bit AMD chip when it comes out in a few months, simply because there are so many people who feel fairly insecure about themselves unless they’ve got the biggest piece of equipment on the market. I doubt the 4GB limit will ever be as much of an issue as the 640K limit; either a way around it will be found, or people will content themselves with 4GB, which is plenty anyway.
You must have a limited number of uses for a computer to think that 4GB (only 2GB available with most OSes) is enough. I am already running up against the 1GB limit of BeOS for my personal database needs, and in a year or two even 2GB will not be enough. 8GB will easily be needed in the next four years. Also, it is not only video editors who have big memory needs; any graphic artist working on poster-size productions is running into memory limits today, if only in the number of undos they can do before they hit the hard disk and see a slowdown in production. I am sure there are other users, like OS compilers, who would like the extra memory.
I personally don’t think you can have too much memory, except when you can’t access it.
The biggest draw of 64-bit CPUs is the added registers.
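A small sketch of why that matters (plain C; the payoff is in what the compiler emits, not in the source): a loop with more live values than IA-32’s eight general-purpose registers forces spills to the stack on every iteration, while x86-64’s sixteen registers can keep everything register-resident.

```c
/* Sketch: more live values than IA-32 has registers. The eight
   accumulators (plus p, i, n) exceed IA-32's eight general-purpose
   registers, forcing stack spills each iteration; x86-64's sixteen
   registers keep the whole loop register-resident. */
#include <stdint.h>
#include <stdio.h>

static uint32_t checksum(const uint32_t *p, int n)
{
    uint32_t a = 0, b = 0, c = 0, d = 0;
    uint32_t e = 0, f = 0, g = 0, h = 0;

    for (int i = 0; i + 8 <= n; i += 8) {
        a += p[i];     b += p[i + 1];
        c += p[i + 2]; d += p[i + 3];
        e += p[i + 4]; f += p[i + 5];
        g += p[i + 6]; h += p[i + 7];
    }
    return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h;
}

int main(void)
{
    uint32_t data[16];
    for (int i = 0; i < 16; i++)
        data[i] = (uint32_t)(i + 1);
    printf("%u\n", (unsigned)checksum(data, 16));
    return 0;
}
```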
… and then there is the other ~99% of computer users.