Microsoft has been thinking about Windows 8 for a while now, even during the production of Windows 7. Our friends over at Ars have gathered some information, and all of it points to possible 128-bit versions of Windows 8 and definite 128-bit versions of Windows 9. Update: Other technophiles better-versed than I in this whole 64/128-bit business have pointed out that the 128 bits must refer to the filesystem (such as ZFS, described in this article) rather than to the processor and memory scheme.
It was obvious that 128-bit operating systems would be rolling out sooner or later; the only questions were who and when. First, of course, we’ll need 128-bit processors available to the general public, not to mention other compatible hardware and drivers, but there is plenty of time for Intel and AMD (let’s not forget ARM, which is making strides in its market) to duke that out on the processor field.
The information suggesting 128-bit support in Windows 8 and/or 9 was found in the LinkedIn profile of a certain Robert Morgan, who works in Senior Research and Development at Microsoft. The profile was later taken down, but luckily it has been preserved on news sites such as OSNews:
Working in high security department for research and development involving strategic planning for medium and longterm projects. Research & Development projects including 128bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP, and IBM.

Robert Morgan is working to get IA-128 working backwards with full binary compatibility on the existing IA-64 instructions in the hardware simulation to work for Windows 8 and definitely Windows 9.
Windows 8 News, which first discovered Morgan’s profile, now claims to have secured an interview with him and is having its readers submit the questions. You, too, can participate, so meander on over before October 11th to do so. There is only one question thus far as of this article’s publication, so go show your OSNews spirit with intelligent queries worthy of science academies worldwide.
Overall, this doesn’t mean we can expect Windows 8 to ship with 128-bit support, but it’s certainly in the works. It also raises the theory that Windows 7 could be the last OS in Microsoft’s arsenal to come in a 32-bit version. Since Windows 7’s outlook is much better than Vista’s was (and still is), it’s not hard to conclude that the release cycle between 7 and 8 will probably be longer than just three years, so this has plenty of time to brew. Also, who’s to say that Apple doesn’t have something 128-ish being put together behind their iron curtain of mystery?
Time will tell.
I know 128 = two times as awesome as 64, but unless the release date for 128-bit Windows 9 is decades into the future, I really can’t think of any practical reasoning behind this. There has to be some miscommunication somewhere.
128 *bits* is not 2x 64 bits. It might look that way as simple decimal numbers, but 2^64 is 1.844674407e+19 while 2^128 is 3.402823669e+38, which is exponentially bigger; 2^64 * 2 is only 3.689348814e+19.
65 bits is 2x 64 bits!
Actually 128 bits is 2^64 times as awesome as 64 bits.
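For anyone who wants to double-check those figures, here is a throwaway C snippet (nothing authoritative, just long-double arithmetic) that prints the three numbers from the comment above:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        long double two_64  = ldexpl(1.0L, 64);    /* 2^64  */
        long double two_128 = ldexpl(1.0L, 128);   /* 2^128 */

        printf("2^64     = %.9Le\n", two_64);           /* ~1.844674407e+19 */
        printf("2^64 * 2 = %.9Le\n", two_64 * 2.0L);    /* ~3.689348814e+19 */
        printf("2^128    = %.9Le\n", two_128);          /* ~3.402823669e+38 */
        return 0;
    }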
The most powerful supercomputer in the world has a total of 100 Terabytes of memory. 64 bit memory addressing allows for 180 thousand times that amount. And that supercomputer isn’t shared memory, so each individual system has much less memory than this.
That’s a similar level of expansion to going from the Commodore 64 to current 64-bit power workstations (64 KB to 12 GB).
So if memory expansion happens at the same rate, it will be well over 20 years until we need more than 64 bit memory addressing.
Is this an April Fools’ joke in October?
So far, 32-bit and 64-bit have referred to the size of the address space, i.e. the amount of main memory that applications can address.
64-bit already allows for 16 exabytes of memory! In fact, current CPUs only bring out 48 address lines, since that is really enough for everything possible today.
So 128-bit for the next Windows version doesn’t make any sense to me…
The reference to IA-128 in the article is probably bogus (written by an incompetent journalist), and such an architecture doesn’t exist. But seeing that several references in the lower part talk about files and file systems, I guess Windows will switch to a file system that supports 128 bits.
For external storage, it’s possible that we’ll see storage arrays with such gigantic capacities in the next 10 or 20 years. Sun’s ZFS has supported 128 bits for a few years now, so I assume Microsoft doesn’t want to fall behind in that area…
I’m in agreement on this. I also believe the 128-bit reference was for the file system, not the processor.
Most desktops today don’t even cap out the 4 GB limit of 32-bit processors and no server or desktop on the planet comes anywhere close to maxing out a 64-bit memory space. As was said, even with 48 address pins that is 281,474,976,710,656 bytes of addressable memory (281 TB, or 281,000 GB). That’s a lot of memory, even for data miners.
That’s what I thought, as well. 128-bit in the CPU also won’t do much good until there is some idea of how hardware and common languages will implement 128-bit ints (not that they would do it strangely, but it could be a pain to rewrite code once the actual hardware/language standards arrive and your guesses about data types, keywords, or semantics turn out wrong). However, NTFS is getting long in the tooth, and while they’ve got the whole “we can use this RAM stuff” pretty well down in 7, being stuck at 64-bit for the FS will likely become a problem. Even before getting too close to the edge, performance may suffer.
With embedded x86 machines getting 1/4-1GB RAM these days, Atoms and Nanos (and whatever AMD may come up with) replacing POSes like Cyrix-based Geodes in the near future, and desktops at 2-12GB (not even getting into big server setups where big storage may actually be used), going from 8 to 16 bytes to declare a small space in the FS will do no harm.
Microsoft needs these kinds of things ready to go before anyone really needs them.
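On the “128-bit ints in common languages” point above: GCC and Clang already expose a non-standard unsigned __int128 type on 64-bit targets, which gives a rough feel for how this could look long before there are 128-bit general-purpose registers. A small sketch of my own (nothing Microsoft-specific):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t a = 0xFFFFFFFFFFFFFFFFull;   /* largest 64-bit value */
        uint64_t b = 0xFFFFFFFFFFFFFFFFull;

        /* 64x64 -> 128-bit multiply with no overflow. */
        unsigned __int128 product = (unsigned __int128)a * b;

        /* printf has no format for __int128, so print it as two halves. */
        printf("high 64 bits: 0x%016llx\n", (unsigned long long)(product >> 64));
        printf("low  64 bits: 0x%016llx\n", (unsigned long long)product);
        return 0;
    }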
I also agree that this is probably for the file system and not the processor.
However with a 128 bit virtual address space you can do some pretty interesting things, where a 64 bit space may be too limiting.
One thing is for sure, we will never see computers that will physically run out of a 64bit memory space.
However, with a 128-bit space you can give each person, machine, thing, CPU, program and library its own 64-bit virtual address space and never run out.
This means you never need relocatable binaries, and even hourly re-compiles will never run out of available address space (which might happen in a 64-bit space).
It also opens up some interesting possibilities in DRM and security, like programs and data only being available in your own personal address space. Or your address space could be relocatable in a cloud computing space and move around yet always remain yours. Sending a message (like an email) could be something like pushing data into the recipient’s address space in the cloud, etc.
A lot of this would be possible in a 64-bit world, but you would need to manage it tightly. In a 128-bit world you do not need much management.
I can definitely see why this is interesting from a research perspective.
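To make the “everything gets its own 64-bit space” idea a bit more concrete, here is a purely hypothetical sketch (my own layout, not anything real): treat a 128-bit virtual address as a 64-bit space ID plus a 64-bit offset.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 128-bit virtual address: the top half names the owner
       (person, machine, program, library, ...), the bottom half is an
       ordinary 64-bit address inside that owner's private space. */
    typedef struct {
        uint64_t space_id;
        uint64_t offset;
    } vaddr128;

    int main(void)
    {
        /* A library could always live at the same offset inside its own
           space, so binaries would never need relocating. */
        vaddr128 lib_entry = { .space_id = 1234, .offset = 0x10000 };  /* made-up values */

        printf("space %llu, offset 0x%llx\n",
               (unsigned long long)lib_entry.space_id,
               (unsigned long long)lib_entry.offset);
        return 0;
    }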
You are mistaken and the ‘update’ to this posting is also mistaken.
The original linkedin job description of Robert Morgan clearly stated that he is working on …
‘Research & Development projects including 128bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan’
… this clearly implies a 128-bit CPU architecture. The addition of 128-bit instructions – probably in the Kittson timeframe – would logically progress the architecture and is by no means esoteric (the Dreamcast, back in the day, had a 128-bit FPU).
]{
Well, you already have 128-bit registers today in the SSE units. I’m quite sure that 128-bit integer values wouldn’t make sense for the vast majority of applications (in fact, even in 64-bit programs most values which are not pointers are still 32 bits, because that simply saves memory).
There might be some uses where 128-bit floating point or maybe even 128-bit integers make sense. But it would be a minority of applications, and the additions would be more like an instruction set extension, like SSE is…
This story, based on a (now removed!) LinkedIn profile, looks very dubious to me. IA-128 is also a very odd name. IA-64 is the name for the Intel Itanium architecture, which Intel isn’t developing further in favor of AMD64 (or EM64T, as Intel calls it).
And well, if you want more bits, wait for Larrabee – it will have 512-bit registers 😉
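As a small illustration of the SSE point above: the 128-bit XMM registers are already there, but the SSE2 integer instructions treat them as independent 64-bit lanes rather than as one 128-bit integer, so it is not full 128-bit arithmetic. A sketch using the standard SSE2 intrinsics (nothing Windows-specific):

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* a = (high lane: 1, low lane: UINT64_MAX), b = (0, 1) */
        __m128i a = _mm_set_epi64x(1, -1);
        __m128i b = _mm_set_epi64x(0, 1);

        /* Lane-wise 64-bit add: the low lane wraps to 0 and no carry
           propagates into the high lane, unlike a true 128-bit add. */
        __m128i sum = _mm_add_epi64(a, b);

        uint64_t out[2];
        _mm_storeu_si128((__m128i *)out, sum);
        printf("low lane  = %llu\n", (unsigned long long)out[0]);  /* 0 */
        printf("high lane = %llu\n", (unsigned long long)out[1]);  /* 1 */
        return 0;
    }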
Would the purpose of 128bit be to address more memory?
Maybe the enlightened readers among us could fill in the gaps for me, as I am not seeing a rush to 64-bit. So, are we talking 4-6 years down the road? I read a lot of advice in Linux circles to avoid the 64-bit versions like the plague, as most software doesn’t really take full advantage of them (Flash and Java especially), but backwards compatibility is improving. I’m writing this on a 32-bit HP Mini, and have had 64-bit machines in the past, but never had more than 4 gigs of RAM. Not much experience with 64-bit, so I’m just curious about the benefits of a true 64-bit system.
You’re obviously talking about desktop, where the need for 64 bits hasn’t really been that great (apart from complex video editing I guess). The only part of java which wasn’t 64-bit was the browser plugin, and that has been fixed recently. The server JVM has been able to run in 64 bits for ages, and that’s where Java counts.
Flash has also been available in 64-bit form for a while now, so no excuse there.
not seeing a rush?
Take a look at the CPU you are running… it is a 64-bit CPU if it was manufactured in the last 4 years.
If you are talking about software… desktop OSs are moving to 64-bit in such a rush that “Certified for Windows 7” systems have to be 64-bit systems. 4 gigs of memory will be commonplace in a few years.
If you are talking about programs… so what if it is not a 64-bit program? Most programs don’t need more than 32 bits of address space. 64-bit is really for the OS and a few specialized programs (probably games as well in the near future); everything else can live in the 32-bit world until the end of time.
Well, sorry mate, but computers nowadays already come with 6GB of memory; 4GB is the minimum where I’m living, and I’m talking about general-purpose laptops.
Anyway, in the server market I’ve already seen systems with 256GB, and now with 8GB DIMMs I’ll see systems with 512GB of RAM. Of course, for memory address space 64-bit will be more than enough for the next 5 to 10 years, I believe.
Don’t forget about quantum computing… there are already some prototypes, and if they get the right dose of research they will soon be out here, hungry for memory and more.
For the last 21 years that I’ve been watching, memory in a typical server has been increasing pretty reliably at a geometric rate of 50% per year. Even using your 512GB number for current servers, we’re still good for 22 years. And are these 256GB and 512GB servers you speak of really running Windows?
Of course they’re not running Windows, but then again, I don’t work with Windows at all. I believe the max I’ve seen with Windows was 32GB.
Just as a point of note, the maximum physical memory allowed by current amd64 designs is 1TB. The largest server that I, personally, administer has 48GB. So I’m probably fine for about 8 years with the current hardware limit. Of course… the 48GB box is way overspec’d. It would be quite comfortable with 16GB. So really more like 10 years. The full capacity of 64 bit memory addressing will probably be fine for most servers for at least 25 years. But let’s face it. With a little more attention to efficiency in the software, no one should ever need more than about 640GB.
I should probably have also mentioned that the numbers I’m using for physical addressing on amd64 are only 2^52 and not 2^64. It’s a page table limit. The true capacity of 64-bit addressing would get me by for over 50 years. And any current or soon-to-be 512GB users would be fine for about 42 years. Deep Thought might have been 128-bit, I suppose.
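In case anyone wants to poke at those projections, here is the same back-of-the-envelope arithmetic as a tiny C program (it assumes the 50%-per-year growth rate and the 512GB starting point mentioned above):

    #include <math.h>
    #include <stdio.h>

    /* Years until memory grown at 50% per year reaches a given limit. */
    static double years_until(double start_bytes, double limit_bytes)
    {
        return log(limit_bytes / start_bytes) / log(1.5);
    }

    int main(void)
    {
        double start = 512e9;   /* a 512GB server today */

        printf("to 2^52 bytes: %.1f years\n", years_until(start, ldexp(1.0, 52)));  /* ~22 */
        printf("to 2^64 bytes: %.1f years\n", years_until(start, ldexp(1.0, 64)));  /* ~43 */
        return 0;
    }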
The compatibility is all there. It was an issue back when all this 64-bit stuff was new. I run quite a few Windows applications that are 32-bit in Linux, and some FOSS apps that don’t seem to work as 64-bit (Qdu, for instance).
Even if applications don’t need 64-bit, it’s nice for those that do, and PAE can cause problems for some apps and drivers. For instance, I run 32-bit Firefox on Windows 7 because it uses remarkably less RAM (more pointers than actual data?), while I run 64-bit Firefox on Linux because it’s noticeably faster. Either way, I can run them with 8GB of RAM and no swap, yet still never close anything, because there are always a few GB spare.
Does anyone know what advantage a 128-bit system has over a 64-bit system? I was under the impression that we had not yet come close to making full use of a 64-bit system, so where would a 128-bit system show improvements?
Well, you can address more memory I suppose.
Well, and a lot of your data (at least the pointers) is suddenly twice as big: you *need* more memory! Using < 4GB of RAM with a 64-bit OS gives less usable memory than with a 32-bit OS; only if you have significantly more than 4GB of RAM does 64-bit have an advantage memory-wise. AMD64 has other advantages though: “better” assembler and twice as many registers, which should make it faster. But bigger pointers = more memory to copy = slower memory operations. So some things are better with 64-bit, some aren’t.
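The pointer-bloat point is easy to see with any pointer-heavy struct; this generic sketch roughly doubles in size when compiled as 64-bit code instead of 32-bit code:

    #include <stdio.h>

    struct node {
        struct node *next;
        struct node *prev;
        void        *payload;
        int          flags;
    };

    int main(void)
    {
        /* Typically 16 bytes as 32-bit code, 32 bytes as 64-bit code
           (three pointers plus padding), even though the data is the same. */
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }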
An advantage of a 128 bit system over a 64-bit system would be the ability to natively process 128-bit numbers, which would speed up things like 128-bit file systems. Other than that, you get more potential address space for your programs. Which is dumb considering it would take a very long time for current systems to copy the equivalent of the contents of a full 64-bit address space (even at speeds of 1 TB per second it would take 213 days to read it all).
We seriously need to solve the problem of memory speed before we tackle memory limits higher than 64-bit.
It’s funny just thinking about the absurdity of even posting a story about how Microsoft is developing a 128-bit OS for Windows 8 or 9, which according to their own plans would be in 6 years at most.
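The 213-day figure above is easy to re-derive; here is the arithmetic as a tiny C program (the 1 TB/s read rate is of course an optimistic assumption):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double space_bytes = ldexp(1.0, 64);   /* 2^64 bytes */
        double rate        = 1e12;             /* 1 TB per second */
        double seconds     = space_bytes / rate;

        printf("%.1f days\n", seconds / 86400.0);   /* ~213.5 days */
        return 0;
    }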
“We seriously need to solve the problem of memory speed before we tackle memory limits higher than 64-bit.”
That would be the primary reason in my mind for a move to >64 bit. Modern GPUs have 256- and even 512-bit wide memory buses (I don’t know about addressing though).
Better from a board design standpoint is to get higher-speed, narrower interconnects as you noted.
Who is asking for this? 128 bit OSes? Really?
I fully expect that we’ll have 128 bit machines eventually, but I’m not aware of any right now (certainly not any x86-based chips), so a 128 bit OS doesn’t make any sense yet. I can see them working on making it work better in Windows for them to increase it to 128 bits when the time comes, but as everyone seems to be pointing out, it really doesn’t make any sense right now.
What I think that it really comes down to at this point is that there’s no 128 bit architecture for them to target. And without the hardware, there’s no point.
Windows still doesn’t have 64-bit apps. Compare that to the Linux/*BSD amd64 binaries every distro provides. How are they going to convince developers to support 32 bits, 64 bits and then 128 bits?
The only reason for 128 bits is more memory, and that will only benefit server rooms in the more immediate term.
If anything, they must be cooking so much more bloat into their OS that they now have to evaluate the possibility of using 128 bits.
There is no reason for most applications to be 64-bit. Do you really think Word, etc. needs to be capable of having an address space that big?
64-bit is for the very real problem of memory limits in OSs, and for specialized programs that do need to address that much memory just for their working data.
If you had ever done *serious* video editing or numerical calculations, you wouldn’t say so.
Any HD video you’re editing can easily bust the 4GB barrier of 32-bit operating systems. If you run guest operating systems within virtual machines, 4GB is just ridiculously small. At work we have servers with 16GB of memory or more. They run over a dozen virtual machines at the same time.
Just because you don’t need a 64-bit operating system doesn’t mean there is no need for one.
I have been running Debian Lenny/Squeeze natively on amd64 for almost two years now and it works like a charm; I see NO problems at all. Even Skype and Acrobat Reader, which are still 32-bit due to the inability of their developers to write a native amd64 version, work like a charm. And when it works, why not use it!?!?
Seriously, if science and industry had always had this attitude of “this is still enough”, we’d still be messing around with 8-bit computers or even worse.
Toast
Let’s look at the practical side of things. Even using 16GB memory modules, it would take 268,435,456 of them to reach the 64-bit limit. Just think of how much power it would take to keep that memory going…
There is a whole lot of need to go above 32-bit… but there is zero need to go above 64-bit as far as memory is concerned (now or at any time in the next 10 years).
Sorry, I know this is ridiculous, but your post made me laugh and then I got to thinking about the consequences.
I am picturing going into a computer store and asking to purchase 260 thousand 16GB memory modules. And then getting home and realising:
A) It is going to take quite some time to plug these things in.
B) You are short a few memory slots.
Good post. Thanks.
And after you get those 260 thousand modules worked out, you realize you have to make 999 more trips…
WTF?
Dude…. try reading my comment.
MOST programs do not need it…. SPECIALIZED ones do… video editing programs ARE specialized and certainly fall outside the scope of the MOST qualifier.
But even casual users are interested in video editing. This isn’t some obscure technical application that only 5 people are interested in.
It is still a specialized application space!!!
Do you have a comprehension issue? I am making the argument that we need 64 bit computing to deal with the memory issue of 32 bit processing. I was simply explaining the problem domain to the fool who thinks 32 bit is fine and dandy for normal users.
He didn’t say it wasn’t needed. He said it wasn’t needed for most programs, which is absolutely true. He also said it WAS needed for programs which required large working sets (more than about 2GB, i.e. video editing and such), and to give the OS more address space to map processes into.
For virtually any 32-bit piece of software that has a working set less than 2GB and that doesn’t require any 64-bit calculations, there is virtually no benefit to moving to 64-bit. In fact it can and will often do nothing but make it run slower.
That in no way means that moving to 64-bit addressing wasn’t needed – it was absolutely needed, but most of the problem it addressed is isolated to OS memory management. There is simply no compelling reason to port a working 32-bit application to 64-bit unless you need the additional address space or you need to do 64-bit computations.
There are of course cases where a port is needed because of the need to interface existing 64-bit DLLs and executables (i.e. explorer extensions, plugins, etc.) But that is simply a side effect of moving one or the other to 64-bit – it isn’t in and of itself a reason to do it.
Not only 64-bit computations: 64-bit mode also makes more registers available, so functions can be optimized to pass parameters through registers rather than the slow stack.
True, but in real-world code, more often than not the extra registers simply don’t offer enough benefit to compensate for the slowdown caused by doubling the size of pointers… There are times where it helps A LOT, of course, and it certainly doesn’t hurt, but it isn’t always effective. Besides, there is nothing magic about having 16 GPRs that makes it possible to use registers instead of the stack; you can use register calling conventions with just 8 registers in 32-bit code as well. It just depends on how many parameters you are using, their types, and how many local variables you are defining. If your code has a lot of parameters or a lot of local variables then 16 GPRs will probably help; otherwise it will have little or no effect.
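For the curious, here is roughly how that plays out for a plain C function. The function itself is just a made-up example; the register assignments are the documented behaviour of the common ABIs:

    #include <stdint.h>
    #include <stdio.h>

    /* - 32-bit cdecl: all six arguments are pushed on the stack.
       - 32-bit __fastcall: the first two integer arguments go in ECX and EDX,
         the rest on the stack, so register passing exists even with 8 GPRs.
       - x86-64 System V: all six arguments go in RDI, RSI, RDX, RCX, R8, R9.
       - x86-64 Windows: the first four go in RCX, RDX, R8, R9, the rest on
         the stack. */
    int64_t weighted_sum(int64_t a, int64_t b, int64_t c,
                         int64_t d, int64_t e, int64_t f)
    {
        return a + 2 * b + 3 * c + 4 * d + 5 * e + 6 * f;
    }

    int main(void)
    {
        printf("%lld\n", (long long)weighted_sum(1, 2, 3, 4, 5, 6));  /* 91 */
        return 0;
    }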
Wha…?
Stupid comment. You could at least write something more elaborate. But I guess I wasn’t too specific. I meant to say that most applications developed for Windows are 32-bit. Probably 99% of those available are 32-bit only.
In the business world there are still a lot of 16-bit apps in active use. My company still has a 16-bit DOS application that is used daily, as well as a few legacy applications with 16-bit installers. These basically make a move to a 64-bit OS a show stopper unless some form of virtualization is involved. Windows 7 is a far more likely option for us than Vista ever could be in this regard,
due to the fact that it has a Windows XP compatibility mode that allows legacy applications to run.
Unless Windows 8 is set to launch in 2050.
Perhaps this is more for the NTFS file system. For example, ZFS is a 128-bit file system.
Having a 128-bit OS makes no sense; it would be like creating a car for a hippo. Sure, you could do it, but why would you?
Then there would be no reason to hype it so much. A simple phrase like “NTFS will be enhanced with 128 bits to support bigger filesystems” would be all that’s needed.
I’m pretty sure no one at Microsoft was hyping this as a potential Windows feature, because it’s pretty dumb: populating an address space that large and manipulating even 64 bits’ worth of addressable data would take considerable amounts of energy. Even in the filesystem space, 64 bits is enough.
We already have support for 128-bit numerical processing (in the form of SSE). In fact, you could say that even Windows XP is a 128-bit OS.
This isn’t about the file system. The Ars article gives a direct quote from a Senior R&D guy at MS… I highly doubt he would say IA-128 when he was talking about file system addressing.
I suspect this is probably an effort to prepare for a 128-bit FPU. It is highly unlikely that AMD or Intel would extend the address space anytime soon, but that does not mean they wouldn’t extend the size of the FP registers.
Having a full 128-bit FPU would be useful for a variety of things, and could pave the way to eventually unifying SSE with x86.
Read this: http://forums.amd.com/devblog/blogpost.cfm?catid=208&threadid=11293…
All completely speculative of course, just saying the file system explanation doesn’t seem to fit very well to me.
I also think the main motivation behind 128-bit support is probably not the file system or addressing more memory but for native support of 128-bit floating point numbers.
Even today, there are a number of number-crunching applications which could gain some reasonable computational efficiency from processors with native 128-bit support. Processing of things like GUIDs and 128-bit encryption keys might also see a benefit, no?
From my own experience, I’ve encountered scenarios where the rounding errors as a result of operations with 64-bit floating point numbers were too excessive and I had to use 128-bit values.
Haven’t there already been specialized computers (supercomputers) built around 128-bit processors?
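A tiny illustration of the rounding-error point above (this is on x86-64 Linux/GCC, where long double is the 80-bit x87 format; a true 128-bit quad-precision float would push the limit much further still):

    #include <stdio.h>

    int main(void)
    {
        double      d = 1e16;    /* 53-bit mantissa: the ULP here is 2.0 */
        long double l = 1e16L;   /* 64-bit mantissa: the ULP here is well below 1.0 */

        printf("double:      %.1f\n",  d + 1.0);    /* 10000000000000000.0 (the +1 is lost) */
        printf("long double: %.1Lf\n", l + 1.0L);   /* 10000000000000001.0 */
        return 0;
    }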
Yes, bring the power of 128-bit floating point registers to the people! Why can’t people just crack AES and 3DES @ HOME like they were doing with DES and MD5 hashes?
The security folks need more things to do anyway…
This comment says it’s something more than just the file system:
Robert Morgan is working to get IA-128 working backwards with full binary compatibility on the existing IA-64 instructions in the hardware simulation to work for Windows 8 and definitely Windows 9.
It may have nothing to do with memory size; the bit count of a processor generally refers to the maximum integer size, not the memory capacity. That said, in the past it has also been used to refer to the data bus width. The 68K was called a 16-bit machine: it processed 32-bit integers, could access a 24-bit address range, and had a 16-bit data bus. So what was it?
Just to confuse things, if you count the vector unit you are already using 128-bit processors… and have been since the PowerPC G4!
I’m guessing they are adding support for 128-bit integers, quite possibly for use in a filing system. The vector unit would be the obvious place to add this.
However, the word “clusters” was also mentioned in the link somewhere. Clusters have separate address spaces, but with 128 bits you can treat them as a single machine, just as Microsoft’s multikernel mentioned the other day does.
Given that computers will almost certainly go down that route eventually, some form of 128-bit addressing makes some sense for large machines.
I don’t really think that constantly doubling everything is better than improving solutions for the problems we encounter today. 32-bit is still problematic at some points, 64-bit is far from perfect, and the less we try to get our current technology under control, the less we improve the situation.
Overall, I think it’s all about quality, not quantity.
You’re talking about software quality. That has nothing to do with 32-bit vs 64-bit vs 128-bit or anything in between; those are attributes of the underlying hardware. Yes, software does need to be changed to take advantage of these “new” attributes as they become available, but doing so does nothing to address its quality. It is simply enabling additional modes of operation. Some software can take advantage of increased address space, additional registers, larger integers, etc., but frankly for most software it doesn’t matter one bit (pun intended).
I’m only responding to this because while I agree that software quality is definitely an issue, it is a _separate_ issue. And the software world’s inability to get its collective sh*t together has nothing to do with whether or not moving from xx-bit to yy-bit is actually _useful_. Moving from 32-bit to 64-bit was most definitely useful, regardless of how crappy software still tends to be.
Can’t agree with that [at least not on one point]. You wrote that:
Don’t you think that this sort of thinking has something to do with this loss of quality in favor of [over]growth of functionality? Do we really need 128-bit?
My answer is simple: no. And the world doesn’t even end with the gamer and server markets. In fact, these are separate niches, but one of them is in a position to dictate the “hardware revolutions”.
I agree that introducing 128-bit will somewhat improve the capability of modern SW+HW, but it will introduce many other problems. Just look at 64-bit…
Regards
Do you need it? No… but do scientists with small grants, who might want to buy an eight-core 128-bit processor to run high-precision modeling software within a reasonable budget, want it? Sure. Rendering farms in Hollywood? The military? Those in the chip industry interested in unifying the instruction space? Absolutely.
Will the mom-and-pop end user see any real benefit? Not really, but so what?
Unless the processor makers get smarter about how operating systems are structured, there won’t be a technical benefit to going to 128-bit.
On 64-bit systems, you already pay a performance penalty for dealing with pointers that are twice as large. I know this penalty is on the order of 20% in Java, and it’s probably not much better for native code.
About the only practical uses I could see for 128 bits are file systems, or ultra-high-end compute clusters that run thousands or hundreds of thousands of virtual machines.
I’m not arguing for moving to 128-bit memory addressing; we are pretty far away from that becoming useful by any stretch of the imagination. But 128-bit GPRs might be quite useful. As for your point about 64-bit being slower…
It’s not slower because of having to work on 64-bit registers. Doubling the size of registers had virtually no detrimental effect on clock speeds. It’s slower simply because you are doubling the amount of data that needs to be moved around the system. The slowdowns are almost completely isolated to the following areas:
1. Moving data from the CPU to the caches and main memory and back again takes longer because there are simply more bits to move. And since this is already a performance bottleneck, the effects are amplified.
2. Executable images take up more space, so loading them from disk can potentially take longer.
This is sometimes counteracted by the increase in available registers that came with x86-64, but not often. Regardless, as caches get bigger and faster the difference will shrink. That is already happening: the i7 still has the same problem, but the difference is much less pronounced because the memory subsystem has gotten so much better. And if you happen to actually need to operate on 64-bit integer values routinely, the increase in performance completely trumps all those concerns.
Yep, they will need 128bits for Windows 8.
Think not?
Just wait until you have seen the memory requirements just to get the damn thing to boot.
Well, there’s something sad and true in your comment.
I’m afraid that we’ll be completely forced to use the alternatives, whatever they may be [I’m not especially sad about this vision; I just get the impression that we will lose some of the other alternatives, read: Microsoft Windows, in favor of *nix-based systems].
There is something sad in both your comments…. that you actually think that they are parodies of a greater truth about Windows is pretty sad.
There is no IA-128. How can you compile for a processor that isn’t even on the drawing board?
This just confirms what I’ve been saying all along: Microsoft’s developers are taking illegal substances.
There is no publicly available IA-128 but you can sure bet that Intel has been working on that one since AMD one-upped them with the 64-bit instruction set.
Going out on a limb… The MS R&D guy’s status:
“Robert Morgan is working to get IA-128 working backwards with full binary compatibility on the existing IA-64 instructions in the hardware simulation to work for Windows 8 and definitely Windows 9.”
He specifically says “IA-64”. Within Microsoft and by almost everyone else, 64-bit x86 is generally referred to as x86-64, x64, or even AMD64 (as a nod to the original designer of the specification). If you were talking about Intel x86, you would generally use the term Intel 64 or EM64T. “IA-64”, however, specifically refers to Itanium – that is the original name given to the ISA by Intel.
So if this guy is using his terms correctly, he is talking about Itanium when he says IA-64. If that is true, then when he says IA-128 he MUST also be talking about Itanium, or the talk of backwards compatibility makes little sense.
Unlike x86, extending Itanium to 128-bit is a lot more straightforward, and since it is targeted only at the very high end of computing it might actually make some sense.
64 bits of physical memory addressing is not limiting in most senses. But having a full 128-bit segmented virtual address space would allow you to write a hypervisor for running 64-bit VMs that would:
1. Require no changes to the 64-bit ISA – each 64-bit VM would see memory in exactly the same way as before; they simply wouldn’t know about the segment offsets.
2. Let the hypervisor manage memory very, very efficiently. Each VM would simply be assigned an offset address, and the VM would get a full 64 bit address space that is directly mappable into the hypervisors 128-bit memory space.
3. If the hypervisor was implemented to use a 128-bit file system for storage of the VM images, it would allow it to memory map the images directly, no funky stuff required.
4. If ONLY the hypervisor used 128-bit mode, you may not need to do much if anything to the existing computational units. Simply adding an additional 128-bit ADDR (or adding a 128-bit “mode” to the existing ADDR) or an additional address calculation unit might be enough to implement this, since the scope of work for a hypervisor is limited primarily to memory management tasks.
5. 128 bits of virtual address space is simply HUGE. Huge enough that security can be implemented simply by randomizing segment assignments (there would be a 16TB address space JUST for the 64-bit segment addresses). Having that much space would allow for a very simple and secure memory model.
This makes much more sense to me than anything else. The mention of high security fits, and Windows is still one of the more popular OS choices for Itanium. MS has already shown interest in hypervisors for their OS; making a version of Windows that takes advantage of these (admittedly conjectured) extensions to the Itanium ISA would give MS a competitive advantage in the high-end computing space, something they have been desperately trying to get a foothold in for years now.
Granted, this is pretty out there, but if there is any truth to what that guy posted, it is probably closer to this than anything else. The 128-bit FPU thing could fit too, but I’m stuck on his use of the term “IA-64”…
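A rough sketch of what point 2 above could look like in practice, entirely my own guess, using GCC’s unsigned __int128 to stand in for the imagined 128-bit hardware:

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 haddr128;   /* hypothetical 128-bit host address */

    /* Each VM owns one whole 64-bit segment; the hypervisor's "page table"
       is just one 64-bit segment ID per VM, no per-page translation. */
    static haddr128 guest_to_host(uint64_t segment_id, uint64_t guest_addr)
    {
        return ((haddr128)segment_id << 64) | guest_addr;
    }

    int main(void)
    {
        haddr128 h = guest_to_host(7, 0xDEADBEEF);   /* VM #7, made-up guest address */

        printf("segment: %llu, offset: 0x%llx\n",
               (unsigned long long)(h >> 64),
               (unsigned long long)h);
        return 0;
    }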
This… makes a lot of sense. Thanks for thinking this through for us.
I’m gonna say they are just starting on this, with the expectation that it may happen in 10 or 15 years.