What planet is this guy from? Dual 128-bit DDR channels are a “good compromise”? The motherboard makers accomplished a major feat just routing dual 64-bit channels from the DIMM slots to the northbridge. Dual 128-bit channels are simply out of the question with current board manufacturing technology. Just too many wires. A better solution, going forward, is what the Opteron is doing: NUMA. In the Opteron architecture, each CPU has a direct connection to its own memory bank (which can be dual 64-bit DDR). Further, each CPU is connected to up to two other CPUs over HyperTransport links in a message-passing network. If a CPU needs a piece of data from a remote memory bank, it sends the request over a HyperTransport link, and it is bounced from CPU to CPU until it reaches the target CPU, which sends the requested data back the same way. The whole mechanism is highly optimized, to the point where non-local memory access doesn’t have much overhead over local memory access.
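The remote-access mechanism described above can be sketched as a toy cost model; every number below is invented for illustration, not a measured latency:

```python
# Toy model of Opteron-style NUMA: each CPU owns a memory bank, and a
# remote read hops over HyperTransport links to the owning CPU and back.
LOCAL_NS = 100   # hypothetical local DRAM access time
HOP_NS = 40      # hypothetical cost per HyperTransport hop

def read_latency(cpu, bank, topology):
    hops = topology[cpu][bank]           # link hops from requester to owner
    return LOCAL_NS + 2 * hops * HOP_NS  # request out, data back

# Four CPUs in a square: hop counts between each CPU and each memory bank.
hops = [[0, 1, 1, 2],
        [1, 0, 2, 1],
        [1, 2, 0, 1],
        [2, 1, 1, 0]]

assert read_latency(0, 0, hops) == 100   # local bank: no hops
assert read_latency(0, 3, hops) == 260   # worst case: two hops each way
```

Even in the worst (diagonal) case, the toy numbers stay within a small multiple of a local access, which is the point the comment is making.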
I may be very ignorant about this subject — but I thought I knew at least a bit up until this whole “Mac goes 64-bit” spiel. So let me ask again and maybe someone can explain something to me. From what I thought I knew, weren’t G4s 64-bit? I recalled that all modern PPCs were 64-bit. Wasn’t the PPC 620 64-bit? Didn’t the G4 have a “Velocity Engine” which boasted something around 128-bit processing? What’s going on… like I said, maybe I’m just ignorant, but I thought Macs were always > 32-bit…??
Depends on what you define as “64-bit”. Depending on how you think of it, even the Pentium Pro was 64-bit. There is no real, “official” definition of what 64-bit is.
The G4 chips are 32-bit chips with a 128-bit enhancement that Apple calls the ‘Velocity Engine’. For as long as I can remember, Apple computers have been 32-bit machines, but this new G5 is a 64-bit chip, with the same 128-bit enhancement as the G4.
Hope that helps.
the altivec unit may have had 64-bit registers (i don’t know), and the newer PPCs may have had 64-bit floating-point registers (again, i just don’t know), but this is talking about the address and integer registers.
a 64-bit address register means you can address more than 4 GB of memory with a single register.
-hugh
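The 4 GB figure follows directly from register width; a quick sanity check in plain arithmetic:

```python
# A register holds an address; its width caps how much memory is reachable.
GiB = 2**30

assert 2**32 == 4 * GiB          # 32-bit addressing tops out at 4 GiB
assert 2**64 == 16 * 2**60       # 64-bit addressing reaches 16 EiB
assert 2**64 // 2**32 == 2**32   # over four billion times the 32-bit space
```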
A 64-bit data path into a processor able to handle 64-bit chunks, with 64-bit wide registers… that’s how I would define 64-bit processing. And if 64-bit processing takes on multiple forms/definitions, what’s the big deal, and why is everyone making 64-bit sound like the next big thing? Sure, it may be something new on the desktop (or may not be, if I’m correct about the G4) — which, after looking at a couple of things, apparently used the Velocity Engine to process 128-bit chunks (which for me would define 128-bit processing), aside from the fact that the other two components I mentioned above were still 64-bit, if I remember correctly. But if that’s the case, then at least it would be considered 64-bit… and thus a brand new “64-bit Mac” wouldn’t be much to care about, so why the big fuss? Because they made a better 64-bit architecture? That doesn’t change the fact that 64-bit existed and was used before.
Actually, an Apple representative told me that Mac OS X won’t be going completely 64-bit any time soon. Why? Because it would have way too much overhead, and would actually run slower, not faster. They are going to optimise certain parts, though, such as the math libraries for example.
Does anyone here have any informed ideas about Apple’s plans for their 32-bit machines like the G3s and the G4s? How long until they drop support for them in favor of the new 64-bit G5s?
Would I still be able to buy up-to-date 32-bit versions of Mac OS X three years from now?
again, the 64 bit that people are talking about is the size of the integer and address registers, as that is what the vast majority of cpu time is spent working with.
when was the last time you wrote/saw code that used floating point arithmetic or OpenGL nearly as much as integer arithmetic and flow control?
i am just happy to finally have a hardware integer large enough to assign a unique number to every possible state of a rubik’s cube. (although I need to check this to make sure)
-hugh
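That check is easy to run. Using the standard count of reachable cube positions (corner and edge permutations and orientations, with linked parities), the number turns out to need 66 bits, so it does not quite fit in a 64-bit integer after all:

```python
from math import factorial

# Standard count of reachable Rubik's Cube states:
# 8! corner permutations * 3^7 corner orientations
# * 12! edge permutations * 2^11 edge orientations, / 2 for linked parity.
states = factorial(8) * 3**7 * factorial(12) * 2**11 // 2

assert states == 43_252_003_274_489_856_000
assert states > 2**64            # too big for an unsigned 64-bit integer
assert states.bit_length() == 66 # 66 bits are needed
```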
“There is no real, “official” definition of what 64-bit is.”
Well, yes there is, as a matter of fact: the width of the integer datapath, which allows for real/native addresses of up to XX bits, where XX is the “bitness” of the machine; in this case XX=64. This is what the industry has always used to define the architecture width. As usual, some vendors have tried to get around that definition by using the “fetch” width or data/instruction bus width to define the width of their machines. Case in point: Intel, which started to brand the i860 as a 64-bit machine, when in actuality it was a 32-bit machine with 64-bit busses.
Now to the original poster:
The 620 was indeed a 64-bit machine, but that does not make all PPCs built after it 64-bit. Many PPCs are geared towards embedded markets and are 32-bit. The 603/604/G3/G4 are all 32-bit machines, and they all came after the 620. The G3 and G4 had 64-bit local buses; the G4 can actually have a 128-bit local bus. But that does not make them 64-bit; it just means that they can fetch 2 instructions, or 2 pieces of integer data, or 1 float in 1 clock cycle. The 128 bits is the “width” of the Velocity Engine, which means that you can get a “physical” chunk of 128 bits, but “logically” it is seen as 16 independent chunks of 8 bits, or 8 chunks of 16 bits, etc., and the unit operates on them using SIMD operations.
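The physical-versus-logical distinction can be illustrated in a few lines; this just reinterprets the same 16 bytes different ways, it is not AltiVec code:

```python
import struct

# One "physical" 128-bit chunk is 16 bytes; the SIMD unit chooses the slicing.
raw = bytes(range(16))
assert len(struct.unpack("<16B", raw)) == 16  # 16 lanes of 8 bits
assert len(struct.unpack("<8H", raw)) == 8    # 8 lanes of 16 bits
assert len(struct.unpack("<4I", raw)) == 4    # 4 lanes of 32 bits

# A SIMD add operates on every lane at once, with no carry between lanes:
a = struct.unpack("<16B", bytes([250] * 16))
b = struct.unpack("<16B", bytes([10] * 16))
result = tuple((x + y) & 0xFF for x, y in zip(a, b))
assert result == (4,) * 16                    # 250 + 10 wraps to 4 per 8-bit lane
```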
Thanks for all the explanations… this has clarified things quite a bit. I’m aware that just because the 620 was 64-bit doesn’t mean all that followed were; I was just kinda certain that the G4 was… (not the G3). I’m also aware that the bus size doesn’t necessarily indicate the CPU’s “width” — however, I was not aware specifically of how the Velocity Engine handled its 128-bit chunks. What threw me for the most part was the idea that the Velocity Engine (according to everything I had read) could process 128-bit chunks. Thanks for all the info. I still remain, however, unimpressed with the idea of “64-bit” computing, as it has been around for so long.
Well, the end of this article points out who the real winner in the 64-bit race is: IBM. If AMD’s Opteron/Hammer/Athlon64/whatever takes off and becomes the dominant chip, IBM is sitting sweet, at least for a while, because they are the ones fabbing the chip. If Apple’s G5 turns Apples into the MUST HAVE machines and everyone dumps their PCs for them, IBM is sitting pretty for the same reason. If Apple continues to do business like they are doing now, maybe a little better, and AMD keeps the market share they have now, IBM will be sitting pretty because it’s fabbing BOTH sides. They don’t have to care who wins between the two chips; they’ll win either way.
Intel is the wild card here. If the Itanium 2 becomes a major seller (I know, stop laughing and just keep reading) then IBM doesn’t get great money (because they’re not fabbing Itaniums, AFAIK).
Of course, whatever happens, the consumer will likely be better off. We have two great chips bringing 64-bit computing to us, and then we have Intel, who will probably use AMD’s architecture for their desktop 64-bit chips (IMHO, at least eventually), so consumers will benefit.
PS, Tom’s has an interview with an AMD guy and got to play with a Mobile Athlon 64 too! Check it out at http://www.tomshardware.com/game/200306281/index.html
Question: can you use all 8192 MB of RAM using 32-bit apps?
Yes and no. Each application can have a max of 4096 MB of RAM. Mac OS 10.3 will address the full 8192 MB and be able to have multiple 32-bit applications in memory with maximum memory allocated, i.e. you could have two 32-bit apps in memory, with 4096 MB of memory allocated to each.
I’m just hoping that 10.2.7 will also support this; otherwise, installing more than 4 GB of RAM will be a waste until 10.3 comes out later.
In answer to the question of “will 10.3 be fully 64-bit optimized?”: it should be! The instruction set is the same between 32- and 64-bit modes. The only change is the register size. It seems that IBM uses 32-bit-sized instructions on their 64-bit processors. Thus the only difference will be how much memory is used for pointers, and this will only affect programs compiled for 64-bit mode.
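The pointer-size effect can be made concrete with a hypothetical node layout (field counts invented for illustration; real compilers also add alignment padding):

```python
# A node with two pointer fields and two 4-byte integers, sized for each
# mode. Only the pointer fields change width between 32- and 64-bit builds.
def node_bytes(ptr_size, n_ptrs=2, n_ints=2, int_size=4):
    return n_ptrs * ptr_size + n_ints * int_size

assert node_bytes(4) == 16  # compiled 32-bit: 4-byte pointers
assert node_bytes(8) == 24  # compiled 64-bit: integer payload unchanged
```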
Can you imagine what this will do for speech recognition?! Apple has a new convert here.
Huh? Where does this information come from? AMD and IBM are developing some technologies together, but AFAIK nothing more (yet). The only thing I found about this is a *rumour* from last May (http://www.theregister.co.uk/content/3/30652.html ).
OTOH look what the AMD homepage says:
Manufactured In Fab 30, Dresden Germany
(http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_87… )
IBM is NOT involved in producing AMD64 CPUs. Thus IBM is not “The Real Winner in 64 bits”.
SUNNYVALE, CALIF, and EAST FISHKILL, N.Y. — January 8, 2003 — AMD and IBM today announced the two companies have entered into an agreement to jointly develop chip-making technologies for use in future high-performance products.
The new processes, developed by AMD and IBM, will be aimed at improving microprocessor performance and reducing power consumption, and will be based on advanced structures and materials such as high-speed silicon-on-insulator (SOI) transistors, copper interconnects and improved “low-k dielectric” insulation.
The agreement includes collaboration on 65 and 45nm (nanometer; a billionth of a meter) technologies to be implemented on 300mm (millimeter) silicon wafers.
…
http://www.neoseeker.com/news/articles/headlines/Hardware/2237/
And there’s plenty of additional news, not just rumors, on AMD’s engagement with IBM.
AMD has had big problems with their 130-nanometer process, and it’s a crapshoot to predict whether 90 nanometers will save AMD or kill them.
So AMD has turned to IBM to help them, not just with the far-out stuff, but also with small but important parts of making sure AMD doesn’t fuck up the move to 90nm.
I know that. I’ve written “AMD and IBM are developing some technologies together, but AFAIK nothing more (yet).”
IBM and AMD are developing technologies together, but IBM is not producing the chips for AMD. That’s a significant difference.
Well, we’re on the same page.
IBM is not *yet* producing chips for AMD. There have been some news articles about AMD pulling out of their long term partnership with UMC and switching to IBM (and AMD’s own fabs, of course). Dresden has been one of AMD’s best fabs, so one would expect it would be upgraded to 90nm, 65nm, etc.
I really do hope AMD improves their fabrication processes so they can move Opteron prices down and processor frequencies up. I cannot see why they are doing a 256K-cache CPU (Athlon 64) in this day and age, especially with only a single-channel DDR memory controller. Perhaps it is to get more MHz out of the processor; I don’t know.
AMD made a brilliant move to NUMA with Opteron and I hope they get the market rewards for having produced such an intelligent design.
The important thing is how fast can this run DOS? — my future OS. Will it be 8-bit compatible? Can I run 8 instances of DOS (64-bit/8-bit) in virtual machines off Linux? I could get the raw processing power of DOS with the multitasking capabilities of UNIX.
Agreed, which is what some of the early rumours said would be happening. The north bridge was theoretically going to contain 3 HyperTransport controllers and a dual channel DDR400 memory controller and each CPU would have its own north bridge.
The main way OS X will benefit from a 64-bit CPU is in its enforced systemwide prebinding. A 64-bit address space will be quite beneficial and less complicated than the existing implementation designed around a 32-bit address space.
Comparing RISC vs. CISC processors doesn’t really prove anything. They are completely different architectures. It’s like comparing an apple and an orange; they aren’t the same.
“Its like comparing an apple and an orange, they aren’t the same.”
Not really; inside, the Opteron is a RISC machine. All it does is decode the CISC instructions into RISC instruction sequences and execute them. So internally both are RISC; think of the current x86 machines from AMD and Intel as RISC machines with “instruction memory” compression. Granted, the whole mem-to-mem ops thing throws that simplification out the window, but again, internally both machines are pretty RISCy/superscalar.
And since both designs have the same market target the comparison is fair… IMHO
Comparing RISC vs. CISC processors doesn’t really prove anything. They are completely different architectures. It’s like comparing an apple and an orange; they aren’t the same.
I’ve never really understood this argument. Of course I can compare apples and oranges. They are both fruit, they both have a high sugar content and both are nummy in my tummy.
In all seriousness, the PPC is not a RISC chip. The second RISC chips added branch prediction, speculative loading, and other features that have the CPU do things the program didn’t ask it to do, they stopped being RISC. In fact, EPIC is a lot closer to the spirit of RISC than current “RISC” chips.
nummy in my tummy? you just said nummy in my tummy….
no comment necessary
I am sorry, but I fail to see why adding branch prediction, speculative loading, etc. somehow violates the principles of RISC. I.e., RISC != control freak…
And then you go on with the whole EPIC thing. Well, Sherlock, it also has some fancy branch prediction, renaming, etc., etc. on top of VLIW. So that somehow is closer to the spirit of RISC?
What gives?
Well, that sounds right. There is no real 100% 64-bit operating system. An operating system will use the 64-bit extensions when and if required. IIRC, Itanium is the only processor hell-bent on making the whole damn software lineup 64-bit.
Btw, from what I have heard, Solaris will be optimised for Opteron and thus be a 64-bit operating system. Let’s see what happens then in terms of enterprise servers etc.
Itanium: bloody expensive and complete overkill as a workstation. You can point to Intel’s chant of “x86 for desktops and Itanium for high end”, but ultimately there has not been ONE company out there who has maintained two competing architectures. Ultimately the x86 will be killed off, and once it is, what is its replacement? An overpriced, underperforming machine?
Sun has proven that VLIW is a bitch of a thing to make; heck, they spent years making MAJC only to find that people still prefer hardware based on their UltraSPARC lineup. Doesn’t that tell you something?
AMD64/Opteron: nice. However, how long-term is the architecture? Are they willing to perform a bit of slashing and burning later on to remove the really crusty parts of the chip? Are they finally going to replace the BIOS with something not so crappy, such as OpenBoot?
PowerPC/POWER4/5/6/etc.: Personally, I see IBM and Apple’s relationship strengthening, and heck, IBM might even start offering IBM Global Services for companies wishing to deploy Macs in an enterprise. As for the architecture itself, after reading the holy bible according to IBM, they seem to have a very long-term strategy in place rather than the ad-hoc crap Motorola spews out at each microprocessor expo.
I agree with most of his comments, but the points seemed a little scattered. I never really got the main point of the article.. did I miss something?
Can you imagine what this will do for speech recognition?!
Yes, I can: not much.
Can you elaborate?
seems a bit like a pain in the as** if people can only work in a true 64-bit environment with BSD or Linux on that machine, but not with OS X in the near future…
The development of this nice piece of hardware comes a bit early.
Apple has bunches of work to do on the platform now.
Optimizing the OS for this platform (32-bit) also brings benefits to future versions of OS X.
It’s effective PR. But nothing more.
Correction: the new G5 is a bigger technical update than all of the others in the Jobs-era Apple lineup.
De facto, the 64-bit architecture is more about attracting Linux and BSD folks to buy Apple hardware than about OS X users getting any benefit from this platform (man, their filesystem is not even journaled, nor a 64-bit one; that has nothing to do with a 64-bit OS, btw… just a reminder).
Actually, OS X’s filesystem is journaled.
This article, and the guy who wrote it, is too confusing. Does he know what he’s talking about? Or is he just quoting a lot of stuff he doesn’t really understand himself? I’m surely not an expert myself. I just like fast chips.
There will be a 64-bit address space available to OS X’s prebinding implementation. This alone will help dramatically.
The VFS ABI is 64-bit, just like in FreeBSD. off_t is 64-bit. There is nothing preventing OS X from having native support for extremely large files.
The kernel will be 64-bit addressable, and for calls like mmap() which can consume enormous portions of a 32-bit address space this is wonderful.
The whole system will be built around a new 64-bit ABI which will hopefully not suffer the same performance issues as the existing 32-bit ABI.
What other features of a 64-bit kernel were you looking for?
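The 64-bit off_t point is easy to demonstrate; this sketch writes one byte past the entire 32-bit offset range into a sparse temporary file (assuming a 64-bit host OS, so the hole costs almost no disk):

```python
import os
import tempfile

# A 64-bit file offset can describe positions no 32-bit off_t could hold.
with tempfile.NamedTemporaryFile() as f:
    f.seek(2**32 + 5)   # seek past the whole 32-bit range
    f.write(b"!")       # the file is sparse up to this byte
    f.flush()
    size = os.fstat(f.fileno()).st_size

assert size == 2**32 + 6   # file size exceeds 4 GiB
```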
Well, speech understanding systems have been around for at least 30 years now. In the early days of Hearsay, Harpy, etc., it was estimated that at least 1 MIPS would be needed to do all the DSP and non-linear dynamic time warping, etc. Now, that was back on a 10 MIPS DEC PDP-10, a nice little 36-bit mainframe I used to like quite a bit.
If the speech problem hasn’t been licked yet, it isn’t due to lack of computing power or data bus widths. Not one of the texts I have read suggests 64 bits would help.
Actually, I would rather see some retinal-scan work, because my mousy hand is RSI-knackered. If BG is working on that, it will be a good scoring point for Windows.
As for the G5 and Opteron, I look forward to choosing between them and seeing more benchies. I can certainly use the address space too, on occasion.
JJ