With the backdrop of the OSX/x86 brouhaha, a story at Low End Mac reminds us of the secret Novell/Apple project to port the Mac OS to IBM-compatible PCs. The team of engineers responsible for the project successfully ported Mac OS, QuickTime, and portions of QuickDraw GX before Apple canned the project and reallocated resources to the Power Macintosh.
I have a copy of NeXTStep (v3) that Apple sent me when I bought a NeXT at a university auction. The CD is for NeXT hardware (Motorola 68k) and Intel. In reality, Mac OS X is an extension of NeXTStep code, not Mac OS <=9 code. Remember, Steve Jobs started the NeXT company when the Macintosh was still in a fairly early stage of development.
not really that surprising. i’m sure apple was aware of the overwhelming popularity of x86 and had a plan B for it.
In college in ’97 I bought a copy of OPENSTEP 4.2 and installed it on my PC. It was great, but never much use since I couldn’t get the modem working. It was my first exposure to the Glory that is Interface Builder. I later got a copy (from a coworker) of Apple Rhapsody DR2 and installed it on my computer, and had fun playing around with it, but again it wasn’t much use without being able to get my modem working. It WAS cool to see an Apple desktop on a PC though 🙂
Darwin (Mac OS X’s core) also ran on x86 before Apple’s switch to Intel
This makes you wonder how different things could have turned out because of one “small” decision.
Yeah, considering that the IIe already had an almost monopoly in schools in ’83.
They gave up the school market and it was all she wrote. Jobs tried to get back in the university market with NeXT, but they were too expensive and by the time they went to x86 and ditched the hardware it was too late.
I remember installing and running an alpha (or beta) version of Novell Netware in the 90s on a Mac server (I can’t remember if it was a 68040 or a PPC Mac). This was at Apple. We had a version that Novell and Apple had ported to the Mac, and Apple was considering selling Netware-on-Mac as an alternative to AppleShare. (I think this was primarily to sell into accounts that demanded Netware on the server. Of course, this was when Netware was still the dominant server OS.) It never went anywhere, though.
To you guys that say you’ve actually OWNED previous versions of Mac OS/x86: I find that remarkable. I can’t definitively say that’s impossible, but I find it very hard to believe, given the virtual non-existence of any copies ANYWHERE now.
Sure, it was done, but I don’t remember any copies ever being publicly available, nor can any copies be found anywhere now, via Usenet, P2P, eBay, you name it. I can’t preclude being proven wrong but, personally, I won’t believe it till I see it somewhere.
Nobody in the above comments is saying they ran previous versions of Mac OS on x86. They say they ran Darwin on x86 (source/binaries have been available for a while now), or NeXTSTEP/OPENSTEP/Rhapsody (ancestors of Mac OS X) on x86.
The problem with Apple is that they have made some big mistakes in the past, and it is a shame because they have a good team, a good marketing machine, they know how to make a show from a little thing. I really wish the best for Apple, not because they are “cool” or whatever, but because I want to see diversity in the OS industry.
When I said Mac OS, obviously I didn’t mean OS X. Everyone knows about Rhapsody, and that’s what I was talking about.
zephc & Mike Frager both said they obtained copies of it from a 3rd party. IOW, they weren’t insiders at the time the work was going on, and they got a copy (probably on CD) that was released (if only internally) at some point as a test version. The thing is, to the best of my knowledge, there are exactly ZERO of any of those copies (or copies of copies) in circulation now. The odds of such a rare, desirable thing having been released in any form and not being available ANYWHERE now are slim to none (especially given how easy something like an installable copy of an OS is to copy). I’m sorry, but I just don’t believe that they have a copy or ever did. If they do, I would certainly be willing to pay money for it. If anyone cares to point out something I’m missing, please do.
I bet there are more people using C64s than Macs.
And I would wager you’re wrong…
xVariable wrote:
> I’m sorry, but I just don’t believe that they have a copy or ever did.
Why do you say this?
OpenStep was not an internal-only release, but it was rare to find, and it was mostly used by universities & colleges
correct me if I am wrong
perhaps you should read the comments again. nobody has stated they were running mac os on x86 hardware. see P’s comment.
There were *many* copies of OpenStep running on x86 PCs in the physics department I was in. It was easily bought at the time (circa ’98) and there are still people passing around copies of it to maintain their old NeXTy code.
Check out various torrent sites and you will find that Rhapsody is more than available at this point in time, especially with all the excitement over the coming inevitable leak of OS X Developer Transition DVD for x86.
Well, I’ll be damned if I can find it (alright, so, truth be told, I’ve only perfunctorily glanced at like 2 or 3 different sources ;-P).
I’ll keep hunting, no doubt it’s only a matter of looking hard enough.
@xVariable
If you want a desktop like OpenStep/NeXTStep right away, why don’t you try Linux with AfterStep or GNUstep as the desktop?
I’ve found this on DustyComputing:
http://www.dusty-computing.com/incoming/My Gecko running NEXTSTEP.jpg
An HP machine that runs NeXTSTEP.
the correct link to the image is:
http://www.dusty-computing.com/incoming/My%20Gecko%20runnin… NEXTSTEP.jpg
I hope it’s obvious that the *Step L&F is NOT what I’m interested in (not that there’s anything wrong with that, if that’s your bag).
http://www.openstep.se/hardware/canon/canon_objectstation_4.1.pdf
I don’t know how things work in your part of the world, but around here we don’t call people liars with such a straight face.
You may think something is unlikely, rare, whatever, but there’s no reason to call people liars.
Apple (compatible) hardware was generally much better than x86 hardware in 1993 – SCSI hard drives, ADB ports, NuBus, etc.
I had a 1991 vintage Mac RasterOps flat screen 21″ monitor and 8 meg video card when 13″ monitors and 256 colours were the norm in the x86 world.
Most x86 machines in 1993 were complete rubbish.
For the registered developers. I remember my dad installing it on my Pentium 120. I was lucky: my hardware was supported (a Matrox graphics card and a Creative sound card).
It was a weird experience. It booted in text mode, and after that you got a mix of the OS 8 interface with big parts of the OpenStep UI on top (like Preferences.app and the Finder).
I think it was the DR1. I also got the Apple Yellow Box dev kit for NT 4.0 and Win 95.
When I was a sysadmin at Warner Bros Feature Animation (London), all of the Dell OptiPlexes and DEC workstations (dual Pentium Pros) that we had ran OpenStep – this also applied to the main studio in Burbank, LA. They were used for a specialist animation suite (the name of which I can’t remember), which at the time was only available on OpenStep and IRIX. We had no PCs running Windows; the Macs were running System 7.6 and were used for word processing and general office stuff.
Anon
what was wrong with 68k to make apple switch to ppc?
motorola was on the verge of introducing the 68060, and given what intel/amd have done with x86, what was the reason apple and moto decided to discontinue the 68k processor line?
i am sure that 68k could have been improved to stay competitive with powerpc and/or x86.
it has a larger register set than x86 and it is less kludgy than x86.
Motorola was too slow with the ramp up of the m68k line, and the first PowerPCs blew everything (including the x86s at the time) out of the water. They were simply, at that time, the best thing available compared to anything else out there.
There’s a nice screenshot guided tour of Rhapsody DR2 here:
http://www.pegasus3d.com/rhapsody/rhapsody_screens.html
“Motorola was too slow with the ramp up of the m68k line, and the first PowerPCs blew everything (including the x86s at the time) out of the water. They were simply, at that time, the best thing available compared to anything else out there.”
I remember skimming through some old computer magazines not too long ago and coming across some articles/benchmarks talking about the earliest PPC Macs running 68k code slower than the fastest 68k Macs at the time. I should dig those up again.
Back in the late 80’s early 90’s it was possible to run Mac source code on a PC.
My friend ported, essentially, the Mac Toolbox from Inside Mac into a C library that would run on PCs so that his company could port their Mac applications to PCs running DOS.
You have to appreciate that this was before Windows, back in the day of DOS extenders, etc.
But he wrote pretty much the entire thing from scratch — Quickdraw, Window manager, Fonts, Resources, Events — the whole kit and kaboodle.
The final applications looked more like OpenView, however, as they didn’t want to clone the Apple L&F, but they wanted their source to port.
i also recall that the 68060 was competitive with the powerpc601 and outperformed the 486 and pentium
Yes, but the 68060 was damn expensive, very few units were produced, and the more interesting ColdFire parts weren’t in production yet. I had a 68060 in my computer; it was cool, but it wasn’t clearly the best performer.
@frub
What you said is really wrong:
1) Mac OS X’s core is open (OpenDarwin)
2) An OS may be cute and good even if it is proprietary
(but Windows is an exception…)
The 68060 was the fastest INTEGER processor out at that time. The 50MHz 060 beat the 90MHz Pentium, and the 66 MHZ PPC601 hands down. The 601 had better floating point performance, while the 060 and P5 did about the same in FP.
The problem was that Motorola wouldn’t commit to producing the 060 in large enough quantities for Apple. They also wouldn’t commit to improving the 060. As a matter of fact, they didn’t. The 68060 was the last of the 68K family and never surpassed 66MHz.
Motorola eventually stripped down their CPU32 68K based core to make the ColdFire processor for the embedded market, but clearly the focus was now on PPC. This allowed Motorola to split development costs with IBM.
approximately two and a half seconds.
http://darwinsource.opendarwin.org/10.4.1/
” When are you all Mac users going to get it? It is not about how intuitive or cute Mac is. It’s about how open in terms of freedom. An OS that is as closed as OSX will never be an option for someone who enjoys the freedom of Linux. It’s not about GNU running on top of OSX. Rather it’s GNU running on top of an open OS. When was the last time you looked at OSX’s code and were able to modify it?”
Well, I sure hope you were just a troll; if not, you’re a complete idiot.
People (real users) couldn’t give a rat’s @ss about being open source. People buy Mac OS or Windows because they don’t need to mess with the source. They want something that is easy to use and looks nice. You’re right, it may never be an option for someone like you, but the good news for MS and Apple is that there are maybe a few thousand people in the world who feel like you. And no one cares what you think.
Honestly, I don’t give a damn about source code access. If I must know, I will browse Darwin’s source tree. I can build whatever GNU solution I want on top of OS X, and that’s all the GNU I need. I want that pretty GUI, and suffice it to say, so does a huge percentage of desktop users. Take your chunky-looking GNU crap and stick it, because poorly written GUIs complying with poorly composed standards just don’t cut it.
“What you said is really wrong:
1) Mac OS X’s core is open (OpenDarwin)
2) An OS may be cute and good even if it is proprietary
(but Windows is an exception…)”
It is not possible to modify OSX’s internal code (Aqua, Darwin) and rebuild the system unless you want to get sued.
It is fine if you like OSX and don’t need to modify it, understand how it works, or share it (hint hint).
My point is, OSX and Linux are two different beasts, both technically and politically. Mac aficionados are spreading FUD by assuming that Linux is just an alternative for those who don’t like Windows or Mac/PPC. Linux is more than an OS, believe me.
If people switch away from Linux due to Apple’s CPU swing, it’s only for the better. I think it’s time that the free community gets its bad seeds, free riders with no open source spirit, out of the garden.
peace out.
Actually, you’re both talking about different things. Listen to what frub said:
“An OS that is as closed as OSX will never be an option for someone who enjoys the freedom of Linux.”
You wrote:
“Well, I sure hope you were just a troll; if not, you’re a complete idiot. People (real users) couldn’t give a rat’s @ss about being open source.”
Most “real users” (aka Joe/Jane Users) don’t care about open source, but then again, few Joe/Jane Users currently use Linux. Unix admins and hobbyists use Linux. Cost-conscious countries or cities use Linux. Countries that want to guarantee that their infrastructure isn’t controlled by a foreign company also use Linux. But Joe/Jane User doesn’t care.
What I find funny about this whole argument is that YellowDog and a few other distros were doing quite well running on PPC. Techies switched away from Mac OS X to run Linux. So why on earth would it be any different on x86, where Linux has a strong foothold?
Assuming that Apple can survive the transition (sales of PPC Macs would likely fall like a rock until the new Intel models are out), Apple should do quite well with Apple-branded PCs. They’d gain more distribution space (especially if they license ApplePC creation to companies like Dell), and likely be able to sell cheaper machines. But they are much more a threat to Windows than they are to Linux.
One thing that is important to remember is that this isn’t really just about having x86 as a fallback. You have to consider what the late 1980s and early 1990s were like. At the time, the future looked very different. People thought it entirely likely that MIPS, Alpha, PPC, etc., would all survive, and that the market would embrace a plurality of architectures. The software folks heavily emphasised portable OSs supporting multiple “personalities” as a way of being competitive in this hardware environment. You can see this in the design of WinNT, which at one time ran on all of these architectures.
So Apple was indeed worried about being tied to 68k, but not because they thought that x86 would continue its dominance. They were worried because they didn’t want to be stuck with a single-platform OS when everybody else could run on many architectures. Porting to x86 was to be just the first step to making the MacOS codebase portable.
rhapsody dr2 on ebay…
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&category=51046&item=5…
so while powerpc will survive in the embedded and game console, isn’t DESKTOP/workstation powerpc dead after apple switches to x86?
that is for the PPC version, not x86…
Does it matter? The desktop is x86, it has been for a decade, and it will remain that way for the foreseeable future. For PPC, the money is in the high end. It is weird. Back when PPC came out, nobody thought that in 2005 x86 would be more dominant than ever.
Agreed. On the hardware side, the war is over, and Intel won. Short of a catastrophe, there will be no further revolutions. Evolution is the name of the game from this point on, with people consolidating on the x86 arch and extending it.
Does it matter?
will intel and amd continue to improve x86 at break-neck pace now that powerpc threat is eliminated?
“If people switch away from Linux due to Apple’s CPU swing, it’s only for the better. I think it’s time that the free community gets its bad seeds, free riders with no open source spirit, out of the garden.”
I have absolutely no doubt you’re just a troll, but there really are people who think like this, so it’s worth talking about – a little bit, anyway.
Let’s think about users for a minute. Not hardcore geeked out PC enthusiasts that read Slashdot, but people like your Mom or your boss. None of them care about Microsoft’s monopolistic practices, and none of them care about open source or “free as in freedom” or any of that. To the “vast unwashed masses,” 99.99% of current and potential computer users, their computer is a tool and nothing more. They have a lot of things to worry about in their lives, and computer issues are at the bottom of their list. It’s just not important to them, except when it breaks.
It’s not unlike my relationship with my car, when I had one. How the engine worked or the business practices of the company that built it might be interesting, but as a non-gearhead, really didn’t matter a damn to me. The car was a tool to get from point A to point B. If somebody wants to sell me a new car, they’d be wise to concentrate – and talk to me about – things I actually care about: price, mpg, warranty, ease of maintenance, and so on. Because I really don’t give a damn about your dual-cam this or dual-cam that, nor your ‘Ford sucks’ or ‘Chevy rules.’
Ford and Chevy still need drivers like me, because they need to sell more cars than those few gearheads can afford to buy in order to stay in business. And Linux still needs the “bad seed” users you troll about, for a bunch of reasons, not the least of which is the network effect. The more people that use it, the bigger the market is, the more useful it becomes to each individual. Linux on a million servers is more useful than Linux on a thousand. Linux on a million desktops is more useful than Linux on a thousand. If you don’t understand how this follows, you don’t understand network effects.
I saw a sign at the supermarket recently, intended not for me but the employees, that read something like this: “Customers aren’t a distraction from what you are doing, customers are the only reason you’re doing what you’re doing. No customers, no job.”
It’s the same with Linux. No users, no Linux. More users, better Linux. We need those network effects. We need that ecosystem to further develop if we want Linux to further develop.
So much Linux development is focused on infrastructure, and development tools, and meeting the needs of a few geeks here and there. I get it – scratch an itch and all that. But there needs to be better and more focus on the users; not users like Eugenia or me or Joe Random OSNews Geek who thinks they’re just a “user” but is actually a rare, highly specialized user with very particular, geekish needs and wants. I’m talking about Mom, or your boss. I’m talking about the “bad seeds,” with no “open source spirit” – because they neither know nor care what the hell open source is, and they shouldn’t have to. Those of us who care about the future of this OS need to focus on their needs, and thank the developers and companies who do.
“will intel and amd continue to improve x86 at break-neck pace now that powerpc threat is eliminated?”
First off, the PowerPC “threat” isn’t eliminated – the Mac represented a tiny fraction of Power platform sales. So tiny, it turned out they didn’t have much pull with IBM. Even if Apple had stayed, that fraction was only going to get smaller – with the Cell (PS3) and X-Box both just around the bend…
Secondly, in case you hadn’t noticed, AMD and Intel compete against each other. And little upstart AMD recently gave Intel a huge ass beating with AMD64 (x86-64), with a little help from Microsoft. By refusing to port Windows to Itanium, MS dealt a humiliating multi-billion dollar death blow to that technology, forcing Intel to follow AMD’s lead for the first time… and all the insiders say this had something to do with both the X-Box and the Macintel deal.
In short: no. The competition is just getting started!
“will intel and amd continue to improve x86 at break-neck pace now that powerpc threat is eliminated?”
What? AMD and Intel have not been competing against PPC, at least not in any real way, and certainly not on the desktop.
AMD and Intel care very little about PPC. For them to need to compete against it, PPC would have had to be a threat to them if they didn’t perform, but since PPC desktops were Macs, they were safe, since people had to be convinced to buy a Mac first. Few people buy a Mac because it’s PPC; it’s a nice thing to have in it, but if that’s your reason, you’re a bit freaky. I really wish Apple was staying PPC, but I understand how it is.
Also, AMD vs. Intel has pretty well chilled out the last few years. There is no target these days: 3 GHz or 4 isn’t much of a target, 10 GHz is a target and that’s way off (maybe never, if they go horizontal into multiple cores instead of vertical into clock speed). Neither company has been doing much at a rapid pace in a while. AMD has been bringing out great chips for a while now with the AMD 64s and dual cores, and not much has come of it.
If anything, Apple going Intel could change things. AMD will try to step up and compete more than ever. They will try to get Apple to use their chips, which will mean they must get ahead in performance and infrastructure. Intel will step up more, because now they directly supply someone who is shipping a turnkey system and expects them to deliver, or they will end up like IBM, with their bums to the curb in a few years. They can’t just follow a path compatible with MS now.
Things have a different face now, because you will have two different OSes on the same architecture, but one works only on one brand of chips. Things may never be PPC vs. x86 again, but people can still ponder “what ifs” about OS X on AMD. AMD will be trying to ensure a Windows box outpaces an Intel Mac, and now MS vs. Apple is on a bit more level playing field.
Openstep has been run on Sun machines as well as PCs. I would imagine that, given the attention once paid to NeXTstep/Openstep, there could be a few other related projects that never hit the streets or the headlines but still exist, despite the fact that xVariable never had his hot little hands on them. With x86 being the base of Apple’s fiercest competition, they’d have been complete fools not to have made some effort to explore the possibilities of Mac OS, or a derivative of it, running there.
IRIX and a couple of other *nix variants are the exceptions to the norm here, because little was done to port them to other machines. Even there I may be wrong, due to NDAs and black projects that little has ever surfaced about.
“the first PowerPCs blew everything (including the x86s at the time) out of the water. They were simply, at that time, the best thing available compared to anything else out there.”
That’s a load of old Apple marketing rubbish.
The PPC601 in the first PowerMacs was a quick hack that didn’t even fully implement the PowerPC spec.
While it was an out-of-order design, it only had a small instruction window and a single integer execution unit. With its RISC instruction set, that integer unit also had to do address calculations.
The 68060 on the other hand, while being an in-order design, had two integer pipelines and dedicated address calculation units. With that it easily beat the PPC601 on integer performance, which is of course what really counts on the desktop, back then anyway.
That combined with the fact that many programs and even significant parts of the operating system had to be emulated on the PowerMacs meant that the fastest “Macs” around were Amigas with a 68060 card and the ShapeShifter virtualiser.
The only area where the 68060 fell behind was floating-point performance, because its FPU wasn’t fully pipelined and thus could finish an instruction only every three cycles. But one has to suspect that that was some kind of political compromise, so that the PPC601 would be better at something.
“The problem was that Motorola wouldn’t commit to producing the 060 in large enough quantities for Apple.”
Nah, they would have committed alright if Apple hadn’t already decided to switch to Power. IBM had approached Apple, and the two of them “invited” Motorola to join them in the PowerPC camp.
The real problem was that Motorola was often late to deliver 68k improvements, partly because they’d wasted a lot of effort with their me-too 88000 RISC design.
“Yes, but the 68060 was damn expensive, very few units were produced”
It was expensive precisely because so few units were produced after Apple had gone elsewhere.
In terms of transistors the 68060 actually was slightly cheaper than the 601 (2.5 million vs 2.8 million).
There was no reason to drop the 68k architecture from a technical point of view. It was all about the RISC hype and Apple/Motorola politics.
If I recall, Apple/IBM/Motorola were planning to defeat MS by developing OpenDoc around this time.
However, I think OpenDoc died because of its outrageous (for the time) 32 MB memory requirement and lack of enthusiasm from Motorola and IBM.
“will intel and amd continue to improve x86 at break-neck pace now that powerpc threat is eliminated?”
That’s like asking if ATI and NVIDIA will continue to improve GPUs now that the S3 threat is eliminated… In the desktop space, PowerPC and x86 were never really competing. I doubt Intel and AMD engineers stayed awake at night worrying about what chip would power the next iMac. So, in the desktop space, of course Intel and AMD will continue to improve x86, for the same reason they’ve improved it until now — they’re competing against each other.
Now, in other markets, there will be competition between POWER or PowerPC and x86. x86 has recently moved up to rather high-end machines, so the Opteron and Xeon do compete with the POWER4/POWER5. So with AMD especially, who depends heavily on Opteron sales, you’ll likely see improvements driven by this competition. And of course the real market for PPC is embedded computers, and since both AMD and Intel have embedded product lines (Alchemy and XScale, respectively), you’ll see competition there too.
“will intel and amd continue to improve x86 at break-neck pace now that powerpc threat is eliminated?”
=> Eliminated? Huh?
Do you know what’s powering all three next generation consoles? (Xbox 360/PS3/Revolution).
AMD and Intel are going at each other; PowerPC’s influence on the x86 market is crap all. It’s like PowerPC is seen as the “third wheel” no one cares about. (Not even Apple now.)
I suggested a slow-down in the *desktop* as opposed to the server/embedded market, due to the fact that amd and intel are now a duopoly, so they, like nvidia/ati, have an incentive to slow down innovation in order to maximize profit. i recall that powerpc caused intel to significantly improve x86.
for example, while amd extended amd64 to 16GPR and 16SSE registers, why didn’t they extend it to match powerpc – 32GPR and 32 vector registers? why didn’t they also expand the FPU registers to 16?
i’ve always wondered why the 68060 couldn’t have been used and improved by apple/moto, and the “By nimble” comment is interesting and insightful.
do you think risc is “hype”? funny how many mac boosters were saying how superior risc is to cisc, and i thought the 68k line was good enough.
if you had 100 million transistors and 90nm process technology at your disposal how powerful would 64-bit 68k compare to G5? i have no doubt it would be more powerful and less kludgy than x86-64
while amd extended amd64 to 16GPR and 16SSE registers, why didn’t they extend it to match powerpc – 32GPR and 32 vector registers?
There are a number of reasons:
– Extra registers don’t come for free; they require extra bits in the instruction encoding and thus in the cache (see the rough sketch after this list).
– CISC architectures don’t need as many registers because all instructions can access operands in memory.
– Few algorithms actually benefit from more than 16 registers. Even ARM only has 16 and instead spent the saved bits on instruction predicates in order to avoid branches.
– Originally a large number of registers helped the compiler to avoid false data dependencies and thus pipeline stalls, but these days register renaming does that in hardware, on both x86 and PPC.
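To put a rough number on that first point, here’s a minimal sketch in C (a toy calculation, not tied to any real ISA) of how the register fields in an instruction grow as the register file doubles; every extra bit is paid on every instruction, which is where the cache-footprint cost comes from:

```c
#include <math.h>
#include <stdio.h>

/* Toy illustration, not tied to any real ISA: how many encoding bits each
 * register operand needs as the architectural register file grows. */
int main(void) {
    for (int regs = 8; regs <= 64; regs *= 2) {
        int bits_per_operand = (int)ceil(log2((double)regs));
        /* assume a generic instruction format with two register operands */
        printf("%2d registers -> %d bits per operand, %2d register-field bits per instruction\n",
               regs, bits_per_operand, 2 * bits_per_operand);
    }
    return 0;
}
```

Going from 16 to 32 registers costs one more bit per register operand, and with two or three operands per instruction that adds up across an entire program’s code.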
why didn’t they also expand the FPU registers to 16?
The x87-style FPU is being phased out in favour of SSE2/3. I think Windows64 doesn’t even preserve the FPU registers across task switches.
do you think risc is “hype”?
The original idea of having a fairly stupid but fast processor that moved a lot of functionality into the compiler was a good one. Early RISC processors did deliver comparable performance to traditional designs on significantly smaller transistor budgets.
The logical way to scale up that idea and make use of growing transistor budgets would have been to have more and more of those simple cores on a chip and have the programmer or compiler parallelise the software. (IBM is trying that with Cell now.)
But RISC development went a different way: techniques like superscalar pipelining, register renaming, out-of-order execution, branch prediction and speculative execution were introduced to exploit instruction-level parallelism in hardware.
But those very same techniques could also be applied to CISC architectures. They still needed more complicated instruction decoding stages, but as transistor budgets grew that became less and less significant. On the other hand, the fact that RISC programs require a lot more space has become more significant because CPU speeds have risen much faster than memory speeds.
These trends were evident even back in the early 90s, but for some reason everyone assumed that RISC was superior anyway. That’s why I called it a “hype”.
if you had 100 million transistors and 90nm process technology at your disposal how powerful would 64-bit 68k compare to G5?
The 68k would win because it could fit more code into its caches.
i have no doubt it would be more powerful and less kludgy than x86-64
Yep. 68k instructions had a two-bit field indicating operand size: 00 for 8-bit, 01 for 16-bit and 10 for 32-bit. 11 was reserved and was obviously intended for a 64-bit extension.
The instruction encoding also had reserved space for additional instructions. Floating-point instructions were introduced that way, and vector instructions could have been added cleanly too.
So no need for prefixes and other ugly tricks there.
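As a small sketch of what that clean encoding looks like in practice (a simplification on my part: not every 68k instruction keeps its size field in the same spot, but a large family of them uses bits 7-6 of the first opcode word), decoding that two-bit field is trivial, and the reserved 11 pattern is exactly where a 64-bit size would have slotted in:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: read the two-bit operand-size field that many 68k instructions
 * carry in bits 7-6 of their first 16-bit opcode word. */
static const char *m68k_size(uint16_t opcode_word) {
    switch ((opcode_word >> 6) & 0x3) {
        case 0:  return "00 -> byte (8-bit)";
        case 1:  return "01 -> word (16-bit)";
        case 2:  return "10 -> long (32-bit)";
        default: return "11 -> reserved (the obvious slot for a 64-bit size)";
    }
}

int main(void) {
    /* 0xD041 is ADD.W D1,D0 on a real 68000; its size bits are 01 (word). */
    printf("%s\n", m68k_size(0xD041));
    return 0;
}
```

Compare that with the REX prefix bytes x86-64 had to bolt on to reach 64-bit operands and its extra registers.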
given all said i’m surprised intel designed itanium with what 128 registers?
i’m aware of the code size, wouldn’t a compact 16-bit fixed length RISC architecture like superH be the way to go in terms of compact code size and easier decoding?
seems to me that cell will be as much of a success as itanic – amdahl’s law severely limits gains from parallelizing code.
i take it if you could design a clean-sheet, legacy-free processor for high performance and low power consumption, it would be CISC rather than RISC or VLIW or EPIC?
given all said i’m surprised intel designed itanium with what 128 registers?
The Itanium went way back towards being a ‘proper’ RISC design where the compiler performs instruction parallelisation and scheduling. Its variable-length register windows with automatic spilling are a good idea.
But you do have to wonder what they were thinking when they encoded three instructions in 128 bits, because that means they have to spend any hardware they saved and more on extra caches.
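A back-of-the-envelope illustration of that cost (assuming the standard IA-64 bundle layout of a 5-bit template plus three 41-bit instruction slots):

```c
#include <stdio.h>

/* Back-of-the-envelope only, assuming the standard IA-64 bundle layout:
 * a 128-bit bundle = 5-bit template + three 41-bit instruction slots. */
int main(void) {
    const double bundle_bits = 128.0, insns_per_bundle = 3.0;
    double bits_per_insn = bundle_bits / insns_per_bundle;    /* ~42.7 */
    double vs_risc = (bits_per_insn / 32.0 - 1.0) * 100.0;    /* vs 32-bit RISC */
    printf("IA-64: %.1f bits per instruction, about %.0f%% more than a classic 32-bit RISC\n",
           bits_per_insn, vs_risc);
    return 0;
}
```

Roughly a third more code for the same work, before you even count the NOPs the compiler has to insert when it can’t fill a bundle.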
i’m aware of the code size, wouldn’t a compact 16-bit fixed length RISC architecture like superH be the way to go in terms of compact code size and easier decoding?
Yes, I think things like SuperH or ARM Thumb are very good compromises. Note how they’re quite similar to the 68k in terms of register numbers, and how the 68k also had 16-bit instruction words. The difference being that the load/store designs implement complex addressing with separate instructions, whereas the 68k did it with extension words.
seems to me that cell will be as much of a success as itanic – amdahl’s law severely limits gains from parallelizing code.
Depends on the application, obviously. It should be great for 3D games and scientific number crunching, which can be massively parallelised.
i take it if you could design a clean-sheet, legacy-free processor for high performance and low power consumption, it would be CISC rather than RISC or VLIW or EPIC?
Given that the memory system is the main bottleneck in today’s computers, I’d go for something with a small instruction encoding. I’m not sure whether a compact RISC or a clean CISC like 68k would work out better.
A stack architecture might be worth considering too, because register renaming and out-of-order execution solve the dependency problems associated with that.
i own a sega dreamcast and i think it’s fine.
i read it uses superH, which uses 16-bit instructions but is called a 32-bit processor.
if 16-bit fixed-length risc is more compact than 32-bit fixed-length instructions, then why not go one step further and create an 8-bit fixed-length risc processor?
i think x86-64 can evolve into something clean.