It has been a long time coming, and it might have been better if this had been done a decade ago. But with a big injection of open source spirit from its acquisition of Red Hat, IBM is finally taking the next step and open sourcing the instruction set architecture of its Power family of processors.
Opening up architectures that have fallen out of favour seems to be all the rage these days. Good news, of course, but a tad late.
There are only two reasons for IBM to do this: 1) this is a last-ditch effort to try to keep the Power architecture relevant, or 2) they are about to start phasing out support for the architecture and this provides at least partial justification (if you need a Power chip, just roll your own).
You still see Power chips in communications hardware, some specialty servers, and radiation-hardened versions that can be flown in space. Outside of those scenarios, I can’t remember the last time I saw new hardware with a Power chip.
I kind of see this as a play against x86 and ARM being proprietary while fending off RISC-V. Mainly a play against x86 and ARM, since RISC-V needs more development. x86 has been hit with high-profile bugs lately, and ARM needs more work than POWER does to really be a server ISA, although they don’t really play in the same spaces.
IBM owns Red Hat, and Red Hat is pretty big on POWER. Red Hat famously open sources anything they buy, and Red Hat on POWER could be an interesting sales opportunity for companies wanting a fully open stack.
The wildcard in all of this is MIPS being open sourced. I’m not sure anyone cares about MIPS, but it’s already been where the other RISC chips want to go.
Anyway, POWER has been getting some love lately, and hopefully this increases that love.
Yeah, except SPARC was RISC and open, and … it’s probably ‘more dead’ than POWER at the moment.
SPARC wasn’t open, it was the most closed of them all. At least with Power you could get documentation for the instruction set, for SPARC even that was proprietary and had to be paid for.
Sparc has been open directly from Sun Microsystems since 2005… and as far as I know the instruction set has always been publicly documented. Leon Sparc has been an open Sparc V7/V8 implementation since 1997…
So, you are basically dead wrong… even in the early days there were multiple Sparc vendors and implementations. What used to be expensive was licensing the right to call your implementation Sparc, but that has been extremely cheap for ages.
What are you talking about? SPARC was an IEEE standard by the early 90s.
It has been one of the most open/documented architectures from the get go, it has academic roots so it is not like most features were even “secret,” as there were plenty of publications from the project at Berkeley.
Being an IEEE standard was the issue. For a long time that meant you had to pay to access anything about it, and no freely available documentation was allowed.
Being owned by an industry standard organization is the least free a standard can possibly get from an open source point of view. That is why we invented RFCs instead.
I don’t think you understand what an IEEE standard is, or much about SPARC really. RFCs are a completely tangential concept, and mostly used by a very specific community within the computing field.
SPARC was a pretty open architecture from the get go, it started as an academic project after all. Most of the cost was with licensing issues if you wanted to make your own implementation and be under the SPARC International umbrella. But if you were a programmer, and just wanted SPARC-related documentation, Sun would send you reams of books, in some cases for free (they did when I was in Uni in the 90s). They even open sourced Verilog implementations of early SuperSparc processors, I believe. So you could see the whole thing, not just the ISA.
SPARC was a really bad design. It’s good for low clocks and high memory, but the world passed it by well over a decade ago.
low clocks?
Sparc m8 – 32 cores @ 5GHz
Memory bandwidth – 185GB/s per socket.
Why do you think Sparc is a bad design (_really_ bad as you say)?
Yes, they have legacy like register windows and delay slots, but overall it is fine.
The only thing really wrong with Sparc is register windows and delay slots… other than that, it isn’t called the Scalable Processor ARChitecture for nothing; it is perfectly usable as an ISA for anything from a GPS-enabled MCU all the way up to 5+GHz monster chips.
Sparc without register windows would be remarkably similar to MIPS or RISC-V and every other RISC.
Your memory is probably tainted by the extra-wide and slow CMT machines like Niagara, which wasn’t even out of order and had only one FPU for the whole chip, which hamstrung it on any FPU code… but that was just one implementation. Before and after it, Sparc held its own very well against other architectures, and the M8 is competitive with x86 and Power and has high clock rates.
The register windows, which were basically one of its founding value propositions, ended up being a bad idea for implementing an out-of-order SPARC microarchitecture.
But other than that it was a pretty vanilla RISC architecture.
Yeah, register windows and delay slots. They were a great idea in the 80s, and most of the 90s, to get around limitations of the times, but there’s not really a need for it anymore. The one thing that made SPARC unique and useful is kind of a vestigial tail at this point.
My memories of SPARC are from before Niagara, when Sun was a big deal. Even then, people were saying Sun should drop SPARC and that it had outlived its usefulness.
Niagara was really interesting, and it looks remarkably like what the ARM servers are trying to do today.
It’s impressive what the new SPARC chips can do; I remember seeing the announcement of their release. I’ll never see one in the wild, but the specs are impressive.
Basically, SPARC is not how I would design a modern chip, and out of all the RISC architectures, we could leave that one in the dustbin and not really lose anything.
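For anyone who never touched SPARC, the register-window idea is easy to sketch: each routine sees 8 “in”, 8 “local”, and 8 “out” registers, and a `save` slides the window so the caller’s outs become the callee’s ins, passing arguments without touching memory. A toy Python model (class and method names are mine, not any real simulator API — no window overflow/underflow traps modelled):

```python
class WindowedRegFile:
    """Toy model of SPARC-style register windows.

    Each window has 8 "in", 8 "local", and 8 "out" registers; on a
    `save` the caller's outs become the callee's ins, so arguments
    pass between routines without any memory traffic.
    """

    def __init__(self, n_windows=8):
        # Store 16 registers (ins + locals) per window; the outs are
        # not stored separately because they alias the next window's ins.
        self.regs = [[0] * 16 for _ in range(n_windows)]
        self.n = n_windows
        self.cwp = 0  # current window pointer

    def _slot(self, name):
        kind, idx = name[0], int(name[1:])
        if kind == 'i':   # ins live in the current window
            return self.cwp, idx
        if kind == 'l':   # locals live in the current window
            return self.cwp, 8 + idx
        if kind == 'o':   # outs alias the next window's ins
            return (self.cwp + 1) % self.n, idx
        raise ValueError(name)

    def read(self, name):
        w, s = self._slot(name)
        return self.regs[w][s]

    def write(self, name, val):
        w, s = self._slot(name)
        self.regs[w][s] = val

    def save(self):     # like SPARC `save`: slide the window on call
        self.cwp = (self.cwp + 1) % self.n

    def restore(self):  # like SPARC `restore`: slide back on return
        self.cwp = (self.cwp - 1) % self.n
```

Writing `%o0` before a `save` and then reading `%i0` afterwards yields the same value — that overlap is the whole trick, and also exactly what makes register renaming awkward in an out-of-order core.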
There was an interview with the SPARC creators (Bill Joy?) at the Computer History Museum. The instruction set was designed in a hurry; they hesitated over using MIPS, which was about to be ready. One justification for register windows is that they had no compiler with register allocation (colouring, …) good enough for RISCs at the time. Designing a proper compiler would have added more than a year before they could ship computers. The register stack is more tolerant of dumb compilers.
(It’s the inverse of Itanium: let’s hope a genius compiler makes something out of that mess…)
The register windows also acted as a sort of affordable cache for the stack; early SPARCs were very simple designs with no on-chip caches, and bandwidth was needed for instruction fetch.
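The graph-colouring register allocation mentioned above is simple to sketch, even if production-quality versions weren’t ready in SPARC’s day. A minimal greedy Chaitin-style pass in Python (names are illustrative, and the spill handling is a stub — a real allocator spills a node to memory rather than giving up):

```python
def color_registers(interference, k):
    """Greedy graph-colouring register allocation (Chaitin-style sketch).

    `interference` maps each virtual register to the set of virtuals
    that are live at the same time; `k` is the number of physical
    registers. Returns {virtual: colour in 0..k-1}.
    """
    # Simplify phase: repeatedly remove a node with degree < k.
    # Such a node is trivially colourable, so push it on a stack.
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    while graph:
        node = next((v for v in graph if len(graph[v]) < k), None)
        if node is None:
            # No low-degree node left: a real allocator would pick a
            # victim to spill to the stack and retry.
            raise RuntimeError("would spill")
        stack.append((node, graph.pop(node)))
        for ns in graph.values():
            ns.discard(node)

    # Select phase: pop nodes and give each the lowest colour
    # not used by an already-coloured neighbour.
    colours = {}
    while stack:
        node, neighbours = stack.pop()
        used = {colours[n] for n in neighbours if n in colours}
        colours[node] = next(c for c in range(k) if c not in used)
    return colours
```

With `{'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}}` and `k=2`, `a` gets one colour and `b`/`c` can share the other — three virtual registers squeezed into two physical ones, which is the win register windows let early SPARC compilers skip.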
Too little, too late. This smacks of ‘yeah, not our problem now’.
I mean, it’d be awesome if this meant new meaningful hardware in consumer or corporate hands, but I seriously doubt that’s what will come from this.
Talos already has hardware for consumers, and their work predated this announcement.
https://raptorcs.com/
This is good news, and a long time coming.
I’m seriously taking a look at the Raptor Blackbird kit. FreeBSD 12-CURRENT runs on it, and by the time I get around to it STABLE should, too. It’ll be my first desktop in a long while, too.
I’m looking at them to replace an old Lenovo workstation that’s part of my server fleet, which is, coincidentally, also running FreeBSD. 🙂
I’ve downsized my desktops and moved my work to servers, so I don’t need that much power in a desktop anymore.
I once asked on the old power.org forum about the conditions for having the right to create a PowerPC CPU, pointing out that the official price for the SPARC architecture license is $99. The answer was that IBM still owned the rights, not the “Power” organization, and that one should contact IBM, but they clearly weren’t interested in small open source cores for FPGAs.
IBM used to own a few embedded PowerPC cores (440, 460…) which would be great as open-sourced PowerPCs, but AFAIK they sold everything to AMCC.
I think Freescale makes most embedded PowerPC cores these days, right? NXP owns them now…
I have the impression NXP is slowly moving away from the PowerPC ISA. The communications (QorIQ) CPUs switched to ARM, as did the automotive ones.
Only STM sticks with the PPC ISA for its automotive chips.
So I go for 2): IBM is going to end PPC support.
Dunno about you lot, but I see new Power gear frequently. Work buys it. I have quite a bit of new stuff at home, and then there’s the semi-recent Amiga gear.
Yep, it’s in comms gear. I know some people that’ve gone for the new Power9 Talos/Raptor Computing open-source workstations.
Then there’s the retro crowd with older gear they’re passionate about.
Yep.. RPi’s and ARM are everywhere, but I’m seeing quite a bit of Power about the place. I guess it’s the circles we move in?
Yep, I think all MikroTik routers are either PPC or MIPS too… and they are pretty popular where people need a powerful router. Obviously ARM dominates consumer hardware, though, and some mid-range enterprise.
“A tad late,” but so was IBM’s OS/360. Not to worry, I’m sure today’s zSeries developers don’t care how late OS/360 was.
From a business POV, it’s sad to see that so much R&D went into the Power architectures, only to see the market eventually fizzle. (I take the same view toward Alpha and Itanium.) I think there are some greater reasons for opening an “obsolete” architecture like Power and Sparc.
Why does a “new” product lose market quickly? Well, asking the insider, corporate types (i.e. marketing, R&D, customer support, QA, documentation) will get responses that are restricted/filtered by the corporate and department cultures. After all, nay-sayers tend to get transferred out, or fired.
But product failures do occur, even for the “good ideas.” Eventually, it can be good to step back and ask a few simple questions, focused on “what can we learn from this?”:
1. What challenges of the day did this product attempt to address?
2. Which of those challenges did this product address well? (maybe not perfectly, but we’ll take what we can get)
3. Which of those challenges did this product fail to address?
4. What new challenges did this product introduce?
5. What new/unexpected benefits did this product introduce?
This might seem like basic stuff, and I’m no engineer or designer, but (from what I’ve picked up from others) any designer answers these questions upon every product iteration.
I think all 5 questions are relevant to answer about Power:
1. The architecture did attempt to address at least 2 challenges, thermal and cost, by designing a better RISC (vs. Sparc and Alpha).
2. How did Power succeed? See: PowerMac. Gaining some consumer market also drove down cost-per-unit, so that was another success vs. Sparc and Alpha.
3. How did Power not succeed? Well, we’re having this discussion, right? The Power architecture is no longer profitable, at least for the stockholders.
4. New challenges? One of Power’s was RISC multi-core memory coherence. The Cell architecture explicitly addressed this concern (and introduced some new challenges of its own).
5. Some other people have enough training to see how Power did new things better than the others.
Which is my thesis: “other people can see, better.” The Power architecture was designed by AIM (Apple-IBM-Motorola), and Cell was designed by STI (Sony-Toshiba-IBM). *Five* companies couldn’t make the Power architecture something to depose another big player in the late 90’s or early 00’s.
I don’t think that was IBM’s fault. If anything, IBM was the corp that connected Apple and Motorola (US) to Sony and Toshiba (Japan). The US and Japanese corporations gave feedback to one another, through IBM, but all 5 still couldn’t make Power viable, long-term.
So now it’s time to open the Power architecture for outside evaluation. Gates/cm^2, silicon die, fabrication cost, ops/cycle, later cost/value analysis… the list goes on.
If nobody asked these questions, the corporate types would still be thinking that the Intel 4004 could be a modern desktop CPU.
PPC also had to contend with x86 “stealing” all its good ideas thanks to a far quicker development cycle and no infighting. Intel was able to see the advantages of PPC and RISC and combined them with x86 and CISC to the point where the benefits of one over the other were almost indistinguishable.
What good ideas did PPC bring? It was a very late RISC architecture, and frankly not very innovative.
People tend to confuse ISA with microarchitecture. Intel has, for the most part, always been on par with the state of the art in microarchitecture, and in many cases they set it, especially from the mid-90s on.
The general narrative is that it’s IBM’s fault. IBM doubled down on high margin servers, and they ceded the market to x86 when they started on their way to becoming a services company.
They had offloaded most of their server brands and assets to Lenovo, minus the mainframe stuff, which is stupidly profitable, by the time the 970 and Cell were produced, so the first and best customer of POWER chips was IBM’s own mainframe and AIX divisions. Through this lens everything makes sense.
The POWER chips are great for throughput, they’re integer-heavy, and they have massive power budgets. These are great traits for a server chip and absolutely awful for desktop use; they weren’t designed for that market. Apple would have been happy to keep using PPC chips rather than jump to x86. It’s just that no one was making money on PPC desktops or laptops. They were making money on embedded products or high-end servers, which left a giant middle ground for x86 to fill.
The POWER resurgence is happening because there are now workloads the chips excel at. Our always-connected lifestyles create a relatively constant stream of data, and a high-throughput chip is needed to process all of it.
It’s the same problem AMD had with Bulldozer… the beancounters said “we can’t afford to invest in R&D to compete, so we are going to go for the high margin server market”… then they made faildozer… which was exactly the same line of mistakes Sun had made with the T1/Niagara a few years earlier, when they focused only on the server market and ignored workstations. Just a few years before, my school was FULL of Solaris workstations, with tons of engineering software on them…
The fact is the only way to succeed is to have an architecture that scales from mobile to server, with implementations for each, in AMD’s case they have innovated quite a bit in sharing as much of the design and silicon between each implementation. That’s basically why AMD is on an uptick right now…
I really liked Bulldozer, and it’s derivatives. It powered one of the best Linux workstations I’ve had. 🙂 That thing was smooth.
Yeah, most people buy x86 for the single-thread performance and good-enough FPU, even in servers. I remember someone at a conference saying, “what you really want is a single 10,000MHz processor, which is exactly the opposite of what they’re trying to sell you.”
Those Solaris workstations were probably incredibly expensive, and the school probably saved quite a bit of money by replacing them. Not to mention by the time Bulldozer came around software had shifted to Linux and Windows.
The storm hamstringing Intel doesn’t hurt either: security vulnerabilities erasing perf gains, not being able to bring new fab processes online, a logjam of designs, ignoring how everything is moving to PCIe, and ignoring the need for server density in favor of pricier SKUs.
What does open sourcing the ISA mean anyway? They publish a PDF of the spec, or do they actually cough up the goods with VHDL and transistor level implementation schematics?
This is the real question. It’s one thing to set up a foundation so that people know you won’t sue them for making a compatible chip, but it’s something else to provide the actual chip designs. And even if the designs were open, it’s not clear how this would increase adoption. You’d need huge bucks to take this now-open ISA and build a chip that could command any market share. I’m not sure how this changes anything.
Maybe there’s some other forthcoming news regarding other big players adopting Power that the open-sourcing news is a prerequisite for.
Obviously the ISA is the ISA, not an implementation. So they should provide a handful of public documents that cover every bit of the architecture. Preferably machine-readable, as ARM does.
It basically means that they will not prosecute you for copyright if you decide to make your own PPC implementation, or if you work on a software simulator.
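For a sense of what “work on a software simulator” starts with: decoding instruction words from the published field layouts. A minimal, hypothetical Python sketch for one Power D-form instruction (the function name is mine; the field positions follow the Power ISA’s big-endian bit numbering, where bit 0 is the most significant):

```python
def decode_d_form(word):
    """Extract the fields of a 32-bit Power D-form instruction word.

    D-form (used by addi and friends) lays out, in the Power ISA's
    big-endian bit numbering: bits 0..5 primary opcode, 6..10 RT,
    11..15 RA, 16..31 a signed immediate. Returns
    (primary_opcode, rt, ra, simm); a full decoder would then
    dispatch on the opcode.
    """
    opcode = (word >> 26) & 0x3F   # bits 0..5
    rt     = (word >> 21) & 0x1F   # bits 6..10
    ra     = (word >> 16) & 0x1F   # bits 11..15
    simm   = word & 0xFFFF         # bits 16..31
    if simm & 0x8000:              # sign-extend the 16-bit immediate
        simm -= 0x10000
    return opcode, rt, ra, simm
```

For example, `li r3, 1` is really `addi r3, 0, 1` and assembles to `0x38600001`: primary opcode 14 (addi), RT=3, RA=0, SIMM=1. This is the kind of thing an open, machine-readable spec lets you generate instead of hand-writing.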
PPC is becoming irrelevant, so this is mostly just IBM letting academic institutions play with the ISA. Eventually they will drop PPC altogether; it is becoming less and less economically viable.
IBM makes quite a bit of money off POWER, and so does Red Hat, which is owned by IBM.
It’s for everyone building POWER equipment, which is incredibly relevant in the server space, and keeps the ecosystem growing. The ISA is already paid for by mainframes and AIX, so IBM doesn’t make any more money by keeping it locked up. If the ecosystem grows, Red Hat stands to make more money, and IBM gets to kneecap ARM by being more open and not having licensing fees. They also kind of stick it to RISC-V by being more mature and full-specced.
I think it’s too late for POWER to have any effect against ARM or even RISC-V.
ARM has a huge ecosystem and it is too entrenched, and RISC-V is already an established alternative for the very low power embedded stuff.
It’s good that they open-sourced the ISA, but there’s very little chance that any other CPU developer will pick up PPC for anything at this point. But who knows.
I stand corrected:
“In addition to all of this, IBM is providing a softcore model of the Power ISA that has been implemented on FPGAs – presumably from Xilinx, not Intel’s Altera devices – that people can play around with.”
IBM? Right. Another thing they are a little late on is giving me a bunch of free stuff. They are 10 years late on that. If you think about it, that IBM, what the hell do they know? So they have a quantum computing system you can just plug in and go. Who’s going to be working on that in the future? What I’m trying to say is that I know better than… IBM does. That’s why I know they should have released a bunch of free shit to me, at least 10 years ago.
It’s amazing to me that some tech people set up their tiny intellects against a giant like IBM with all of its minds (and lawyers) and say something completely arrogant like, “they should have done this 10 years ago.” What else should they have done? Perhaps one of these arrogant tech heads should just walk into IBM and say, “get rid of the CEO, I should be in charge.” You know what IBM would say? “Go back to your cube.” Or, “go back to your Mom’s basement.” Or, “go back to your apartment that has pizza boxes stapled to the wall as art.”
Now if only HP could open source Alpha…