The Libre-SOC project, a team of engineers and creative minds aiming to provide a fully open System-on-Chip, has today posted the layout it sent off for fabrication of its OpenPOWER-based processor. Currently being manufactured on TSMC’s 180 nm node, the Libre-SOC processor is a huge achievement in many ways. To get to tape-out, the Libre-SOC team was supported by engineers from Chips4Makers and Sorbonne Université, funded by the NLnet Foundation.
Based on IBM’s OpenPOWER instruction set architecture (ISA), the Libre-SOC chip is a monumental achievement for open-source hardware. It’s also the first independent OpenPOWER chip to be manufactured outside IBM in over 12 years. Every component, from hardware design files and documentation to mailing lists and software, is open source and designed in keeping with the open-source spirit.
This is an impressive milestone, and I can’t wait until this is ready for more general use. With things like RISC-V and OpenPOWER, there’s a lot of progress being made on truly open source hardware, and that has me very, very excited. This also brings an OpenPOWER laptop closer to being a real thing, and that’s something I’d buy in less than a heartbeat.
180nm?!?!?!?!
What is this, 2001? I hope this chip is competitive against the PIII from Intel and IBM’s own PPC G4…
Check this link: https://libre-soc.org/180nm_Oct2020/
This is a proof of concept chip. Essentially, they have a CPU with all the major instructions they want to support and now have to see if it actually works. They are manufacturing in low quantities and are only interested in debugging issues, not in mass production. Quite exciting stuff.
180nm fabrication is now a fairly cheap option, which makes it a good choice for early prototypes. Because 180nm is 20 years old, you don’t have to pay patent costs to get access to the silicon design tools. Seriously, for any silicon node less than 20 years old, you have to pay out quite hefty fees for the patented technologies you wish to use before you have even designed your chip. For prototypes intended to validate your design, you don’t want to be spending tons of cash per month on design tools. When I say tons, I mean at worst around half a million dollars a month for the software; yes, the software is rented. So you want a reasonably well-confirmed, functional design before you start spending that money, and confirming a functional design takes you back to the roughly 20-year-old process, whatever that currently is.
oiaohm,
Can you list any of the specific patents you are talking about? Also, would you cite a primary source for this “half a million dollars a month for the software”?
If you’re right, these software royalties really put a damper on all open CPUs, but I’d like you to prove your assertions before I accept your conclusion. If there is really a patent against software generating <180nm lithographic masks, then I’d like to see it for myself.
Alfman, there is a long list of patents from IBM, TSMC, Intel, AMD, Samsung…
Please note I said “at worst half a million a month in software”; that is the worst case, and I left the best case out of that draft. I had an argument with OSNews and it jerked me around with that post; I thought I still had a current copy in the clipboard to re-post, but I didn’t, sorry. The best case, on the low-cost side of using the smaller nodes, is something like Synopsys: once you have bought all the parts needed to produce a validated design you could send to an Intel/TSMC fab, under an outright license, you are spending at least half a million dollars; that is your starting point for the newer node. Note that is half a million as a one-time fee, not per month. It’s not fun: you are buying, I think, about 12 different pieces of software from Synopsys to be able to produce a design, with validation, that a fab will accept. So it’s not just “buy X piece of software and you have everything you need”. That half million also does not include the software to make an FPGA version of your design. If you buy everything Synopsys provides for ASIC/FPGA design, without any of the patented cell or pre-made design IP, you can get to a million dollars in software for a single developer, before paying for extra users. So best case, that’s basically your upfront starting cost.
There can be quite a performance difference between the stock cells a fab will give you (or the open-source ones) and the patent-protected ones you have to pay extra for.
Basically, a high-performance low-nm chip can get really pricey just in the design stage. A fab is not going to tape out a chip with unpaid IP in it, and third-party silicon IP and third-party patent costs have to be factored into your final silicon order price as well. So you pay to design with the IP, and pay again to get the chip produced with that IP in it.
This is why we don’t see open-source designs for high-performance silicon: mostly, you simply cannot do it. The stuff you are paying for to get a high-performance bit of silicon is under NDA and the like.
Yes, I hate the NDAs; a lot of them say that if you provide the list of patents to third parties you are in breach. I would like to give example patents, but that would take digging through everything to make sure each one is not covered by an NDA, just to avoid an argument.
The entry bar to the newer nodes is high: half a million to a million in one-off software costs just for average-performance silicon. If you want top-performing silicon at that node, that is where you get into the half-a-million-a-month territory, unless you are a big silicon company with lots of existing licenses. Part of what makes it horrible is that you pay the fee for a node/cell, test it in the model, find it doesn’t work well, and don’t get that money back. So a percentage of that half a million a month is being flushed down the toilet on useless bits you paid for. It’s a nice little racket: you can put up a bad cell design and still make money.
Please note that at 180nm it’s possible to get everything you need either open source or fab-provided at zero cost. So it’s half a million to a million up front versus zero, and the up-to-half-a-million-a-month for high performance is also reduced to zero, because at 180nm you are not chasing high performance. If you are not 100 percent sure of your design, you are going to tape out first at 180nm. Once you have a proven design at 180nm, a company that sees your 180nm test chip, and has already paid for the software for the other chips it needs, may be interested in giving you access to take the design to a smaller node.
180nm is old. You can think of 20-year-old silicon tech as the proof-of-concept field; cost-effective proofs of concept happen on old silicon. Once you have a proof of concept, cost-effective access to tools becomes possible. Going out solo and buying silicon design tools for a more modern node is, more often than not, simply impossible due to cost. But there are always companies interested in helping people with good designs once those designs are validated on silicon. A 180nm production run basically ticks the box needed to get commercial support. It’s not that you expect the 180nm chip to be a great performer; it’s just a step in the process of making chips in a way a person working as an individual can afford.
Awesome! Thank you for the background!
oiaohm,
That’s not news to anyone though. Can you provide specific examples to back up your assertions? Obviously that’s what I was asking for.
My point is that all these blanket assertions are being provided without any authoritative sources. Having no sources forces us to treat your information as non-authoritative. There’s nothing wrong with discussing your best guess using non-authoritative sources, but if you want to suggest it’s more than that then naturally you need supporting evidence! If you have it, then great, please link it!
Obviously as you go smaller, your hardware costs go up, but I remain skeptical of your claims that software is a barrier. You could be right, but you haven’t provided any evidence. Why can’t open source tools work below 180nm? Hypothetically if it is patents as you suggested earlier, then what are some of the specific patents blocking FOSS implementations less than 180nm? Do you know of any specifically? It’s a fair question, so please don’t avoid it.
@Alfman I can’t address @oiaohm’s claims, but the Libre-SOC people do a good job at breaking down the costs ($7-12M) needed to produce a 22nm part, and they are in line with what I know about producing at that scale.
However at 180nm you can produce 100 QFN parts for about $10K with a mostly open source toolchain using https://efabless.com/. QFN limits the effective part size compared to BGA but it gives you some idea of the relative cost differences between 180nm and a more modern process.
jockm,
Thanks, that’s helpful. The page you seem to be referring to has a lot of good information including specific price breakdowns.
https://libre-soc.org/22nm_PowerPI/
Ideally they’d be using FOSS instead of proprietary software, but even so that’s relatively minor compared against the other expenses: labor, mask fabrication, and licensing of various subsystems. A lot of the hardware standards that make up a PC (PCI, USB, DDR, HDMI, etc) are patented.
@alfman doh! Sorry, I forgot the link. Glad you found it. After I posted that I found a service that will do a 300-something nm run of 50 TQFP-128 chips for €1800.
That brings down the barrier to entry for a proof of concept dramatically. Now anyone who can cobble together a RISC-V implementation with a matrix multiplier can dream of crowdfunding their “AI Processor”
jockm,
Indeed, that would be pretty cool. It’d kind of be hard to justify for modern use cases when faster/more efficient/cheaper/tested processors are available off the shelf, but I can see really useful applications like replicating/repairing older consoles & arcade games. I had a friend who was into pinball machines, and repair parts for those can be very hard to come by. That said though, I don’t know that anyone has the necessary schematics and logic to manufacture new silicon parts for them today. Reverse engineering them would be a lot of work for such a niche market. Hypothetically given reasonable fab costs, there may be opportunity for a small startup to enter the space, but even if they solve the technical challenges, they may face legal challenges with companies not looking kindly at silicon clones. There are interesting possibilities nevertheless.
@alfman I can give you one interesting example for the pinball use case: apparently a lot of them ran on the Motorola 6809. Unlike the 6502, the 6809 is no longer made (there are similar chips like the 68HC11, but they’re not fully compatible). I have seen a few people make FPGA implementations and then put them on carrier boards that can go into the ’09 socket. Making custom chips would be a nice solution for the retrocomputing and retro pinball world.
But there is a lot of use for custom silicon, and reducing the barrier to entry is a good thing. 350-130nm is more than useful for a lot of things. Not just proving designs before you go to lower process nodes.
jockm,
There were probably many Motorola models, but in my friend’s case the machine used a Cyrix processor and something about it failed. Since it’s based on x86, I thought it might be able to run on a standard PC, but it used some kind of proprietary acceleration. He ended up buying a replacement unit, but obviously that gets harder with every decade.
This may have been it…
https://www.pinballspareparts.com.au/parts-by-machine/williams-parts/star-wars-episode-i/pinball2000-mb.html
I’d like to play around with custom silicon, I just can’t justify it for any of my projects. I’d like to think that some day I will…but probably not, haha. I can see myself getting into FPGAs, I’m often put off by proprietary dependencies though. If I could find a 100% FOSS FPGA kit, I’d probably get one. Anybody have suggestions?
@Alfman
The limiter is not the node size: you can’t do a full IC design flow using an open-source toolchain. The open-source EDA tools are mainly capable of some very limited behavioral simulation at best. There are some academic tools here and there, but most open-source EDA tools are fairly limited and mostly suited to academic/teaching purposes or very small designs.
But to get anything done with a fab you’re going to need a hybrid IC flow with a package of tools from Synopsys/Mentor/Cadence which interfaces with the proprietary mask tools/libraries from the foundry (TSMC/Samsung/etc).
Even at 180nm these folks were probably using commercial design and routing tools, either ancient versions or via the academic institution involved (IMEC).
Once you go into the ASIC fab world, there’s no “free/open” anything. TSMC is probably maintaining that old 180nm for training and academic clients. They may not charge anything for those old libraries, but they still are not going to let the customer look at them without a contract or NDA. Although TSMC should have a very close coupling with IMEC so within the two institutions they should have plenty of visibility for something as ancient as the 180nm foundry flow.
@alfman Claire Wolf has made a fully open source stack for Lattice FPGAs called IceStorm. That got folded into Yosys, and the same community has since reverse engineered the bitstream for Xilinx 7 series FPGAs. Yosys also supports using proprietary tools for parts it doesn’t support, but it is perfectly possible to work in a 100% open stack.
Before you actually start programming components you should start learning VHDL or Verilog using simulation tools (GHDL and Verilator, respectively, are open source), plus something like GTKWave to view the results. There are a lot of concepts to get used to, so simulation is a great way to learn, and even when you are programming components you normally simulate first.
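To give a flavour of what that looks like, here is a minimal Verilator testbench sketch in C++, not taken from any real project: it assumes a hypothetical counter.v with clk, rst and a count output, so the Vcounter class and port names are assumptions (Verilator would generate them from that made-up module), but the calls themselves (eval, tracing, VCD dump for GTKWave) are the standard Verilator ones:

    // Minimal Verilator testbench sketch. Assumes a hypothetical counter.v with
    // ports clk, rst and count; Verilator generates Vcounter.h from it, e.g.:
    //   verilator --cc --trace counter.v --exe tb_counter.cpp
    #include <verilated.h>          // Verilator runtime
    #include <verilated_vcd_c.h>    // VCD tracing, for viewing in GTKWave
    #include "Vcounter.h"           // model generated from the hypothetical counter.v

    int main(int argc, char** argv) {
        Verilated::commandArgs(argc, argv);
        Verilated::traceEverOn(true);      // enable waveform tracing globally

        Vcounter top;                      // the Verilated model of counter.v
        VerilatedVcdC trace;
        top.trace(&trace, 99);             // trace up to 99 levels of hierarchy
        trace.open("counter.vcd");         // open counter.vcd in GTKWave afterwards

        top.rst = 1;                       // hold reset for the first few cycles
        for (vluint64_t t = 0; t < 200; ++t) {
            top.clk = t & 1;               // toggle the clock each timestep
            if (t > 4) top.rst = 0;        // then release reset
            top.eval();                    // evaluate the model
            trace.dump(t);                 // record all signals at time t
        }
        trace.close();
        top.final();                       // run any final blocks in the design
        return 0;
    }

The nice thing is that the whole loop is just C++, so you can drive your design with real test data and check outputs with ordinary asserts long before you ever touch a board.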
There are a number of open hardware FPGA boards out there, the TinyFPGA being one of the most well known, and even more boards that have available schematics. The Pano Logic thin client is FPGA based and has everything you need, including networking and video hardware. It’s not suitable for you though, since it is Spartan 3 or Spartan 6 based, but the Colorlight 5A-75B is a nice cheap Lattice-based board (it is designed for lighting control, but is well documented).
@javiercero1 I suggest you take a look at what efabless.com is doing with a fully open source toolchain for making ICs. You submit your Verilog (I wish they supported VHDL) and validation suite; they run, validate, synthesize, etc., on a 180nm node. $10K for 100 parts, which isn’t bad at all.
javiercero1,
That was my point: there’s no reason your design has to be tied to a specific node size.
It makes sense that a cutting-edge fab would hold the industry hostage to its own proprietary software when there isn’t competition. However, given that more competitors are able to produce the older fab sizes, consumers naturally have more choices: “If you’re unwilling to work with us, we’ll just go elsewhere.” jockm gave an example of a company that is willing to support open source. And why shouldn’t they?
Anyways, there are a lot of practical applications that don’t require complex execution units. There is value in the simplicity of taking a simple execution unit and repeating it over and over again to gain massive parallelism. In principle you could test your design on cheaper fabs first and then move to better/smaller fabs without having to completely retool the software stack.
@ Alfman
Unfortunately, abstract high-level software design assumptions do not map onto low-level hardware fabrication realities.
Basically you target the node you can afford; just because a physical design works on one node does not mean it will work on another. For functional verification you don’t need to implement the design on an IC: you can just do it in software or on FPGAs.
@ jockm
That site seems to just be a broker where they take your Verilog and take it from there.
The tools they use to do place/route/material lists/masks are most definitely not open source.
javiercero1,
What’s wrong with that though? When you contract with a vendor on a high-level design (i.e. Verilog), you can use FOSS tools for yourself. Unless you’re a micromanager, the tools a vendor uses to complete their end of the bargain are kind of irrelevant.
There’s no question that costs determine what node size you can use. I also agree that you can verify logic independently of node size. There’s no reason we couldn’t use FOSS software to build and even route well-below-180nm circuits; the big problem is accessibility. The extremely high cost of fabrication creates high barriers to entry that put off would-be FOSS projects. But hypothetically speaking, if fabrication at ~15nm became very affordable and accessible, then suddenly we’d see a whole lot more contributions in the FOSS space for the software too.
@javiercero1 the tools they use are absolutely open source.
They host http://opencircuitdesign.com/, where they list the tools they use.
You can find the source here: https://github.com/efabless.
Here is one of their many talks about it: https://www.youtube.com/watch?v=EsEcLZc0RO8
Unfortunately, the level of expertise required is a bigger barrier to entry for what you’re envisioning.
There are simply no FOSS tools for most of a competent IC flow. Anything past fairly light behavioral Verilog/VHDL/SystemC is going to require proprietary tools; there’s a tremendous level of expertise, access, and validation required, which will never be in the realm of FOSS projects.
@javiercero1 and yet we are talking about just that
javiercero1,
The hardware is much more difficult and expensive than the software though. There are a lot more barriers to hardware than to software, and it is completely realistic to tackle the software side without being a huge company.
@ jockm
Interesting. Although they’re basically a front end for the academic synth libraries from 20 years ago. They seem to basically broker shuttle capacity from IMEC. Pretty cool.
And your point is? All of those “academic synth libraries from 20 years ago” have been actively maintained and continue to be used. You said there were none, and I pointed you to them. Who cares if they are a broker or not? Every fabless semiconductor company either works with a broker or is one themselves. efabless can handle larger orders.
It’s a lot more than a front end to existing tools, they created a completely open toolchain and automated process for synthesis. You don’t really seem to do much research before responding.
My point is that it’s basically a 20+ year old flow. Sure it’s cool, but it’s just a repackaging of the old flows we used when we fabbed stuff in academia for undergrad/grad projects. The tools are ancient: TimberWolf, that version of SPICE, and I am sure they still do a lot with Magic. That’s all from the ’80s/’90s. And the libraries are pre-FinFET. Nobody has seen/touched most of those since the ’90s.
I think it’s a great idea, it’s basically a web front end to broker your design into a bundle to IMEC.
My perception is obviously tainted by the type of work I am involved in. To us, this flow is like what a 74TTL breadboard service would be compared to this flow in its heyday. For the stuff I work with, those tools are nowhere near adequate. So I may have been unfairly dismissive, I guess.
Who cares if the tools are ancient, if they are actively maintained and improved? And they are. You are talking like they were made 20 years ago and never changed. This is not the case. I don’t see why Magic (or some other tool) won’t be updated for FinFETs, and then the cell libraries will follow.
Using efabless’ toolchain and GlobalFoundries’ OpenPDK you can make parts down to 22nm. Aside from high-end CPUs, RAM, and flash, the vast majority of ICs in a system are between 300 and 28nm, with 180-50nm being the majority of that. Automotive is largely 300nm, as are other industries.
The real reason we aren’t talking about 22nm for open silicon is price, not the toolchains. 180/130nm is in the sweet spot of being able to produce useful silicon AND be affordable for small runs (for a given value of affordable).
And there is no reason you won’t get lower than that, but you have to start somewhere, right?
jockm,
This is what I think as well. The fab costs are prohibitive for all but the largest companies. But really that’s the main barrier and not that FOSS developers are incapable of tackling the software side. This is why I said the barriers to entry exist mostly on the hardware side.
I wonder whether small node sizes (like 22nm and smaller) will eventually become more accessible or if it will remain the domain of our wealthiest companies.
@ jockm
The tools being ancient bounds the type of designs/applications for this project. Yes, TimberWolf was already outdated by the late ’90s; it’s basically a routing/placement project from Berkeley from the ’80s.
OpenPDK is an open standard interface/definition for process design kits. It’s not exclusive to GloFo. It is an industry consortium effort to define standard interfaces between the EDA vendors (Synopsys/Cadence/Mentor/etc.) and the for-hire fab partners (NXP/GloFo/etc.).
No foundry will consider you a credible customer if you’re targeting their 22nm process with some old tools.
I am afraid I don’t think you understand the problem domain. A 22nm transistor is a completely different beast than the ones in those 180nm libraries.
It’s not just an issue of cost, but technology. The efabless flow targets 180nm because that’s what it can target in both cost AND capability. Anything newer and you are into FinFETs, design for manufacturing, etc.; even the materials are different. All of which is completely outside the scope of those tools.
javiercero1,
You’re completely missing jockm’s point though. Suggesting a project is outdated due to its age is a flawed argument. By your logic, Linux, Windows, C, etc. couldn’t be used for serious work because of their age. But it’s often the case that old tools are regularly updated over time. So if you want to be fair, then you should find ways to criticize projects for what they are today and not what they were many decades ago.
I think jockm was right about this too. It’s got less to do with the tools and more to do with the economic factor: there’s way more demand than supply, and cutting-edge fabs are saving their capacity for huge clients with tens/hundreds of millions to spend. That’s the real bottleneck, not FOSS.
@ Alfman
FOSS is tangential to the issue of 180nm vs 22nm design processes and flows.
You could remove the economic component altogether, and the design flow used by that website would still not be able to target a 22nm physical design.
The flow they are using makes perfect sense because it is aligned with the era of the fab process: they have matched EDA technologies/capabilities with the fab process technologies/requirements.
For a newer node, that flow, as it is right now, doesn’t align and thus won’t produce a manufacturable result.
@javiercero1
Yes, I know what OpenPDK is; I was just trying to give a quick example, not explain all the steps. My point is that it is possible to adapt efabless’ toolchain to work with any PDK, and GF, as an example, supports 22nm (amongst others); if you are tenacious enough you can get the data about their PDK and implement to it, etc.
Because I know of projects using some of those tools being fabbed at 22nm. Until you get to FinFET-like structures, there isn’t a hard limit on feature size. On the projects I have worked on where we made custom silicon, the foundry cared far more about the quality of our tapeout and validation data than about what software we used.
You came in with one assertion, and I proved you wrong, and you keep moving the goalposts of your point, and I am doing my best to show you are wrong within the bounds of what I am allowed to disclose, but you have to actually prove your points. Show me that Magic can’t do 22nm, for example.
@ jockm
OpenPDK is just an interface standard about the expected directory structure for a PDK. That’s it.
Magic is not that useful for anything past 130nm. It doesn’t support FinFETs, or basically any multigate design. It’s a purely 2D tool, so you can’t control any geometry on the pitch plane, which you have needed for basically any digital design after 90nm.
Foundries only consider credible customers for their latest nodes. If you’re an organization using this flow for a 22nm target, you won’t be turned away because you used FOSS software, but because you have no clue what you’re doing.
javiercero1,
I’m glad we can agree because FOSS was never the issue.
But keep in mind that tools adapt when the technology becomes more accessible. Fab accessibility is the greater bottleneck, not FOSS developers.
Let’s use a different example. The set of software developers with the skills to write software for a mars mission is far larger than the set of software developers with the financial and industrial assets to build the rocket. If someone comes along and says “hey look, all of the software considered for such missions are written by governments and huge corporations because nobody else could get the job done.” Well no, the statement is false. The main obstacle is the economic barrier to building the rocket and this lack of accessibility keeps most developers from writing space flight control software despite the fact that many have the skills to do it.
But put into perspective, the economic fab barriers are the real culprit. FOSS tools can and would evolve in the same way that Synopsys’ proprietary tools evolved. If we had a way to remove the economic component, we would see a flurry of activity on the FOSS side as it becomes accessible. And that’s the whole point, FOSS is not the bottleneck.
@ Alfman
Domain expertise, technical facilities, and R&D execution are as big a barrier to entry as cost alone.
The dynamics of cobbling together a half-assed Python script don’t necessarily map to the realities of semiconductor fabrication.
javiercero1,
I think it’s a large exaggeration to say it’s “as big”. Greater accessibility means more developers would be able to do the R&D, gain experience, etc. With the space flight example, we would probably have thousands of people doing it successfully if it weren’t for the immense financial barriers to entry. I don’t say this to trivialize domain expertise, but only to point out that costs are by far the greater challenge to overcome. If not for that, then many more people would be able to try, and some of them would succeed.
Funny, but ultimately it’s a stereotype that doesn’t have much bearing on what people are capable of.
@ Alfman.
Nah man. We’re once again going down the Dunning-Kruger rabbit hole of solutions to problems not understood.
Projects like Linux, gcc, etc. are outliers, which succeeded in large part due to large capital investments. The common case for the average FOSS project is to fail.
In any case, the proof is in the pudding: these FOSS projects have been going for decades now and they can still only target circa-late-’90s fab technology. And the limiter in this situation is not the ability to pay for wafer production.
@javiercero1 I already acknowledged that Magic doesn’t support FinFETs, but said that there is no reason that support couldn’t be added; you didn’t address that. I explained that I knew what OpenPDK was, and that I was using it as shorthand, and then explained what I meant. I asked you to prove that Magic can’t do 22nm (which you asserted), and you just asserted it again without any proof. Why are you so hung up on FinFETs when the majority of silicon doesn’t use them? You keep moving the goalposts, and aren’t really engaging…
@ jockm
I already addressed it. By now I am starting to develop the suspicion that you are not understanding the response.
You can’t use Magic on those nodes, mainly because it doesn’t support a lot of the types of rule checking that have been introduced in the design-for-manufacturing era.
A lot of the assumptions from the >130nm processes simply don’t hold on smaller nodes.
Of all the capital-intensive parts of IC fabrication, design tool cost is not the limiter, or even a principal one. Unless you’re an academic project going through IMEC or something like that, if you can’t afford a Synopsys license, you certainly can’t afford any of the rest of the costs to get a 22nm wafer. No foundry with that tech is going to waste time with such amateur-hour nonsense.
javiercero1,
You’re still ignoring the fact that it’s an economic barrier.
Alright; however, you make it sound like that’s specific to FOSS when it’s really not. Most business startups fail as well; the facts are kind of eye-opening if you didn’t know this…
https://review42.com/resources/what-percentage-of-startups-fail/
Well, no, you can’t jump from the inability to pay for wafer production to the inability to write the software for it. Logically, the set of people with the necessary skills is larger than the set of people with the necessary skills and the ability to pay for fabs.
javiercero1,
I think we can all agree that the design tool costs are not the limiting factor. Of course anyone who can afford the costs of a 22nm wafer can afford the costs of licensing proprietary Synopsys software. However you’re moving the goalposts from the argument that FOSS cannot make software that works at 22nm. I really think they could if the 22nm fabs were accessible in the first place! The fab costs are the primary barrier to entry.
When the supply is limited and demand is high, the foundries are going to take business from their most profitable clients, period. You may be right that a small FOSS project may not get the time of day from these fabs, but you are wrong to insinuate that FOSS developers can’t also work with 22nm chips. Accessibility is by far the biggest barrier.
What’s needed is more fab competition for small node sizes.
@javiercero1
Given the right technology file, Magic can work for any process node that doesn’t have 3D structures like FinFETs. There is no reason Magic can’t be enhanced in the future to support FinFETs; it just doesn’t now.
Also, there are Magic technology files for FreePDK 45nm (https://vlsiarch.ecen.okstate.edu/flow/). So yes, you can use Magic for lower than 130nm. That is the smallest size for which there are publicly available tech files, but I know of people who have created their own 22nm files based on information learned under NDA.
@ jockm
Again, no. Magic doesn’t support the type of DRC or layout technologies expected by those processes. The limitation is not the tech files. Magic is pointless there since it’s not going to produce geometries that can be guaranteed to be manufacturable.
I am sorry, but I am going to take your claim that someone did a custom cell layout targeting 22nm using Magic with a boulder-sized grain of salt. I don’t think you comprehend what that implies. It’s like claiming you know someone who used Commodore 64 BASIC to develop a Windows 10 app.
For anything 45nm and under you’re going to need layouts with double patterning, and tools that are aware of issues like poly spacing, well proximity, diffusion effects, etc. On top of that, the interconnects and transistors (even the 2D ones) have changed dramatically from the larger nodes of two decades ago.
The 45nm PDK on the site you linked seems to be for Synopsys.
@ Alfman
You are literally trying to qualify solutions to a problem you don’t understand.
Just because the FOSS model has worked in other areas of application, it doesn’t mean it is going to work in this specific one. The proof is literally in the pudding in this case: the best a FOSS approach can achieve in this matter is basically targeting old process tech.
I personally think this project is fantastic. And honestly, I doubt projects of this type are going to exceed the capabilities of what the fab partners can offer. 180/130nm seem to be perfectly fine for this type of design. And it is awesome to see projects being able to produce their own silicon.
javiercero1,
On the contrary, I do understand it; you’re just annoyed that others disagree with your opinion. The FOSS tools don’t currently support FinFETs, but there’s no technical reason they couldn’t, and you certainly haven’t demonstrated otherwise. Unfortunately, most FOSS developers who could add the software support don’t have affordable access to the latest fab hardware. If this were to change, though, you would certainly see FOSS developers add support.
Honestly this sounds like bias on your part.
You don’t actually provide much proof for someone who likes to say the proof is in the pudding.
You say that you disagree with me, but my position has been consistent: FOSS would prevail if more FOSS devs could afford access to cutting-edge fabs. So where is this “proof” that I’m wrong? Do you have any evidence whatsoever that a FOSS developer with the financial means to fabricate <22nm wafers has tried and failed to support FinFETs using their own code? If not, then you must concede that you don’t actually have proof and that your opinion is based on hypothetical conjecture.
That’s fine, but your numbers aren’t right. FinFETs didn’t become dominant until below 22nm, and the existing FOSS solutions can already scale well below 180/130nm, that is, if you have access to a fab.
@javiercero1
The FreePDK45 page I linked to literally mentions Magic being supported.
You keep asserting things and never link to evidence. If you want people to believe you, you have to provide evidence.
@ jockm
It’s on the very web page: “FreePDK45nm . . . The SRC version is designed with Synopsys’ Cadabra and allows full-chip synthesis and place & route through CDS Encounter”
Tell you what. You go first. Show me the 22nm custom cells done with Magic.
@ Alfman
No, not really. We’re in yet another of your Dunning-Kruger rabbit holes in which you have so little idea about the matter at hand, that you don’t realize how off the mark some of the stuff you’re saying is.
javiercero1,
In your head you seem to believe you’re above the need to provide evidence, but to others you’re just a random bloke with an opinion and your refusal to provide evidence (over and over again) is telling. To be fair I’m just a random internet bloke too, but I don’t expect people to take my opinions as fact without evidence the way you expect them to.
And you know what, it’s fine that you have a difference of opinion, I’m cool with that. But you’ve got to get a grip on your ego because your unbacked assertions do NOT prove that FOSS devs would be unable to come up with a working solution if they had 22nm fab access.
Since this is a disagreement over a hypothetical, I’m fine with admitting that it hasn’t been proven one way or the other. You should admit it too, but by the way this discussion is going, you don’t seem to hold high standards for proving things, and that is probably the crux of our disagreement. So unless you’re able to provide more compelling evidence than your own opinions, we’re going to have to disagree here.
@ Alfman
We again find ourselves down this path where you know so little about the matter at hand, that you really don’t realize how little you know.
It’s not a mere difference of opinion. You literally have no idea about this matter. It’s not my job to give you a primer in the advances that have happened in the last 2 decades separating a 180nm process and the ones we use now.
You can either leverage someone who works in the field to expand your sphere of understanding, or you can just continue defending your ignorance to the death.
So you do you. It’s not like you’ll be doing any semi design anytime in your lifetime.
Cheers.
@javiercero1
I have sent a request to the person I know doing 22nm. We will see what they say. I am also sure you noticed I said NDAs were involved, which is why I feel I may not be able to deliver.
However, I asked you to provide proof long before you asked for that, and turning around and challenging me isn’t how you act when you have the receipts. If you can provide them, then do it and I will be happy to be proved wrong. But I don’t think you can, and you are just prolonging an argument to be a troll. Either way, I will take my answer off the air…
@jockm
I am afraid you’re severely confused. So far you are the one who has failed to prove their claim; the burden is not with me. I can’t prove a negative; you are the one who has to provide evidence for your positive claim.
I don’t think either of you understand how basic dialectic method works, which makes your mastery of something far more complex, like state of the art semiconductor design and fabrication, a bit suspect.
So far none of the links have supported your claim. And apparently I have to take your word for it that your “friend” did a 22nm custom design on Magic but it’s under an “NDA.”
Listen, I was just trying to help you out, as a person working in this field, to expand your sphere of understanding. You can take it or leave it.
javiercero1,
For the record, it really is your job and only your job to provide evidence for your argument. Your hypothesis that FOSS developers wouldn’t be able to solve the 22nm problems even with access to affordable fabs is no better than my hypothesis that affordable fabs would result in lots of FOSS development to support them.
I’m happy to admit there’s a lot for me to learn; there is no shame in that. But that’s not the real reason you’re having a tantrum. You are annoyed because you don’t believe your opinions should be subjected to questioning and skepticism, but that kind of vanity leads to inside-the-box thinking and a lack of appreciation for potential solutions regardless of where they come from. I genuinely believe that increasing fab accessibility and competition would open up more opportunities for innovation, from not only FOSS sources but proprietary sources too.
I really do think FOSS would make plenty of inroads if 22nm fabs were more accessible. You don’t have as much faith in the FOSS community, which is fine, but it does not make it OK for you to belittle those who do just because we disagree with your opinion. And make no mistake, without stronger evidence your opinion is just an opinion too! I can respect your opinion, but that doesn’t mean I have to agree with it.
So what do you say, can we simply agree to disagree?
@ Alfman
That’s because there has been a fundamental misapprehension of what I have been telling you.
I’m just stating that the FOSS flow used by these guys, which works for the 180nm node being used to fabricate this chip, does not work for 22nm targets.
I gave reasons as to why that is. I can’t prove a negative. If y’all want to be douches about it and just don’t want to listen to someone who works in this field, the burden is on you two to prove that this specific FOSS flow has produced successful parts on a current node.
javiercero1,
I’m sorry, but your first assertion is not true. I didn’t claim that the FOSS flow was already ready for 22nm and below. If anything, I think you had some misapprehension of what I’ve been saying, which is that more FOSS developers would be able to tackle the 22nm challenges like FinFETs if the fabs were more accessible. This shouldn’t even be that controversial.
Fantastic, it makes me happy that you have come to this conclusion! So while we have different opinions on whether greater accessibility for 22nm fabs would lead to 22nm support by FOSS devs, I’m glad that we can at least agree that the possibility cannot be disproven.
@ Alfman
It’s like you have a gift to miss the point.
When you say stuff like “FOSS could do 22nm flows if fabs were more open about it”, that’s the example I’m referring to: you are literally proposing solutions to problems you don’t understand. You have ZERO background in this field and matter.
And you also don’t understand what proving a negative is. You’re the one making the positive claim (i.e. the possibility), so you’re the one who has to provide the proof.
javiercero1,
I understand your point javiercero1, I just don’t believe it has merit. There is a difference! In any case you keep thinking that ad hominem attacks make good rebuttals, but they don’t.
No, I’m the one who openly conceded above that all I have is a hypothesis: increasing 22nm fab accessibility would lead to better FOSS support for it. I think this is perfectly reasonable to believe. Is it proven? No. Have you disproven it? No you haven’t. That’s it in a nutshell. We both have our unproven opinions about what FOSS would be capable of if fab accessibility weren’t a barrier and that’s it. I have no problem agreeing to disagree, so let’s agree to disagree ok?
LIP6.fr is an extremely well-known and respected university laboratory, with huge expertise. they do not use Magic: they maintain and develop Alliance / Coriolis2, and HITAS/YAGLE.
their goal is to work up the chain to modern geometries, to replace commercial tools and to help bust the NDAs required to even participate in the development of those tools (Skywater’s 130nm PDK was a crucial step towards that).
the large size of the Libre-SOC ASIC (130,000 cells, 5.1 mm x 5.9 mm) required Jean-Paul Chaput to add features that had previously not been added to coriolis2: buffer fan-out taking into account clock-tree synchronisation, and Antenna diodes.
these are all automatic layout – zero manual intervention.
as they move to smaller geometries, new challenges will be faced. the team at LIP6.fr know exactly what those are: they are extremely experienced VLSI Engineers. it is a matter of time, money, support, and having the right (realistic, achievable) challenges.
jumping coriolis2 from automated layout of 10,000 cells to automated layout of 130,000 cells took 2 years, and was a fantastic achievement. the next challenge will be 250,000 to 300,000 cells, and i would like to see the next ASIC be done in 130nm.
over time we will work our way down the geometries, one incremental step at a time.
it will not happen overnight.
This seems to be a giant of an ISA compared to RISC-V. It’s effectively four processors in parallel running different classes of operands, from integers to vector floating point; I challenge the idea that it is a RISC anything. IBM mentioned in another article on OSNews that they had minimized, but not eliminated, the need to write code in assembler. I thought this need was ridiculous but can now see how assembly code may be required to really get the most out of this system for compute-intense functions. https://ibm.ent.box.com/s/hhjfw0x0lrbtyzmiaffnbxh2fuo0fog0 . Check out page 27 for the origin of big and little endian.
yes, it’s enormous, and there are good reasons for that (i posted this to bassbeast; it’s relevant here as well) https://news.ycombinator.com/item?id=24459314
however there is an additional aspect: if you entirely take out SIMD you are left with a much more sane and reasonable 214 instructions (the new Scalar Floating-Point Subset – https://en.wikipedia.org/wiki/Power_ISA#Compliancy) where if you look at RV64GC without RVV it’s 165 instructions.
i made the decision that Libre-SOC will NOT be doing SIMD. this is a very easy decision. do we:
(a) implement ONE THOUSAND instructions, destroy all possibility of being able to achieve any kind of silicon, and, on the very slim chance of success, have something that’s poisonous and toxic (see https://www.sigarch.org/simd-instructions-considered-harmful/), or
(b) decide to embrace the RISC paradigm *properly*.
what we will then do is implement an in-kernel emulator (illegal instruction trap) for those seven hundred SIMD instructions, and, over the next five years, work to rectify the costly mistake that was made by IBM by making SIMD mandatory in EABI v2.0
sigh.
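to make the trap-and-emulate idea concrete, here is a purely illustrative C++ sketch (not Libre-SOC code; the encoding, opcode value and register model are entirely made up): the kernel’s illegal-instruction handler would decode the trapped SIMD opcode, reproduce its effect with plain scalar operations against the saved register state, advance the program counter, and resume:

    // illustrative only: a made-up "add eight 8-bit lanes" SIMD opcode being
    // emulated with scalar operations, the way an illegal-instruction trap
    // handler could. opcode layout and register model here are hypothetical.
    #include <cstdint>
    #include <cstdio>

    constexpr uint8_t OP_VADD8 = 0x41;   // hypothetical opcode: packed 8-bit add

    // returns true if the trapped instruction word was recognised and emulated;
    // a real handler would then advance the saved program counter and resume.
    bool emulate_simd_op(uint32_t insn, uint64_t regs[32]) {
        uint8_t op = insn >> 24;
        uint8_t rd = (insn >> 16) & 0x1f;    // destination register
        uint8_t ra = (insn >> 8)  & 0x1f;    // source register a
        uint8_t rb =  insn        & 0x1f;    // source register b

        if (op != OP_VADD8)
            return false;                    // unknown opcode: deliver SIGILL as usual

        uint64_t result = 0;
        for (int lane = 0; lane < 8; ++lane) {   // do the "vector" add one lane at a time
            uint8_t a = (regs[ra] >> (8 * lane)) & 0xff;
            uint8_t b = (regs[rb] >> (8 * lane)) & 0xff;
            result |= uint64_t((a + b) & 0xff) << (8 * lane);
        }
        regs[rd] = result;
        return true;
    }

    int main() {
        uint64_t regs[32] = {};
        regs[1] = 0x0102030405060708ull;
        regs[2] = 0x1010101010101010ull;
        emulate_simd_op((uint32_t(OP_VADD8) << 24) | (3u << 16) | (1u << 8) | 2u, regs);
        std::printf("r3 = %016llx\n", (unsigned long long)regs[3]);   // 1112131415161718
        return 0;
    }

the real thing obviously has to cover several hundred opcodes and live in the kernel trap path, but the principle, trap, decode, do it with scalar ops, resume, is exactly this.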
those extra instructions btw, they’re incredibly important. adrian_b’s ycombinator post puts it well: “when i consistently see double the number of instructions being generated by RISC-V for benchmark applications in inner loops, i do not need to do a deep-dive into the ISA”.
adrian_b explains some of the things that are missing, including LD/ST-with-update, carry condition codes, and more. these are things that Power ISA has had *from day one*, and retro-fitting them is extremely difficult, starting with changing the mindset of the Founders of RISC-V and then trying to shoe-horn opcodes into a space that was never designed for those crucially-missing features in the first place: this is not going to happen.
I have a question… as someone who is a layman, what technical advantages/disadvantages does POWER have over x86, ARM and RISC-V? I’m hoping it’s not just “cuz it’s open”, as I’ve seen so many projects die hard over the last decade that had no selling point other than being open, and with octocore SOC boards selling for dirt cheap I’m having a hard time trying to figure out what niche they are going for… is it better on power, scaling, or IPC than the competition?
bassbeast,
At the moment, I don’t think any of these open CPUs are competitive because they don’t have economies of scale, which can make or break hardware viability. But I’d really like to own a system with open-source system management capabilities. Features like Intel AMT/vPro could have tons of innovative potential if they were open. For example, I really like my computers to have remote administration, but Intel has seriously botched it with a poor implementation that’s proprietary, locked down, buggy, and vulnerable. We have open-source tools that work much better, but alas they can only run at the OS level, which is limiting compared to something like vPro.
At this point and for the type of CPU Libre-SOC is, instruction sets don’t really matter. Implementations do. What you care about is how well supported the architecture is in terms of compilers, operating systems, etc. In that regard POWER is a good choice because it is actively used in the server space and the embedded space (mostly automotive and telecom IIRC).
It is almost impossible to license the x86/x64 architecture these days (and it is harder to implement and be performance-competitive). ARM licensing is expensive. You also have to worry about violating patents. So you have to pick something that is either old enough that the patents have expired (like what J-Core is doing with the SH-2 and SH-4), or RISC-V (which wasn’t as mature when the Libre-SOC project started), or something else that is an open and non-patent-encumbered ISA.
Given all of that POWER is a solid choice.
The proof of the pudding is in the tasting (or in this case, the implementation), but it looks like they have good heads on their shoulders, so I am optimistic.
So from your perspective, I am afraid the answer for you is “cuz it’s open”. If you are looking for a fast cheap board, this isn’t it for you. If you are looking to see an open, hopefully competitive SoC, with a solid plan to get to 7nm, then Libre-SOC is interesting.
I think you are misunderstanding me… what EXACTLY does the POWER instruction set bring to the table that the others do not? Just saying “cuz it’s open” is gonna mean exactly jack with a side of squat if it’s a power hog that does less than a 10-year-old budget smartphone.
ARM has power efficiency, x86 has IPC (although now that both AMD and Intel are going big.LITTLE that race may tighten), and from what I understand RISC-V chips are relatively cheap to manufacture… so what does POWER do well? Does it scale well? Does it have good IPC? I’m gonna assume it’s not great on power (no pun intended), as both Apple and the consoles abandoned it for x86 and ARM, so what does this arch do well? What are its use cases? You mention telecom and automotive, but I can’t find any info on anyone using it except IBM for its AI servers.
And all I’ve seen is fanboys harping “free as in freedom” with pretty much zero data or even an idea as to what they will use this for, and consider the graveyard we have seen this past decade of products whose ONLY selling point was that. If that is the only thing it has, it’s probably gonna go the way of the 8-track. I mean, if you are excited, good for ya, but good luck getting enough people who care to pay a premium for a product that does nothing well except have a “cuz it’s free as in freedom” sticker.
As I said, instruction sets (ISAs) don’t matter. They haven’t for a very long time. There isn’t really a performance advantage to one or the other, except maybe in some esoteric niches. Like how well vector and matrix operations are defined, and the like.
They picked it because it is well supported and not patent encumbered. Their initial run is targeted at 180nm because it is relatively cheap to produce and lets them prove out the design. On their site they have a roadmap to get to 7nm, at which point you are talking about something that is much more competitive.
This is why I said in the short term it is only interesting because it is open and we can observe the development. In the future, if they do get to 7nm (or whatever), then it becomes interesting because it is an open implementation that anyone can use as the basis of advanced CPUs.
The fact that IBM uses it at all means that the toolchains and OSes you want are well supported. Who cares what IBM is using it for? PowerPC is used widely in the automotive industry (the two open POWER chips announced on OSNews a few months back were aimed at the automotive industry); it also used to be all over telecommunications, though I am not sure if it still is.
I think jock makes some fair points. Back when I was doing games stuff I based my VM on ideas I ripped off from ARM and Java and SmallC. I didn’t have the budget or skill or depth of experience as the people behind those but you have to start somewhere. I wasn’t too bothered by deep optimisation because that could come later. The thing is we’ve all got a bit old. We’ve forgotten what it is like to be young enough everything was new. We’ve forgotten what it is like to have multiple competitive platforms innovating. The kinds of things jock points towards kind of rekindle that.
Will this project fail? Perhaps, but 90% of projects fail at the first hurdle and they’ve got past that. 90% of started projects fail to finish; it’s too early to judge this yet. 90% of finished projects fail to be a hit. The thing is, with fewer and larger companies concentrating all the wealth and attention, a fair few projects get bought up and folded into product lines or cancelled, so they never get to stand alone. Who are we to begrudge people making an effort to have the same chances we had when we were younger?
Creating another platform gives other people a voice and can help agitate a dialogue and that can be useful too. Different people, different management structures, different ideas, different principles.
The IPC and performance of a modern processor comes mainly from the microarchitecture, not the ISA.
The ISA is pretty irrelevant at this point. A modern x86/ARM/POWER core gets basically the same performance for the same power envelope at the same node.
The main thing POWER is bringing this project is lack of legal trouble. It’s an Open Source ISA, and it has a decent development ecosystem.
For this type of project, POWER and SPARC are basically what you want. Both are open ISAs without any legal baggage, and both have open-source cores available.
The POWER ISA is fully open source, meaning there are no copyright or patent minefields to deal with.
Also, they’re using an open-sourced reference PPC core from IBM.
The only alternative to do something similar would have been SPARC (which is an open standard ISA and has opensource core reference designs), but it is basically dead now.
ARM is commercial, so you have to pay licensing costs for using reference core designs or even just the ISA.
RISC-V only has the ISA as an open standard. And I don’t think there are many open-source core implementations to work with at this time, and the ones I am aware of are pretty low performance.
x86 is a no-go whatsoever as neither Intel nor AMD will consider licensing it. And using it w/o their blessing would be invoking their hordes of lawyers.
Honestly, the niche they are going for is being able to fab their own chip. So the value added of the project is the journey, not the destination. That is, it’s not the end system that is the “product” of the project, but rather the building of the system. Think of the Linux kernel developers: they get their rocks off building Linux.
Does that make sense?
Actually, there is a third x86 company nobody talks about, and they are still making x86 chips as of 2021: Centaur. While IDK if their contract allows sublicensing, I know there is nothing Intel nor AMD can say about it, as they were one of the original second-source manufacturers along with AMD way back in the ’80s. Currently most of their CPUs are being sold in the Chinese market, but I’ve seen carputers being sold with their quad-core CPUs as they are pretty low power.
But time and time again we have seen that if the ONE AND ONLY selling point a new product has is “cuz it’s open”, it tends to wither and die; sorry, that is just the way things have gone. Even Pine, with their open-source PinePhone and Pinebook, has the selling points of being cheap and having user-replaceable parts. So while I wish ’em luck, if that is the one and only thing they have going for them? Well, look in the archives of this very website at how many open hardware projects have been announced here and then promptly disappeared off the face of the earth.
From TFA:
So no, it isn’t just because it is free.
Don’t forget Cyrix! Bought by National Semiconductor, then sold off to VIA. Reading the wiki, they say Cyrix never had a licence and reverse engineered x86. Intel hated this and kept suing Cyrix, who were proven not to tread on Intel’s patents. They eventually came to a cross-licensing agreement after Intel was caught ripping off Cyrix’s patents!
I know ARM got tetchy if you used their instruction set, but as far as I’m aware there’s nothing to stop anyone from copying it.
Things have moved on, and some of what were problematic areas are no longer a problem as patents expired and law and case law changed.
The issue is patents. The way Intel, AMD, ARM, and many others prevent copying is by patenting the behavior of new instructions. Once the patents expire, nothing stops you from making a clone, but you are talking 20 years at that point.
And there are x86 vendors who do make “out of date” versions of x86 for the embedded market, DM&P’s Vortex86 line is a great example of it.
The J-Core guys are doing the same with the SH-2 and now SH-4 architectures: wait till all the patents expire and then make an open source implementation of them. They chose SH because it is well suited to embedded, memory-lean environments, since that is what their sponsors’ use case seems to be.
ARM opens up some of their ISA, but not all of it, and does defend their patents to get people to license the ARM ISA or a full implementation.
MIPS and POWER have open ISAs and implementations.
The RISC-V project set out to make a modern, brand new(ish) RISC ISA that wasn’t patent encumbered. Last time I had data, around 10 billion RISC-V CPUs had shipped (mostly for embedded).
I am not saying patented ISAs are good or bad, honestly I haven’t looked at the patents to know how good they are or if they make sense.
jockm,
If they’re anything like software patents, they’ll be total garbage. Seriously, as a software developer, if you’re curious about how something works, software patents are the absolute last place you’d want to look, because they’re written by lawyers for lawyers in obfuscated legalese with no value to real developers. Their utility in disseminating useful information has long been obsoleted by far better sources online, and by technical books before that. Society would be no worse off if all software patents just went up in flames; good riddance!
Of course, it wouldn’t be such a big deal if software patents just sat there providing no value to the public; they’d be a waste of resources, but at least that would be it. But no, the negative costs associated with software patents are extremely high. Patent trolls and big corporations have become proficient at weaponizing them by patenting useless and arbitrary changes with the intention of using them to block competition and provide lawyers mountains of ammo. It’s a minefield where developers always run the risk of infringing on someone else’s “property” in the course of doing our daily jobs when we write our own code. Patent trolls will deliberately patent all practical ways to solve a problem with the intention of using them in court. So as a developer, even when you’re not guilty of copying anything and you just happen to solve the same problem in the same way, well, now you’re infringing.
Anyways, I mostly deal with software, but I wouldn’t be surprised if there were a lot of garbage patents for other domains too. A chip designer may encounter huge roadblocks, not because they’re unable to solve the problems themselves, but because a legal monopoly has already been granted to another company for the next decade 🙁
@Alfman The thing about ISA patents I am working through is that while they can look like software from one angle, they are also physical things describing literal processes going on in hardware. Also, the few expired ones I have looked at were describing complex behaviors that I could conceive of as being legitimately patentable; I just don't know the full width and breadth of them.
When I was at Intel, and later when I started a business, I was advised by counsel to never read active patents, because in litigation that could be used to say I was unconsciously influenced. I have continued the habit.
But if you look at the description of something like AVX-*, you can imagine complicated and novel ways the implementation works, and that they might rise to the bar of patentability.
Personally I am not in the patents-bad or patents-good camp; I think it has a lot to do with how unique a process they describe, and how hard it is to replicate. So something like the AT&T double-buffering patent was just insanely dumb, whereas something like AT&T's SUID-bit patent makes some sense to me. No OS had ever implemented a feature like that, and it wasn't obvious from the perspective of the time.
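(For anyone who hasn't run into it: the setuid bit marks an executable so the kernel runs it with the file owner's privileges rather than the caller's, which is the mechanism that patent covered. A minimal sketch of the real-vs-effective UID split on a POSIX-like system is below; the file name and the idea of installing it setuid are purely illustrative.)

/* suid-demo.c -- illustrative only.
   If this binary were owned by root and had the setuid bit set (chmod u+s),
   the kernel would run it with root's effective UID while the real UID
   stays that of whoever invoked it. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* real UID: the user who ran the program
       effective UID: the identity used for permission checks */
    printf("real uid: %d, effective uid: %d\n",
           (int)getuid(), (int)geteuid());
    return 0;
}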
jockm,
I've heard this as well: taking well-known pre-existing algorithms in software and re-patenting them as hardware. You may be able to take patent X and then call it patent Y because you can combine it with a phone, wristwatch, car, drone, or whatever. I'm not really a fan of patenting trivial permutations like that.
Well, I haven't read any of those patents either, but to me it would depend on whether they actually did something novel and not merely math + silicon = patent. The problem with granting patents for that is that now every time someone comes up with a new ISA, they're going to be dealing with much of the same math even if they didn't intend to copy the patent holder's work. They could easily be found to infringe simply by solving the same problems. If we independently developed a vector processor for AI applications, it would very likely infringe on something someone's already done before, and it would suck if we had to start looking for less optimal designs just to get around pre-existing patents.
I do get why someone would argue for software patents to protect the original inventor from companies ripping off their work, but the opposite happens too. Some patent holders are in the business of exploiting the work of others. There was a case where Microsoft got sued for half a billion dollars over absolutely stupid XML patents that were not innovative in the least, yet the patents were issued and the patent holder owned the idea of using XML in the same context Microsoft was using it for. IMHO we should err on the side of allowing creators to use their own work and against the monopolization of ideas, especially in cases where the odds of incidental (not copied) infringement are so high.
Most people in CS strongly believe that math and algorithms should not be patentable, and by extension I believe that implementing basic math + silicon shouldn't make it patentable either. I feel instances of math should be removed from all patents to guarantee that anyone implementing math in silicon is allowed to do so without worrying about patent infringement.
Hypothetically, if someone comes up with exotic new hardware to perform computational tasks (say, finding a way to perform massively parallel computations with RNA), then that process itself should be patentable. But old algorithms + a new RNA computer is not really contributing to knowledge, even though the combination of the two is new. It would just be a patent land grab for monopolies and create tons of inadvertent infringement, yet provide no intellectual value.
@Alfman I am not crazy about software patents because most are just so bad and basic, but I think the problem is more (in the US at least) that the patent office is overworked, has a massive backlog, and doesn't have enough domain experts. As a result, way too many patents (hardware or software) get through that either don't pass the basic novelty and non-obviousness criteria, or are simply redundant.
jockm,
Yeah, I don't actually blame the patent office or their staff; they didn't make the rules. And on top of that, the job they're tasked with is inherently non-scalable. Not only is a patent supposed to be unique within the patent system, it is supposed to be unique within the industry as well, which is impossible to determine. There's no way for an overworked, under-resourced patent clerk to do this work at scale. Consequently we've transitioned away from patents being properly vetted before being granted. Now they are granted prematurely, and then it's up to the courts to determine their legitimacy in lawsuits.
Ultimately, regardless of whether you're prosecuting infringement of your own patents or defending against infringement, you need to lawyer up. This is very expensive and not very useful for society, but the lawyers win, and that's all that matters, haha.
This project will go nowhere other than as an exercise for people to go through the whole CPU design, fab, and bring-up process.
It’s just a joy ride for nerds.
joy ride with JATO rockets attached. woo-hoo! https://www.youtube.com/watch?v=n6B_mVeHfP4
no, we do not “have just one thing going”.
the business model of RED Semiconductor (who will be putting Libre-SOC cores into silicon, just like Qualcomm puts ARM cores into silicon) is to be fully transparent. we want customers, ones for whom reputation and trust are absolutely paramount, to be able to inspect the entire ASIC right down to the transistor level. they'll have to sign a Foundry NDA to do so, but we will not impede them further by licensing 3rd-party HDL (or, if we do, we will negotiate an Escrow license).
contrast this with Intel, AMD, ARM, and NVIDIA. If you go to them right now with a massive stack of cash and say, “we want to inspect your entire HDL source to find out if there are any spying back-doors” they will turn around and say, straight to your face, “we can’t”. the reason is very simple: THEY DIDN’T WRITE ALL THE HDL.
the last time a major company created its own entirely in-house commercial processor, without licensing anything from any 3rd party, was back in the days of the Motorola 68020, and likely when MIPS was initially spun out in the mid-1990s.
there is now a wall of a thousand Lawyers between you and the HDL.
even if Intel or AMD wanted to follow the same commercial business route that RED Semiconductor is taking, it would take 5-8 years, because they would have to rip out 50 to 60 pieces of licensed 3rd-party HDL and replace it entirely with in-house designs.
i don’t need to remind you of how important being able to inspect the HDL is, given the security bun-fights over Huawei, and how Supermicro got delisted from NASDAQ for being unable to prove the security provenance of components.
https://openpowerfoundation.org/final-draft-of-the-power-isa-eula-released/
the EULA includes an automatic Royalty-Free Patent Grant from IBM’s portfolio, and also makes it implicitly clear that if you mess with any (small) company implementing the Power ISA, you risk waking the 800lb gorilla.
Intel is so aggressive about x86 that in 2003/4, a famous declaration was made by a judge, effectively saying, “you – Intel and AMD – are idiots. my ruling is: i am NOT going to make a ruling. this is me, bashing your heads together. get yourselves bloody well sorted out, go license each others’ patents, and stop wasting my time”.
now, that did not stop Intel from suing the creators of the Vortex x86 CPU (a 486 clone), nor did it stop them from suing Transmeta. bassbeast, i’m amazed that Centaur (i know of them as VIA) are still going.
good question. it’s slowly being understood that RISC-V is just not stacking up. the best independent comparative analysis i have seen is here https://news.ycombinator.com/item?id=24459314
Power ISA has over 25 years of stable development gone into it, and, yes, it’s now open (where x86 and ARM are not). it’s a proven Supercomputer ISA, basically.