Linux-capable RISC-V boards do exist, but the likes of the HiFive Unleashed and the PolarFire SoC Icicle development kit cost several hundred dollars or more. If only there were a RISC-V board similar to the Raspberry Pi and at a similar price point… The good news is that the RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring the PicoRio RISC-V SBC to market at a price point similar to the Raspberry Pi.
I’m 100% ready for fully top-to-bottom open source hardware, whether it’s Power9/Power10 at the high end or RISC-V at the low end. ARM is a step backwards in this regard compared to x86, and while I doubt RISC-V or Power will magically displace either of those two, the surge in interest in ARM for more general purpose computing at least opens the door just a tiny little bit.
From the second link:
“””PicoRio aims to be open-source hardware as much as possible, with the CPU part being fully open, but the memory PHY, USB 3.0 PHY, GPU, and other I/Os will still be closed source, even though the goal is to eventually have as much IP released under a BSD-like open source license.”””
Even accepting that the SoC will be effectively closed source (nobody will be able to create a clone without licensing the GPU hardware), Imagination Technologies doesn’t have a good track record for Linux driver support for their GPUs.
If the software (because of the graphics drivers) is even more proprietary than the Raspberry Pi’s, what’s the point, other than having a toy for playing with the RISC-V architecture?
Graphics drivers have been the low point of Linux devices for as long as I can remember.
I had the Nokia N800, back before smartphones or netbooks were prevalent. However, as soon as the next version (N810) was released, all of its software was abandoned. The primary reason cited was incompatible closed source graphics drivers. Ironically, that device was supposed to solve the same problem I had with my prior one.
The Raspberry Pi had a peculiar issue: the “VideoCore” GPU was abandoned and could not support more than 1GB of RAM: https://www.raspberrypi.org/forums/viewtopic.php?t=168569. It took them many years to come up with the “VideoCore VI” solution, but that came at the cost of open source driver availability still being “pending”: https://www.hackster.io/news/eben-upton-announces-official-raspberry-pi-4-videocore-vi-open-source-vulkan-graphics-driver-effort-9af11a00adfd
I think a true open source GPU would not be an easy achievement. Other chips, especially radio (WiFi, Bluetooth, GSM), would be almost impossible.
sukru,
You’re not wrong, but a tiny project isn’t able to transform things overnight. A RISC-V CPU is a start; sometimes you’ve got to take baby steps.
“Other chips, especially radio (WiFi, Bluetooth, GSM) would be almost impossible.”
Wifi and Bluetooth would not be impossible. http://gr-bluetooth.sourceforge.net/
Wifi and Bluetooth can both be implemented as software-defined radios running on any decent CPU. Yes, you still need a radio front-end chip/circuit, but there are open silicon designs of those, used for radio telescopes.
A regulator-acceptable phone modem is a different problem.
oiaohm,
I’ve done this myself; SDR on CPU is really cool. You need only look at OpenBTS to see just how powerful software defined radios are, especially for prototyping.
https://en.wikipedia.org/wiki/OpenBTS
But for mass produced consumer devices it is generally too inefficient to use general purpose CPUs for DSP. For this reason many SDR solutions actually have an FPGA chip to offload the radio processing. I would consider an open source FPGA implementation to be an acceptable alternative. Alas, many FPGAs are proprietary as well.
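To put rough numbers on why (every figure below is an illustrative assumption, not a measurement), here's a quick Python back-of-envelope for a modest 20MHz wifi channel:

# Back-of-envelope: DSP load of a 20 MHz wifi channel on a general purpose CPU.
# All figures are illustrative assumptions.
sample_rate = 20e6        # complex samples/sec for a 20 MHz channel
ops_per_sample = 200      # filtering, FFT, sync, demod (rough guess)
print(sample_rate * ops_per_sample / 1e9, "GOPS just for baseband")   # 4.0
# A small embedded core manages on the order of 1 GOPS, so the radio alone
# would saturate it, while dedicated silicon does the same job in milliwatts.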
Let me know if you know of a fully open source FPGA stack.
https://hackaday.com/2020/03/06/mithro-runs-down-open-source-fpga-toolchains/
SymbiFlow is getting close to being a fully open source FPGA toolchain.
https://github.com/open-sdr/openwifi
“FPGA chip to offload the radio processing”: you can find this in open source FPGA designs for wifi; it's just done using the closed source tooling at the moment.
With all the different SDR stuff that exists, there is really no reason bluetooth and wifi could not be FPGA-assisted SDR. Yes, someone would have to update those designs to work with the open source tooling to make reproduction simple.
“But for mass produced consumer devices it is generally too inefficient to use general purpose CPUs for DSP.”
But we are talking about a RISC-V designed chip here. A normal high-powered x86 or the like, yes, absolutely out. Remember, custom extensions to the RISC-V instruction set are possible. So RISC-V plus some special instruction set for DSP work is a possible way forward.
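For illustration, the hot loop such a DSP extension would target is just a multiply-accumulate over samples. A minimal Python sketch of that kernel (nothing RISC-V specific, just the operation a custom instruction would fuse):

# A FIR filter is one multiply-accumulate (MAC) per tap per sample;
# a DSP instruction set extension fuses this MAC into a single operation.
def fir(samples, taps):
    out = []
    for n in range(len(taps) - 1, len(samples)):
        acc = 0.0
        for k, t in enumerate(taps):
            acc += t * samples[n - k]   # the MAC a DSP extension accelerates
        out.append(acc)
    return out

print(fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))   # [1.5, 2.5, 3.5], a moving average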
In a lot of ways we are getting really close to major tipping points in the tooling for FPGA and ASIC work.
oiaohm,
That’s why I said it could be acceptable, but an ASIC is always going to be the most efficient. It’s why ASIC won out over GPGPU and FPGA for blockchain applications when hashes per watt became the most important factor. FPGA and GPGPU are appealing because of their versatility, but those have a higher transistor and power cost.
Of course, if you have the means to fabricate your own chips, you could add special accelerators to any silicon chip that you have access to. Although without economies of scale it’s going to be expensive; it would be difficult for a small operation to be priced competitively with off-the-shelf commodity chips. Without very deep pockets you’re probably looking at older (and therefore slower) fab technology. I appreciate that you have to start somewhere, even if it’s from a disadvantaged position, but it’s just so hard to build up market share in a saturated market. Still, I’d like to see more open technology come to fruition.
“That’s why I said it could be acceptable, but an ASIC is always going to be the most efficient. It’s why ASIC won out over GPGPU and FPGA for blockchain applications when hashes per watt became the most important factor. FPGA and GPGPU are appealing because of their versatility, but those have a higher transistor and power cost.”
It's not as black and white if you are limited to open source tools.
https://github.com/google/skywater-pdk
If you want to fabricate an ASIC using only open source tools, due to a restricted budget, you are currently restricted to a 130nm ASIC design. That's 2001-2002 technology; it came into commercial use in x86 CPUs around 2002-2003.
“Of course, if you have the means to fabricate your own chips, you could add special accelerators to any silicon chip that you have access to. Although without economies of scale it’s going to be expensive; it would be difficult for a small operation to be priced competitively with off-the-shelf commodity chips. Without very deep pockets you’re probably looking at older (and therefore slower) fab technology. I appreciate that you have to start somewhere, even if it’s from a disadvantaged position, but it’s just so hard to build up market share in a saturated market. Still, I’d like to see more open technology come to fruition.”
You are right about a lot of this. But you overlooked how old the ASIC tech you end up stuck with is for small production runs.
Compare a 130nm ASIC to the FPGAs we currently have open source tools for. Horrible as it sounds, the open-source-supported FPGAs are currently the more power-effective choice over a 130nm ASIC, because the FPGAs are mass produced at a finer process node. A lot of 1G LAN, 2.4GHz wifi and bluetooth controllers are 130nm ASICs because that's a cheap item to produce.
Really, I would like to see someone do a RISC-V on FPGA extended for SDR work, and then maybe we might see WD or the like make that chip at volume for their products.
oiaohm,
You literally quoted me saying it was older & slower fab technology. Anyways…
I’d rather see the RISC-V as an ASIC with FPGA extensions rather than RISC-V on an FPGA.
Granted, if you don’t have access to a sufficient ASIC fab, then the latter may be the best that you can do on a budget. It would probably be good enough for prototyping, but pragmatically speaking it wouldn’t be very competitive with the performance or cost of off-the-shelf ARM processors (aka Raspberry Pi), so it would be a really hard sell for mass production IMHO. Honestly, I don’t have a better idea; newcomers don’t have many good options.
What we really need is a RISC-V equivalent to this…
https://www.marvell.com/company/newsroom/marvell-unveils-the-industrys-most-comprehensive-custom-asic-offering.html
I couldn’t find any information about how much custom 14-5nm ARM chips cost, your guess is as good as mine. I’m not sure if these same fabs would be open to fabricating RISC-V chips, presumably they would be at large enough scales, but it would be a risky investment with uncertain market demand.
I’m not sure if this is what you were referring to, but it’s interesting.
https://hackaday.com/2020/06/30/your-own-open-source-asic-skywater-pdf-plans-first-130-nm-wafer-in-2020/
“What we really need is a RISC-V equivalent to this…
https://www.marvell.com/company/newsroom/marvell-unveils-the-industrys-most-comprehensive-custom-asic-offering.html”
We do have that, and it's http://www.sifive.com
“I couldn’t find any information about how much custom 14-5nm ARM chips cost, your guess is as good as mine. I’m not sure if these same fabs would be open to fabricating RISC-V chips, presumably they would be at large enough scales, but it would be a risky investment with uncertain market demand.”
14nm at Samsung starts at 100,000 USD for RISC-V, and that's if you have a finalized design and are willing to take edge-of-wafer dies (yes, with increased failure rates). The reality is that producing the chip is not the really expensive bit; validating the design is.
The big horrible part is that a 14nm wafer order you place now will, on average, take 9 months before you actually get your chips. Yes, you have to put down the full production cost 9 months in advance of getting the chips, and if you made a mistake you have lost all that money. Note that 9 months is the average; worst case, it's 18 months down the track before you finally see your ordered silicon. And then you still have to validate it. Can you now see why fpaa/fpga/cpu combination chips can be more popular than a pure ASIC in particular markets? There is more chance that minor production errors, from being on the edge of the wafer, and minor design goofs don't end up completely bricking the chips.
The cost of silicon production is not the worst part; the horribly slow turnaround is. You can design a chip for today's wifi/bluetooth/cellular standard, and by the time it gets out of the silicon production process that standard can be totally out of date, or worse, particular markets may refuse to take it due to security flaws.
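To make the stakes concrete, here's a rough illustrative order calculation in Python. Every figure besides the 100,000 USD entry cost mentioned above is an assumption, not a quote:

# Illustrative ASIC order math: entry cost plus wafers, amortized per good die.
entry_cost = 100_000      # design/mask entry cost, USD (figure from above)
wafer_cost = 5_000        # per wafer, USD (assumed)
wafers = 25
dies_per_wafer = 600      # small die (assumed)
yield_rate = 0.6          # edge-of-wafer order, so lower yield (assumed)
good_dies = wafers * dies_per_wafer * yield_rate
print(good_dies)                                        # 9000.0 usable chips
print((entry_cost + wafer_cost * wafers) / good_dies)   # 25.0 USD per chip
# All of it paid up front, and locked away for ~9 months before you learn
# whether the design even works.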
https://hackaday.com/2020/06/30/your-own-open-source-asic-skywater-pdf-plans-first-130-nm-wafer-in-2020/
Yes, to a point Skywater is piggybacking on existing production, so turnaround here is not going to be fast either.
https://libresilicon.com/ The LibreSilicon production process can do 130nm. Skywater gives you the all-important process design kit (PDK) that contains the bits you need to work out whether your designs validate or not.
Basically, if you are absolutely insane and willing to mix some really dangerous chemicals, you can do 130nm at home. 130nm is a fully open process; it's going to be a while before 90nm is, and so on. Basically this is roughly 20 years behind.
So basically anyone who wants to build a 130nm ASIC plant can. There are no patents or other restrictions on this level of technology now. The only restrictions are money and the floor space to set it up.
oiaohm,
Hey, that’s very cool and all, but it isn’t the same thing. Marvell is fabricating custom ASIC chips, while SiFive is offering an FPGA bitstream: the configurator refers to FPGA in some places, and when you read the details it says “Get started with a free Standard Core evaluation, or build your own custom core design, and receive Verilog RTL and FPGA bitstream.”
I understand that FPGAs are far more accessible to small & medium operations. Clearly the lower barrier to entry & quicker prototyping are a huge plus with FPGAs.
I’m all for RISC-V CPUs, but ask yourself as an engineer: is the open ISA really worth the higher costs of an FPGA, lower performance, and higher power usage in your end product? It’s a tough sell is all I’m saying.
I’m not that insane, haha. But the point still stands: in order for RISC-V to ever be truly competitive against ARM, someone somewhere is going to have to put in the hard work and money needed to fabricate ASIC versions in silicon. And if it is too hard or expensive to fab 14nm ASIC chips, maybe there’s a market for low end parts, although you’d better be able to beat ARM on costs. Otherwise, what is RISC-V’s strategy for taking market share away from ARM?
Nvidia is already using RISC-V inside its proprietary GPUs. This isn’t the future RISC-V enthusiasts would have wanted, but it may be the future we end up getting, where it’s used in proprietary products.
–Hey, that’s very cool and all, but it isn’t the same thing. Marvell is fabricating custom ASIC chips, while SiFive is offering an FPGA bitstream: the configurator refers to FPGA in some places, and when you read the details it says “Get started with a free Standard Core evaluation, or build your own custom core design, and receive Verilog RTL and FPGA bitstream.”–
Marvell custom and the SiFive cloud are the same thing. Both are fabless, as in they don’t have their own fabs, so they basically sell you design services to get your designs ready to send to someone else's fab.
https://www.design-reuse.com/news/44851/tsmc-oip-ecosystem-enabled-in-the-cloud.html
Yes, SiFive has for a while been doing up RISC-V designs to a ready-to-fab state through their cloud services and then transferring those to the fab companies directly. This avoids you having to sign an NDA with the fab company and having to run the fab company's tools yourself.
So Marvell is to ARM what SiFive is to RISC-V. SiFive has deals with 8 different fab companies, so when you have your design done in the SiFive cloud you can get quotes from 8 different fab providers for production. Yes, billing will all be through SiFive, because they will in fact be placing the order with the fab. Of course SiFive differs from Marvell in that they will give you FPGA code to run on your own FPGA test boards to make sure what you have done is right. Marvell, due to the ARM license, cannot do this.
The big advantage of SiFive is that if you have a big enough FPGA you can test out your design really quickly to make sure the basics are right, whereas with Marvell, due to the ARM license, you are stuck waiting to check the basics until after the first chip comes off the fab and gets to you, which could be 9-18 months later. Yes, after you have spent the fab money with Marvell you can find out that you placed an output pin in the wrong place. You see some very stupid PCBs with custom Marvell chips that basically route a pin from one side of the socket to the other because it was in the wrong place in the design, and this is caused by not being able to get your design onto an FPGA early to check for basic sanity issues, leaving you stuck with a pile of chips you paid fab costs on that are wrong.
The RISC-V advantage is allowing a lot faster turnaround to check out the basics. There are possible transition issues from FPGA to ASIC, but basic things, like checking that you placed all the pins in sane locations for nice PCB construction, get done by using the FPGA to emulate the chip.
oiaohm,
It still seems their business models are different. Their respective websites suggest that Marvell sells you a finished product (whether they outsource it or not) and SiFive sells you a design. Consequently I’d expect SiFive’s service to be much cheaper, since their product is data.
I haven’t found out what the costs are for either company’s products though. I’m certainly curious what a finished product would cost, including minimum quantities, going each route, but I don’t see this information published for either.
Also, while RISC-V is an open instruction set you are free to implement without a license, it’s not clear to me that the bitstream product SiFive sells qualifies as “open”. It sounds like they’re selling precompiled binaries, so to speak, and that’s it. Although if I’m wrong, please point me to where it says you can get the source; I’d definitely take a look!
I still maintain FPGAs are nice for prototyping, however you don’t get to pick the layout for FPGAs anyways, that’s done by whoever designed the FPGA and chances are your final ASIC chip will have different IO needs than the FPGA, which is inherently generic by design with many pins not being applicable for your application. There’s always going to be a risk you get something wrong even if the logic is right, which is why it is worth having several people look it over before sending it off to the fab regardless of whether you tested on an FPGA.
It’s not really an advantage for RISC-V though, you can get ARM IP for FPGAs too if you wanted.
https://developer.arm.com/ip-products/designstart
But it goes back to the point that unless you’re actually designing custom CPUs, running CPUs on top of an FPGA isn’t all that useful for ordinary users. They’re slower, less efficient, and more expensive than the real deal. If you eliminate the software->cpu->fpga indirection and instead go straight from software->fpga, I think that could really spark some new innovation and even new competition for GPGPU, but in terms of what we’re talking about, using an FPGA to run a CPU, I don’t find it especially valuable. We may have to disagree.
I would say most mobile communication protocols could be implemented on a CPU, because the TX/RX delay requirement is relaxed: LTE HARQ feedback timing is after 4ms. However, WiFi is almost impossible to implement on a CPU, because the acknowledgement is required after 10us (SIFS).
Paper to explain:
https://www.orca-project.eu/wp-content/uploads/sites/4/2020/03/openwifi-vtc-antwerp-PID1249076.pdf
For WiFi it is almost impossible to implement SDR on a CPU. You need an FPGA, like this project: https://github.com/open-sdr/openwifi
The reason is explained here: https://www.orca-project.eu/wp-content/uploads/sites/4/2020/03/openwifi-vtc-antwerp-PID1249076.pdf
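The timing gap is easy to see in numbers (the bus latency figure below is an assumed typical value):

# Why wifi ACK timing rules out host-CPU SDR: the reply must start within
# SIFS, but just crossing the bus to the host CPU takes longer than that.
sifs_s = 10e-6        # 802.11 SIFS: ACK must begin 10 us after reception
lte_harq_s = 4e-3     # LTE HARQ feedback deadline: 4 ms
bus_rtt_s = 100e-6    # USB/PCIe round trip to a host CPU (assumed)
print(lte_harq_s / sifs_s)   # ~400 -- LTE's budget is ~400x more relaxed
print(bus_rtt_s > sifs_s)    # True -- the bus alone misses the SIFS deadline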
I would not call anything involving Imagination Tech’s PowerVR graphics 100% fully top-to-bottom open source hardware… :-/ IMG with PVR was, and still is, one of the most open-source-hostile companies. They could never get their act together to release even the most minimal register-level specification to draw some triangles or blit some rectangular screen VRAM data. What the freaking heck is keeping companies from publishing the most minimal and vital product documentation? It costs them nothing, and nobody wants or expects them to write the driver. Given some proper docs, plenty of developers would have taken up the task over the years…
rene,
Well, obviously they believe that the market will buy their products with proprietary drivers and without the specs. They’re not wrong. They’re selling hundreds of millions of proprietary chips to manufacturers who for their part potentially benefit from the planned obsolescence that proprietary components enable. If they were open, it would be a hell of a lot easier to extend the life of products with open source updates. Whether it’s appliances or mobiles, manufacturers benefit from short life cycles.
It isn’t clear how we can fix this mess, given that a company’s economic interests are misaligned with customer interests. But one thing is clear: so long as huge corporations hold all the power and the movement protesting proprietary parts is too small to make even the slightest difference, we’re not going to see widespread changes. It pains me to say it, but the norms for existing markets are already set and aren’t likely to get better without some kind of major intervention. Our best hope for change is in new markets where the norms haven’t been set yet.
I’m not optimistic about it, we’ve been moving in the wrong direction. I still remember 30 years ago many computer components actually came with detailed service schematics and programming samples out of the box. This was normal. We’re so far from that today. I don’t believe those norms are ever going to return.
A few pieces of legislation would fix the problem.
1) US Gov only buys and uses FOSS hardware and software which is unencumbered by patents, binary blobs, etc.
2) Companies must publish manuals, specifications, source code, tooling, and the means to unlock the products they sell to allow the products to be maintained by the consumer 5 years after the initial release date to the public.
> A few pieces of legislation would fix the problem.
And if wishes were fishes the rivers would be full. There is no evidence of any will to make that happen, so it doesn’t matter how many or few pieces of legislation it would take.
Also, everything should be under the MIT license, with the no-advertising clause and an exemption-of-patent-claims clause, of course.
They would lose professional services revenue, and customers wouldn’t be dependent on them. Vendor lock-in at the hardware level is one of the reasons companies like ARM so much.
It mixes one 32-bit processor (for control) with 64-bit ones. I wonder what the cost (in power consumption, memory usage and bandwidth, etc.) would be of using a 64-bit one for everything, given that in RISC-V there is little difference in terms of design. I suppose this “control” processor is not supposed to be accessed by the user.
What I expected from a Raspberry Pi alternative is for the GPU to be RISC-V based.
This is actually very common on “PC” architectures as well, in the form of a BMC (dell DRAC, HP ILO, etc), also commonly referred to as a service processor or management processor. They often aren’t even the same ISA.
They are probably using it to manage and bootstrap the system.
p13,
Most of those are optional accessories that aren’t necessary to boot the system. Still, they are extremely useful for “remote hands” administration. Some Intel CPUs have vpro built in, which runs on a low speed low power core that can be used when the system is off. These are so full of potential but every damn one of them is running a proprietary firmware that cannot be audited or replaced by end users. We’re forced to run the manufacturer’s code and on top of that every one of them has had serious vulnerabilities that put their trust into question. System management really needs to be open source to be trustworthy.
This is a great discussion, the sort that got me interested in this site. Funny that something as obscure as an open source RISC SBC would get people talking. FWIW, I tend to agree with Alfman.
I realise this hardware can be used to make all sorts of software defined devices, but as a carbon budget killer no credible manufacturer is going to go down that path when a dedicated $0.02 chip can do the job at 1/5th or 1/10th the energy budget. I spend all my R&D days fine tuning routines and designs to get maximum battery life out of IoT / IIoT sensors, I’m not going down the path of SDR or similar no matter how cheap and powerful the hardware becomes, and the hardware cost is already almost irrelevant.
cpcf, a lot of wifi and bluetooth chips are SDR inside. Yes, I understand the maximum battery life problem. The reason a lot of wifi and bluetooth chips are SDR inside is to allow firmware to change the functionality as required, instead of having to remake the chip. You have country regulations on what encryption, radio frequencies and the like you are allowed to use, and those can change with very little notice. Do note there is often a 12 month delay between finalizing a chip design and getting it taped out in volume.
1) You can tape out the most power-perfect wifi/bluetooth design and, before you get to use it, have some regulator or standard change the rules on you, leaving you with a stack of useless silicon.
2) You can make a wifi/bluetooth SDR with a customized CPU/FPGA where, as long as the change is not too extreme, you can just apply a firmware update.
Yes, the hard reality, cpcf, is that a lot of those really cheap wifi and bluetooth chips are really SDR inside, because it reduces the lemon chip problem. It would be nice to have proper open hardware SDR as an option.
Heck, the 3G, 4G and 5G modems in phones are in fact SDRs running a Linux kernel with closed source userspace programs on the controller chip. Yes, these devices appear to be just a chip you hook up, until you look at the firmware blob the controller needs and see a custom Linux.
Software defined radio is insanely common. A lot of software defined radio is just not end-user-controllable software defined radio.
dedicated $0.02 chip << This turns out to be horrible for radio chips; there are a lot of cheap chips that under close inspection are in fact custom FPGAs, not ASICs. Same reasons: government regulation over allowed frequencies, and the need to support security updates.
cpcf, in radio chips what your design treats as a dedicated chip can really be SDR inside, and the reason the chip is cheap and cost effective is that being SDR means the maker doesn't need to scrap silicon when a regulator changes the rules. Do note that in one bad year for wifi vendors, government regulators of radio usage changed the rules 20 times.
I can understand an ASIC for something like bitcoin mining. When playing in radio it's a different problem, with all the different government regulation over radio frequency usage: the rules can change faster than you can produce silicon. I would like to see a RISC-V SDR instead of the current ARM SDRs you find in mobile phones for the cellular modem, wifi, bluetooth and FM radio.
oiaohm,
SDR means you are writing software to decode the analog radio streams, but doing that requires a fairly beefy CPU relative to the bandwidth. Typical networking gear uses relatively weak CPUs because the heavy lifting is done in silicon. I think we need to distinguish between a “software defined radio” and a software controlled radio. Having the ability to program the radio’s frequencies does not imply that the radio is implemented as software. In fact most (all?) SDR implementations actually sit behind downconverting tuners, and it is these tuners (and not the SDR) that are responsible for setting/selecting the radio frequencies.
Why is it so horrible? Assuming a chip faithfully implements the standard at a low cost and power budget, what makes it bad? The IEEE standards can be implemented in silicon with some programmable parameters. The frequency is almost certainly one of those programmable parameters that can be changed programmatically by the host without any new silicon. SDR is not necessary.
I’m not arguing that a silicon ASIC can match the flexibility of SDR; obviously it can’t. However, an ASIC is just fine for the mass produced devices that we all use, and you’d need considerably more expensive and powerful CPUs to handle the radios in software. FPGA solutions can be acceptable, much better than a CPU. Still, in the long run, mass produced ASIC chips that implement the standard are the cheapest and most energy efficient solution. Higher layers of abstraction can be used, like FPGAs and CPUs, but those layers incur additional costs. Obviously I don’t need to tell you that software is great for prototyping, but when you go to mass production the software inefficiencies add up, especially for things like mobile radios that remain on 24/7. Can we agree on that?
It’s like Intel chips adding AES instructions to x86: obviously it was fairly easy to do in software, but it is far more efficient in silicon. There’s a parallel debate to be had here: “if the standard changes then the silicon becomes useless and therefore software is better”. Sure, I understand the point. Some trivial changes (like the number of rounds) won’t impact the silicon, yet more complex changes may indeed render the silicon obsolete. Clearly software is more adaptable, yet performance and efficiency are still reasons to prefer silicon even when you can in fact do it in software.
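Ballpark numbers make the gap concrete; the cycles-per-byte figures below are rough, commonly cited magnitudes, not measurements:

# Rough software-AES vs AES-NI throughput on a 3 GHz core (illustrative).
clock_hz = 3e9
sw_cycles_per_byte = 15.0   # table-based software AES (rough figure)
ni_cycles_per_byte = 1.0    # pipelined AES-NI (rough figure)
print(clock_hz / sw_cycles_per_byte / 1e6, "MB/s in software")   # ~200 MB/s
print(clock_hz / ni_cycles_per_byte / 1e6, "MB/s with AES-NI")   # ~3000 MB/s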
“SDR means you are writing software to decode the analog radio streams, but doing that requires a fairly beefy CPU relative to the bandwidth. Typical networking gear uses relatively weak CPUs because the heavy lifting is done in silicon. I think we need to distinguish between a “software defined radio” and a software controlled radio. Having the ability to program the radio’s frequencies does not imply that the radio is implemented as software. In fact most (all?) SDR implementations actually sit behind downconverting tuners, and it is these tuners (and not the SDR) that are responsible for setting/selecting the radio frequencies.”
True, most implementations of SDR sit behind downconverting tuners.
“Why is it so horrible? Assuming a chip faithfully implements the standard at a low cost and power budget, what makes it bad? The IEEE standards can be implemented in silicon with some programmable parameters. The frequency is almost certainly one of those programmable parameters that can be changed programmatically by the host without any new silicon. SDR is not necessary.”
This is where you have screwed up. It's not just about implementing the IEEE standards; you need to pass government regulation as well. You need to be able to programmatically change the tuners around. An FPAA with fixed function blocks is the common wifi/bluetooth tuner part. You were not thinking that the complete tuner needs to be programmatically swappable. Here's a very simplified example: an FPAA has 2 fixed-block downconverting tuners, A and B. In country 1, A assigned to wifi and B assigned to bluetooth passes validation, yet the reverse fails; in country 2, B assigned to wifi and A assigned to bluetooth passes validation. It is at times even possible for an FPAA to have enough configurable parts to assemble a complete custom tuner.
Yes, the majority of radio chips contain a field-programmable analog array (FPAA) with fixed functions; this lets you do with the analog parts what you would do with an FPGA with fixed functions.
Next, to avoid being left with a stack of useless silicon, you need to aim to support more than the current version of the IEEE standards for wifi/bluetooth.
“I’m not arguing that a silicon ASIC can match the flexibility of SDR; obviously it can’t. However, an ASIC is just fine for the mass produced devices that we all use, and you’d need considerably more expensive and powerful CPUs to handle the radios in software. FPGA solutions can be acceptable, much better than a CPU. Still, in the long run, mass produced ASIC chips that implement the standard are the cheapest and most energy efficient solution.”
You keep going back to the most power-efficient solution. When it comes to wifi, bluetooth and cellular, it's not about the most power-effective design any more. The devices need to pass different countries' regulator requirements, and you need future proofing to support more than the current IEEE standard.
“It’s like Intel chips adding AES instructions to x86: obviously it was fairly easy to do in software, but it is far more efficient in silicon. There’s a parallel debate to be had here: “if the standard changes then the silicon becomes useless and therefore software is better”. Sure, I understand the point. Some trivial changes (like the number of rounds) won’t impact the silicon, yet more complex changes may indeed render the silicon obsolete. Clearly software is more adaptable, yet performance and efficiency are still reasons to prefer silicon even when you can in fact do it in software.”
AES is a good example. This is why for wifi, bluetooth and cellular you find something like an ARM core with encryption extensions. Implementing your own encryption design in silicon and getting it certified as correct is very expensive. Because the design contains an ARM or some other CPU core, it can drop back to software encryption if it has to. OK, the device is not going to perform great, but some function is better than absolutely none.
The reality is that a lot of the bluetooth, wifi and cellular silicon inside the chip is not a pure ASIC but a mix of an FPAA with fixed functions, an FPGA with fixed functions, and finally a CPU with fixed functions like AES encrypt/decrypt in hardware. A lot of these chips are in fact full-blown SDRs, right down to being able to program a custom tuner in the FPAA if you can get the documentation for it. This also explains why the FCC and others can get worried when people start talking about open source firmware for these parts: they are not hardware-restricted. Yes, not using the fixed function blocks in the FPAA/FPGA/CPU inside these chips will make the chip run like a major power vampire.
Remember, a non-functional chip cannot be sold and has to be recycled. A chip that is horribly power hungry but works with current standards and passes current-day government requirements still has a decent chance of at least selling off at cost. This is why, if you make radio parts, an exact-to-standard ASIC means having to charge more to recover the development cost faster, because the missing flexibility means you can be left holding the bag with a stack of unsellable silicon. Yes, those taking those chips for recycling will most likely charge you for them.
Alfman, basically there is a halfway point between an ASIC and FPAA/FPGA/CPU combo solutions. Fixed functions in the FPAA, FPGA and CPU give you a lot of the power advantages of making a dedicated ASIC (OK, not all of them). Keeping the SDR part by going FPAA/FPGA/CPU combo massively reduces the odds that some standard or government regulation change will leave you holding a stack of unsellable parts. This is the common trade-off.
Also, the FPAA/FPGA lets you recover a lot of chips with minor defects. Let's say you designed a chip with 2 downconverting tuners and one's a dud; you can wire it out in software. Yes, you are using more silicon area, but your functional yield per wafer can in fact be higher. Remember, with an absolutely power-perfect ASIC, one silicon defect can make it a total paperweight.
You have a three-pointed problem:
1) Power effectiveness.
2) Production yield.
3) Future proofing (the ability to support future IEEE standards and government regulation).
Yes, the more power-effective the design is, the lower the yield and future proofing are with radio stuff. Alterations to increase yield will normally come at the cost of power effectiveness. Yes, future proofing also comes at the cost of power effectiveness.
Highly power-effective radio gear is not normally your really cheap stuff, due to the yield problem. An FPAA/FPGA/CPU-with-fixed-functions combo gives you really high production yields and really good future proofing, with decent power effectiveness.
ASIC does not always win. The middle ground of FPAA/FPGA/CPU with fixed functions is commonly not thought about, and it's what your common wifi, bluetooth and cellular chips are. Yes, the FPAA/FPGA/CPU-with-fixed-functions chips are full SDRs in most cases.
oiaohm,
The FCC certification requires that the end product doesn’t broadcast on prohibited frequencies, but the underlying chips are merely following the instructions of the firmware used to program them.
Not for nothing, but this situation is the exact same with the SDR solution you’ve proposed where frequency selection is still set by firmware or software. There’s no inherent advantage to using SDR on CPU, FPGA (or heck even a GPGPU) over an ASIC when it comes to controlling the radio parameters.
Granted, you’d need a new ASIC chip to support the next standard, but realistically, even if you had an FPGA, it isn’t clear that A) the FPGA chip would be over-provisioned to such an extent as to allow it to support future IEEE standards, B) the manufacturer would even bother to add new standards to a device that was marketed and sold under the old standard, or C) the users would even think or care about flashing new firmware when it’s available.
Alas, this is true of all hardware. Sometimes the old components still have life ahead of them in new low power IOT appliances that don’t need the latest and greatest. I’m probably not the only one buying old stock for new arduino projects 🙂
An ASIC solution would pass FCC certification as well as an SDR or FPGA solution would. Obviously you can’t update an ASIC, but frankly many system builders are buying turnkey solutions to use as building blocks. It’s not their goal to build WIFI chips in-house; they’ll buy a few samples, test whether they work, and then buy boatloads of them to ship in their products. To the engineers specing these parts, a low cost and low power ASIC provides real benefit now, whereas FPGA or SDR solutions cost more and use more energy for a hypothetical benefit that they may never use nor even make accessible to the customer. This isn’t to say FPGA or SDR never have a place, but ASIC solutions can and do win out for some projects.
I’m not saying it does. For prototyping SDR is absolutely awesome! However we cannot overlook the millions of products where engineers are just looking to add a known good WIFI/bluetooth stack as cheaply and efficiently as possible. The fact is SDR and FPGA solutions check boxes that many engineers won’t care about at all for their projects. Say I were building a wifi thermostat or smoke detector for example:
Would pure SDR work? Sure.
Would FPGA work? Sure.
Would an ASIC work? Sure.
They all work.
Which is cheapest? Most likely the ASIC.
Which is most battery efficient? Most likely the ASIC.
Which would be easiest to interface with a cheap micro-controller? Probably the ASIC.
There are countless ways to tackle the problem; you could engineer the product around something like a Raspberry Pi Zero W for ~$10. As cheap as that is, though, it would cost more per unit, you'd still end up going through an ASIC module to do wifi anyway, and you’d run through batteries far quicker.
Again, I’ve used SDR and it’s awesome, but mass produced devices (including wifi modules) will tend to favor ASIC.
–There are countless ways to tackle the problem; you could engineer the product around something like a Raspberry Pi Zero W for ~$10. As cheap as that is, though, it would cost more per unit, you'd still end up going through an ASIC module to do wifi anyway, and you’d run through batteries far quicker.–
There is one big problem with the example you just gave: the Cypress CYW43438 wireless chip on the Raspberry Pi Zero W is doing lots of its heavy lifting in a CPU.
Do note that when that chip was first released it was only Bluetooth 4.2; it has been updated to Bluetooth 5.1 by firmware.
https://www.cypress.com/file/298076/download
Page 5 by document numbering (page 6 of the PDF) has the block diagram. What you really have here is an ARM Cortex-M3 with 2 radios.
It's basically an ARM core design with fixed-function radio blocks added to it. Those fixed-function radio blocks only do the radio part of the IEEE standards for wifi and bluetooth. The non-radio parts, like encryption, handshaking and connection maintenance, are all on the ARM Cortex-M3.
Yes, it is possible for the ARM Cortex-M3 to receive the raw radio ADC signal from the 2.4GHz wifi block; that's why the “radio digital” note exists in the block diagram.
If you really want to be insane with the Cypress CYW43438, you can do bluetooth with the wifi radio. Please note that the stunt of using the wifi part for bluetooth started with one of the Cypress chips before this one, which did not have a bluetooth block at all.
There is an FPAA in the Cypress CYW43438; that's how the aerial switching and amp switching is pulled off.
It's a CPU-controlled radio at minimum. But most of your wifi chips turn out to be SDR capable.
Cypress is one of the more streamlined ones. I would still like to see the Cypress CYW43438 done with RISC-V instead.
oiaohm,
I don’t see what the problem is.
Sorry, but SDR is not feasible for modern WiFi. With 802.11ac wifi you can have 160MHz of bandwidth with 8 MIMO streams. I assure you that’s using a silicon DSP or ASIC of some sort, maybe in combination with an FPGA, but those wifi dongles aren’t using pure SDR to process the radio waves directly in software.
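A quick sanity check on the raw numbers (the sample format is an assumption):

# Raw I/Q data rate a pure-software 802.11ac receiver would have to ingest.
bw_hz = 160e6             # channel bandwidth, roughly the complex sample rate
streams = 8               # MIMO spatial streams
bytes_per_sample = 4      # 16-bit I + 16-bit Q per sample (assumed format)
rate = bw_hz * streams * bytes_per_sample
print(rate / 1e9, "GB/s of samples before any demodulation")   # 5.12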
cpcf,
I’d be interested in hearing more about what you do 🙂
Thom, is there any reason you imagine Power to be open source hardware? It’s not. It’s just another architecture with public documentation of the instruction set architecture. SPARC had, for the first incarnation (Niagara) of the sun4v processors (T series), an actual open source CPU implementation and firmware. POWER is not open source; it is open spec. And it really doesn’t matter. All CPU architectures are well documented and understood. You can write your own firmware for most Intel and AMD chipsets, if you have the skills and resources. Most of the firmware in a modern computer is actually open source, and it’s called EDK2. The proprietary parts are the ones written by Intel/AMD for the specific chipset, but you can write your own based on the chipset datasheets.
Furthermore, even if the CPU is open source, it’s useless without an open-source and/or open-spec Chipset. Most of the system initialisation happens by poking registers in the chipset to detect and initialise memory, to detect and initialise the PCI-E bus, etc.
Power is an architecture like any other, that is used today less and less. IBM still pushes it for banks and for SAP by giving 90% discounts on grossly overpriced hardware. They claim that any 2u server has a list price of $1M, and they are giving you a super offer of 90% discount, but in the end you are buying a server worth at most $50k for $100k. Same strategy that it uses for storage where they claim that the list price of a FlashSystem 9000 is $1.3M, but you can buy it for $170k.
You are fond of the Raptor. But what it is in reality is a 2005 PowerMac G5 Quad (eBay for $200) using less power and able to run newer versions of AIX and Linux, including Linux-KVM. It’s 2005 performance sold in 2020. It does offer an open source ROM, but that’s not something that is in any way useful; the IEEE 1275 firmware in the PowerMac is quite standard and very well understood by open source developers.
If you want open source firmware, a lot of boards are supported by u-boot, including x86 boards.
d3vi1, you missed the details.
https://github.com/antonblanchard/microwatt
This is a power9 CPU reference design, and yes, it was done with IBM's blessing. There are OpenPower CPUs, like microwatt and the later chiselwatt by antonblanchard, that are true full-blown open hardware all the way down. They're not designed to be the fastest.
https://github.com/openpower-cores
There's the A2, which is a power7-class CPU, an old desktop/server-class power CPU design. Interestingly enough, the A2 was test taped out at 7nm, and yes, the 7nm version runs a lot faster than the original power7 chips. Yes, the parts provided by openpower-cores there are what you require to make an A2 system.
Please note all these OpenPower reference CPUs are royalty free for ASIC or FPGA usage.
https://openpowerfoundation.org/final-draft-of-the-power-isa-eula-released/
Yes, that fact is mentioned there.
You were right to pull me up on Power10, as there isn't one yet. But the current license on the Power ISA says that if someone implements a to-specification Power10 design and releases it, there are no royalties to pay on the chip design you release, even if you use stuff that an OpenPower member patented.
OpenPower is a little more than “public documentation of the instruction set architecture”. OpenPower has public reference CPU implementations and the required patent grants to implement your own without having to look over your shoulder for a hidden baseball bat.
Yes, claiming OpenPower is not fully open source would be fair, since a lot of the OpenPower stuff is still throw-the-design-over-the-wall after it's like 99% designed internally. But OpenPower development is moving in an open source direction for the low-level silicon.
d3vi1, if you look at the facts, OpenPower is more than an open specification, though not quite what you would call a full open source process.
The chiselwatt power CPU design is called chiselwatt because the raw design is coded in Chisel, the same language used for RISC-V cores. There are other overlaps too: a chiselwatt can use the same chipset parts as a RISC-V.
It would be really warped to have a chip with both chiselwatt and RISC-V cores.
https://diglloyd.com/blog/2006/20060128_3-PowerMacG5QuadPowerUsage.html Horrible as it sounds, you cannot boot a PowerMac G5 Quad without at least a 500 watt power supply. A Raptor Talos motherboard will boot safely on a 400 watt power supply. Yes, the Raptor prebuilts come with huge power supplies because they expect you to populate the 3 PCIe 4.0 x16 slots with some fairly power-hungry cards.
Sorry, the old Power Mac G5 Quad motherboard and CPU combination is slower and more power hungry than the new Talos motherboard.
https://www.phoronix.com/scan.php?page=article&item=power9-x86-servers&num=2
Are you really attempting to say a Power Mac G5 Quad from 2005 can in fact keep up with an EPYC 7251, which is roughly the x86 equal of the Raptor Talos boards? The Raptor stuff is 2016-2018 performance, which well and truly kicked the 2005 stuff to the curb on power usage and performance. Bang for buck, 2016-2018 vs 2020 is still a huge difference.
Really, none of the old power-CPU-based Apple hardware is competitive on power vs performance against the new Raptor stuff. On dollars vs performance the old Apple systems can win, but then you also have old hardware with old-hardware reliability issues.
I must have missed the news that there are open source power architecture designs. I take back what I said about the open source cores. I am still convinced that it’s useless, unless all the necessary chips on the board are as open as the cpu, mostly since today they go together, chipset and CPU. You can’t plug a POWER CPU in any motherboard with any chipset. And I’m not talking about socket layouts and other electrical level issues either.
I already mentioned the power usage. I never disagreed on that one. 90nm CPUs are not as efficient as 14nm CPUs. But keep in mind that a Quad 2.5GHz 970MP which costs about $250 is not a lot slower than the quad 4GHz Power9 that Raptor is offering at $7000. Sure, the electric bill will be half. Sure, the Talos will have PCIe 4.0 vs the PCIe 1.0 of the Quad PowerMac. But in 15 years I would have expected a lot more. And I’m quite convinced that the PowerMac is sexier and a real improvement in the silence department.
I’ve been a UNIX admin for 15 years, mostly Solaris SPARC, but I get POWER systems in front of me even today. I am quite convinced that they are not worth it. Now, there are two things that you can run on POWER: Linux and AIX. If you want to run Linux, there is ZERO incentive to choose POWER over x86, mainly because for the same performance x86 is half the price. Plus, while incredibly well supported by Red Hat, Linux on POWER is still behind on a lot of aspects. Licensing RHEL and OpenShift is “specially priced” on POWER, so there is no business case. All architectures that have no business case are destined to die, some slower, some faster. We’ve already lost MIPS, Alpha, IA64, HPPA. We are about to lose SPARC and POWER.
And if you want more performance than an X86 can provide (64 cores with 8 threads each running at 5GHz or something similar), SPARC is still incredibly well positioned. And Solaris is still a decade more modern than AIX. Still dying at the murderous hands of Larry Ellison, but more modern than AIX. I doubt that there are more than 1000 installs of a Linux Distribution on a POWER or SPARC system of that class or higher, because it probably is untested and buggy.
Alfman,
Keep in mind that the proprietary firmware is mostly open source and available on GitHub. What the vendors add (coloured menus, Microsoft certificates and boot logos) is mostly irrelevant. The only thing that is still sadly proprietary is the platform PEI and DXEs that Intel provides to the vendors as binary-only in a PlatformPkg. In SPARC terms it would be an open source OBP and a proprietary POST.
Power9
A 4 core/32 thread 3.8GHz CPU in your $7000 workstation is 3 times faster than a 4 core/4 thread 2.5GHz CPU that also used to cost $7000 15 years ago and now costs $250. The Pentium4 650 is about 24 times slower than the i7-9700k for most loads. You actually chose an incredibly good comparison:
P4-650 is an 84W 1C/2T 3.4GHz CPU built at 90nm.
i7-9700 is an 95W 8c/8T 4.9GHz CPU built at 14nm. Roughly the same TDP and 24 times the performance.
Dual 970MP is a 250W 4C/4T 2.5GHz configuration built at 90nm.
Power9 included in the Raptor system is a 90W 1P/4C/32T 4GHz CPU built at 14nm. It offers 3 times the performance of the QuadMac in 1/3 the CPU power and thermal envelope. Assuming that you compare a similarly power and thermally budgeted system, it would offer 9 times the performance in the same power and thermal envelope.
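That 9x follows from simple arithmetic, assuming performance scales linearly with the power budget (a generous assumption):

# Sanity check of the "9 times in the same envelope" claim above.
perf_ratio = 3            # Power9 vs Quad G5 performance, as claimed above
power_ratio = 250 / 90    # G5 envelope vs Power9 envelope, ~2.8x
print(perf_ratio * power_ratio)   # ~8.3, roughly the claimed 9x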
Maybe Steve was right to drop Power. Intel, which has performed disappointingly in the past few years, has managed to increase performance 24 times in the same budget, while POWER has only increased 9 times in the same budget.
It really doesn’t matter how good POWER is, even if it was the best architecture on the board at this point in time, it’s used by less than 1% of the installs out there. ARM beats everyone and is the winner in the long run, as I predicted about 11 years ago. Everything is ARM, and nobody can compete with it in the same Power/Thermal envelope. Just compare 3D Games on the iPad with 3D games on the PS3. The PS3 was a 90W Dual POWER CPU plus 7 SPUs and a 50W GPU. The iPad is a 5W CPU/GPU. 140+W vs 5W. And the 5W offers better resolution, better frame rates, better anything.
Sure, when Apple chose to move to ARM, they sacrificed our ability to run Linux or anything else on that hardware, but the last time I booted a Linux on non-virtual laptop hardware was 5-6 years ago, so it doesn’t bother me that much. What bothers me is battery life, heating, sustained performance and build quality. Fundamentally, my Laptop or Workstation is a tool that I need for almost everything I do. If I had to choose between open source or productive I’ll always choose productive. Because it’s what pays the bills and gives me time to spend with the family.
No architecture other than x86 and arm have gained critical mass. Without it, the development budgets are lacking and the architecture becomes uncompetitive. Don’t be mistaken, POWER is uncompetitive.
IBM came to our office showing us how a POWER system can beat x86 for SAP. But when we put their 70% discounted POWER system through a TCO simulation, it lost badly to Dell R740s. We have 30 admins that can install 500 x86 servers in a day. We have 1 admin that can install 4 POWER servers in a day. It needs to make sense. If POWER were adopted by HP and Dell, then I wouldn’t mind giving it a chance. Remember the industry solidarity around IA64? Everyone ported their UNIX to it, everyone had IA64 systems in the pipeline, everyone was saying that it would succeed because of that. But history has proven that even that is not enough. POWER had its chance to succeed, but the year it started failing was 1997, when Windows NT 4.0 stopped receiving updates for it.
And I am writing this while wearing a black T-Shirt from the mid 2000s with OpenPower written on it in fluorescent green.
Let power die gracefully. Maybe RISC-V will stand a chance now that NVidia is going to screw ARM for all of us.
“You can’t plug a POWER CPU in any motherboard with any chipset. And I’m not talking about socket layouts and other electrical level issues either.”
https://github.com/openpower-cores The A2 here is an SoC, as in no independent chipset, so a motherboard for it does not in fact need a chipset. Yes, that is a power7-class chip. When the reference server/desktop power8/9/10 designs are released, expect them to be full SoCs as well. I think it was around power6 that IBM started getting rid of the independent chipset. Yes, it's insane that every PCIe lane on the Raptor board goes to the CPU.
“A 4 core/32 thread 3.8GHz CPU in your $7000 workstation is 3 times faster than a 4 core/4 thread 2.5GHz CPU that also used to cost $7000 15 years ago and now costs $250. ”
This is way out; you have gone by clock speed.
https://www.phoronix.com/scan.php?page=news_item&px=PowerMac-Intel-KBL This gives you where an old PPC970 sits. It's absolutely not great.
https://www.phoronix.com/scan.php?page=article&item=power9-x86-servers&num=2
This gives you where the Raptor Talos II sits.
“P4-650 is an 84W 1C/2T 3.4GHz CPU built at 90nm.
i7-9700 is an 95W 8c/8T 4.9GHz CPU built at 14nm. Roughly the same TDP and 24 times the performance.”
The Talos II chips are closer to 30 times the performance of the old PowerMac G5 Quad stuff. 250 dollars x 30 is not really that cheap: to buy as much processing power as the Talos II has in PowerMac G5s would be 7500 USD. So if you are going by dollars per performance, a 7000 USD Talos is cheap and compact vs old PowerMac G5s, and you are not going to have the old-hardware reliability issues.
The A2 reference power7 design is about 24 times an old PowerMac G5 Quad. IPC (instructions per clock) has gone up in the newer generations of power designs by a hell of a lot. Please note the A2 is a 10 year old design.
There are some benchmarks where the new power chips don't get as much advantage as they should, but that still does not make an old PowerMac G5 good buying at 250 dollars. For decent dollars-vs-performance buying, you would have to get a PowerMac G5 for less than 200 dollars, hopefully closer to 150.
The horrible reality here is that in a lot of cases you can get power7 servers second hand for less money than a PowerMac G5 Quad, and of course that power7 server motherboard will run rings around the PowerMac G5 Quad. IBM did some major redesigns in power CPU design just after Apple stopped using power chips, so the G5 is on the wrong side of a performance cliff. Power6 is about as old as you want to go in power chips in most cases if you are after performance vs dollars.
d3vi1,
Weird that you slipped this message to me in that wall of text, haha…
Anyways, can you provide a specific link for what you’re talking about? I’m not talking about example source code; yes, I can find that too. I’m talking about open source code that produces working firmware for specific motherboards. Take my Gigabyte Aorus board for example: where are you going to get source code for a firmware that will boot on it? I’m not sure you can get fully working firmware without the manufacturer’s copy of the code, because some initialization and system management functions are unique to the chipset (such as configuring voltages and fan control loops).
Some source code has been leaked. You might get useful information from that if you happen to find leaked code for your specific system…but you can’t count on that and you don’t have permission to use that code.
http://www.xtremesystems.org/forums/showthread.php?285721-AMI-UEFI-BIOS-source-code-leaked
A lot of vendors will use AMIBIOS as a template, but that’s only a starting point and even that doesn’t necessarily mean a generic upstream version of AMIBIOS will run properly or optimally on your specific motherboard. And that’s the thing, if you are a motherboard manufacturer, then sure you’ve got a lot of starting points for your firmware development. However as an end user you want/need the specific firmware that works with your motherboard. Having generic firmware source code doesn’t do you much good unless you plan on adding support for the chipset yourself.
So I’m curious, do you have any hands on experience replacing proprietary firmware on an x86 motherboard with generic code?
I paid ~$2000 for my POWER9 workstation. Any G5 from the Apple era is essentially unusable at this point, it’s just woefully under-powered. I’ve daily’ed my 32 thread POWER9 machine for a year now without any complaints. It’s the most stable system I’ve ever used – and that’s running 4 displays on an AMD GPU and 4 disks in two ZFS pools with 64GB of ECC memory.
There’s no point in engaging with you when you’re blatantly lying.
d3vi1,
Most x86 boards have proprietary firmware and don’t support either of these though.
Ideally both need to be open. Every board manufacturer can do its own thing, and the only thing that matters for specs like ACPI and UEFI is that the system tables and function handlers are initialized for the bootloader/OS. It is not particularly difficult, but it needs to be customized for the system, and the bits and bobs are rarely documented by the manufacturer. I have some x86 computers with erroneous firmware tables. In Windows it’s common for these kinds of issues to be patched by the manufacturers with driver updates, but the consequence of this can be breaking Linux and other operating systems, which lack the proprietary manufacturer quirks needed to run properly. I am able to patch Linux because it’s open source, but I can’t fix the firmware without reverse engineering it.
Ideally everything would be open source and we wouldn’t be dependent on any proprietary code. It’s difficult to get there with cheap commodity hardware, but there are some manufacturers that make this their goal…
https://www.tomshardware.com/news/system76-disables-intel-me-firmware,36030.html
I’m fairly price sensitive though, I wish these were more affordable.
https://system76.com/laptops
All of your ramblings on “anybody can replace the BIOS on an x86 machine” aside, you’re demonstrably wrong on many aspects of POWER9. The first major part is that the low-level code that actually boots the CPU is fully open-source, as is the software for the management computer and so on. Much of that work is done by Intel ME on current-gen Intel processors and is not replaceable. You’re stuck with the closed-source ME and other software running on your machine well below your OS.
https://wiki.raptorcs.com/wiki/OpenPOWER_Firmware – Every bit of code needed to boot strap the system from power-on is published.
Your agenda is quite clear when you say that a modern Power9 CPU has the same performance as a 2005 Quad-core G5. That’s laughably wrong, and you’re being very dishonest when you even try to make that claim. The 4 core / 32 thread 3.8Ghz CPU in my workstation is as much a PPC970 as a i7-9700k is a Pentium4 650.
You can value different things in a computer, but at least try to make an argument in good faith.
Another pointless SBC where the ISA is ‘open’ (what does that really matter?), yet the parts people actually care about (i.e. can I decode my h265 torrent video?) are closed.
Ironically, the most ‘open source’ SBCs out there are those powered by rubbish Chinese Allwinner CPUs, and not because Allwinner cares about open source, but because thousands of people have spent years reverse engineering binary blobs and Chinese datasheets.
i.e. There’s real Linux Kernel code for the ARM MALI video engine and Allwinner Video Decode IP blocks.
https://linux-sunxi.org/Main_Page
This is about as good as open source is going to get. The ISA? Who cares!
Anonew,
Granted you may not care, but it’s not pointless to everyone. Proprietary firmware is a source of bugs and vulnerabilities that leaves us less secure. And on top of all this, proprietary means less customization. I’d love to be able to customize the management engine on all my computers to do what I want. Just look at all the extensions that open source brought to NAS devices and network routers & access points, adding features far beyond what the manufacturers included originally. “Who cares”? Creative people who believe they can write better firmware & features than the manufacturer, that's who.
I’d be delighted to have RISC-V processors with an open & customizable management engine.
IMG tech and open source in one sentence. That’s a joke, right?