Ars Technica summarises and looks at the various claims made by Micro Magic about their RISC-V core.
Micro Magic Inc.—a small electronic design firm in Sunnyvale, California—has produced a prototype CPU that is several times more efficient than world-leading competitors, while retaining reasonable raw performance.
We first noticed Micro Magic’s claims earlier this week, when EE Times reported on the company’s new prototype CPU, which appears to be the fastest RISC-V CPU in the world. Micro Magic adviser Andy Huang claimed the CPU could produce 13,000 CoreMarks (more on that later) at 5GHz and 1.1V while also putting out 11,000 CoreMarks at 4.25GHz—the latter all while consuming only 200mW. Huang demonstrated the CPU—running on an Odroid board—to EE Times at 4.327GHz/0.8V and 5.19GHz/1.1V.
Later the same week, Micro Magic announced the same CPU could produce over 8,000 CoreMarks at 3GHz while consuming only 69mW of power.
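For scale, here is a quick back-of-the-envelope sketch of what the quoted figures imply in CoreMarks per watt, using only the numbers stated above (these are Micro Magic’s own claims, not independent measurements):

```python
# Back-of-the-envelope CoreMarks-per-watt from the figures quoted above.
# These are Micro Magic's claimed numbers, not independent measurements.
claims = {
    "5.00 GHz @ 1.1 V": (13_000, None),   # no power figure quoted
    "4.25 GHz":         (11_000, 0.200),  # 200 mW claimed
    "3.00 GHz":         (8_000,  0.069),  # 69 mW claimed
}

for label, (coremarks, watts) in claims.items():
    if watts is None:
        print(f"{label}: {coremarks} CoreMarks (power not stated)")
    else:
        print(f"{label}: {coremarks} CoreMarks at {watts * 1000:.0f} mW "
              f"-> {coremarks / watts:,.0f} CoreMarks/W")
```

That works out to roughly 55,000 CoreMarks/W at 4.25GHz and about 116,000 CoreMarks/W at 3GHz, which is exactly why the claims deserve scrutiny.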
I have some major reservations about all of these claims, mostly because of the lack of benchmarks that more accurately track real-world usage. Extraordinary claims require extraordinary evidence, and some vague photos just don’t do the trick of convincing me.
Then again, last time I said anything about an upcoming processor, I was off by a million miles, so what do I know?
Thom Holwerda,
Exactly! I’ve been trying for so long to encourage everyone to be patient and form their opinions based on data rather than marketing claims. It’s surprisingly difficult to convince people to detach themselves from preconceived opinions and look at the data sometimes.
Don’t beat yourself up, the M1 did well in some benchmarks and poorly in others. A lot of people only want to focus on the benchmarks where it does well and ignore those where it is behind. This can be annoying when you try to have an objective conversation, but that is human nature I suppose, haha.
Since the sale of ARM I feel there has been an uptick in RISC-V astroturfing. I’m a bit sceptical of RISC-V as it seems more of an American thing, and I wonder if pushing RISC-V is less about technical and equity issues and more about who ultimately controls and influences the CPU platform.
Marketing, especially unethical marketing, is by its nature not about communication and persuasion on the merits but about subverting your judgement. It is indeed good advice to study the data and the rules behind what generated the data. Some scepticism and expertise is required, and not everyone has the training or time or inclination for this. It’s the same with politics. Lobbyists and vested interests with deep pockets, and now too many politicians, spend more time leaning on marketing than creating good legal frameworks and policy based on the public interest.
Let’s play with hyperbolic statements:
https://www.youtube.com/watch?v=01y6bR6ETpA
Sorry but I don’t get what you are saying.
I’ve been involved in a bit of this stuff for printed SolarPV; in that sector the target is $1/m², but the cost of what you can do in that square metre doesn’t rise proportionally with the density of devices on the film. In that sector it’s more about heat management and durability when exposed to UV.
So I can see this chip stuff in the lower power market becoming very dominant, provided the applications can keep the processors cool.
Interestingly, once the cost target was reached, printed SolarPV didn’t supplant traditional silicon as first thought; it became supplemental. They add one or more layers of printed SolarPV to conventional PV to boost performance.
This may well be the same tactic for the processors, you can imagine an array of these devices working at low bandwidth/demand feeding a centralised conventional chip with heavily curated data, in effect doing all the housekeeping which could massively improve performance and efficiency.
Oh, China is biting pretty hard on RISC-V as well. For example:
https://www.nextplatform.com/2020/08/21/alibaba-on-the-bleeding-edge-of-risc-v-with-xt910/
There is also the open source Hummingbird processors that are aiming at the Cortex-M space. Seeed studios sells various versions of them (and a rather cheap FPGA board if you want to experiment yourself).
Reading through the wiki I note RISC-V has incorporated itself in Switzerland to avoid the issue of unilateral sanctions. Switzerland recently had its own scandal when it turned out one Swiss supplier of backdoored security products was owned by US and German intelligence. Oops.
I’m old enough to remember scenes on the news of Chinese wearing Mao suits and going everywhere on bicycle.
So am I, but times change. I am not saying anything about politics, but their policy of favoring home-grown processors internally is creating great incentive to be competitive. RISC-V is one of the ways they are doing that, along with MIPS, ARM, and x86.
RISC-V doesn’t guarantee open implementations, that is up to implementors to decide. However it provides implementors a common, modular, and extensible ISA that is unencumbered by patents or other IP concerns.
Small world. The politics is a bit of an issue I agree although the law is a lot clearer so easily subject to technical discussion and there is a fair degree of case law and science to lean on.
I do agree with your comments on why the Chinese are using RISC-V and other CPUs and what they are used for. We would do the same if we were in their shoes. As for implementations not necessarily being open, this is why I kind of obsess over the issue, including transcoding and architectural solutions or interfacing for modular systems which allow for genuine mix-and-match solutions. I agree we are unlikely to see any movement on this, although there are plenty of technical people who are interested.
I always thought OpenGL got a lot of things right. There was a universal core you could depend on, with everything else being an extension. For the vast majority of use cases there’s no real need to step away from it. Then we had the Fahrenheit scandal where Microsoft punked SGI and then went on to use their monopoly to force Direct3D on the world, as well as use their position to push into console gaming and the cloud (while not releasing an OS which gave basic users and businesses the capability to host their own local cloud). What the Chinese are up to at a hardware level is a response, but I’m fearing the Chinese are basically taking an open system and (like NVidia, who are ten times worse than AMD/ATI ever were) are effectively closing it in practice.
This is the point where I think engineers need other people who understand the governance and consumer rights issues to step in and add support otherwise engineers are always going to be at the mercy of the boss class and financiers. An open and competitive architecture won’t happen unless there is this involvement.
HollyB,
This is true. People often take such things for granted, but if too few people take an active role to ensure open platforms remain viable, they can become marginalized, niche “second-class citizens” in the real world.
@HollyB
I hear what you are saying, but I think you are looking for RISC-V to be more than it is, and more than it wants to be. There are completely open and inspectable implementations of RISC-V that I think satisfy what you want, but at its core RISC-V is intended to spur innovation, research, implementations, etc. by providing a common, IP-free ISA. That is it.
Once you get to the FAB you are so caught up in proprietary processes that you simply can’t be as open as you want (if I am reading you correctly), and if there were more restrictions placed on it then you wouldn’t see as many private companies adopting RISC-V so quickly (for example Western Digital). Given time (and this is still very new) you probably will see chip manufacturers who are as open as they can possibly be; hell, we may see completely open fabs at some point. But isn’t it OK for those to be a subset of the whole, so consumers can choose, and still know that code will run on those processors just as transparently as on proprietary implementations?
@jockm
Fair comment. I’m more concerned about the abstract meta stuff like interoperability versus lock-in than what happens at the FAB end of the problem. I don’t know enough about the engineering to know what is covered by patent versus trade secret to know how much or little they can open up and this is before international security and trade wars are considered.
At this point I’d be happy if someone with influence produced a discussion document covering things like access to instruction sets, interoperability with things like transcoding, the OS and VM layers, support for end users’ investments in software, the use of escrow, and barriers such as copy protection and copyright. There would be other non-technical considerations of course, but I would expect anyone publishing this to consult and fold in the relevant sections. I’m content to leave things there for now. Everyone has their opinions and musings. I’m just happy being able to broach the subject. If it’s something people can run with at some point I’m sure they will pipe up.
@HollyB
Well you can see some of that discussed in the FAQ, and you can get into the weeds by looking at the implementer’s guide. But let me try and summarize as best as I can for you:
It’s a modular instruction set architecture that can describe everything from an 8-bit to a 128-bit computer, but the well-defined ISAs are for 16-, 32-, and 64-bit systems. When I say modular I mean that if you don’t want floating point, you can leave out that module, etc. In addition, vendors are allowed to create their own modules to push functionality down to the processor level (for example, this is what WD is doing).
So as long as the bitness and subsets match, binary executables will carry over and run the same, and this is what you (or at least I) would want. This means that a RISC-V CPU can be anything from a microcontroller to a server-grade CPU, and doesn’t have to implement the modules it doesn’t need.
But there isn’t a specific standard ensuring that peripherals all work the same, or sit at the same memory addresses, etc. on a microcontroller. RISC-V endorsed the Wishbone bus for systems on a chip, but vendors aren’t required to use it.
RISC-V doesn’t concern itself with operating system interoperability, because to dictate that would be to limit implementors. So calling conventions, etc. aren’t defined. GCC and LLVM collaborated on what they wanted at the compiler-implementation level, but anyone is free to define their own (as with any other CPU).
So to summarize, there is binary compatibility so long as bitness and ISA subsets match, but that doesn’t mean that you can just move from one CPU to another and assume that IO and the like all work the same.
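The base-plus-modules naming can be made concrete with a toy sketch. A RISC-V ISA string like `rv64imafdc` encodes the base width plus the single-letter extension modules described above (this parser is purely illustrative and simplified: it ignores version numbers and multi-letter extensions):

```python
# Toy parser for a RISC-V ISA string such as "rv64imafdc".
# Simplified illustration: ignores version suffixes and "Z*" extensions.
EXTENSIONS = {
    "i": "base integer",
    "m": "integer multiply/divide",
    "a": "atomics",
    "f": "single-precision float",
    "d": "double-precision float",
    "c": "compressed (16-bit) instructions",
}

def parse_isa(isa: str):
    """Return (register width, list of extension names) for an ISA string."""
    assert isa.startswith("rv")
    width = int("".join(ch for ch in isa[2:] if ch.isdigit()))
    letters = isa[2:].lstrip("0123456789")
    return width, [EXTENSIONS[ch] for ch in letters]

print(parse_isa("rv64imafdc"))  # a 64-bit core with the common extensions
print(parse_isa("rv32imc"))     # a small 32-bit microcontroller profile
```

Two binaries built for the same string (same bitness, same extension letters) should run on either core; drop the `f`/`d` letters and you have a core with no floating-point hardware at all.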
Does that help?
@jock
You’ve explained some of the detailed issues, which helps, but there are two issues here. The main problem at the abstract level is core versus extended functionality. I’m bothered about the general-purpose baseline versus the use-case-specific implementation issues. There is flexibility, but this contains gotchas.
To borrow from (classic) OpenGL again, you have core functionality which is good enough for everyday use. The industry then broke down into industrial use versus game use. A lot was identical, differing only in implementation: implementations certified for industrial use took fewer shortcuts and were pixel accurate, while retail implementations were a bit quick and dirty in places and sacrificed accuracy for performance reasons. For a broader context there are software versus fastpath issues, where a given OpenGL function may have been fully or only partially implemented in typically faster graphics card hardware. (In some cases the CPU was actually faster than the hardware renderer due to the technology of the time. With modern back-end cloud implementations being available, as in the PS5, in some cases the fastpath will be over an external network.)
It may be that RISC-V benefits from some extensions to facilitate co-existence of OSes and portability. I don’t really know, being only casually familiar with it. Where the balance lies is a question between RISC-V, OS vendors and IHVs, and consumers, and I think there is some scope for discussion. I expect government regulators would have an eye on this too if public discussion got enough traction.
The nitty gritty of transcoding and subsystems (and VMs) cooperating with each other so that anyone can run anything they like without vendor lock-in and forced obsolescence is a technical thing. I think this really needs a technical vision articulated, examining what is and isn’t possible, then a deeper look at the gotchas, whether vendors will cooperate or not, and the use and abuse of patents and copyright to stop an advance in this area.
@HollyB
I think you have to just look at the spec, or trust me when I say the core spec defines a thoroughly modern CPU with feature sets on par with any modern CPU.
I also think you need to define what you mean by transcoding, because it feels alien to my understanding of the term.
@HollyB
After a lot of thought I suspect by transcoding you mean additional instructions to facilitate emulating other architectures (for example x64) via a JIT or AOT compiler, and yes, there is a working group (J) looking into that; however, that may not be the right approach.
The lesson of Apple’s M1 is that you don’t need new instructions (to the best of our knowledge there are no additional instructions in the M1 for that), but a different mode implementing the Intel memory-consistency model, so the code executes more like it would on an x64 processor.
Because RISC-V doesn’t dictate the implementation, extra instructions for emulation aren’t guaranteed to matter. Implementations will potentially vary even within a single semiconductor company, and this is what we want. We need to give implementers the freedom to experiment and innovate, and RISC-V gives them a compatible ISA to do that.
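To make the memory-model point concrete: a software emulator running x86 code on a weakly ordered CPU has to insert explicit fences to preserve x86’s store ordering, which is exactly the per-instruction overhead a hardware TSO mode avoids. A toy sketch (the mnemonics are real RISC-V instructions, but the translator itself is purely illustrative, not how any real emulator works):

```python
# Illustrative only: lower an x86-style store to RISC-V, inserting a fence
# after the store to approximate x86's total-store-order (TSO) guarantee.
# A hardware TSO mode (as in Apple's M1) makes the extra fence unnecessary.
def lower_store(reg: str, addr: str, tso_in_hardware: bool) -> list:
    out = [f"sd {reg}, 0({addr})"]   # RISC-V 64-bit store
    if not tso_in_hardware:
        out.append("fence w, w")     # keep this store ordered before later stores
    return out

print(lower_store("a0", "a1", tso_in_hardware=False))  # store + fence
print(lower_store("a0", "a1", tso_in_hardware=True))   # just the store
```

The difference looks trivial for one store, but multiplied across every memory operation in a hot loop it is the gap between sluggish software emulation and near-native translated code.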
@jockm
You need to separate the meta issues from business decisions from implementation. At an abstract level I don’t really care whether transcoding is done via hardware or software (ditto support for VMs and hooks or subsystem mechanisms for different OSes to run at the same time). For some or all of this to work via a fastpath or slowpath (could be hardware or software), the overall concepts and systems and regulations which enable this need to be worked out and specified. This is the level I’m kind of discussing.
@HollyB
This is why I asked you to define what you meant, and ended up making a guess.
But I still don’t think you get it: RISC-V is an ISA, just an ISA. Implementations are left to implementors. What you are talking about is implementation… I think.
HollyB,
You say that you don’t care about implementation, which is fair enough given that many users don’t care either. However, somebody obviously does have to care about this stuff, and many of us here on osnews do find these things important. The x86 memory model is more strict, and emulating it in software is inefficient. For the M1, Apple chose to implement the x86 memory model in hardware rather than in software to avoid certain implied inefficiencies of software overhead. They probably learned a lot from Microsoft’s x86 emulation and decided to go with hardware assistance.
RISC-V could implement an x86 compatibility mode like Apple did, but for users like myself, I don’t really see much benefit in emulating x86 code, and I suspect most people who find RISC-V appealing aren’t that interested in emulating x86 either. Apple specifically needed backwards compatibility in order to run their customers’ proprietary Mac software, but not everyone is as tied down to x86 software compatibility.
As a FOSS user, what I want most is a very consistent and reliable bootstrapping process where the owner is in control with no proprietary dependencies. I lament that this did not happen with ARM, and for better or worse this leaves x86 (with all of its problems) as the friendliest FOSS platform to date. I’d really like to see RISC-V become the platform of choice for FOSS, but we’ve got a bit of a catch-22: we need manufacturers to make these products viable, yet all too often when they do it comes with strings attached, proprietary blobs, and owner restrictions. This concerns me a great deal because while x86 has kind of been grandfathered in as a platform where FOSS can thrive, for most new devices coming out we aren’t so fortunate, and very often we’re forced to hack into our own devices for the right to run independent software.
Ultimately, while I have good vibes from RISC-V, I still fear that its openness could be subverted by corporate influences like has happened with most of our ARM devices.
@Alfman
I think we’re talking at cross purposes or have different goals or priorities in mind. I’m not getting into whataboutery or having words put in my mouth, and as I think we’ve covered everything, this is probably a good place to end the discussion. I’m sure there will be plenty of opportunity to revisit this in the future.
HollyB,
Putting words in your mouth, huh? I’m quite confused that this is your response to me. I was just adding my own opinion. Anyways if you want to end the discussion here that’s ok! 🙂
@HollyB
Here is the thing, you keep asking low level questions (or ones that can only be answered in a low level way because of what RISC-V actually is), and then get seemingly upset when we answer that way. You would be better served by talking in a less technical way, and in one that emphasizes clarity.
If your questions are indeed about governance, you have done a bad job of explaining what you mean. And you can find that at the RISC-V website, I won’t google that for you
I am out too
With both M1 and this there is a reason for healthy skepticism, even if the results ultimately prove out in the end. As you said extraordinary claims require extraordinary evidence.
For people with use cases and power envelopes that match its capabilities, either would be useful if Ars Technica’s tests are accurate. As for whether it is good for all things all the time, we don’t really know, so comparing them to current major CPUs isn’t an exact comparison. Can they scale to more demanding uses? No idea, and the marketing puff doesn’t say.
I’m sure someone will find a use. Does anyone have any ideas what?
I benchmarked it against the PineBook Pro. At its high end it’s as capable as those processors, so it would have a decent use case in the same space as what you’d use Pi or Rock64 boards for. https://nequalsonelifestyle.com/2020/12/06/mm-riscv-vs-rock64-arm/
I’m very curious about what would happen if we started including those $1 solar powered calculators in these performance per watt comparisons.