The A2O core is an out-of-order, multi-threaded, 64-bit POWER ISA core that was developed for customization and embedded use in system-on-chip (SoC) devices. It is best suited to workloads that need strong single-thread performance. A follow-up to the high-streaming-throughput A2I core it descends from, it maintains the same modular design approach and fabric structure. The Auxiliary Execution Unit (AXU) is tightly coupled to the core, enabling many special-purpose designs for new markets tackling the challenges of modern workloads.
Intel’s current troubles and the rise in popularity of alternatives are creating a rare and ever-so-small opportunity for smaller ISAs to gain some traction. I’ll take what I can get in our current stratified technology market.
I’m hoping the J2 (SH-2), and later the J4 (SH-4), alternative ISAs get some traction.
I was a big fan of the SH-x architecture, and love how much they packed into 16-bit instruction words, though at the cost of lower register counts and other tradeoffs. I am so glad the J-Core project exists, but honestly I think the future is RISC-V: it is synthesizable all the way down to a Spartan-3E, and has lots of implementations and extensions to pick from.
I am sure the J2 and J4 will get used (I mean, they are being funded from somewhere), but I don’t think they will really catch fire.
I am going to take issue with Thom’s analysis: Intel’s troubles are in implementation and don’t hurt the ISA at all. So AMD fills the void for X generations? China has its own x86/x64 implementations, and while they are underwhelming now, they are bound to get competitive after a few generations. So that will be three implementers competing with one another.
For that matter, on modern CPUs ISAs are largely uninteresting and are only loosely coupled to the implementation underneath. A2O is really cool, and I can see applications for it, but I don’t think we are going to see a rush to produce competing commercial implementations.
ISAs matter when you don’t have a microarchitecture. I have designed some fun ISAs and TTL implementations in Logisim that would have run like a bat out of hell back in the ’70s, but would have been cost prohibitive to ever implement (dual-port SRAM is cheap, right?). ISAs are about compatibility and not needing to reinvent the wheel.
I agree with this analysis that C-SKY is probably going to be the last ISA added to the linux kernel (at least for a long time): https://www.phoronix.com/scan.php?page=news_item&px=C-SKY-Approved-Last-Arch
The point of RISC-V is to create an ISA flexible enough, modular enough, expandable enough, and unencumbered enough that we don’t have to pay licensing fees or worry about ISA patents, and can focus on implementations. In the meantime, tool makers can focus on making the best compilers, etc. they can for the standard.
In the long term that might spell trouble for x86/x64, but Intel having a bad spell isn’t going to be it. It will be someone coming out with a CPU that is fast enough and cheap enough to be viable in the marketplace. It would require someone with a lot of money to get space on a >= 14nm fab and sell motherboards competitively for A2O to make a dent.
To help illustrate my point: the ESA created a FOSS version of SPARC V8 called LEON in 1997. You can find rad-hardened versions in satellites, etc.; you can find it in some interesting embedded devices like GPS/GNSS receivers; and it had (has?) mainline Linux support at a time when SPARC was still highly relevant. It didn’t make a dent.
RISC-V has momentum, it has lots of implementations, it is synthesizable on cheap FPGAs (you need a $14,000 Xilinx for A2O), and we are seeing commercial implementations of it. It is going to take a while, but I think it is the best bet for competition against ARM and x86.
“If we add another architecture in the future, it may instead be something like the LLVM bitcode or WebAssembly, who knows?”
Is he gonna predict this future? 🙂
https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript
I wonder if, in some far-away future, someone could make a transpiler for older architectures to RISC-V, something like what Transmeta was trying to create.
Although the comments on Phoronix actually pointed this out: https://www.theregister.com/2018/06/18/microsoft_e2_edge_windows_10/
Perhaps I should have been more clear: I was talking about the mainline kernel. There are forks that support architectures that have been dropped, are experimental, etc. For example, you don’t see Microsoft’s E2 architecture in the mainline kernel; MS maintains that in a fork. They could submit it to the mainline kernel and Linus et al. would decide whether it goes in.
He who? Me? Meh
I know the Gary Bernhardt talk well, and while he was talking about asm.js, everything he said could apply to WebAssembly. And yes, you could make a CPU that executes WebAssembly directly and have *that* be in the mainline kernel; I am less convinced that is going to happen. I have looked a lot at wasm, and personally it doesn’t feel well suited to implementing in silicon. Attempts to make FPGA implementations haven’t been much to look at so far.
I am also not convinced by Bernhardt’s conclusion that this would let us get rid of memory protection, but that would just be a matter of compiling the Linux kernel with nommu. If it happens I won’t complain; my gut just says it is unlikely.
As for a Transmeta-like approach, sure, of course. That is the point of a microarchitecture. AMD’s x86 products started by taking the microarchitecture of the AMD 29000 and having it handle x86 instructions instead of 29000 instructions. Transmeta’s approach was interesting in that it loaded part of itself from storage at boot time and could theoretically translate other architectures to its VLIW microarchitecture.
Putting the translation layer that low didn’t work that well for Transmeta. The company’s commercial value ended up being more in its low-power technology and patents than in the CPU itself.
The future of binary portability is probably more like what DEC did with FX!32 and what Apple is doing with Rosetta 2: ahead-of-time translation to the new architecture. IBM bought the original Rosetta technology and uses it on POWER systems to dynamically translate x86 to PPC.
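To make the idea concrete, here is a toy sketch of the ahead-of-time flavor. The guest ISA, its opcodes, and the host pseudo-assembly are all invented for illustration; real translators like FX!32 also have to cope with indirect branches, self-modifying code, condition flags, and much more.

```c
/* Toy ahead-of-time binary translation, in the spirit of FX!32 or
 * Rosetta 2. Everything here is made up for illustration. */
#include <stdio.h>

enum guest_op { G_LOAD_IMM, G_ADD, G_HALT };   /* hypothetical guest ISA */

struct guest_insn { enum guest_op op; int rd, rs, imm; };

/* Translate one guest instruction to host pseudo-assembly, offline,
 * before the program ever runs (the "ahead of time" part). */
static void translate(const struct guest_insn *g)
{
    switch (g->op) {
    case G_LOAD_IMM: printf("  mov  r%d, #%d\n", g->rd, g->imm);            break;
    case G_ADD:      printf("  add  r%d, r%d, r%d\n", g->rd, g->rd, g->rs); break;
    case G_HALT:     printf("  ret\n");                                     break;
    }
}

int main(void)
{
    /* a tiny guest program: r0 = 2 + 3 */
    const struct guest_insn prog[] = {
        { G_LOAD_IMM, 0, 0, 2 },
        { G_LOAD_IMM, 1, 0, 3 },
        { G_ADD,      0, 1, 0 },
        { G_HALT,     0, 0, 0 },
    };
    for (unsigned i = 0; i < sizeof prog / sizeof prog[0]; i++)
        translate(&prog[i]);
    return 0;
}
```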
Poor PowerPC.
Not a single comment about PPC in a PPC article….
What’s to say? Are ISAs interesting anymore? For the kind of CPUs we are talking about, you aren’t writing in assembly aside from small bits of boot code.
The A2O is a really cool OoO CPU with strong single-thread performance and a focus on AI. I can see why they pointed to autonomous driving as a potential application. I am starting to make a dent in reading the VHDL, but I doubt I will have anything to say that hasn’t been said by analysts. It’s an interesting chip and seems like it would be reasonably powerful, but it is only a single core with two threads, so I wouldn’t expect to see it on your desktop anytime soon. There are other Power chips better suited to that, but they are expensive.
But you didn’t really bring up anything relevant about PPC either…
I would like to expand on this a little. If you go to https://github.com/openpower-cores/a2-boot and add up all the assembly code, you find it is, with comments, less than 1,000 lines. Whatever compiler you use might need some more, but you aren’t talking more than 2,000 lines of assembly. At that point you are in the hands of code generators, which, frankly, do a pretty good job, especially when you are talking about Power, ARM, x86, etc.
We just don’t live in the days of hand-optimized assembly outside of a handful of use cases. This is why I say ISAs don’t matter: the vast majority of people who have to deal with them are either writing bootstrap code or the backend of a compiler. The tiny fraction left over are specialized use cases (for example, bit-banging analog video output) or hobbyists.
So long as the ISA is good enough, it doesn’t matter at the desktop or above, because the ISA isn’t being directly executed; it is effectively JITed into the microarchitecture. In lower-end CPUs (historical designs, most microcontrollers, etc.) the efficiency of the ISA is more relevant to code density, performance, and other considerations, but even there they are all good enough at this point.
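As a rough illustration of what I mean by “JITed into the microarchitecture”, here is a conceptual sketch. The instruction, the micro-op names, and the format are all invented; real decoders are vastly more involved.

```c
/* Conceptual sketch: a CISC-style read-modify-write instruction is
 * "cracked" by the front end into RISC-like micro-ops that the
 * out-of-order backend can schedule independently. Invented names. */
#include <stdio.h>

int main(void)
{
    /* one architectural instruction: add [addr], reg */
    const char *uops[] = {
        "load  tmp    <- [addr]",      /* memory read  */
        "add   tmp    <- tmp + reg",   /* the ALU work */
        "store [addr] <- tmp",         /* memory write */
    };
    for (int i = 0; i < 3; i++)
        printf("uop %d: %s\n", i, uops[i]);
    return 0;
}
```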
What matters is performance (i.e., the implementation) and what features it offers. Does it offer lots of peripherals? Is it optimized for a specific domain (like the A2O and AI)? None of that is ISA specific.
I completely get wanting to use something other than x86/x64 for political reasons, or because of performance, or so you can run your favorite OS, but the ISA is secondary to all of them really.
OK, I have written waaay too much on this subject, but I have been researching historical computers and designing my own 68K-based computer for fun before I move back to FPGAs and make a soup-to-nuts RISC-V based system. Apparently I had a lot of thoughts…
ISAs haven’t mattered since Out of Order became a thing.
A lot of open-ended questions have been answered, one of them being the ISA. As of now, most ISAs do a good job, and the microarchitecture is decoupled enough that performance does not come from the instruction set per se, but from the implementation itself. Compilers and code generators are also at the point where they produce better code than human hand coding, in most cases.
https://www.youtube.com/watch?v=cdDT-CQmcVg
There are still a lot of ISAs in use that are not great.
Power, ARM, RISC-V, and x86 are, for most usages, not majorly different in the things that affect performance.
You do get interesting effects out of the ISA, though. For example, an ISA with fewer registers tends to produce larger and slower code.
Out of Order made RISC vs. CISC matter less, but the register count of the ISA is still a big thing, as is how well the pattern of ISA-provided registers maps to program tasks.
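A small, hand-wavy illustration of the register-count point. The function is invented, and the claim about spills describes the general behavior, not the output of any particular compiler.

```c
/* With eight partial products live at once, an ISA with 8 GPRs
 * (classic 32-bit x86) forces the compiler to spill some of them to
 * the stack, while 16 GPRs (x86_64) or 32 (Power, ARM64, RISC-V) can
 * keep them all in registers: bigger code and extra memory traffic
 * on the register-poor target. */
long dot8(const long *a, const long *b)
{
    long p0 = a[0] * b[0], p1 = a[1] * b[1];
    long p2 = a[2] * b[2], p3 = a[3] * b[3];
    long p4 = a[4] * b[4], p5 = a[5] * b[5];
    long p6 = a[6] * b[6], p7 = a[7] * b[7];
    return ((p0 + p1) + (p2 + p3)) + ((p4 + p5) + (p6 + p7));
}
```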
Yes, compilers and code generators do beat humans in most cases, because there are a lot of factors humans have trouble keeping track of when trying to write machine code that the optimization systems in the CPU handle well. The hard fact is that writing the best-performing code by hand has become harder since Out of Order became a thing.
Maybe for some in-order design, a larger number of explicit registers may make a difference.
But we’re at a point where there really is not much difference between the major RISC and CISC designs. Out of order made x86’s register limitations irrelevant, and x86_64 has enough explicit registers.
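For anyone wondering how out of order makes architectural register limits mostly irrelevant: the core renames the few architectural registers onto a much larger physical register file. A toy sketch, with invented sizes and a deliberately simplistic allocator.

```c
/* Toy register renaming: 8 architectural registers (classic x86 GPRs)
 * mapped onto a larger physical register file. Real renamers track
 * free lists and reclaim registers at retirement; this one just bumps
 * a counter, purely for illustration. */
#include <stdio.h>

#define ARCH_REGS 8
#define PHYS_REGS 64

static int rename_map[ARCH_REGS]; /* arch reg -> current physical reg */
static int next_free = ARCH_REGS; /* simplistic bump allocator        */

/* Each write to an architectural register gets a fresh physical one,
 * so later instructions reusing the same name don't falsely depend on
 * earlier ones (WAW/WAR hazards disappear). */
static int rename_write(int arch)
{
    rename_map[arch] = next_free++ % PHYS_REGS;
    return rename_map[arch];
}

static int rename_read(int arch) { return rename_map[arch]; }

int main(void)
{
    for (int r = 0; r < ARCH_REGS; r++) rename_map[r] = r;

    /* two back-to-back writes to "reg 0" land in different physical
     * registers, so they can be in flight at the same time */
    printf("write reg0 -> p%d\n", rename_write(0));
    printf("write reg0 -> p%d\n", rename_write(0));
    printf("read  reg0 -> p%d\n", rename_read(0));
    return 0;
}
```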