When OSNews covered the RISC-V architecture recently, I was struck by my own lack of excitement. I looked into it, and the project looks intriguing, but it didn’t move me on an emotional level the way a new CPU architecture would have done many years ago. I think it’s due to a change in myself as I have got older. When I first got into computers, in the early 80s, there was a vibrant environment of competing designs with different approaches. This tended to foster an interest, among enthusiasts, in what was inside the box, including the CPU architecture. Jump forward to the current era, and the computer market is largely homogenized around a single approach for each class of computing device, which means there is less to get excited about in terms of CPU architectures in general. I want to look at what brought about this change in myself, and maybe these thoughts will resonate with some of you.
As I said, back in the early 80s and through the 90s there was a lot of variety in the general purpose computers that were available. Take the original IBM PC, for example: on release, in 1981, it didn’t offer much to get “excited” about. Although expensive, it was weak in a number of areas. It had rubbish graphics and sound, and general performance was fairly standard for the time. It’s hard to get emotional about a design like that, and that reflects IBM’s attitude to personal computers at the time – this was a tool for getting work done, nothing more. “Character” is a good word for what a lot of the competing machines had to offer. Even a computer like the Apple II had character, although it was a capable workhorse as well.
So, the split started back then – you could buy a computer to get a job done, without getting emotional about it, or you could buy one for fun. Furthermore, if a computer can sometimes be seen as a grey box to get work done, the CPU can be visualised as a black box: being objective in this way, it doesn’t matter how it works internally, we’re only interested in its level of performance.
So, this raises the question: is there any rational reason to care about CPU architecture nowadays, and, thinking about it, was there ever?
Back in the old days, one reason to care about the type of CPU was that you felt it took a superior approach. So, there’s a practical side to the choice of CPU architecture emerging, and this gives us an important criterion to consider: performance. The performance of a CPU can be broken into two further criteria: throughput and power consumption. So, you might go for an ARM-based computer to reap the benefits of lower power consumption at the expense of software compatibility and throughput. In a way, this example is a cheat, because we’re now talking about different classes of computer. Rather than considering processor architecture choice for, say, low-power home servers or mobile computers, let’s level the playing field and just look at desktop computing.
Historically, I’d argue, there was more to get excited about when it came to processor architecture choice. Back in the old days, programmers might care about processor architecture because they were interested in assembly language programming. Briefly, architectures like ARM and the 68000 were reasonably friendly to program, whereas by comparison, the Intel 8086 and its successors were extremely fiddly to work with. Assembly language programming was often used for game programming, and sometimes even application programming, up until the mid 90s, when high-level programming took over. So, this is an example of a justification that has died out for most people, even programmers. In addition, coming back to the concept of pure “approach”, one might feel that a given design ethos had more potential, and that the technology was a better one in which to invest. Over the course of its life cycle, Intel’s Pentium 4 series, for example, was widely regarded as an underwhelming technological approach that lagged behind what AMD were offering on the desktop at the time.
In all fairness, there is an aspect of CPU architecture, beyond raw performance, that might still be of interest to general users, and that is the number of CPU cores. For a long time it was uncertain whether consumers would be interested in multicore CPUs, because it is possible to create a single-core processor that offers better throughput than a multicore design in typical use. However, once multicore CPUs became commonly available, many users liked the smoother experience they offered, with fewer interruptions from background tasks. Best of all, multiple cores offer a massive speed boost when it is most needed, on tasks that can be parallelised, such as video encoding. So, a user might prefer a CPU architecture based on personal preference (and typical usage) when it comes to choosing a larger number of cores over a smaller number of faster ones.
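To give a feel for why core count matters for that kind of workload, here is a minimal sketch (in Python, purely for illustration; nothing in the article depends on it) that runs the same CPU-bound job first serially and then spread across all available cores. On a multicore machine the parallel run should finish in a fraction of the time, which is exactly the kind of win you see on parallelisable tasks like video encoding.

```python
# Minimal sketch: the same CPU-bound work run serially and in parallel.
# The workload (sum of squares) is an arbitrary stand-in for a
# parallelisable task such as encoding independent chunks of video.
import time
from multiprocessing import Pool, cpu_count

def busy_work(n):
    """A purely CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8  # eight independent chunks of work

    start = time.perf_counter()
    serial_results = [busy_work(n) for n in jobs]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(cpu_count()) as pool:
        parallel_results = pool.map(busy_work, jobs)
    t_parallel = time.perf_counter() - start

    print(f"serial:   {t_serial:.2f}s")
    print(f"parallel: {t_parallel:.2f}s across {cpu_count()} cores")
```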
Ethics can come into it too. I bet I’m not the only user of this site who has sometimes given preference to open source software for this reason. Speaking personally, RISC-V stirs something along these lines in me. I like the idea of having an open source processor, one that explores modern design ideas. Going back to the 80s and 90s, ethics were often a large part of the decision-making process. A lot of users liked the idea of having something that wasn’t powered by an Intel (or compatible) processor, such as a PPC Apple Mac. In terms of the black box approach to objectively assessing a processor, who’s to say whether a PPC was superior to an Intel Pentium of the same price in terms of raw performance. However, for many, it felt “cool” to be running something hi-tech and non-mainstream. In fact, it can be analogous to supporting a sports team, in terms of emotional attachment. Back in the 90s, case stickers that mocked the “Intel Inside” slogan and logo were a common addition to a machine using an “alternative” desktop architecture such as ARM, SPARC or PPC. Compatibility came into it too; by that stage there was an entire lineage and a culture surrounding the Mac, and only PPC could get you into that world. Ironically, after extolling the advantages of PPC over Pentium, Apple eventually switched to Intel. For many, I suspect, it was a development that decreased their enjoyment of running that platform, even though it led to an increase in performance.
Consoles went the same way. The Sony PlayStation 3 used a Cell processor, another PPC-derived design, and this added a mystique to the machine that appealed to many purchasers. In the same way, many of us held out hopes for Itanium, an (initially) server-targeted CPU architecture. What those architectures have in common is that they are cool, interesting approaches. In the case of the Cell, a fairly conventional CPU core was paired with seven additional processing units, an approach often used in supercomputers. In the case of Itanium, rather than a complex processor that can rearrange code on the fly as it is being executed, it is a stripped-down, simplified processor; the clever bit is that the compiler has to do the work of optimising the code for such a processor. Intel had hoped that it would eventually take over as the mainstream processor architecture to power consumer computing, but after a lack of success in the server market, compared to competing architectures, Intel has recently announced that it is phasing out Itanium. Looking back at the PlayStation 3, the unusual architecture was certainly, legitimately, powerful. In fact, clusters of PlayStation 3s were sometimes assembled to carry out supercomputing tasks. However, in the final assessment, it rarely trounced its main competitor, the conventionally designed Xbox 360 (a re-boxed PC, really), when running games. Still, many users loved the thought of having a computational wonder machine in their living room.
So, where does that leave us? Well, the first point is that what matters is, of course, where it leaves you. If you like the idea of running something a bit unusual (such as an ARM based workstation) to increase your enjoyment of using it, then there’s nothing wrong with that. If you’re doing something for enjoyment, why not do it in the most rewarding and enjoyable way?
As for the RISC-V boards that are available, there’s a possibility that I might get one if they’re affordable. I doubt that they will score well on the price/performance equation, though, and sadly, that’s the main thing I’m interested in these days. It’s boring but true, and maybe a lack of youthfulness has something to do with it?
I can’t speak for anyone else, but I like learning the assembly of different processors. It’s like speaking a different language or watching a new show with a novel approach. It’s its own world of low-level fun. I don’t have a huge amount of time to dabble in assembly these days, but it’s still cool to read about.
I would not call the Xbox 360 a conventional design. Its Xenon CPU was a powerful 3-core PowerPC derivative. Definitely something to be excited about. And the cores were put to great use in games. The first Xbox, however, was just a Pentium PC in a box.
And doesn’t the Xenon CPU also trace some lineage (SIMD instructions?) from the Cell’s processing units, IIRC? (added by IBM when Sony wasn’t looking 😛 ) Plus, IIRC, some small fast memory in the same package as the GPU – that’s also not very PC-like, and kind of similar to a few Nintendo consoles.
Also, about the PPC part of the article…
And let’s not forget that the Apple CEO from the time of the 68k->PPC transition said that not going Intel back then was his biggest mistake…
Honestly, it feels to me that the architectures considered to be the most special are also the ones that were either less general-purpose or tied to a specific system. I think two things have happened since the “good old days” that both make a resurgence of new architectures possible and make that resurgence less special.
Back when we had a plethora of different architectures, the software we ran was also down at that level or on top of a very thin operating system. Now we have enough computing power easily available that we’ve built up many different layers. To replace the underlying architecture, one really only needs to replace the kernel and some drivers and provide a new compiler. All of the rest can run just as well. Of course, this is much more obvious and easy to do now that we have so much open source software around.
The other thing that has happened is that we’ve standardized and combined so many interfaces into just a few, like USB, EFI, PCI, Ethernet, HDMI, Bluetooth, and Wi-Fi. Fewer and fewer special applications exist that require different connections, so things feel much more homogeneous between systems from a hardware and peripheral standpoint too.
I care about RISC-V simply for its potential as another platform to help break down the monoculture we’re currently in. I appreciate that ARM is making strides in this area, but another player that’s reasonably performant would be nice.
The wild internet is getting scarier and scarier for all the wrong reasons. The idea of a piece of open hardware that can be better vetted from a trust standpoint is getting more important.
Do I want to give up my Mac for something running OpenBSD on a RISC-V or a MIPS? Not really. But, it just seems more and more that’s the way someone has to go. It’s paranoia for sure. I recognize that.
But at the same time, I’m getting sick of it. I just want to hang out on the few tiny forums I participate in, doodle with my pointless coding adventures, play a couple of games. I’m not interested in contributing indirectly, implicitly, or subversively to some mass data set — anonymous or not.
The pervasive surveillance condition we’re living in today chafes.
Does something like RISC-V solve that? No. Not necessarily. But, maybe I’ll feel better anyway.
RISC-V kind of bothers me because 1. the supporters are constantly talking about how terrible all other CPU architectures are while not seeming to have extensive experience working with them, and 2. it seriously looks like just a “for fun” project whose developers are constantly and desperately trying to say it’s not that. It’s OK for something to be for fun, just because you want to, and it’s OK for that to have some practical applications too.
Like Linux was one day. You know where that led. So if people get on board, of course RISC-V will improve and become more than the toy (you pretend) it is. Or not. But since it is now backed by serious firms that have a financial interest in improving it and depending less on the big CPU suppliers, I think there’s a little bit of a chance that it will succeed more than the “Parallax Propeller”, the “Parallella Epiphany” or the “J-core”, which could have been other alternatives.
I think the problem is that computing, in general, has become way more complicated than it was in the 80s and 90s, so it is much more difficult to build an alternative hardware platform – there are way more layers, components and subsystems that you need to port, adapt or even build from scratch in order for an alternative hardware platform to be usable by the general public. For example, once there was only DOS and the like, and it was simple to build, say, a new CPU that could run DOS – nowadays, even building a new CPU to run NetBSD (personally I think it could be the simplest OS for new hardware to support) is not a simple endeavour; the bar is just too high.
It depends – a CPU could still be rather minimal in complexity, see the Cortex-M0 for low-demand embedded usage, but as expectations and needs evolved, more complex and powerful CPUs were made. You can still use a 68000 to do pretty much what you were able to do in the 90s, but then don’t expect more from it.
The problem isn’t the 68K per se (well, something like a 68020). The only thing a modern CPU “needs” is an MMU; virtual memory is pretty much a must for an end-user computer. After that, it’s just a problem of raw performance.
You can’t connect to the internet now without expensive encryption. And make no mistake, encryption is expensive. We just have “fast enough” CPUs to make it transparent.
So, the real baseline is “fast enough to do TLS” and an MMU. A machine that’s “fast enough” to do TLS may not be fast enough to run modern JavaScript (another non-functional requirement); it’s just hard to say. JS implementations have improved mightily, and I think many of the performance problems we see in modern web apps are as much network and latency issues as raw JS performance, something a faster CPU can’t fix.
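For a very rough sense of what “fast enough” means in practice, here is a small sketch (assuming Python and only its standard library; this is an illustration of mine, not something from the comment above) that measures SHA-256 throughput. Hashing is only a stand-in for the symmetric crypto work in a TLS session, so treat the number as an order-of-magnitude figure rather than a benchmark.

```python
# Rough sketch: measure SHA-256 throughput as a crude proxy for the
# per-byte cost of the bulk crypto a TLS connection has to do.
import hashlib
import time

data = b"\x00" * (1024 * 1024)  # 1 MiB buffer
rounds = 200

start = time.perf_counter()
for _ in range(rounds):
    hashlib.sha256(data).digest()
elapsed = time.perf_counter() - start

print(f"SHA-256 throughput: {rounds / elapsed:.0f} MiB/s")
```

A CPU that only manages a few MiB/s here is going to spend a noticeable share of its time on crypto alone, which is the point about encryption being expensive until the CPU is fast enough to hide it.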
Well, a good CPU (the 68020 is great indeed) plus an MMU (thus the 68030) plus an FPU (hence the 68040) plus SIMD/DSP, and you get enough to cover most needs. For your encryption/JavaScript stuff, an FPGA could also help by preprocessing the computationally heavy tasks. Then, for greater needs, higher frequencies and/or more cores solve the problem. And you get to where we are now.
https://github.com/Kochise/m68140
Somewhat related, an article which helped me understand how modern CPUs work:
http://www.lighterra.com/papers/modernmicroprocessors/
JohnnyO,
That’s a very good link with lots of in-depth info, thank you for posting it! Some of the information is quite old even though it says it was last updated in 2016. It would be nice if it could be updated for a post-spectre landscape where speculative execution has been identified as a source of side channel leaks. The mitigations have a substantial impact on the effectiveness of complex out-of-order pipelines.
Really great post. I have the same sentiment, reminiscing about the days when the processor in your machine was akin to a cause that you could identify passionately with or against. I still have regrets that the 6809 arrived just a little too late to lead a revolution, or that Chuck Moore’s asynchronous, high-core-count Forth chips never took off. Maybe the fact that RISC-V is less exciting and more of the same is what will allow it to get through its embryonic phase and become a movement, like Linux once did. Maybe it will form the basis of a new thread of compiler-configured hardware that explodes through computation barriers, or some other substantial paradigm shift.
I also don’t care that much; GNU/Linux works fine on all of them. For a number of years my main machine was an iBook G4 (PPC) running Gentoo and it was mostly OK, the same as running it on x86. I’ve also run it on other architectures like ARM and, once again, things were OK.
CPU architecture doesn’t matter for professional use. What matters is whether it fills the user’s requirements; that’s the main thing to consider.