Hardware Archive
Onyx Boox has just done something exciting; they have taken a page from the Hisense playbook and released a dedicated e-reader with the familiar candy bar shape of a smartphone. You can make phone calls with this unit and talk to people on Facebook Messenger, WhatsApp, or WeChat using its dual microphones. However, it does not support SIM cards or eSIM, so you must be on a Wi-Fi connection to do anything useful. The most significant advantage of the Onyx Boox Palma is that you can carry an e-reader around with you in your pocket; you can’t do this with the vast majority of e-readers on the market. The Palma is available as a pre-order for $249; when it launches, the price will go up to $279.99. Only a small batch of units is available on a first come, first served basis, and they will ship out sometime in August 2023. I don’t really have a use for something like this, but the price is interesting, and if it can indeed do smooth scrolling as they claim, I might actually be interested out of sheer curiosity. It’s kind of like if Apple released an iPod Touch, but with an e-ink display.
A few weeks ago we reported that the European Union wanted to force device makers to make batteries user-replaceable, and today it’s been confirmed and made official. The regulation provides that by 2027 portable batteries incorporated into appliances should be removable and replaceable by the end-user, leaving sufficient time for operators to adapt the design of their products to this requirement. This is an important provision for consumers. Light means of transport batteries will need to be replaceable by an independent professional. Excellent.
You could put it this way – DisplayPort has all the capabilities of interfaces like HDMI, but implemented in a better way, without legacy cruft, and with a number of features that take advantage of DisplayPort’s sturdier architecture. As a result, DisplayPort isn’t just in external monitors, but also in laptop internal displays, USB-C port display support, docking stations, and Thunderbolt of all flavors. If you own a display-capable docking station for your laptop, be it a classic-style multi-pin dock or USB-C, DisplayPort is highly likely to be involved, and even your smartphone might just support DisplayPort over USB-C these days. Back when I bought my current 144Hz 1440p monitor with G-Sync for my gaming PC, DisplayPort was the only way to hook it all up, since G-Sync wasn’t yet supported over HDMI. Ever since, out of a baseless sense of caution, I’ve always preferred DisplayPort for all my PC video connection needs, and as it turns out – yes, DisplayPort is definitely better than HDMI, and this article will tell you why.
Truth be told, this was the first time I heard of an Addressograph. So what does it do? What was the motivation behind its creation? And how does it work? Let’s take a dive into an Addressograph. I had never heard of this machine either – it’s designed to imprint things like names, addresses, and other information onto envelopes and forms. It’s one of the many, many innovations we’ve lost along the way in the 20th century that I’d love to see in the real world sometime.
Instead, they ended up on eBay, at a bargain-basement price of $59.99 each. And when the modern retro computing community turned them on, what they found was something worth bringing back to life. It took a while for anyone to notice these stylish metal-and-plastic machines from 1983. First, information spread like whispers in the community of tech forums, Discord servers, and Patreon channels where retro tech collectors hid. But then, a well-known tech YouTuber, Adrian Black, did a video about them, and these eBay machines, slapped with the logo of a company called NABU, were anonymous no more. The NABU is an incredibly interesting story, but I would like to take this time to highlight Adrian Black, one of the very finest retro computing YouTubers out there. He’s incredibly knowledgeable and capable, kind, calm, and takes his time to fix and showcase the hardware he works on. He’s the Mister Rogers of retro computing, and living proof that no, not all YouTubers are flashy, algorithm-chasing airheads.
The European Union (EU) is set to usher in a new era of smartphones with batteries that consumers can easily replace themselves. Earlier this week, the European Parliament approved new rules covering the design, production, and recycling of all rechargeable batteries sold within the EU. For “portable batteries” used in devices such as smartphones, tablets, and cameras, consumers must be able to “easily remove and replace them.” This will require a drastic design rethink by manufacturers, as most phone and tablet makers currently seal the battery away and require specialist tools and knowledge to access and replace them safely. This should’ve been mandated more than a decade ago, but better late than never. Faulty batteries are one of the primary reasons people eventually upgrade, even when their device is otherwise still perfectly functional. Device owners should be able to easily open their device and replace the battery, and of course, said batteries should not be hindered by patents, trademarks, or any other artificial monopolies – anybody should be able to produce them. The battery in my 2018 Dell XPS 13 9370 bulged a few years ago, but since the laptop is easily opened, it took me about 5 minutes to replace the faulty battery with a brand new one, and it only cost me about €100 – on a laptop that originally cost about €2200, I think that’s an amazing deal to keep the machine going. It’s otherwise in tip-top shape, and its 8th Gen i7, 16GB of RAM and 4K display can easily last me another ten years, especially since, as a Linux user, I won’t have to worry about my operating system killing off support. Smartphones should be the same.
I was wondering what the IBM Personal Computer would have been like if they had chosen the Motorola 68000 instead of the Intel 8088, so I used my MCL86+ to emulate the 68000 and find out! The MCL86+ is a board which uses a Teensy 4.1 to emulate a microprocessor in C code as well as use its GPIOs to emulate the local bus of the Intel 8088. It can be used as a drop-in replacement for the Intel 8088 and can be cycle accurate as well as run in accelerated modes. That’s a neat trick.
The other day I was asked an interesting question: What was the first BIOS with support for ROM shadowing? In the 1990s, ROM shadowing was common, at first as a pure performance enhancement and later as a functional requirement; newer firmware is stored compressed in ROM and must be decompressed into RAM first, and firmware may also rely on writing to itself before being locked down and write protected. Old PCs did not use ROM shadowing because it made no sense. ROMs were only marginally slower than RAM, if at all, and RAM was too precious to waste on mirroring the contents of existing ROMs. Over the years, RAM speeds shot up while ROM remained slow. By about 1990, executing BIOS code from ROM incurred a noticeable performance penalty, and at the same time devoting 64 or 128 KB to ROM shadowing was no longer prohibitively expensive. But who did it first? The OS/2 Museum’s content never fails to be deeply interesting. And the answer to the question is the same answer it always is when it comes to who did something first in the early years of the PC platform. It’s always the same.
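The sequence described above (copy the firmware into RAM, let it patch itself, then write-protect it) can be sketched as a toy model in Python. This is purely illustrative: real shadowing is done by the chipset remapping physical addresses so the same address range reads from fast RAM instead of slow ROM, and the class and method names here are hypothetical.

```python
# Toy model of BIOS ROM shadowing. Illustrative only; a real chipset
# remaps physical addresses rather than keeping two software buffers.

class ShadowedRegion:
    def __init__(self, rom_image):
        self.rom = bytes(rom_image)   # slow, inherently read-only ROM
        self.shadow = None            # fast RAM copy, once shadowing is on
        self.locked = False

    def enable_shadowing(self):
        # Step 1: copy the ROM contents into RAM. The copy is writable,
        # so the firmware can decompress or patch itself in place.
        self.shadow = bytearray(self.rom)

    def write(self, addr, value):
        # Firmware self-modification is only possible between enabling
        # shadowing and locking the region down.
        if self.shadow is None or self.locked:
            raise PermissionError("region is read-only")
        self.shadow[addr] = value

    def lock(self):
        # Step 2: write-protect the shadow so it behaves like ROM again.
        self.locked = True

    def read(self, addr):
        # Reads are served from fast RAM once shadowing is enabled.
        src = self.shadow if self.shadow is not None else self.rom
        return src[addr]
```

The point of the model is the ordering: writes are only legal in the window between `enable_shadowing()` and `lock()`, which mirrors why later firmware *required* shadowing rather than merely benefiting from it.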
If you’ve kept a close eye on the technology space of late, you probably know that this is perhaps one of the most interesting times for processors in many years. After a number of stagnant generations, Intel has started competing again; AMD’s Ryzen chips are still pretty solid; ARM is where a lot of the innovation is happening; and RISC-V looks like it’s going to be the coolest thing in the world in about a decade. But none of these chips, honestly, can hold a candle to the interestingness of the chip I’m going to tell you about today. It didn’t set the world ablaze; in fact, it was designed not to. In the end, it was used in relatively minor systems, like internet appliances and palmtops. But technologically, it bridged the gap between two camps—RISC and CISC. And that’s what makes it interesting. Today’s Tedium looks back at the Transmeta Crusoe, perhaps the most interesting processor to ever exist. The Crusoe was absolutely fascinating, and the most bonkers “what if?” scenario with the Crusoe is that in theory, there was nothing preventing the Crusoe’s software translation layer from emulating something other than x86. If this technology had evolved and received far more funding and success, we could’ve had a vastly different processor and ISA landscape today.
British chip designer ARM is working on its own advanced semiconductor to showcase the power and capabilities of its designs, the Financial Times reports. According to people briefed on the move, ARM will work with manufacturing partners to bring the new chip to fruition. They’re not intending to get into the chip game, as this will only be a prototype chip to demonstrate what they can do.
Loongson’s 3A5000 is the most promising domestic Chinese CPU we’ve seen so far. Compared to the Zhaoxin KX-6640MA and Phytium D2000, Loongson’s 3A5000 is a wide core with a better balanced backend and a better cache hierarchy. But it suffers the same fundamental issues as the other two in its quest to be a general purpose CPU. Loongson’s LA464 simply cannot deliver performance in the same class as any recent Intel or AMD architecture. Compared to its western counterparts, LA464’s architecture is smaller, the L2 and L3 caches are worse, and the DDR4 memory controller is embarrassingly bad. Even though Loongson has gotten their cores up from 1 GHz to 2.5 GHz, no one runs desktop or even laptop CPUs at clocks that low. Because of its massive clock speed deficiency, Loongson can’t even get into the same performance ballpark as recent desktop CPUs. It even struggles against Neoverse N1 running at 3 GHz. This is a far more detailed look at these processors than the one we posted a few days ago.
Amid the push for technological independence, Chinese companies are pushing out more products to satisfy the rapidly soaring demand for domestic data processing silicon. Today, we have information that China’s Loongson has launched the 3D5000 CPU with as many as 32 cores. Utilizing chiplet technology, the 3D5000 combines two 16-core 3C5000 processors built on LA464 cores, which implement the LoongArch ISA, a design that follows a combination of RISC and MIPS design principles. The new chip features 64 MB of L3 cache, supports eight-channel DDR4-3200 ECC memory achieving 50 GB/s, and has five HyperTransport (HT) 3.0 interfaces. The TDP of the chip is officially 300 Watts; however, normal operation is usually at around 150 Watts, with the LA464 cores running at 2 GHz. China’s rapid improvement in microprocessors isn’t really all that interesting for us in other parts of the world, because chips from companies like Loongson don’t really make their way over here. What is interesting, however, are the implications this continued trend will have for the geopolitical state of the world. A China not dependent on Taiwan’s TSMC for its chips is a China that can more freely invade Taiwan.
Ampere has quietly launched its Altra developer kit aimed at software creators for cloud data centers. Along with the dev kit featuring the company’s system-on-chips with up to 80 cores, Ampere also offers a pre-built workstation running its 128-core SoC, according to Joe Speed, the company’s edge computing chief. An unexpected twist is that the workstation can run Windows and even has driver support for Nvidia’s GeForce RTX graphics cards. The Ampere Altra Developer Platform (AADP) is a prototyping system for general embedded applications, but it can obviously be used for building software for the cloud. The machine can use a variety of add-in boards, including Nvidia’s GeForce RTX cards. What is a bit surprising is that it can run Windows, making it perhaps the most powerful Arm-based machine that runs the consumer-oriented Microsoft operating system. Ampere’s ARM workstations have been high on my list of desirable hardware I cannot afford and have no use for.
I come bearing great news for everyone waiting for Star64 – the SBC will be available for purchase on April 4th. Due to some last-minute logistics issues we failed to make the March launch date announced in February – our apologies for the slight delay. The boards have now finally been delivered and are being packaged and readied for dispatch. Let me just quickly reiterate the Star64 features: quad-core 64-bit RISC-V, HDMI video output, 4x DSI and 4x CSI lanes, i2c touch panel connector, dual Gigabit Ethernet ports, dual-band WiFi and Bluetooth, as well as 1x native USB 3.0 port, 3x shared USB 2.0 ports, a PCIe x1 open-ended slot, and GPIO bus pins (i2c, SPI and UART). The board also features 128M QSPI flash and eMMC and microSD card slots. The board will be available in two different RAM configurations – with 4GB and 8GB LPDDR4 memory for $69.99 and $89.99 respectively. I’ll await some reviews first, but this seems like a very obvious buy if performance is at least reasonable. I really want to support RISC-V hardware, but so far, it’s been rather slim pickings. Here’s to hoping it gets better soon.
FORTH is an early programming language developed by Charles H. Moore in the late 1960s. Moore developed FORTH on an IBM 1130 minicomputer, which had a 16-bit CPU and only 8 KB of RAM. To keep things simple and reduce memory consumption, he implemented FORTH as a stack-based virtual machine using Reverse Polish Notation (RPN). But FORTH is much more than just a programming language. Because FORTH has a built-in interpreter, compiler and disk I/O support, a computer running FORTH is also called a “Forth system”. My4TH is such a Forth system. You can develop and debug your Forth programs directly on My4TH. You can enter your source code with the built-in text editor and store it in the on-board EEPROM memory. From there you can compile and run it directly on the My4TH board. This is well beyond my capabilities, but it seems like an incredibly cool piece of hardware. Niche, sure, but I wouldn’t be surprised if some of you were into this sort of thing.
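The stack-based, RPN evaluation model mentioned above is easy to demonstrate outside of Forth itself. Here is a minimal Python sketch of an RPN evaluator; the four arithmetic words are a tiny hypothetical subset, and a real Forth system adds a dictionary of user-definable words, a compiler, and I/O on top of this core idea.

```python
# Minimal sketch of RPN evaluation over a data stack, the core of
# Forth's execution model. Only four arithmetic "words" are modeled.

def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a // b,  # Forth's / is integer division
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))  # literals are pushed onto the stack
    return stack  # whatever remains on the data stack

# "3 4 + 2 *" leaves 14 on the stack, much like typing `3 4 + 2 *`
# at a Forth prompt.
```

Because operands always precede their operator, no parentheses or precedence rules are needed, which is exactly why the model fit in 8 KB of RAM.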
Today, we’re introducing a major set of upgrades to the Framework Laptop spanning two new models – the Framework Laptop 13 (13th Gen Intel® Core™) and the Framework Laptop 13 (AMD Ryzen™ 7040 Series). We’ve not only scaled up performance and enabled an AMD-powered version for the first time, but we’ve also delivered refinements to the day-to-day user experience with a higher capacity battery, matte display, louder speakers, and more rigid hinges. And Framework kept their promise: these new mainboards can be ordered separately and fit into the existing Framework 13″ laptop. The company also showed off their next product – a 16″ laptop that not only comes with an upgradeable GPU, but also a completely configurable input deck, so you can arrange the keyboard and trackpad area in any configuration you like. I’m so happy Framework is doing well, as it shows that glued shut, non-repairable, and non-upgradeable laptops are not some sort of universal inevitable truth.
If you’re new to the Arm ecosystem, consider this a quick primer on terms you likely have seen before but might have questions about. Well, exactly what it says.
Cobalt Networks were one of the early pioneers in network appliance hardware and produced some of the first turn-key webserver boxes you could buy, founded in 1996 as Cobalt Microserver. Cobalt boxes are immediately identifiable from their distinctive deep blue plastic bezels starting with the 1998 Cobalt Qube 2700. The Qube used a 150MHz QED RM5230; these CPUs are part of QED’s R5000 family and we’ll talk about their architecture a bit later. They came with 2.1GB hard disks with later larger options, 10Mbit Ethernet, 16MB of RAM standard with up to 256MB supported, and a “console” consisting of a backlit rear-mounted 2-line LCD and control buttons (on later machines, but not the original 2700, a serial port provided an actual console if you held down a button during startup). A fair number of typical configuration tasks such as setting its IP address could be done directly from the panel and the rest were intended to be done through its Perl-based web console. They were designed to run Linux from the ground up and shipped with Red Hat using a 2.0.x kernel. Cobalt’s network appliances were so exotic back in the day, and once they started hitting the used market, I almost pulled the trigger quite a few times. These days, they’re harder to come by, and their use is, of course, inherently limited now, but that doesn’t make them any less eye-catching.
The JH7110 isn’t amazing. But it’s not bad, either. I still wouldn’t recommend most people buy this board, unless you already know a lot about Linux and SBCs in general. That may change a year from now, but right now, this board isn’t targeted at the same market as a Raspberry Pi. At around $100, and not being quite production-ready, I’m only recommending this board to people interested in exploring RISC-V for now. This seems like an expected experience for a relatively new architecture that still has rather limited hardware and software support. When the first Raspberry Pi came out, the situation wasn’t much better either, so give it a few years and RISC-V will be in a better place in the market for sub-€100 single-board computers.
Paul Weissmann, maintainer of OpenPA, the definitive source of information on HP’s PA-RISC hardware and software, has published an article about how the state of information preservation on this topic has changed substantially since OpenPA’s founding in 1999. The main challenges for OpenPA at the time were both finding all the available information, as search engines were still young in the late 1990s, as well as making sense of it all as it was just so much and new sources kept appearing. This went on until the mid to late 2000s, when solid and stable sources could be found and referenced, which OpenPA did. The Internet and information on it changed since then, slowly but surely, in a profound way. Many original sources have disappeared and so much information has been lost in only two decades – making OpenPA the authoritative source for PA-RISC in some ways. A long journey from documenting complex information of the 1990s to an historic archive on the PA-RISC era. OpenPA is an amazing resource, so if you happen to have any information worth sharing with Weissmann, please do so.