Hardware Archive
On Monday at CES 2025, Nvidia unveiled a desktop computer called Project DIGITS. The machine uses Nvidia’s latest “Blackwell” AI chip and will cost $3,000. It contains a new central processor, or CPU, which Nvidia and MediaTek worked to create. Responding to an analyst’s question during an investor presentation, Huang said Nvidia tapped MediaTek to co-design an energy-efficient CPU that could be sold more widely. “Now they could provide that to us, and they could keep that for themselves and serve the market. And so it was a great win-win,” Huang said. Previously, Reuters reported that Nvidia was working on a CPU for personal computers to challenge the consumer and business computer market dominance of Intel, Advanced Micro Devices and Qualcomm. ↫ Stephen Nellis at Reuters I’ve long wondered why NVIDIA wasn’t entering the general-purpose processor market in a more substantial way than it did a few years ago with the Tegra, especially now that ARM has cemented itself as an architecture choice for more than just mobile devices. Much like Intel, AMD, and now Qualcomm, NVIDIA could easily deliver the whole package to laptop, tablet, and desktop makers: processor, chipset, and GPU, all glued together with special NVIDIA magic that other companies opting for NVIDIA GPUs won’t get. There’s a lot of money to be made there, and it’s the move that could help NVIDIA survive the inevitable crash of the “AI” wave it’s currently riding, which has pushed the company to become one of the most valuable companies in the world. I’m also sure OEMs would love nothing more than to have more than just Qualcomm to choose from for ARM laptops and desktops, if only to aid in bringing costs down through competition, and to potentially offer ARM devices with the same kind of powerful GPUs currently mostly reserved for x86 machines. I’m personally always for more competition, but this time with the asterisk that NVIDIA really doesn’t need to get any bigger than it already is.
The company has a long history of screwing over consumers, and I doubt that would change if they also conquered a chunky slice of the general-purpose processor market.
So we all know about twisted-pair ethernet, huh? I get a little frustrated with a lot of histories of the topic, like the recent neil breen^w^wserial port video, because they often fail to address some obvious questions about the origin of twisted-pair network cabling. Well, I will fail to answer these as well, because the reality is that these answers have proven very difficult to track down. ↫ J. B. Crawford The problems with nailing down an accurate history of the development of the various standards, ideas, concepts, and implementations of Ethernet and other, by now dead, network standards are their age, as well as the fact that their history is entangled with the even longer history of telephone wiring. The reasoning behind some of the choices made by engineers over the past more than 100 years of telephone technology isn’t always clear, and is very difficult to retrace. Crawford dives into some seriously old and fun history here, trying to piece together the origins of twisted pair as best he can. It’s a great read, as all of his writings are.
We’ve all had a good seven years to figure out why our interconnected devices refused to work properly with the HDMI 2.1 specification. The HDMI Forum announced at CES today that it’s time to start considering new headaches. HDMI 2.2 will require new cables for full compatibility, but it has the same physical connectors. Tiny QR codes are suggested to help with that, however. The new specification is named HDMI 2.2, but compatible cables will carry an “Ultra96” marker to indicate that they can carry 96Gbps, double the 48Gbps of HDMI 2.1b. The Forum anticipates this will result in higher resolutions and refresh rates and a “next-gen HDMI Fixed Rate Link.” The Forum cited “AR/VR/MR, spatial reality, and light field displays” as benefiting from increased bandwidth, along with medical imaging and machine vision. ↫ Kevin Purdy at Ars Technica I’m sure this will not pose any problems whatsoever, and that no shady no-name manufacturers will abuse this situation at all. DisplayPort is the better standard and connector anyway. No, I will not be taking questions.
Dell has announced it’s rebranding literally its entire product line, so mainstays like XPS, Latitude, and Inspiron are going away. They’re replacing all of these old brands with Dell, Dell Pro, and Dell Pro Max, and within each of these, there will be three tiers: Base, Plus, and Premium. Of course, the reason is “AI”. The AI PC market is quickly evolving. Silicon innovation is at its strongest and everyone from IT decision makers to professionals and everyday users are looking at on-device AI to help drive productivity and creativity. To make finding the right AI PC easy for customers, we’ve introduced three simple product categories to focus on core customer needs – Dell (designed for play, school and work), Dell Pro (designed for professional-grade productivity) and Dell Pro Max (designed for maximum performance). We’ve also made it easy to distinguish products within each of the new product categories. We have a consistent approach to tiering that lets customers pinpoint the exact device for their specific needs. Above and beyond the starting point (Base), there’s a Plus tier that offers the most scalable performance and a Premium tier that delivers the ultimate in mobility and design. ↫ Kevin Terwilliger on Dell’s blog Setting aside the nonsensical reasoning behind the rebrand, I do actually kind of dig the simplicity here. This is a simple, straightforward set of brand names and tiers that pretty much anyone can understand. That being said, the issue with Dell in particular is that once you go to their website to actually buy one of their machines, the clarity abruptly ends and it gets confusing fast. I hope these new brand names and tiers will untangle some of that mess to make it easier to find what you need, but I’m skeptical. My XPS 13 from 2017 is really starting to show its age, and considering how happy I’ve been with it over the years its current Dell equivalent would be a top contender (assuming I had the finances to do so). 
I wonder whether the Linux support on current Dell laptops has improved since my XPS 13 was new.
Do you think streaming platforms and other entities that employ DRM schemes use the TPM in your computer to decrypt stuff? Well, the Free Software Foundation seems to think so, and adds Microsoft’s insistence on requiring a TPM for Windows 11 into the mix, but it turns out that’s simply not true. I’m going to be honest here and say that I don’t know what Microsoft’s actual motivation for requiring a TPM in Windows 11 is. I’ve been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I’d certainly pick a computer with a TPM. But in terms of whether it’s of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I’m not sure that’s a worthwhile tradeoff. What I can say is that the FSF’s claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft’s strategy here, the argument is pretty significantly undermined. I’m not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it’s not in the TPM – it’s in the GPU. ↫ Matthew Garrett A TPM is simply not designed to handle decryption of media streams, and even if it were, it’s far, far too slow and underpowered to decode even a 1080p stream, let alone anything more demanding than that. In reality, DRM schemes like Google’s Widevine, Apple’s Fairplay, and Microsoft’s Playready offer different levels of functionality, both in software and in hardware. The hardware DRM stuff is all done by the GPU, and not by the TPM. By focusing so much on the TPM, Garrett argues, the FSF is failing to see how GPU makers have enabled a ton of hardware DRM without anyone noticing. Personally, I totally understand why organisations like the Free Software Foundation are focusing on TPMs right now. 
They’re one of the main reasons why people can’t upgrade to Windows 11, it’s the thing people have heard about, and it’s the thing that’ll soon prevent them from getting security updates for their otherwise perfectly fine machines. I’m not sure the FSF has enough clout these days to make any meaningful media impact, especially in more general, non-tech media, but by choosing the TPM as their focus they’re definitely choosing a viable vector. Of course, over here in the tech corner, we don’t like it when people are factually inaccurate or twisting and bending the truth, and I’m glad someone as knowledgeable as Garrett stepped up to set the record straight for us tech-focused people, while everyone else can continue to ignore this matter.
This is the Hall Research Technologies SC-VGA-2, sold as a “VGA/HDTV Video Processor.” In addition to slicing, dicing and pureeing, apparently, it will take any of a bundle of input formats and both rescale and resample them on the fly into the VGA or HDTV signal you desire, including 60Hz rates. This came from a seller specializing in teleprompter equipment and Hall still sells an HDMI version with additional resolutions … for around US$500. However, this or the slightly newer SC-VGA-2A and SC-VGA-2B are all relatively common devices and found substantially cheaper used. Let’s try it out and show some sample output, including those delicious NeXTSTEP system messages and some ST grabs. ↫ Cameron Kaiser With the obscurity of some of the hardware Cameron Kaiser details on his website, I’m not surprised he has some seriously unique needs when it comes to taking screengrabs. He couldn’t very well not take the device apart, and inside it appears to be a system with two small processors, at least one of which is an Intel 8051 8-bit microcontroller. Kaiser goes into his usual great detail explaining and showing how the device works. If you’ve got unique screengrabbing needs, this might be of interest to you.
System76, purveyor of Linux computers, distributions, and now also desktop environments, has just unveiled its latest top-end workstation, but this time, it’s not an x86 machine. They’ve been working together with Ampere to build a workstation based around Ampere’s Altra ARM processors: the Thelio Astra. Phoronix, fine purveyor of Linux-focused benchmarks, were lucky enough to benchmark one, and has more information on the new workstation. System76 designed the Thelio Astra in collaboration with Ampere Computing. The System76 Thelio Astra makes use of Ampere Altra processors up to the Ampere Altra Max 128-core ARMv8 processor that in turn supports 8-channel DDR4 ECC memory. The Thelio Astra can be configured with up to 512GB of system memory, choice of Ampere Altra processors, up to NVIDIA RTX 6000 Ada Generation graphics, dual 10 Gigabit Ethernet, and up to 16TB of PCIe 4.0 NVMe SSD storage. System76 designed the Thelio Astra ARM64 workstation to be complemented by NVIDIA graphics given the pervasiveness of NVIDIA GPUs/accelerators for artificial intelligence and machine learning workloads. The Astra is contained within System76’s custom-designed, in-house-manufactured Thelio chassis. Pricing on the System76 Thelio Astra will start out at $3,299 USD with the 64-core Ampere Altra Q64-22 processor, 2 x 32GB of ECC DDR4-3200 memory, 500GB NVMe SSD, and NVIDIA A400 graphics card. ↫ Michael Larabel This pricing is actually remarkably favourable considering the hardware you’re getting. System76 and its employees have been dropping hints for a while now they were working on an ARM variant of their Thelio workstation, and knowing some of the prices others are asking, I definitely expected the base price to hit $5000, so this is a pleasant surprise. With the Altra processors getting a tiny bit long in the tooth, you do notice some oddities here, specifically the DDR4 RAM instead of the modern DDR5, as well as the lack of PCIe 5.0. 
The problem is that while the Altra has a successor in the AmpereOne processor, its availability is quite limited, and most of them probably end up in datacentres and expensive servers for big tech companies. This newer variant does come with DDR5 and PCIe 5.0 support, but doesn’t yet have a lower core count version, so even if it were readily available it might simply push the price too far up. Regardless, the Altra is still a ridiculously powerful processor, and at anywhere between 64 and 128 cores, it’s got power to spare. The Thelio Astra will be available come 12 November, and while I would perform a considerable number of eyebrow-raising acts to get my hands on one, it’s unlikely System76 will ship one over for a review. Edit: here’s an excellent and detailed reply to our Mastodon account from an owner of an Ampere Altra workstation, highlighting some of the challenges related to your choice of GPU. Required reading if you’re interested in a machine like this.
Something odd happened to Qualcomm’s Snapdragon Dev Kit, an $899 mini PC powered by Windows 11 and the company’s latest Snapdragon X Elite processor. Qualcomm decided to abruptly discontinue the product, refund all orders (including for those with units on hand), and cease its support, claiming the device “has not met our usual standards of excellence.” ↫ Taras Buria at Neowin The launch of the Snapdragon X Plus and Elite chips seems to have mostly progressed well, but there have been a few hiccups for those of us who want ARM but aren’t interested in Windows and/or laptops. There’s this story, which is just odd all around, with an announced, sold, and even shipped product suddenly taken off the market, which I think at this point was the only non-laptop device with an X Elite or Plus chip. If you are interested in developing for Qualcomm’s new platform, but don’t want a laptop, you’re out of luck for now. Another note is that the SoC SKU in the Dev Kit was clocked a tiny bit higher than the laptop SKUs, which perhaps plays a role in its cancellation. The bigger hiccup is the problematic Linux bring-up, which is posing many more problems and is taking a lot longer than Qualcomm very publicly promised it would take. For now, if you want to run Linux on a Snapdragon X Elite or Plus device, you’re going to need a custom version of your distribution of choice, tailored to a specific laptop model, using a custom kernel. It’s an absolute mess and basically means that at this point in time, months and months after release, buying one of these to run Linux on them is a bad idea. Quite a few important bits will arrive with Linux 6.12 to supposedly greatly improve the experience, but seeing is believing. Qualcomm made a lot of grandiose promises about Linux support, and they simply haven’t delivered.
If you read my previous article on DOS memory models, you may have dismissed everything I wrote as “legacy cruft from the 1990s that nobody cares about any longer”. After all, computers have evolved from sporting 8-bit processors to 64-bit processors and, on the way, the amount of memory that these computers can leverage has grown orders of magnitude: the 8086, a 16-bit machine with a 20-bit address space, could only use 1MB of memory while today’s 64-bit machines can theoretically access 16EB. All of this growth has been in service of ever-growing programs. But… even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms? Let’s try to answer those questions to find some very surprising answers. But first, some theory. ↫ Julio Merino It’s not quite weekend yet, but I’m still calling this some light reading for the weekend.
In 1999, some members from the MMC Association decided to split and create SD Association. But nobody seems to exactly know why. ↫ sdomi’s webpage I don’t even know how to summarise any of this research, because it’s not only a lot of information, it’s also deeply bureaucratic and boring – it takes a certain kind of person to enjoy this sort of stuff, and I happen to fit the bill. This is a great read.
Earlier this year, I reviewed the excellent and unique MNT Reform laptop, an (almost) fully open source, very hackable laptop. MNT has just unveiled the upcoming follow-up to the Reform, called the Reform Next. Being highly performant, modular, and upgradeable, MNT Reform Next gives you more freedom than any other laptop. Swap modules, print your own case, customize your keyboard. Since we are committed to open hardware, all sources are public. While Classic MNT Reform is a portable device, we felt like a sleeker, more lightweight design would increase portability and make for a more flexible laptop. ↫ MNT website The focus seems to have been on both performance and size, and I think the latter is especially important for a lot of people who might not have been too enamoured with the original Reform’s chunky, brutalist design. The device has been made thinner by splitting the motherboard up into several connected, separate boards, which also happens to improve the repairability and upgradeability of the device. The battery pack has been redesigned for a smaller physical size, too, and the trackball option is no longer available – it’s trackpad-only. The Reform Next is compatible with MNT’s latest processor module, the RK3588, and as such, packs a bigger punch. This SoC has four ARM Cortex-A76 cores up to 2.4 GHz, and four power-efficient ARM Cortex-A55 cores up to 1.5 GHz. This SoC is also available as an upgrade for the MNT Reform and the MNT Pocket Reform, and ships with either 16 or 32 GB of RAM and an ARM Mali G610 MP4 GPU. Of course, the Reform Next will be as open as humanly possible, both software- and hardware-wise, and it’s looking like a worthy successor to the MNT Reform. I’m incredibly delighted that MNT seems to have found a niche that works for them, enabling them to keep developing and releasing hardware that goes against every trend in the industry, giving us entirely unique devices nobody else is making.
Paul Weissmann’s OpenPA, the invaluable archive on anything related to HP’s PA-RISC architecture, devices, and operating systems, has branched off for a bit and started collecting information on RISC laptops. Technical computing in the 1990s was mostly done on RISC workstations with Unix operating systems and specialized applications. For mobile use cases, some of the popular RISC vendors built RISC Laptops for mobile Unix use in the 1990s. Often based on contemporary Unix workstations, these RISC laptops were often marketed for government and military uses such as command, technical analysis and surveillance. ↫ Paul Weissmann at OpenPA OpenPA has always had content beyond just PA-RISC (like HP’s Itanium machines), so this is not entirely surprising, and it also happens to be something that’s sorely needed – there’s remarkably little consolidated information to be found on these RISC laptops, and it’s usually scattered all over the place and difficult to find. They were expensive and rare when they were new, and they’re even rarer and often more expensive today. What we’re talking about here are laptops with PA-RISC, SPARC, (non-Apple) PowerPC, and Alpha processors, running some variant of UNIX, like HP-UX, SunOS/Solaris, AIX, and even Windows NT. A particularly interesting listing at the moment is the Hitachi 3050RX/100C, a laptop based on the Hitachi PA/50L PA-RISC processor that ran something called HI-UX/WE2, a UNIX from Hitachi I can’t find much information about. The most desirable laptop listed is the amazing Tadpole Viper, which was the most powerful SPARC laptop Tadpole ever made, and I’m pretty sure it’s the most powerful SPARC laptop, period. It was powered by a 1.2GHz UltraSPARC IIIi processor, and was also sold as the Sun Ultra 3, in 2005. I would perform some seriously questionable acts to get my hands on one of these, but they’re most likely virtually impossible to find. Anyone who can help Weissmann find more information – feel free to do so.
If you are looking to upgrade your TV and want a long-lasting option, you may consider getting a Samsung AI TV powered by Tizen OS. The reason is that Samsung announced plans to offer seven years of Tizen OS upgrades for some of its Smart TVs. ↫ Sagar Naresh Bhavsar at Neowin Since buying a dumb TV is no longer possible, you might as well get the one with the longest possible support lifecycle. This new policy covers Samsung TVs from 2024 onward, as well as a few models from 2023. There’s no word on whether the ads that I’m assuming are part of Samsung’s smart TVs will also receive seven years of updates. Or, you know, get a good Android TV box and never plug your actual TV into your network to begin with.
Due to its limited RAM of 1,920 bits, the Programma 101 was mostly a machine conceived to make arithmetic calculations – sums, subtractions, divisions, multiplications, square roots – yet, like modern computers, it could also perform logical operations, conditional and unconditional jumps, and print the data stored in a register, all through a custom-made alphanumeric programming language. This was, in the early ’60s, what set computers apart from calculators, indeed. Overall, in today’s terms, Programma 101 can be considered a sort of “transitional fossil” between desktop calculators and personal computers. ↫ Riccardo Bianchini Olivetti sure is a name that carries an exceptional amount of weight in the retrocomputing world, as classic Olivetti computers, even standard Olivetti PCs, tend to be highly desirable. A Programma 101 in amazing condition is currently for sale on eBay for a massive €20,000, and while there’s quite a few relatively cheap ’80s and ’90s Olivetti PCs for sale, a sizable number of them are far more desirable and carry massive premiums for their unique design. It’s sad how many once great and influential computer makers have been relegated to the dustbin of history, outcompeted, acquired, or run into the ground. Some of these once great brands live on as mere badges on electronic junk, and Olivetti, too, was not spared this fate. In fact, what is generally considered the worst PDA ever made, the Olivetti daVinci, was a generic product that just had an Olivetti logo slapped onto it. I have one in-box, and intend to one day write about it, because its awfulness needs to be shared with the world.
Logitech CEO Hanneke Faber talked about something called the “forever mouse”, which would be, as the name implies, a mouse that customers could use for a very long time. While you may think this would mean an incredibly well-built mouse, or one that can be easily repaired, which Logitech already makes somewhat possible through a partnership with iFixit, another option the company is thinking about is a subscription model. Yes. Faber said subscription software updates would mean that people wouldn’t need to worry about their mouse. The business model is similar to what Logitech already does with video conferencing services (Logitech’s B2B business includes Logitech Select, a subscription service offering things like apps, 24/7 support, and advanced RMA). Having to pay a regular fee for full use of a peripheral could deter customers, though. HP is trying a similar idea with rentable printers that require a monthly fee. The printers differ from the idea of the forever mouse in that the HP hardware belongs to HP, not the user. However, concerns around tracking and the addition of ongoing expenses are similar. ↫ Scharon Harding at Ars Technica Now, buying a mouse whose software requires a subscription would still be a choice you can avoid, but my mind immediately conjured up a far darker scenario. PC makers have a long history of adding crapware to their machines in return for payments from the producers of said crapware. I can totally see what’s going to happen next. You buy a brand new laptop, unbox it at home, and turn it on. Before you know it, a dialog pops up right after the crappy Windows out-of-box experience asking you to subscribe to your laptop’s touchpad software in order to unlock its more advanced features like gestures. But why stop there? The keyboard of that new laptop has RGB backlighting, but if you want to change its settings, you’re going to have to pay for another subscription. 
Your laptop’s display has additional features and modes for specific types of content and more settings sliders, but you’ll have to pay up to unlock them. And so on. I’m not saying this will happen, but I’m also not saying it won’t. I’m sorry for birthing this idea into the world.
Simultaneous multithreading (SMT) is a feature that lets a processor handle instructions from two different threads at the same time. But have you ever wondered how this actually works? How does the processor keep track of two threads and manage its resources between them? In this article, we’re going to break it all down. Understanding the nuts and bolts of SMT will help you decide if it’s a good fit for your production servers. Sometimes, SMT can turbocharge your system’s performance, but in other cases, it might actually slow things down. Knowing the details will help you make the best choice. ↫ Abhinav Upadhyay Some light reading for the (almost) weekend.
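The core idea Upadhyay explores can be sketched in a few lines of Python. What follows is a deliberately toy model, not any real core: a single issue slot shared by two threads, with an invented four-cycle memory stall, chosen only to illustrate how one thread’s stall cycles become the other thread’s issue opportunities.

```python
MEM_STALL = 4  # made-up extra cycles a 'mem' instruction stalls its thread

def simulate(threads):
    """Cycles for one core to retire every thread's instructions.

    One issue slot per cycle, shared between threads (the essence of
    SMT): each cycle the first non-stalled thread issues. An 'alu'
    instruction retires immediately; a 'mem' instruction stalls only
    its own thread, leaving the slot free for the other thread.
    """
    pc = [0] * len(threads)        # next instruction per thread
    ready_at = [0] * len(threads)  # cycle at which a thread may issue again
    cycle = 0
    while any(pc[i] < len(t) for i, t in enumerate(threads)):
        for i, t in enumerate(threads):
            if pc[i] < len(t) and ready_at[i] <= cycle:
                if t[pc[i]] == "mem":
                    ready_at[i] = cycle + 1 + MEM_STALL
                pc[i] += 1
                break  # only one issue slot this cycle
        cycle += 1
    return cycle

workload = ["mem", "alu", "mem", "alu"]     # hypothetical instruction mix
alone = simulate([workload])                 # one thread by itself: 12 cycles
sequential = 2 * alone                       # two threads back to back: 24
smt = simulate([workload, workload])         # interleaved under SMT: 14
```

With this made-up workload, SMT finishes two threads in 14 cycles instead of 24, because most of each thread’s stall cycles are absorbed by the other. Change the workload to pure `"alu"` and the benefit vanishes, since there are no idle slots left to fill; that is the flip side Upadhyay describes, where two threads merely contend for the same units.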
About a month ago, Cameron Kaiser first introduced us to the Canon Cat, a computer designed by Jeff Raskin, but abandoned within six months by Canon, who had no idea what to do with it. In his second article on the Cat, Kaiser dives much deeper into the software and operating system of the Cat, even going so far as to become the first person to write software for it. One of the most surprising aspects of the Cat is that it’s collaborative; other users can call into your Cat using a landline and edit the same document you’re working on remotely. Selecting text has other functions too. When I say everything goes in the workspace, I do mean everything. The Cat is designed to be collabourative: you can hook up your Cat to a phone line, or at least you could when landlines were more ubiquitous, and someone could call in and literally type into your document remotely. If you dialed up a service, you would type into the document and mark and send text to the remote system, and the remote system’s response would also become part of your document. (That goes for the RS-232 port as well, by the way. In fact, we’ll deliberately exploit this capability for the projects in this article.) ↫ Cameron Kaiser You can also do calculations right into the text, going so far as allowing the user to define variables and reuse those variables throughout the text to perform various equations and other mathematic operations. If you go back and change the value of a variable, all other equations using those variables are updated as well. That’s quite nifty, especially considering the age of the Cat, and since the Cat is fixed width, you can effectively create spreadsheets this way, too. There’s really far too much to cover here, and I strongly suggest you head on over and read the entire thing.
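That variable-recalculation trick is fun to model. Below is a minimal sketch of the behaviour as described, using hypothetical variable names and Python’s `eval` for brevity; it says nothing about how the Cat actually implements this internally.

```python
def evaluate(variables, expressions):
    """Recompute every named expression from the current variables.

    Expressions are evaluated in document order, so later ones may
    reference earlier results, mimicking formulas embedded in text.
    """
    env = dict(variables)
    for name, expr in expressions.items():
        env[name] = eval(expr, {"__builtins__": {}}, env)
    return env

# Hypothetical document: two variables and two formulas built on them.
variables = {"rate": 21, "hours": 40}
expressions = {"pay": "rate * hours", "tax": "pay * 0.2"}

before = evaluate(variables, expressions)   # pay = 840, tax = 168.0
variables["rate"] = 25                      # edit one variable in the "text"...
after = evaluate(variables, expressions)    # ...and every dependent value updates
```

Editing `rate` ripples through to both `pay` and `tax` on the next pass, which is exactly the update-everything behaviour that makes the Cat’s fixed-width workspace usable as an improvised spreadsheet.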
In 2019, a startup called Nuvia came out of stealth mode. Nuvia was notable because its leadership included several notable chip architects, including one who used to work for Apple. Apple chips like the M1 drew recognition for landing in the same performance neighborhood as AMD and Intel’s offerings while offering better power efficiency. Nuvia had similar goals, aiming to create a power efficient core that could surpass designs from AMD, Apple, Arm, and Intel. Qualcomm acquired Nuvia in 2021, bringing its staff into Qualcomm’s internal CPU efforts. Bringing on Nuvia staff rejuvenated Qualcomm’s internal CPU efforts, which led to the Oryon core in Snapdragon X Elite. Oryon arrives nearly five years after Nuvia hit the news, and almost eight years after Qualcomm last released a smartphone SoC with internally designed cores. For people following Nuvia’s developments, it has been a long wait. ↫ Chips and Cheese Now that the Snapdragon X Elite and Pro chips are finally making their way to consumers, we’re also finally starting to see proper deep-dives into the brand new hardware. Considering this will set the standard for ARM laptops for a long time to come – including easy availability of powerful ARM Linux laptops – I really want to know every single quirk or performance statistic we can find.
The impact printer was a mainstay of the early desktop computing era. Also called “dot matrix printers,” these printers could print low-resolution yet very readable text on a page, and do so quickly and at a low price point. But these printers are a relic of the past; in 2024, you might find them printing invoices or shipping labels, although more frequently these use cases have been replaced by other types of printers such as thermal printers and laser printers. The heart of the impact printer is the print head. The print head contained a column of pins (9 pins was common) that moved across the page. Software in the printer controlled when to strike these pins through an inked ribbon to place a series of “dots” on a page. By carefully timing the pin strikes with the movement of the print head, the printer could control where each dot was placed. A column of dots might represent the vertical stroke of the letter H, a series of single dots created the horizontal bar, and another column would create the final vertical stroke. ↫ Jim Hall at Technically We Write Our first printer was a dot matrix model, from I think a brand called Star or something similar. Back then, in 1991 or so, a lot of employers in The Netherlands offered programs wherein employees could buy computers through their work, offered at a certain discount. My parents jumped on the opportunity when my mom’s employer offered such a program, and through it, we bought a brand new 286 machine running MS-DOS and Windows 3.0, and it included said dot matrix printer. There’s something about the sound and workings of a dot matrix printer that just can’t be bested by modern ink, laser, or LED printers. The mechanical punching, at such a fast rate it sounded like a tiny Gatling gun, was mesmerising, especially when paired with continuous form paper. Carefully ripping off the perforated edges of the paper after printing was just a nice bonus that entertained me quite a bit as a child. 
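The pin-timing mechanism Hall describes maps neatly onto a few lines of code. The 9-pin bit patterns below are made up for illustration (no real printer’s font ROM looks like this): each character is a short list of print-head columns, and transposing those columns into pin rows reproduces the letter, dot by dot.

```python
# Each character is a list of print-head columns; each column is a
# 9-bit pattern, one bit per pin (bit 0 = top pin). The head sweeps
# left to right, striking the set pins of one column at a time.
FONT = {
    "H": [0b111111111, 0b000010000, 0b000010000, 0b000010000, 0b111111111],
    "I": [0b100000001, 0b111111111, 0b100000001],
}

def render(text, on="#", off=" "):
    cols = []
    for ch in text:
        cols.extend(FONT[ch])
        cols.append(0)  # one blank column between characters
    # Transpose the columns into the 9 pin rows a real head would strike
    return "\n".join(
        "".join(on if col >> pin & 1 else off for col in cols)
        for pin in range(9)
    )

print(render("HI"))  # a serifed H and I, nine dot-rows tall
```

The two vertical strokes of the H are single columns with every pin firing, and the crossbar is three columns firing only the middle pin, exactly the strike pattern Hall walks through in the quoted piece.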
I was surprised to learn that dot matrix printers are still being manufactured and sold today, and even come in colour. They’re quite a bit more expensive than other printer types these days, but I have a feeling they’re aimed at enterprises and certain niches, which probably means they’re going to be of considerably higher quality than all the other junk printers that clog the market. With a bit more research, it might actually be possible to find a brand new colour dot matrix printer that is a better choice than some of the modern alternatives. The fact that I’m now contemplating buying a brand new dot matrix printer in 2024, even though I rarely print, is a mildly worrying development.
Framework, the company making modular, upgradeable, and repairable laptops, and DeepComputing, the same company that’s making the DC ROMA II RISC-V laptop we talked about last week, have announced something incredibly cool: a brand new RISC-V mainboard that fits right into existing Framework 13 laptops. Sporting a RISC-V StarFive JH7110 SoC, this groundbreaking Mainboard was independently designed and developed by DeepComputing. It’s the main component of the very first RISC-V laptop to run Canonical’s Ubuntu Desktop and Server, and the Fedora Desktop OS and represents the first independently developed Mainboard for a Framework Laptop. ↫ The DeepComputing website For a company that was predicted to fail by a popular Apple spokesperson, it seems Framework is doing remarkably well. This new mainboard is the first one not made by Framework itself, and is the clearest validation yet of the concept put into the market by the Framework team. I can’t recall the last time you could buy a laptop powered by one architecture, and then upgrade to an entirely different architecture down the line, just by replacing the mainboard. The news of this RISC-V mainboard has made me dream of other possibilities – like someone crazy enough to design, I don’t know, a POWER10 or POWER11 mainboard? Entirely impossible and unlikely due to heat constraints, but one may dream, right?