
Monthly Archive: October 2023

End of an era: Windows CE’s final day

At midnight US Pacific Time tomorrow, Windows Embedded Compact 2013 – or perhaps better known colloquially as Windows CE 8.0 – will slip from history as it exits its Extended Support phase with Microsoft, and it, along with the entire history of Windows CE, becomes an unsupported, retired former product. Windows CE 8.0 was released on 11th August 2013 and reached the end of its mainstream support on 9th October 2018. Yet few even noticed either occurrence. As a product, the CE 8.0 release failed to gain much traction or fanfare. Even here in the Windows CE community, most people disregard Windows Embedded Compact 2013 as a complete non-starter, just as with Windows CE 7.0 before it. Few, if any, devices were ever released on the platform, and as a result most people – myself included – have never even seen a physical CE 8 device.

I’ve used and owned a lot of Windows CE-based devices over the years, and contrary to most people’s opinions, I absolutely adore Windows CE. Back when Apple was still busy not dying, and Android was barely a blip on anyone’s radar, Windows CE-based devices were incredibly powerful, versatile, and capable. Platforms like PocketPC and Windows Mobile may not have been the most graceful, but when it came to pure functionality and capabilities, they were so far ahead of everyone else it wasn’t even close. I was streaming Futurama episodes from my Windows XP machine to my PocketPC, while checking my email and browsing with Pocket IE – in the early 2000s. No other platform could do this in a PDA form factor – not even Palm OS. I hope, against my own better judgment, that Microsoft will do the right thing and publish the source code to Windows CE on GitHub. The number of Windows CE devices out there is immense, and giving the community the option of supporting them going forward would save a lot of them from the trash heap.

How does macOS manage virtual cores on Apple silicon?

One of the most distinctive features of Apple silicon chips is that they have two types of CPU core: E (Efficiency) cores that are energy efficient but slower, and P (Performance) cores, which normally run much of the code in the apps we use. Apps don’t directly decide which cores they will run on – that’s a privilege of macOS – but they register their interest by setting a Quality of Service, or QoS, which is then taken into account when they’re scheduled to run. With the introduction of Game Mode in Sonoma, CPU scheduling can now work differently, with E cores being reserved for the use of games. This article looks at another atypical situation: running a macOS virtual machine (VM) assigned a set number of virtual cores. How does macOS Sonoma handle that?

Exactly what it says on the tin.
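For context, here’s a minimal sketch in C of how a process registers that interest on macOS, using the public pthread QoS API from pthread/qos.h. The QoS class is only a hint; which core type the work actually lands on remains entirely up to the macOS scheduler.

```c
/* Minimal sketch: a process tags its own thread with a QoS class.
   On Apple silicon, QOS_CLASS_BACKGROUND work is typically steered
   to the E cores, but the final placement is up to macOS. */
#include <pthread/qos.h>
#include <stdio.h>

int main(void) {
    /* Returns 0 on success, an error number otherwise. */
    if (pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0) != 0) {
        fprintf(stderr, "failed to set QoS class\n");
        return 1;
    }
    /* ...do low-priority work here; macOS, not the app, picks the core... */
    puts("running at background QoS");
    return 0;
}
```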

Raptor’s upcoming OpenPOWER systems: more than 4.0 GHz, PCIe 5.0, DDR5, 18-core option

TalosSpace has more details on the upcoming, recently announced OpenPOWER machines from Raptor. I asked Timothy Pearson at Raptor about the S1’s specs, and he said it’s a PCIe 5.0, DDR5 part running from the high 3GHz to low 4GHz clock range, with the exact frequency range to be determined. (OMI-based RAM not required!) The S1 is bi-endian, SMT-4, and will support at least two sockets, with an 18-core option confirmed for certain and others to be evaluated. This compares very well with the Power10, which is also PCIe 5.0, also available as SMT-4 (though it has an SMT-8 option), and also clocks somewhere between 3.5GHz and 4GHz. The S1 embeds its own BMC, the X1 (or a variant), which is (like Arctic Tern) a Microwatt-based ISA 3.1 core in Lattice ECP5 and iCE40 FPGAs with 512MB of DDR3 RAM, similar to the existing ASpeed BMC on current systems. The X1 will in turn replace the existing Lattice-based FPGA in Arctic Tern as “Antarctic Tern”, being a functional descendant of the same hardware, and should fill the same roles: as a BMC upgrade for existing Raptor systems, as the future BMC for the next generation of systems, and as a platform in its own right. The X1 has an “integrated 100% open root of trust”, as you would expect for such a system-critical part.

This all sounds like exactly the kind of thing I wanted to hear, and these details make me sufficiently excited about the near future of Raptor’s OpenPOWER workstations. The only bit of less pleasant news is that the machines won’t be available until late 2024, so we’ve got a little wait ahead of us.

Oberon System 3 compatible with the Oberon+ compiler and IDE

This is a version of Oberon System 3 (also known as ETH Oberon) compatible with the Oberon+ compiler, IDE, and runtimes, and with the OBX Platform Abstraction Layer (PAL), and thus truly cross-platform (it runs on all platforms where LeanQt is available). The migration is still a work in progress, but it is sufficiently complete and stable to explore the platform. The latest commit is tested both on the Mono CLI and as a native executable built from the generated C code.

I have to admit that while I’m aware of the Oberon System, I know far too little about it to make any meaningful statements here.

Windows 11 Pro’s on-by-default encryption slows SSDs up to 45%

There are few things more frustrating than paying for high-speed PC components and then leaving performance on the table because software slows your system down. Unfortunately, a default setting in Windows 11 Pro – having its software BitLocker encryption enabled – could rob as much as 45 percent of the speed from your SSD as it forces your processor to encrypt and decrypt everything. According to our tests, random writes and reads – which affect the overall performance of your PC – get hurt the most, but even large sequential transfers are affected. While many SSDs come with hardware-based encryption, which does all the processing directly on the drive, Windows 11 Pro force-enables the software version of BitLocker during installation, without providing a clear way to opt out. (You can circumvent this with tools like Rufus, if you want, though that’s obviously not an official solution, as it allows users to bypass Microsoft’s intent.) If you bought a prebuilt PC with Windows 11 Pro, there’s a good chance software BitLocker is enabled on it right now. Windows 11 Home doesn’t support BitLocker, so you won’t have encryption enabled there.

Nothing like buying a brand new PC and realising you’re losing a ton of performance for something you might not even need on a home PC. If you want to check your own machine, running manage-bde -status from an elevated command prompt will show whether BitLocker is active and which encryption method is in use.

Raptor Computing working on new POWER systems using OpenPOWER CPU from Solid Silicon

Well, this is a pleasant surprise and a massive coincidence. Besides that BMC-focused press release, Raptor Computing Systems tweeted out that they are working on the “next generation of high performance, fully owner controlled systems! Built using the open POWER ISA 3.1, these new machines will be direct upgrades for existing POWER9 systems.” Power ISA 3.1 aligns with the new functionality IBM introduced in Power10. This is fantastic news, and it seems they’re sidestepping the IBM POWER10 binary blobs issue by relying on a different chip vendor altogether: Solid Silicon, which announced an OpenPOWER CPU, the S1, that will be used in Raptor’s upcoming systems. It seems unlikely to me that the S1 will be an entirely new, unique processor, so perhaps it’s a slightly modified IBM POWER10 design without the binary blobs. I’m incredibly excited about this news, and can’t wait to hear what they’re planning.

Intel Core i9-14900K, Core i7-14700K and Core i5-14600K review: Raptor Lake refreshed

The Intel 14th Gen Core series is somewhat of a somber swansong to the traditional and famed Core i series naming scheme, rounding off what feels like the end of an era: the shift to the upcoming Meteor Lake SoC, the impending launch of the new naming scheme (Core and Core Ultra branding), and what Intel hopes will be a groundbreaking mobile chiplet-based architecture. The crux of the analysis is that if you’re upgrading from an older and outdated desktop platform, the Intel 14th Gen series is a solid performer, but there’s still value in current 13th Gen pricing, which must be considered in the current global financial situation; some users may find a better deal. If you already have 12th or 13th Gen Core parts, then there’s absolutely no reason to upgrade or consider 14th Gen as a platform, as none of the features (mainly software) justify a sidegrade on what is ultimately the same platform and the same core architecture.

AnandTech always delivers. Unlike Intel.

OS/2 Warp, PowerPC Edition

Speaking of POWER – well, PowerPC – what about OS/2 Warp for PowerPC? What was OS/2 Warp, PowerPC Edition like? An unfinished product, rough around the edges but at the same time technically very interesting, advanced, and showing promise. Even though the OS/2 PPC release wasn’t called a beta, it is obvious that this is a beta-level product (if even that in some respects). Many features are unfinished or completely missing (networking first among them). The kernel-level code doesn’t look much like a production build and prints out quite a lot of debugging output on the serial console. The HPFS support was very unstable, and the stability of Win-OS/2 left a lot to be desired. There were too many clearly unfinished parts of the product (documentation, missing utilities, etc.). On the other hand, a large portion of the system worked well. The user interface and the graphics subsystem in general didn’t exhibit any anomalies. Multitasking was reliable and, all things considered, responsiveness was quite good for a 100MHz CPU and code that was not likely to have been performance tuned. The multimedia subsystem worked much better than I expected. Many things were much improved compared to Intel OS/2 – internationalization, the graphics subsystem, the updated console API, and so on. The system seemed to have enough raw power, even if it wasn’t harnessed too well. Boot time was rather long, but once up and running, the system was snappy (with some exceptions, notably the CD-ROM driver). To reach true production quality, the OS would have needed at least an additional six months of intense development, probably more.

I’m a tad bit jealous that some people manage to find the right hardware to run OS/2 for PowerPC, since it’s incredibly high on my list. At least I have this great article to read through every now and then, until the day I manage to get lucky myself.

IBM hints at POWER11, hopefully will fix POWER10’s firmware mess

Just as IBM was posting “future” processor compiler patches in 2019 for what ended up being early POWER10 enablement, they are once again repeating the same compiler enablement technique by sending out “PowerPC future” patches for what is likely to be POWER11. The “PowerPC future” patches sent out today are just like before, complete with mentions like “This feature may or may not be present in any specific future PowerPC processor…Again, these are preliminary patches for a potential future machine. Things will likely change in terms of implementation and usage over time.”

If this is indeed a sign that POWER11 is on its way, I really hope IBM learned from its mistake with POWER10. POWER9 was completely open, top to bottom, which made it possible for Raptor Computing Systems to build completely open source, auditable workstations where every bit of code was open source. POWER10, however, contained closed firmware for the off-chip OMI DRAM bridge and the on-chip PPE I/O processor, which meant that the principled team at Raptor resolutely said no to building POWER10 workstations, even though they wanted to. I firmly believe that if IBM tried even the littlest bit, there could be a niche but fairly stable market for POWER-based workstations, by virtue of POWER being pretty much the only fully open ISA (at least, as far as POWER9 goes). Of course, we’re not talking serious competition to x86 or ARM here, but I’ve seen more than enough interest to enable a select few OEMs to build and sell POWER workstations. Let’s hope POWER11 fixes the firmware mess that is POWER10, so that we can look forward to another line of fully open source workstations.

ANSI Terminal security in 2023 and finding 10 CVEs

This paper reflects work done in late 2022 and 2023 to audit terminal emulators for vulnerabilities, with a focus on open source software. The results of this work were 10 CVEs against terminal emulators that could result in Remote Code Execution (RCE); in addition, various other bugs and hardening opportunities were found. The exact context and severity of these vulnerabilities varied, but some form of code execution was found to be possible on several common terminal emulators across the main client platforms of today. Additionally, several new ways to exploit these kinds of vulnerabilities were found. This is the full technical write-up, and it assumes some familiarity with the subject matter; for a more gentle introduction, see my post on the G-Research site.

Some light reading for the weekend.
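The underlying hazard class is simple enough to show in a few lines: any program that echoes untrusted bytes to a terminal can end up emitting control sequences. A harmless illustration in C, assuming an xterm-compatible terminal, retitles the window via an OSC 0 sequence:

```c
/* Harmless illustration of the bug class the paper audits: bytes
   written to a terminal are interpreted, not just displayed. This
   emits OSC 0 (ESC ] 0 ; <text> BEL), which retitles an
   xterm-compatible terminal window. */
#include <stdio.h>

int main(void) {
    printf("\033]0;untrusted data was here\007");
    fflush(stdout);
    return 0;
}
```

Real attacks chain sequences like this one with sequences that reflect data back as terminal input, which is where the code execution potential comes from.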

Clever malvertising attack uses Punycode to look like KeePass’s official website

Threat actors are known for impersonating popular brands in order to trick users. In a recent malvertising campaign, we observed a malicious Google ad for KeePass, the open-source password manager, that was extremely deceptive. We previously reported on how brand impersonation is a common occurrence these days due to a feature known as tracking templates, but this attack used an additional layer of deception. The malicious actors registered a copycat internationalized domain name that uses Punycode, a special character encoding, to masquerade as the real KeePass site (the reported look-alike, xn--eepass-vbb.info, renders as ķeepass.info, with a barely visible mark under the k). The difference between the two sites is visually so subtle it will undoubtedly fool many people. We have reported this incident to Google, but would like to warn users that the ad is still currently running.

Ad blockers are security tools. This proves it once again.

Jon Stewart’s Apple TV Plus show ends, reportedly over coverage of AI and China

The Verge reports: The New York Times reports that along with concerns about some of the guests booked to be on The Problem With Jon Stewart, Stewart’s intended discussions of artificial intelligence and China were a major concern for Apple. Though new episodes of the show were scheduled to begin shooting in just a few weeks, staffers learned today that production had been halted. According to The Hollywood Reporter, ahead of its decision to end The Problem, Apple approached Stewart directly and expressed its need for the host and his team to be “aligned” with the company’s views on topics discussed. Rather than falling in line when Apple threatened to cancel the show, Stewart reportedly decided to walk. Props to Stewart for telling Apple to shove it, but this once again highlights that Apple and Tim Cook are nothing but propaganda mouthpieces for the Chinese Communist Party.

Enhanced Google Play Protect real-time scanning for app installs

Today, we are making Google Play Protect’s security capabilities even more powerful with real-time scanning at the code-level to combat novel malicious apps. Google Play Protect will now recommend a real-time app scan when installing apps that have never been scanned before to help detect emerging threats. Scanning will extract important signals from the app and send them to the Play Protect backend infrastructure for a code-level evaluation. Once the real-time analysis is complete, users will get a result letting them know if the app looks safe to install or if the scan determined the app is potentially harmful. This enhancement will help better protect users against malicious polymorphic apps that leverage various methods, such as AI, to be altered to avoid detection.

There’s a lot you can say about these kinds of security tools, but with how much access our smartphones have to our data, banking information, credit/debit cards, and so on – I don’t think it’s unreasonable at all for Google (and Apple, if they are forced to enable sideloading by the EU) to employ technologies like these. As long as the user can still somehow bypass them, or disable them altogether, possibly through some convoluted computer magic that might scare them, I don’t see any issues with this.

…that is, assuming it won’t be used for other ends. The step from “scanning for malware” to “scanning for unapproved content” like downloaded movies or whatever isn’t that far-fetched in today’s corporate world, and if totalitarian regimes get their hands on stuff like that, it could get a lot worse.

AMD unveils Ryzen Threadripper 7000 family: 96 core Zen 4 for workstations and HEDT

Being announced today for a November 21st launch, this morning AMD is taking the wraps off of their Ryzen 7000 Threadripper CPUs. These high-end chips are being split up into two product lines, with AMD assembling the workstation-focused Ryzen Threadripper 7000 Pro series, as well as the non-Pro Ryzen Threadripper 7000 series for the more consumer-ish high-end desktop (HEDT) market. Both chip lines are based on AMD’s tried and true Zen 4 architecture – derivatives of AMD’s EPYC server processors – incorporating AMD’s Zen 4 chiplets and a discrete I/O die. As with previous generations of Threadripper parts, we’re essentially looking at the desktop version of AMD’s EPYC hardware. With both product lines, AMD is targeting customer bases that need CPUs more powerful than a desktop Ryzen processor, but not as exotic (or expensive) as AMD’s server wares. This means chips with lots and lots of CPU cores – up to 96 in the case of the Threadripper 7000 Pro series – as well as support for a good deal more I/O and memory. The amount varies with the specific chip lineup, but both leave Ryzen 7000, with its 16 cores and 24 PCIe lanes, in the dust.

I’m hoping these will eventually find their way to eBay, so that around five years from now, I can replace my dual-Xeon workstation with a Threadripper machine.

CP/M-65: CP/M on the 6502

This is a native port of Digital Research’s seminal 1977 operating system CP/M to the 6502. Unlike the original, it supports relocatable binaries, allowing unmodified binaries to run on any system; this is necessary as 6502 systems tend to be much less standardised than 8080 and Z80 systems. (The systems above all load programs at different base addresses.) Currently you can cross-assemble programs from a PC, and there’s also a working C toolchain based on llvm-mos. For native development, there’s a basic assembler, a couple of editors, and a BASIC. You need about 20kB to run the assembler at all, and of course more memory the bigger the program.

The usefulness of this project is debatable, but that doesn’t make it any less cool.

Debian repeals the merged “/usr” movement moratorium

Debian 12 had aimed to ship a merged “/usr” file-system layout similar to other Linux distributions, but the Debian Technical Committee earlier this year decided to impose a moratorium on merged-/usr file movement. Now that Debian 12 has been out for a few months, and in hopes of having the merged /usr layout ready in time for Debian 13 “Trixie”, that moratorium was repealed yesterday.

I love Debian’s bureaucratic processes and procedures. I imagine all the Debian people working in a giant nondescript grey building with very few windows, somewhere along a generic highway at the edge of a boring suburb of a forgettable town.

Google thinks now is a good time to decimate its Google News team

Google cut dozens of jobs in its news division this week, CNBC has learned, downsizing at a particularly sensitive time for online platforms and publishers. An estimated 40 to 45 workers in Google News have lost their jobs, according to an Alphabet Workers Union spokesperson, who didn’t know the exact number. A Google spokesperson confirmed the cuts but didn’t provide a number, and said there are still hundreds of people working on the news product.

I’m no expert in personnel management and human resources, but with the state of the world such as it is, it seems like an incredibly inopportune time to decimate your news department, especially when you’re a tech company, which already has an absolutely abysmal track record when it comes to dealing with news and misinformation.

Google proposes new mseal() memory sealing syscall for Linux

Google is proposing a new mseal() memory sealing system call for the Linux kernel. Google intends for this architecture-independent system call to be used initially by the Google Chrome web browser on Chrome OS, while experiments are underway for use by glibc in the dynamic linker to seal all non-writable segments at startup. The discussion is ongoing, so you can read the original proposed patchset and go from there.
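To make the idea concrete, here’s a hedged sketch in C of what using the proposed call could look like: seal a read-only mapping so later attempts to change its permissions fail. Since this is still a proposal, there is no glibc wrapper, and the syscall number used below is purely a placeholder assumption, not something taken from the patchset.

```c
/* Sketch of the proposed mseal() semantics: once a mapping is sealed,
   operations like mprotect()/munmap() on it are refused. __NR_mseal
   below is a placeholder assumption; no number was assigned yet. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462 /* placeholder assumption */
#endif

int main(void) {
    size_t len = 4096;
    void *p = mmap(NULL, len, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Seal the mapping; on kernels without mseal() this fails with ENOSYS. */
    if (syscall(__NR_mseal, p, len, 0UL) != 0) {
        perror("mseal");
        return 1;
    }

    /* With the mapping sealed, this mprotect() is expected to fail. */
    if (mprotect(p, len, PROT_READ | PROT_WRITE) != 0)
        perror("mprotect refused, as intended");
    return 0;
}
```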

Windows adds support for hearing aids with Bluetooth LE Audio

We’re excited to announce that Windows has taken a significant step forward in accessibility by supporting the use of hearing aids equipped with the latest Bluetooth® Low Energy Audio (LE Audio) technology. Customers who use these new hearing aids are now able to directly pair, stream audio, and take calls on their Windows PCs with LE Audio support. This feature is available on Windows devices with our recently announced Bluetooth® LE Audio support, which will be a growing market of devices in the coming months. In upcoming flights, we will be introducing additional capabilities to the hearing aids experience on Windows, such as controlling audio presets directly within Windows settings. Stay tuned for more details about these new capabilities as they roll out.

Excellent news for people who manage their hearing problems with hearing aids. The fact it’s taken the industry this long to realise the potential of connecting hearing aids to computers and phones is surprising, but regulation and Bluetooth’s reputation probably played a role in that. Regardless, this is a great step by Microsoft, and I hope other platforms follow suit.

Windows 11 vs. Ubuntu 23.10 performance on the Lenovo ThinkPad P14s Gen 4

Out of 72 benchmarks run in total on both operating systems with the Lenovo ThinkPad P14s Gen 4, Ubuntu 23.10 was the fastest about 64% of the time. Taking the geometric mean of all the benchmark results, Ubuntu 23.10 comes out 10% faster than the stock Windows 11 Pro install as shipped by Lenovo on this AMD Ryzen 7 PRO 7840U laptop.

I recently bought a laptop, and the stock Windows installation – free of OEM crapware, which was a welcome surprise – opened applications and loaded webpages considerably slower than Fedora KDE did. This has not always been the case, and I’m pleasantly surprised that while the desktop Linux world has focused a lot on performance, Microsoft was busy making Windows even less pleasant than it already was. I wouldn’t be surprised if, across all price/performance levels, Linux is faster and snappier than Windows – except maybe at the absolute brand-new high-end, since AMD, Intel, and NVIDIA entirely understandably focus on Windows performance first.
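For reference, here’s a minimal sketch in C of how a geometric mean across benchmark results is computed – the summary statistic behind that 10% figure – done in log space to avoid overflow. The ratios below are made-up placeholder values, not Phoronix data.

```c
/* Geometric mean of n benchmark ratios: exp(mean of the logs).
   The values here are hypothetical, for illustration only.
   Compile with: cc geomean.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    double ratios[] = { 1.08, 1.15, 0.97, 1.12, 1.21 };
    int n = sizeof(ratios) / sizeof(ratios[0]);
    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(ratios[i]);
    printf("geometric mean: %.3f\n", exp(log_sum / n));
    return 0;
}
```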