Even John Siracusa thinks Tim Cook should step down

John Siracusa, one third of the excellent ATP podcast, developer of several niche Mac utilities, and author of some of the best operating system reviews of all time, has called for Apple’s CEO, Tim Cook, to step down. Now, countless people call for Tim Cook to stand down all the time, but when someone like Siracusa, an ardent Mac user since the release of the very first Macintosh and a staple of the Apple community, makes such a call, it carries a bit more weight.

His main argument is not particularly surprising to anyone who’s been keeping tabs on the Apple community, and the Apple developer community in particular: Apple seems to no longer focus on making great products, but on making money. Every decision made by Apple’s leadership team is focused solely on extracting as much money as possible from consumers and developers, instead of on making the best possible products.

The best leaders can change their minds in response to new information. The best leaders can be persuaded. But we’ve had decades of strife, lawsuits, and regulations, and Apple has stubbornly dug in its heels even further at every turn. It seems clear that there’s only one way to get a different result.

In every healthy entity, whether it’s an organization, an institution, or an organism, the old is replaced by the new: CEOs, sovereigns, or cells. It’s time for new leadership at Apple. The road we’re on now does not lead anywhere good for Apple or its customers. It’s springtime, and I’m choosing to believe in new life. I swear it’s not too late.

↫ John Siracusa

I reached this same point with Apple a long, long time ago. I was an ardent Mac user during the PowerPC G4 and G5 days, lasting into the early Intel days. However, as the iPhone and related services took over as Apple’s primary source of income, I felt that Mac OS X, which I once loved and enjoyed so much, started to languish, and it’s been downhill for Apple’s desktop operating system ever since. Whenever I have to help my parents with their computers – modern M1 and M2 Macs – I am baffled and saddened by just how big of a convoluted, disjointed, and unintuitive mess macOS has become.

I long ago stopped caring about whatever products Apple releases or updates, because I feel like as a user who genuinely cares about his computing experience, Apple simply doesn’t make products for me. I’m not sure replacing Tim Cook with someone else will really change anything about Apple’s priorities; in the end, it’s a publicly traded corporation that thinks it needs to please shareholders, and a focus on great products instead of money isn’t going to help with that.

Apple long ago stopped being the beleaguered company many of its most ardent fans still seem convinced that it is, and it’s now one of those corporate monoliths that can make billions more overnight by squeezing just a bit more out of developers or users, regardless of what that squeezing does to the user experience. Apple is still selling more devices than ever, and it’s still raking in more gambling gains through digital slot machines for children, and as long as that’s the case, replacing Tim Cook won’t do a goddamn thing.

“AI” automated PR reviews mostly useless junk

The team that makes Cockpit, the popular server dashboard software, decided to see if they could improve their PR review processes by adding “AI” into the mix. They decided to test both sourcery.ai and GitHub Copilot PR reviews, and their conclusions are damning.

About half of the AI reviews were noise, a quarter bikeshedding. The rest consisted of about 50% useful little hints and 50% outright wrong comments. Last week we reviewed all our experiences in the team and eventually decided to switch off sourcery.ai again. Instead, we will explicitly ask for Copilot reviews for PRs where the human deems it potentially useful.

This outcome reflects my personal experience with using GitHub Copilot in vim for about 1.5 years – it’s a poisoned gift. Most often it just figured out the correct sequence of ), ], and } to close, or automatically generated debug print statements – for that “typing helper” work it was actually quite nice. But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me.

↫ Martin Pitt

“AI” companies and other proponents of “AI” keep telling us that these tools will save us time and make things easier, but every time someone actually sits down and does the work of testing “AI” tools out in the field, the end results are almost always the same: they just don’t deliver the time savings and other advantages we’re being promised, and more often than not, they just create more work for people instead of less. Add in the financial costs of using and running these tools, as well as the energy they consume, and the conclusion is clear.

When the lack of effectiveness of “AI” tools out in the real world is brought up, proponents inevitably resort to “yes it sucks now, but just you wait on the next version!” Then that next version comes, people test it out in the field again, and it’s still useless, and those same proponents again resort to “yes it sucks now, but just you wait on the next version!”, like a broken record. We’re several years into the hype, and that mythical “next version” still isn’t here.

We’re several years into the “AI” hype, and I still have seen no evidence it’s not a dead end and a massive con.

Google requires Android applications on Google Play to support 16 KB page sizes

About a year ago, we talked about the fact that Android 15 became page size-agnostic, supporting both 4 KB and 16 KB page sizes. Google was already pushing developers to get their applications ready for 16 KB page sizes, which means recompiling for 16 KB alignment and testing on a 16 KB version of an Android device or simulator. Google is taking the next step now, requiring that every application targeting Android 15 or higher submitted to Google Play after 1 November 2025 must support a page size of 16 KB.

This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform’s future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases.

↫ Dan Brown on the Android Developers Blog

This is mostly relevant for developers rather than users, but in the extremely unlikely scenario that one of your favourite applications cannot be made to work with 16 KB page sizes for some weird reason, or the developer refuses to support it for some even weirder reason, you might have to say goodbye to that application if you use Android 15 or higher. This is absurdly unlikely, but I wouldn’t be surprised if it happens to at least one application.
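For the curious, the page size an app actually runs under is easy to inspect at runtime. A quick Python sketch (the constant names are standard POSIX/Python, nothing Android-specific):

```python
import mmap
import os

# Ask the kernel what page size it is actually using. On current x86 and
# most ARM devices this reports 4096; on a 16 KB Android device it would
# report 16384.
page_size = os.sysconf("SC_PAGE_SIZE")
print(f"page size: {page_size} bytes ({page_size // 1024} KB)")

# Python's mmap module exposes the same value. Native code that
# hard-codes 4096 instead of querying it is exactly what breaks on
# 16 KB devices.
assert page_size == mmap.PAGESIZE
```

Native code gets the same number from `sysconf(_SC_PAGESIZE)` or `getpagesize()`; the recompilation Google asks for is about making sure ELF segments are 16 KB-aligned and no such hard-coded assumption remains.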

If that happens, I want to know which application that is, and ask the developer for their story.

Introducing Mac Themes Garden

I’ve “launched” the Mac Themes Garden! It is a website showcasing more than 3,000 (and counting) Kaleidoscope schemes from the Classic Mac era, ready to be seen, downloaded, and explored! Check it out! Oh, and there also is an RSS feed you can subscribe to see themes as they are added/updated!

↫ Damien Erambert

If you’ve spent any time on retrocomputing-related social media channels, you’ve definitely seen the old classic Mac OS themes in your timeline. They are exquisitely beautiful artifacts of a bygone era, and the work Damien Erambert has been doing to make these easily available and shareable, entirely in his free time, is awesome and a massive service to the retrocomputing community.

The process to get these themes loaded up onto the website is actually a lot more involved than you might imagine. It involves a classic Mac OS virtual machine, applying themes, taking screenshots, collecting creator information, and adding everything to a database. This process is mostly manual, and Erambert estimates he’s about halfway done.

If you have classic Mac OS running somewhere, on real hardware or in a virtual machine, you can now easily theme it to your heart’s content.

Reverse-engineering Fujitsu M7MU RELC hardware compression

This is a follow-up to the Samsung NX mini (M7MU) firmware reverse-engineering series. This part is about the proprietary LZSS compression used for the code sections in the firmware of Samsung NX mini, NX3000/NX3300 and Galaxy K Zoom. The post is documenting the step-by-step discovery process, in order to show how an unknown compression algorithm can be analyzed. The discovery process was supported by Igor Skochinsky and Tedd Sterr, and by writing the ideas out on encode.su.

↫ Georg Lukas

It’s not quite the weekend yet, but here’s some light reading ahead of time.
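As background for the post: LZSS decoders all share the same skeleton – a flag bitmask chooses between literal bytes and (offset, length) back-references into already-decoded output. Here’s a toy Python decoder for one common byte-oriented layout (12-bit offset, 4-bit length); to be clear, this layout is my own illustration, not the actual M7MU format, which is precisely what the post has to reverse-engineer:

```python
def lzss_decompress(data: bytes) -> bytes:
    """Decode a simple byte-oriented LZSS stream.

    Assumed layout: a flag byte precedes every 8 items. A set bit means
    a literal byte follows; a clear bit means a 2-byte back-reference
    follows, packing a 12-bit offset and a 4-bit length (min match 3).
    """
    out = bytearray()
    i = 0
    while i < len(data):
        flags = data[i]
        i += 1
        for bit in range(8):
            if i >= len(data):
                break
            if flags & (1 << bit):
                # Literal: copy the byte through unchanged.
                out.append(data[i])
                i += 1
            else:
                # Back-reference: re-read bytes we already emitted.
                lo, hi = data[i], data[i + 1]
                i += 2
                offset = lo | ((hi & 0xF0) << 4)  # 12-bit distance
                length = (hi & 0x0F) + 3          # 4-bit length, min 3
                for _ in range(length):
                    out.append(out[-offset])      # handles overlaps
    return bytes(out)
```

The whole reverse-engineering job in the linked post is working out how the real firmware packs those flag bits, offsets, and lengths.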

Microsoft changes pre-production driver signing, ends the device metadata service

As the headline suggests, we’re going to be talking about some very dry Windows stuff that only affects a relatively small number of people, but for those people this is a big deal they need to address. If you’re working on pre-production drivers that need to be signed, this is important to you.

The Windows Hardware Program supports partners signing drivers for use in pre-production environments. The CA that is used to sign the binaries for use in pre-production environments on the Windows Hardware Program is set to expire in July 2025, following which a new CA will be used to sign the preproduction content starting June 9, 2025.

↫ Hardware Dev Center

Alongside the new CA come a bunch of changes to the rules. First and foremost, expiry of signed drivers will no longer be tied to the expiry of the underlying CA, so any driver signed with the new CA will not expire, regardless of what happens to the CA. In addition, on April 22, May 13, and June 10, 2025, Windows servicing releases (4D/5B/6B) will be shipped to Windows versions (down to Windows Server 2008) to replace the old CAs with the new ones. As such, if you’re working on pre-production drivers, you need to install those latest cumulative updates.

On a very much related note, Microsoft has announced it’s retiring device metadata and the Windows Metadata and Internet Services (WMIS). This is what allowed OEMs and device makers to include things like device names, custom device icons, and other information in the form of an XML file. While OEMs can no longer create new device metadata this way, existing metadata already installed on Windows clients will remain functional. As a replacement for this functionality, Microsoft points to the driver’s INF files, where such information and icons can also be included.

Riveting stuff.

openSUSE removes Deepin from its repositories after long string of security issues and unauthorised security bypass

The openSUSE team has decided to remove the Deepin Desktop Environment from openSUSE, after the project’s packager for openSUSE was found to have added a workaround specifically to bypass various security requirements openSUSE has in place for RPM packages.

Recently we noticed a policy violation in the packaging of the Deepin desktop environment in openSUSE. To get around security review requirements, our Deepin community packager implemented a workaround which bypasses the regular RPM packaging mechanisms to install restricted assets.

As a result of this violation, and in the light of the difficult history we have with Deepin code reviews, we will be removing the Deepin Desktop packages from openSUSE distributions for the time being.

↫ Matthias Gerstner

Matthias Gerstner goes into great detail to lay out every single time the openSUSE team found massive, glaring security issues in Deepin, and the complete lack of adequate responses from the Deepin upstream team over the past 8 or so years. It’s absolutely shocking to see how utterly lax the Deepin developers have been regarding the security of their desktop environment and its dependencies, and the openSUSE team could really only come to one harsh conclusion: Deepin has no security culture whatsoever, and it’s extremely likely that every corner of the Deepin code is riddled with very serious security issues.

As such, despite the relatively large number of Deepin users on openSUSE, the team has decided to remove Deepin from openSUSE entirely, instead pointing users to a third-party repository if they desire to keep using Deepin. I think this is the best possible option in this situation, but it’s not exactly ideal. After reading this entire saga, however, I don’t think anyone who cares about security should be using Deepin.

Of course, I doubt this will be the end of the story. What about all the other Linux distributions out there? The security issues in Deepin itself are most likely also present in Debian, Fedora, and other distributions that have the Deepin Desktop Environment in their repositories, but what about the workaround to bypass packaging security practices? Does that exist elsewhere as well?

I think we’re about to find out.

curl bans “AI” security reports as Zuckerberg claims we’ll all have more “AI” friends than real ones

Daniel Stenberg, creator and maintainer of curl, has had enough of the neverending torrent of “AI”-generated security reports the curl project has to deal with.

That’s it. I’ve had it. I’m putting my foot down on this craziness.

1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: “Did you use an AI to find the problem or generate this submission?” (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)

2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.

We still have not seen a single valid security report done with AI help.

↫ Daniel Stenberg

This is the real impact of “AI”: streams of digital trash real humans have to clean up. While proponents of “AI” keep claiming it will increase productivity, actual studies show this not to be the case. Instead, what “AI” is really doing is creating more work for others to deal with by barfing useless garbage into other people’s backyards. It’s like the digital version of the western world sending its trash to third-world countries to deal with.

The best possible sign that “AI” is a toxic trash heap you wouldn’t want to have anything to do with are the people fighting for team “AI”.

In Zuckerberg’s vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances.

↫ Meghan Bobrowsky at the WSJ

Mark Zuckerberg, who built his empire by using people’s photos without permission so he could rank who was hotter, who used Facebook logins to break into journalists’ email accounts because they were about to publish a negative story about him, who called Facebook users “dumb fucks” for entrusting their personal information to him, is on the forefront fighting for “AI”. If that isn’t the ultimate proof there’s something deeply wrong and ethically unsound about “AI”, I don’t know what is.

TDE’s Qt 3 fork drops the 3

The Trinity Desktop Environment, the continuation of the final KDE 3.x release updated and maintained for modern times, consists of more than just the KDE bits you may think of. The project also maintains a fork of Qt 3 called TQt3, which it obviously needs in order to work on and improve TDE itself, since TDE is built on top of it. In the beginning, this fork consisted mainly of renaming things, but in recent years, more substantial changes meant that the code diverged considerably from the original Qt 3. As such, a small name change is in order.

TQt3 was born as a fork of Qt3 and for many years it was little more than a mere renaming effort. Over the past few years, many changes were made and the code has significantly diverged from the original Qt3, although still sharing the same roots. With more changes planned ahead and with the intention of better highlighting such difference, the TDE team has decided to drop the ‘3’ from the repository name, which is now simply called ‘TQt‘.

↫ TDE on Mastodon

The effect this has on users is rather minimal – users of the current 14.1.x release branch will still see 3s around in file paths and package names, but in future 14.2.x releases, all of these will have been removed, completing the transition.

This seems like a small change, and that’s because it is, but it’s interesting simply because it highlights that a project that seems relatively straightforward on the outside – maintain and carefully modernise the final KDE 3.x release – encompasses a lot more than that. Maintaining an entire Qt 3 fork certainly isn’t a small feat, but it’s kind of required to keep a project like TDE going.

VectorVFS: your filesystem as a vector database

VectorVFS is a lightweight Python package that transforms your Linux filesystem into a vector database by leveraging the native VFS (Virtual File System) extended attributes. Rather than maintaining a separate index or external database, VectorVFS stores vector embeddings directly alongside each file—turning your existing directory structure into an efficient and semantically searchable embedding store.

VectorVFS supports Meta’s Perception Encoders (PE) [arxiv] which includes image/video encoders for vision language understanding, it outperforms InternVL3, Qwen2.5VL and SigLIP2 for zero-shot image tasks. We support both CPU and GPU but if you have a large collection of images it might take a while in the first time to embed all items if you are not using a GPU.

↫ Christian S. Perone

It won’t surprise many of you that this goes a bit above my paygrade, but according to my limited understanding, VectorVFS stores information about files inside the xattr part of inodes. The information being stored is converted into vectors first, and this is the part that breaks my brain a bit, because vectors in this context are far too complex for me to understand.

I vaguely understand the end result here – making files searchable using vector magic without using a dedicated database or separate files by using extended attributes in inodes – but the process is far more complicated to understand. It still seems like a very interesting approach, though, and I’d love for people smarter than me to take VectorVFS apart and explain it in easier terms for those of us who don’t fully grasp it.
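To make the xattr part a bit more concrete, here’s a minimal Python sketch of the general idea: pack an embedding into bytes and attach it to the file’s inode as an extended attribute. The attribute name and float32 layout here are my own illustration, not VectorVFS’s actual on-disk format, and `os.setxattr` is Linux-only:

```python
import os
import struct

ATTR = "user.demo.embedding"  # hypothetical attribute name

def vector_to_bytes(vec: list[float]) -> bytes:
    """Pack an embedding as little-endian float32 values."""
    return struct.pack(f"<{len(vec)}f", *vec)

def bytes_to_vector(blob: bytes) -> list[float]:
    """Unpack a float32 blob back into a list of floats."""
    count = len(blob) // 4
    return list(struct.unpack(f"<{count}f", blob))

def store_embedding(path: str, vec: list[float]) -> bool:
    """Attach the vector to the file's inode via an extended attribute.

    Returns False if the filesystem doesn't support user xattrs.
    """
    try:
        os.setxattr(path, ATTR, vector_to_bytes(vec))
        return True
    except OSError:
        return False

def load_embedding(path: str) -> list[float]:
    """Read the vector straight back out of the inode's xattr."""
    return bytes_to_vector(os.getxattr(path, ATTR))
```

The search side then just walks the directory tree, reads each file’s xattr back, and ranks files by similarity (e.g. cosine distance) against a query vector – no separate index files or database needed, which is the whole trick.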

Redox gets services management, completes userspace process manager

Can someone please stop these months from coming and going, because I’m getting dizzy with yet another monthly report of all the progress made by Redox. Aside from the usual swath of improvements to the kernel, relibc, drivers, and so on, this month saw the completion of the userspace process manager.

In monolithic kernels this management is done in the kernel, resulting in necessary ambient authority, and possibly constrained interfaces if a stable ABI is to be guaranteed. With this userspace implementation, it will be easier to manage access rights using capabilities, reduce kernel bugs by keeping it simpler, and make changes where both sides of the interface can be updated simultaneously.

↫ Ribbon and Ron Williams

Students at Georgia Tech have been hard at work this winter on Redox as well, building a system health monitoring and recovery daemon and user interface. The Redox team has also done a lot of work to improve the build infrastructure, fixing a number of related issues along the way. The sudo daemon has now replaced the setuid bit for improved user authentication security, and a ton of existing ports have been fixed and updated where needed.

Redox’ monthly progress is kind of stunning, and it’s clear there’s a lot of interest in the Rust-based operating system from outside the project itself as well. I wonder at what point Redox becomes usable for at least some daily, end-user tasks. I think it’s not quite there yet, especially when it comes to hardware support, but I feel like it’s getting there faster than anyone anticipated.

Google accidentally reveals Android’s Material 3 Expressive interface ahead of I/O

Google’s accelerated Android release cycle will soon deliver a new version of the software, and it might look quite different from what you’d expect. Amid rumors of a major UI overhaul, Google seems to have accidentally published a blog post detailing “Material 3 Expressive,” which we expect to see revealed at I/O later this month. Google quickly removed the post from its design site, but not before the Internet Archive saved it.

↫ Ryan Whitwam at Ars Technica

Google seems to be very keen on letting us know this new redesign is based on a lot of user research and metrics, which always sets off alarm bells in my mind when it comes to user interfaces. Every single person uses their smartphone and its applications a little differently, and using tons of metrics and data to average all of this out can make it so that anyone who strays too far from that average is going to have a bad time. This is compounded by the fact that each and every one of us is going to stray from the average in at least a few places.

Google also seems to be throwing consistency entirely out of the window with this redesign, which chills me to the bone. One of the reasons I like the current iteration of Material Design so much is that it does a great job of visually (and to a lesser extent, behaviourally) unifying the operating system and the applications you use, which I personally find incredibly valuable. I very much prefer consistency over disparate branding, and the screenshots and wording I’m seeing here seem to indicate Google considers that a problem that needs fixing.

As with everything UI, screenshots don’t tell the whole story, so maybe it won’t be so bad. I mean, it’s not like I’ve got anywhere else to go in case Google messes this up. Monopolies (or duopolies) are fun.

IBM unveils the LinuxONE Emperor 5

Following the recent release of the IBM z17 mainframe, IBM today unveiled the LinuxONE Emperor 5, which packs much of the same hardware as the z17, but focused on Linux use.

Today we’re announcing IBM LinuxONE 5, performant Linux computing platform for data, applications and your trusted AI, powered by the IBM Telum II processor with built-in AI acceleration. This launch comes at a pivotal time, as technology leaders focus on three critical imperatives: enabling security, improving cost-efficiency, and integrating AI into enterprise systems.

↫ Marcel Mitran and Tina Tarquinio

Yes, much like the z17, the LinuxONE 5 is a huge “AI” buzzword bonanza, but that’s to be expected in this day and age. The LinuxONE 5, which, again, few of us will ever get to work with, officially supports Red Hat, OpenSUSE, and Ubuntu, but a variety of other Linux distributions offer support for IBM’s Z hardware as well.

Building your own Atomic (bootc) Desktop

Bootc and associated tools provide the basis for building a personalised desktop. This article will describe the process to build your own custom installation.

↫ Daniel Mendizabal at Fedora Magazine

The fact that atomic distributions make it relatively easy to create custom “distributions” is a really interesting bonus quality of these types of Linux distributions. The developers behind Blue95, which we talked about a few weeks ago, based their entire distribution on this bootc personalised desktop approach using Fedora, and they argue that the term “distribution” probably isn’t the correct term here:

Blue95 is a collection of scripts and YAML files cobbled together to produce a Containerfile, which is built via GitHub Actions and published to the GitHub Container Registry. Which part of this process elevates the project to the status of a Linux distribution? What set of RUN commands in the Containerfile take the project from being merely a Fedora-based OCI image to a full-blown Linux distribution?

↫ Adam Fidel

While this discussion is mostly academic, I still find it interesting how with the march of technology, and with the aid of new ideas, it’s becoming easier and easier to spin up a customised version of your favourite Linux distribution, making it incredibly easy to have your own personal ISO, with all your settings, themes, and customisations applied. This has always been possible, but it seems to be getting easier.
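The “collection of scripts and YAML files cobbled together to produce a Containerfile” really is about this small. A hedged sketch of what such a bootc Containerfile can boil down to (the base image tag and package picks here are illustrative, not Blue95’s actual recipe):

```dockerfile
# Start from Fedora's bootable-container base image.
FROM quay.io/fedora/fedora-bootc:41

# Layer your desktop, tools, and tweaks on top, like any other OCI image.
RUN dnf -y install gnome-shell firefox distrobox && \
    dnf clean all
```

Build it, push it to a registry, and `bootc` can install or switch a machine to that image – which is why the “is this a distribution?” question is genuinely fuzzy.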

Atomic, immutable distributions are not for me, personally, but I firmly believe most distributions focusing on average, normal users – Ubuntu, Fedora, SUSE – will eventually move their immutable variants to the prime spot on their web sites. This will make a whole lot of people big mad, but I think it’s inevitable. Of course, traditional Linux distributions won’t be going away, but much like how people keep complaining about systemd despite the tons of alternatives, I’m guessing the same will happen with immutable distributions.

GTK markup language Blueprint becomes part of GNOME

This week’s This Week in GNOME mentions that Blueprint will become part of GNOME.

Blueprint is now part of the GNOME Nightly SDK and is expected to be part of the GNOME 49 SDK. This means, apps relying on Blueprint won’t have to install it manually anymore.

Blueprint is an alternative to defining GTK/Libadwaita user interface via .ui XML-files (GTK Builder files). The goal of blueprint is to provide UI definitions that require less boilerplate than XML and are easier to learn. Blueprint also provides a language server for IDE integration.

↫ Sophie Herold

Quite a few applications already make use of Blueprint, and even some Core GNOME applications use it, so it seems logical to make it part of the default GNOME installation.
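To give an idea of the boilerplate reduction: where a GTK Builder .ui file wraps every widget in `<interface>`/`<object>`/`<child>` XML, the Blueprint equivalent reads roughly like this (a sketch based on Blueprint’s documented syntax, not taken from any particular app):

```blueprint
using Gtk 4.0;

Box {
  orientation: vertical;
  spacing: 12;

  Label {
    label: _("Hello from Blueprint");
  }

  Button {
    label: _("Click me");
  }
}
```

The `blueprint-compiler` tool turns this into the ordinary GTK Builder XML at build time, so GTK itself never needs to know Blueprint exists.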

EU fines TikTok token amount of €530 million for gross privacy violations

A European Union privacy watchdog fined TikTok 530 million euros ($600 million) on Friday after a four-year investigation found that the video sharing app’s data transfers to China put users at risk of spying, in breach of strict EU data privacy rules.

Ireland’s Data Protection Commission also sanctioned TikTok for not being transparent with users about where their personal data was being sent and ordered the company to comply with the rules within six months.

↫ Kelvin Chan for AP News

In case you’re wondering what Ireland’s specific role in this case is, TikTok’s European headquarters are located in Ireland, which means that any EU-wide privacy violations by TikTok are handled by Ireland’s privacy watchdog.

Anyway, sounds like a big fine, right? Let’s do some math.

TikTok’s global revenue last year is estimated at €20 billion. This means that a €530 million fine is 2.65% of TikTok’s global yearly revenue. Now let’s make this more relatable for us normal people. The yearly median income in Sweden is €34,365 (pre-tax), which means that if the median income Swede had to pay a fine with the same impact as the TikTok fine, they’d have to pay €910.
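The arithmetic is easy to check (figures as given above: estimated revenue, fine, and Swedish median income):

```python
tiktok_revenue = 20_000_000_000   # estimated global yearly revenue, EUR
fine = 530_000_000                # EU fine, EUR

fine_share = fine / tiktok_revenue
print(f"fine as share of revenue: {fine_share:.2%}")  # 2.65%

# Scale that same share down to a median Swedish pre-tax income.
median_income = 34_365            # EUR per year
equivalent_fine = fine_share * median_income
print(f"equivalent personal fine: EUR {int(equivalent_fine)}")  # EUR 910
```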

That’s how utterly bullshit this fine is. €910 isn’t nothing if you make €34,000 per year, but would you call this a true punishment for TikTok? Any time you read about any of these corporate fines, you should do math like this to get an idea of what the true impact of the fine really amounts to. You’ll be surprised to learn just how utterly toothless they are.

Microsoft brings back Office application preloading from the ’90s

Back in the late ’90s and early 2000s, if you installed a comprehensive office suite on Windows, such as Microsoft’s own Office or something like WordPerfect Office or IBM Lotus SmartSuite, it would often come with a little icon in the system tray or a floating toolbar to ensure the applications were preloaded upon logging into Windows. The idea was that this preloading would ensure that the applications would start faster.

It’s 2025, and Microsoft is bringing it back. In a message in the Microsoft 365 Message Center Archive, which is a real thing I didn’t make up, the company announced a new Startup Boost task that will preload Office applications on Windows to reduce loading times for the individual Office applications.

We are introducing a new Startup Boost task from the Microsoft Office installer to optimize performance and load-time of experiences within Office applications. After the system performs the task, the app remains in a paused state until the app launches and the sequence resumes, or the system removes the app from memory to reclaim resources. The system can perform this task for an app after a device reboot and periodically as system conditions allow.

↫ MC1041470 – New Startup Boost task from Microsoft Office installer for Office applications

This new task will automatically be added to the Task Scheduler, but only on PCs with 8GB of RAM or more and at least 5GB of available disk space. The task will run 10 minutes after logging into Windows, will be disabled if the Energy Saver feature is enabled, and will be removed if you haven’t used Office in a while. The initial rollout of this task will take place in May, and will cover Word only for now. The task can be disabled manually through Task Scheduler or in Word’s settings.

Since this is Microsoft, every time Office is updated, the task will be re-enabled, which means that users who disable the feature will have to disable it again after each update. This particular behaviour can be disabled using Group Policy. Yes, the sound you’re hearing are all the “AI” text generators whirring into motion as they barf SEO spam onto the web about how to disable this feature to speed up your computer.

I’m honestly rather curious who this is for. I have never found the current crop of Office applications to start up particularly slowly, but perhaps corporate PCs are so full of corpo-junkware they become slow again?

DragonFlyBSD 6.4.1 released

It has been well over two years since the last release of DragonFlyBSD, version 6.4.0, and today the project pushed out a small update, DragonFlyBSD 6.4.1. It fixes a few small, longstanding issues, but as the version number suggests, don’t expect any groundbreaking changes here. The legacy IDE/NATA driver had a memory leak fixed, the ca_root_nss package has been updated to support newer Let’s Encrypt certificates, the package update command will no longer delete an important configuration file that rendered the command unusable, and more small fixes like that.

Existing users can update the usual way.

Zhaoxin’s KX-7000 x86-64 processor

Chips and Cheese takes a very detailed look at the latest processor design from Zhaoxin, the Chinese company that inherited VIA’s x86 license and has been making new x86 chips ever since. Their latest design, 世纪大道 (Century Avenue), tries to take yet another step closer to current chip designs from Intel and AMD, and while it falls way short, that’s not really the point here.

Ultimately performance is what matters to an end-user. In that respect, the KX-7000 sometimes falls behind Bulldozer in multithreaded workloads. It’s disappointing from the perspective that Bulldozer is a 2011-era design, with pairs of hardware threads sharing a frontend and floating point unit. Single-threaded performance is similarly unimpressive. It roughly matches Bulldozer there, but the FX-8150’s single-threaded performance was one of its greatest weaknesses even back in 2011. But of course, the KX-7000 isn’t trying to impress western consumers. It’s trying to provide a usable experience without relying on foreign companies. In that respect, Bulldozer-level single-threaded performance is plenty. And while Century Avenue lacks the balance and sophistication that a modern AMD, Arm, or Intel core is likely to display, it’s a good step in Zhaoxin’s effort to break into higher performance targets.

↫ Chester Lam at Chips and Cheese

I find Chinese processors, like the x86-based ones from Zhaoxin or the recent LoongArch processors (which you can buy on AliExpress), incredibly fascinating, and would absolutely love to get my hands on one. A board with two of the most recent LoongArch processors – the 3c6000 – goes for about €4000 at the moment, and I’m keeping my eye on that price to see if there’s ever going to be a sharp drop. This is prime OSNews material, after all.

No, they’re not competitive with the latest offerings from Intel, AMD, or ARM, but I don’t really care – they interest me as a computer enthusiast, and since it’s highly unlikely we’re going to see anyone seriously threaten Intel, AMD, and ARM here in the west, you’re going to have to look at China if you’re interested in weird architectures and unique processors.