All weekend, I’ve been mulling over a recent blog post by Fedora Project Leader Matthew Miller, which he wrote and published on behalf of the Fedora Council. Fedora (the KDE version) is my distribution of choice; I love using it, and I consider it the best distribution for desktop use, and not by a close margin either. As such, reading a blog post in which Fedora announces plans to make extensive use of “AI” was bound to make me feel a little uneasy.
Miller states – correctly – that the “AI” space as it stands right now is dominated so much by hyperbole and over-the-top nonsense that it’s hard to judge the various technologies underpinning “AI” on merit alone. He continues that he believes that stripped of all the hyperbole and techbro bullshit, there’s “something significant, powerful”, and he wants to make “Fedora Linux the best community platform for AI”.
So, what exactly does that look like?
In addition to the big showy LLM-based tools for chat and code generation, these advances have brought big jumps for more tailored tasks: for translation, file search, home automation, and especially for accessibility (already a key part of our strategy). For example, open source speech synthesis has long lagged behind proprietary options. Now, what we have in Fedora is not even close to the realism, nuance, and flexibility of AI-generated speech.
↫ Matthew Miller
Some of these are things we can all agree are important and worthwhile, but lacking on the Linux desktop. If we can make use of technologies labelled as “AI” to improve, say, text-to-speech on Linux for those who require it for accessibility reasons, that’s universally a great thing. Translation, too, is, at its core, a form of accessibility, and if we can improve machine translations so that people who, for instance, don’t speak English gain more access to English content, or if we can make the vast libraries of knowledge locked into foreign languages accessible to more people, that’s all good news.
However, Fedora aims to take its use of “AI” even further, and wants to start using it in the process of developing, making, and distributing Fedora. This is where more and more red flags are starting to pop up for me, because I don’t feel like the processes and tasks they want to inject “AI” into are the kinds of processes and tasks where you want humans taken out of the equation.
We can use AI/ML as part of making the Fedora Linux OS. New tools could help with package automation and bug triage. They could note anomalies in test results and logs, maybe even help identify potential security issues. We can also create infrastructure-level features for our users. For example, package update descriptions aren’t usually very meaningful. We could automatically generate concise summaries of what’s new in each system update — not just for each package, but highlighting what’s important in the whole set, including upstream change information as well.
↫ Matthew Miller
Even the tools built atop billions and billions of euros of investments by Microsoft, Google, OpenAI, Facebook, and similar juggernauts are not exactly good at what they’re supposed to do, and suck at even the most basic task of providing answers to simple questions. They lie, they make stuff up, they bug out and produce nonsense, they’re racist, and so on. I don’t want any of that garbage near the process of making and updating the operating system I rely on every day.
Miller laments how “AI” is currently a closed-source, black box affair, which obviously doesn’t align with Fedora’s values and goals. He doesn’t actually explain how Fedora’s use of “AI” is going to address this. They’re going to have to find ethical, open source models that are also of high quality, and that’s a lot easier said than done. Sourcing doesn’t even get a single mention in this blog post, even though I’m fairly sure that’s one of the two major issues many of us have with the current crop of “AI” tools.
The blog post also completely neglects to mention the environmental cost of training these “AI” tools. It costs an insane amount of electricity to train these new tools, and with climate change ever accelerating and the destruction of our environment visible all around us, not mentioning this problem when you’re leading a project like Fedora seems disingenuous at best, and malicious at worst.
While using “AI” to improve accessibility tools in Fedora and the wider Linux world is laudable, some of the other intended targets seem more worrisome, especially when you take into account that the blog post makes no mention of the two single biggest problems with “AI”: sourcing, and its environmental impact. If Fedora truly intends to fully embrace “AI”, it’s going to have to address these two problems first, because otherwise they’re just trying to latch onto the hype without really understanding the cost.
And that’s not something I want to hear from the leaders of my Linux distribution.
Yes, AI has problems, and one of them is proprietary tools and models. This is something FOSS needs to figure out, and consequently Red Hat too, since they don’t sell anything proprietary. I’m sure RH, CloudLinux, and the various CentOS-supporting companies would like to get AI tools and libraries into a future release, which is one of the drivers behind this.
Anyway, Fedora is a good distro for this. It’s community driven, it’s rather experimental, and developers are Fedora’s target market. Being able to get an environment set up quickly to experiment with AI is a big first step. It lowers the barrier to entry for people who want to experiment with AI and try to fix some of the problems. “dnf groupinstall 'AI Development Tools and Libraries'” would be pretty nice. I am perfectly capable of setting up a complex development environment, but I’d rather use the distro packages for any initial experiments.
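For what it’s worth, a session along those lines might look like the sketch below. Note that no such group exists in Fedora today — the group name is this commenter’s hypothetical, and only the `dnf group` subcommands themselves are real:

```
# List the package groups Fedora actually ships
dnf group list

# Hypothetical: install an AI development group, if Fedora ever adds one
# (the group name below is an assumption, not a real group)
sudo dnf group install "AI Development Tools and Libraries"
```

This mirrors how Fedora already packages other development stacks as groups (e.g. “C Development Tools and Libraries”), which is presumably what the commenter has in mind.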
This sounds like “We’re going to add more features!” rather than anything concerning.
These all sound like reasonable suggestions. I’m not sure what “infrastructure-level features” are, but sorting through logs and test results, package automation, and writing descriptions would be very helpful. Those plus suggesting tests are probably the top things I would automate with AI. I don’t want to sort through test results or logs, especially at Fedora’s scale. I’d rather make the machine do that while I do something else.
We all talk about how FOSS is understaffed (IT departments are always understaffed), but we’re pushing back on tools which could help with the workload. It’s worth testing out the new tools to see if they help, and it’s worth figuring out where their deficiencies are.
My best guess about how this is going to work: AI is going to get plugged into Fedora’s OpenQA instance. (https://fedoraproject.org/wiki/OpenQA)
As an aside, this basically sounds like the pitch for GitLab Duo (https://about.gitlab.com/gitlab-duo/), which sounds like an entirely reasonable use of neural nets. Fedora moving to GitLab would be bigger news, and that would be great.
It’s probably not going to affect anything, just like having lots of development libraries available in the repos doesn’t affect the day-to-day of regular people.
>They lie, they make stuff up, they bug out and produce nonsense, they’re racist,
I think Google Gemini took the crown lmao
Yeah, bad vibes… It’s always a relief to see you stand for ethics though.
Speaking of your choice of distro, have your requirements changed any since moving away from translation work? Any chance you could daily drive something like Haiku now? I’d love to read your review of its current state… Serenity OS too! Maybe even Picotron! Oh, possibly React OS…?
Oh lord. Every few years, a stable distro that just works has to take up some quixotic quest to chase a popular thing. Tablets, phones, cloud, now AI. Just let the distro be, and build a different experience for the new thing without that thing breaking the distro for everyone else.
I’m an AI realist: it can be useful for augmenting human capability in specific situations. It could automate parts of the distro build, assuming a human has the bandwidth to test its work.
This has me quite concerned for Fedora. I recently moved to Fedora, and only a few weeks afterwards, I see this blog post.
While I appreciate that it’s a bit more grounded in reality than most of the AI trend posts that have gone around for various projects, this still feels like they are following trends for fear of missing out, rather than providing any actual major benefit to Fedora users.
I’ll admit, I’m quite exhausted from a lot of the AI discussion and have become quite the AI skeptic. I understand that these tools can have valid uses, but I believe those uses do not outweigh the ethical issues and electricity costs of running and training them. It makes me lose a bit of faith in Fedora’s leadership that no ethical issues aside from source code were prioritized. It is very possible for an open source project to ultimately cause more harm than good, and I think there’s a widespread assumption that as long as something runs on open source code it can only do good, when that is not the case.
AI is nothing more than snake oil, or an attempt to sell corporate giants the Brooklyn Bridge. It’s all buzzword with zero substance. That being said, I see the term used as nothing more than a metrics grab that requires use of, or a subscription to, IBM’s network, to which you will give your personal info for free for them to do with as they wish, without the individual profiting at all. Let’s see what this term does to RHEL.
An easy way to get into AI is by using Ollama and the models that can be used by it.
https://ollama.com/
https://ollama.com/library
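As a sketch of how low that barrier already is: the commands below are taken from Ollama’s own Linux instructions, though model names in the library change over time, so treat `llama3` as a placeholder rather than a specific recommendation:

```
# Install Ollama on Linux via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model from the Ollama library and run it locally
ollama pull llama3
ollama run llama3 "Summarize what changed in this package update."
```

Everything runs on the local machine once the model is downloaded, which sidesteps at least the “send your data to a third party” objection, if not the training-cost one.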
Also, there is a Linux distribution that has already gone big on AI.
https://www.makululinux.com/wp/