Fedora 36 is releasing this morning as yet another very robust, bleeding-edge, yet stable and reliable release of the Linux distribution. I’ve already been running Fedora Workstation 36 and Fedora Server 36 snapshots on various systems in my benchmarking lab, and this release has proven quite solid while adding new features and polish on top of the excellent Fedora 35.
I have no reservations about stating that Fedora is by far the best desktop Linux distribution you can get today (assuming you prefer GNOME, that is). It’s polished to an insane degree, not afraid to both develop and implement new technologies that bring the Linux desktop forward – kicking and screaming, lots of kicking and screaming – and sports excellent community support through things like RPM Fusion.
Linux Mint if you prefer less bleeding edge, Fedora if you want the best the Linux desktop has to offer.
You are not forced to use GNOME, just go to https://spins.fedoraproject.org/ and choose what’s best for you.
We deploy Fedora across the board in our business.
We usually use the “Server” variant as a baseline, installing either KDE or MATE depending on hardware and usage type.
Last time I counted, we had over 200 Fedora machines (70% of them virtual).
Typing this on a F35 virtual machine (w/ GPU, keyboard and mouse) running on a big iron oVirt cluster.
gilboa,
Did your business ever use CentOS? It was obviously very popular in the enterprise when it tracked RHEL, making it extremely stable, but then Red Hat decided to replace it with CentOS Stream for business reasons. Many were complaining about it…
https://www.zdnet.com/article/centos-linux-8-is-about-to-die-what-do-you-do-next/
https://www.servethehome.com/red-hat-goes-full-ibm-and-says-farewell-to-centos/
I’m not very involved on the Red Hat side of things, so I’m curious how it’s been going. I wonder if you or anyone can report on that.
I used CentOS as a desktop OS back in the CentOS 6 era.
While I enjoyed the stability, many standard desktop applications were not part of the distro repositories (or EPEL), and old libs everywhere made getting them running a non-trivial task.
Nico57,
It’s funny you should say that. At work we encountered a system with a hard CentOS 6 dependency and I had to build other software for it that was present in CentOS 7 & 8 but not 6. Backporting dependencies wasn’t easy or fun.
I tried running CentOS 6 as a desktop. It was a fun challenge, and I learned a lot about compiling software on Linux. Eventually, I installed Fedora and cut down the maintenance work and planning by a lot.
We use CentOS 8 (now streams) as well.
While Stream moves somewhat faster than regular CentOS 8, we have had zero issues.
– Gilboa
Forgot to add, we also use CentOS 8 Stream to host our oVirt clusters (big and small), which is RH’s equivalent of VMware ESXi.
Including the cluster that hosts the VM being used to type this message…
– Gilboa
“Many were complaining”.
But now a lot more software is tested on something before the official RHEL release, so developers can fix issues BEFORE the new release and be almost 100% sure that their applications will work on the new RHEL.
Initially I was also against this change, but now I like seeing bug reports/issues for software running on CentOS Stream, precisely because they get found and fixed before the release.
If GNOME 3 were doing a better job, then I’d agree Fedora might be an attractive, maybe even the most attractive, option for a desktop GNU/Linux distribution ATM. But until GNOME, or more specifically GNOME Shell, starts to do a better job, I won’t promote them in any way, as I don’t want the future of the desktop to be like that. We can have, and deserve, better. So for now, sorry Fedora, no can do.
Can GNOME Shell even use more threads today? 16 to 64 cores (32 to 128 threads) is not uncommon even in desktop systems; my own lowly system has 44 cores / 88 threads. The last time I used GNOME (1.5 years ago) it felt like going back to the Windows 98 era in terms of technology. Fedora is great in many ways, but I’d rather run i3, amiwm, or KDE in most if not all instances. But as I said, it has been a while since I used GNOME; perhaps it has gotten better. And let’s not talk about how ugly and flat GNOME has become. It looks almost as bad as Windows 10.
I’m continually amazed not just by how insanely polished Fedora has become, but how stable and rock solid it remains, even when they’re pushing cutting edge features. The team behind this distro keeps hitting the ball out of the park release, after release, after release…
I’ve been running Fedora exclusively on my main laptop since F24 came out in 2016, and I’ve been continually upgrading it for the past 6 years without any major problems. The only time I re-installed it was when I upgraded the SSD and wanted to try out btrfs. It’s simply rock solid and dependable.
Every now and then I try out different distros on VMs just to see how each is getting on, both major ones like Ubuntu, Debian and Arch, as well as smaller ones like BunsenLabs and Kodachi, and my heart just drops whenever I hit basic bugs as early as the installation process. This is the stuff that makes you appreciate distros like Fedora and all the hard work they put into it.
The release management is amazing for something with so much package churn between releases. I’ve been on Fedora since 2006, and there have been maybe five show-stopping bugs in all those years.
I did have to reinstall my laptop with F35 because python got weird, but that was after several years of release upgrades. 8|
Anaconda is the thing that gives me the most problems with Fedora. Mainly the storage portion. I’ve run into several bugs. The latest was with /etc/resolv.conf in the Everything network install.
To me Fedora is still an experimental playground for Red Hat. Perhaps they are now doing a better job than they did in the past. Perhaps Ubuntu, by pushing anti-features, is now worse than it used to be. But I still remember the revelation of switching from Fedora to Ubuntu. A Linux that works out of the box, imagine that. Drivers, codecs available through repositories, sane defaults, streamlined UI, no silly restrictions designed for pushing users to a commercial offering. There really was a reason Ubuntu became the dominant Linux distribution in just a couple of years.
Then there is Fedora’s focus on Gnome and its ecosystem. Yes, we can choose various spins but they all have to either share the underlying infrastructure with the main version of Fedora or maintain their own. If you want a good KDE or Xfce experience look elsewhere.
Could you elaborate on “silly restrictions designed for pushing users to a commercial offering”?
I’m a Fedora developer for 13 years now, and I do not recollect anything like that.
I might have wrongly blamed Fedora for what Red Hat did to their free Red Hat Linux distribution after releasing RHEL. For a start, that was a long time ago, definitely more than 13 years, or ~15 when I switched to Ubuntu, so it is mostly history now.
Red Hat removed some non-free features (with a very strict interpretation of free) and moved some enterprise or convenience features exclusively to RHEL. I don’t really remember the details of the latter, and searching for them now doesn’t return useful results. I used Fedora for a few years after Red Hat Linux had been discontinued, and the non-free part remained an issue. Indeed, the community would probably have brought back any missing functionality, but the distribution also introduced tons of bugs and was constantly changing.
Red Hat Linux was by far the most popular distribution at one time, and its demise left a big hole in the community. This, plus new users attracted by ease of installation, is what fueled the growth of Ubuntu.
Fedora only ships Free software. They’re attached to a US company, so it makes sense for the project to pass on shipping encumbered or possibly encumbered software. Distros based in other countries can ship non-free software due to different laws. Fedora cannot, and the world is a better place for FOSS because of it.
Fedora has leapfrogged Ubuntu in terms of being the top desktop distro. The community has put in a lot of work to make Fedora nice, and RH has let the community do what they would like.
Fedora has everything RHEL does and more. Times have changed.
> Drivers, codecs available through repositories
Fedora is a “free” as in libre distribution; these will never be included in the default repos. But it takes all of 2 minutes to add the RPM Fusion repos if you want these things.
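For anyone new to it, the whole process is installing two release packages whose URLs embed your Fedora version (URL layout as published by RPM Fusion; shown with the version hard-coded here for illustration):

```shell
# Build the RPM Fusion release-package URLs for a given Fedora version.
# On a live system you would derive it with: ver=$(rpm -E %fedora)
ver=36
free_url="https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-${ver}.noarch.rpm"
nonfree_url="https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-${ver}.noarch.rpm"
echo "$free_url"

# Then, as root, enable both repos in one transaction:
#   dnf install "$free_url" "$nonfree_url"
```

After that, codecs and the NVIDIA driver are an ordinary dnf install away.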
> sane defaults, streamlined UI
Vague, subjective assertions that I’d totally disagree with.
> There really was a reason Ubuntu became the dominant Linux distribution in just a couple of years.
Sure. But being the easiest distro != the best distro for power users.
> If you want a good KDE or Xfce experience look elsewhere.
Gonna need more than zero explanation of what’s wrong with Fedora KDE or XFCE spins.
If you like the Fedora underpinnings and RPM but prefer KDE, I can think of no better alternative than openSUSE Tumbleweed. It is highly integrated with bleeding-edge KDE technology and has many of the advantages of Fedora. It is a very good distro in many regards. If that is not your cup of tea, there are lots of source-based distros like Void, Arch, Manjaro, and others that you can use with specific build flags for almost all software to suit your needs. I myself am rather fascinated by Void and its runit init system, which boots my computer in seconds rather than minutes. I bet there are some disadvantages to runit’s parallel loading somewhere, but so far it has worked fine for me.
> Vague, subjective assertions that I’d totally disagree with
Back then these assertions were both concrete and objective. Both Fedora and Ubuntu were targeting the same user base, and yet people were flocking to the latter.
Perhaps things have changed now, but the Red Hat playground vibe is still here. And then there is the focus on GNOME, which makes it not worth even trying again.
Back in 2003 things were different. Fedora was supposed to be a RH Linux replacement, basically what CentOS is now, and the user base was more interested in servers and such. No one thought Linux would be a viable desktop OS until Ubuntu showed up and started polishing things.
Then RH let the community take over Fedora, and they started pushing it towards desktop usage, which works out for everyone. Fedora is basically its own thing now. RH forks CentOS from Fedora and removes stuff they don’t want to support, but the Fedora community gets to do what they like with Fedora.
KDE, Xfce, sway, Cinnamon, openbox, and others are in the Fedora repos, work well, and are the same thing as the Fedora Spin.
I plan on running plain vanilla Fedora/GNOME on my vintage Thinkpad X201 for my personal use, now that my job is forcing us to use a VPN on Windows on our work computers. Good to be back on Linux after all those years! At some point I was hesitating to install a RHEL clone instead, then use Flatpak for any app not in the repos.
What did you run in the interim? BSD, windows, amigaos, haiku, syllable, unix, macos?
Windows for the past 10 years, had to make a buck. Couple of years with MacOS, but I didn’t like it. Now that I’ll officially be VPN spied at my job, I need to setup a personal laptop. I have my vintage Thinkpad X201 here, just ordered a new keyboard, and SSD. Will setup my stuff with as much FLOSS as I can, most probably Fedora/Gnome. But still considering RHEL clone with Flatpak so I don’t spend my time updating that beast!!!
I went with Fedora/Gnome, very smooth on my vintage Thinkpad!
Good! RHEL/CentOS is going to be a 3 year cycle now, but Fedora is the best bet for a desktop. I’ve wasted so much time compiling software and juggling repos for a CentOS desktop in the past…
I tried Fedora because Pop!_OS is slowly turning away from GNOME and I really like the GNOME workflow/experience.
The installer UX is abhorrent, but many distros have a similar problem (tbh Pop isn’t much better in this regard). However, the first thing I saw after I logged in to the new system was that some part of GNOME software crashed.
I’m probably really unlucky, but immediately I felt as if I went back in time to the beginnings of GNOME 3.x. Also the animations on Pop were really smooth and on Fedora they are laggy. Sure, this is an old laptop, but how come two distros using the same DE have so different performance? All in all the first impressions were really bad – hopefully it’ll get better in time.
Yeah, I don’t understand the GNOME hate. Unless people just don’t realize the lengths to which they are going to make a desktop that works well with touch, keyboard, or mouse, or a combination of all three. Once you realize this, the GNOME desktop makes a lot of sense. Granted, I do not like the choice in 40+ to put the dock/dash/whatever they are calling it this week onto the bottom. It has never really made much sense there when your file menus are up in the left corner. Keeping it on the side would mean far fewer mouse/finger movements.
This is especially noticeable on UltraWide Screens. I hooked my macbook to my 32:10 monitor and was trying to do something in word and about gave up using computers, feeling that technology had wronged me! Only way my sanity remained at all is I have a mouse with buttons to adjust the DPI…
Much of the hate stemmed from poor threading back in the day; perhaps it has gotten better. But nowadays even basic low-budget desktops from eon and Walmart have 16 cores/32 threads. Another factor has always been GTK3/4 not allowing proper themes; it is the toolkit, not GNOME itself, that is so gosh darn flat and ugly. It looks like an unfinished tech demo from Xerox in the ’70s, but worse, and almost as ugly as Windows 10. And the looks will only get worse as more apps start using libadwaita.
This might seem obvious, but I figure I’ll ask to make sure, do you have nVidia graphics? If so then the difference may be that you need to install the proprietary drivers.
Not sure what my post had anything to do with nVidia proprietary drivers. Yes, I do have them, and yes they are installed. My complaint was about macOS, not GNOME. macOS hasn’t supported nVidia for many years.
Derp, it’s late, just realized you were replying to the other person.
The 380 series of OS X drivers from the NVIDIA website can be easily modded to support everything prior to Pascal (980 Ti and down, through all Maxwell and Kepler products) by adding the device IDs to the kexts. But yes, those are rather old (2015) drivers and cards by today’s standards. And OS X AFAIK does not support SLI (I had four of those 980s at one point), but the old drivers support SLI CUDA/compute just fine if you need that and aren’t aching for 3D performance. It also works on the Titan and mining versions of Kepler and Maxwell for the same reasons.
It takes a little time to figure out Anaconda’s quirks. I do like it better than the openSUSE installer, though. Oddly, Anaconda has a more flexible storage section than the openSUSE installer. ???
I’m not going to use anything that pushes Gnome. Yes, I know there are Spins, but why would I bother with Fedora when I know the Spins are second-class citizens? The Gnome developers are unsalvageable. 11 years and STILL no taskbar or dock or any way to switch between your windows with one click. Just insane.
GNOME is a mess, yes. It’s an interface designed around a convergence concept with mobile that never happened, and they have been doing a very poor job of letting that mentality go and becoming a desktop-only UI again.
But having no taskbar and no dock is by design, so it’s unlikely this will ever be “solved”. It’s a desktop that is very hostile to mouse-only navigation (I dare say, hostile to newcomers entirely); you must learn keyboard shortcuts to be productive. And if you don’t like that, their stance is: my way or the highway.
They made that odd choice a long time ago.
There are extensions for a dock or taskbar. I don’t know the names since I don’t use them, but they exist.
I updated to it the day it came out… No serious issues so far. I have no love for IBM, but have to admit that Fedora has improved spectacularly since they bought Red Hat.
I do wish the Packagekit offline updater were a bit smarter though. Offline updates are good, but being prompted to reboot for them first thing in the morning is annoying; and having to reboot several times in a row for a series of tiny library updates is Microsoftesque. Fortunately dnf offline-upgrade doesn’t seem to have this flaw… Too bad it also has to use a separate package cache from Packagekit and normal dnf. So basically package management is still a bit of a mess.
(And that’s not touching the mess which is PackageKit. It should really not be necessary to set ShutdownTimeout in PackageKit.conf just to keep the service from bloating up over time; or at least ShutdownTimeout should be enabled by default? But PackageKit has had these memory leak issues for like a decade now, with no sign of a fix. PackageKit IMO is one of the biggest reasons Linux is still not suitable for most end users.)
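If anyone wants to try the workaround mentioned above, the knob lives in the daemon config (typically /etc/PackageKit/PackageKit.conf; the 300-second value is just an example):

```ini
[Daemon]
# Let packagekitd exit after 300 s of inactivity instead of staying
# resident (and slowly growing) for the whole session.
ShutdownTimeout=300
```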
As far as “bleeding edge” though, I think the conflation of dated software versions with stability is pretty bogus these days. So many of the stability issues I’ve encountered on Debian and Ubuntu have been due to needless conservatism re pushing updated packages. E.g. there was a graphics bug for a long time that made Firefox unusable with the Wayland backend under Sway; it was fixed in Sway 1.6.1, but Ubuntu kept the 1.5.x version months after 1.6.1 was released.
And backporting patches isn’t a remotely adequate workaround – in fact it’s been a major source of security issues in Debian, where backported patches have failed to fix issues or introduced new problems, due to maintainers not fully understanding the codebase in question. Which makes perfect sense. This ain’t the 90s anymore, modern software is *complicated*. Especially for important software like e.g. browsers or crypto libraries, it’s better to trust the upstream devs.
rainbowsocks,
You raise a good point. Naturally the latest code has more bug fixes, but there’s also risk in updates that break things too. The idea behind the delays (as I am sure you already know) is to have more people testing before promoting the code to stable. This way maintainers can weed out issues that can and do happen in testing. But at the same time just like you I find it frustrating sometimes how far behind the stable distros are, especially when you know a problem has been fixed already and it takes a long time to go through the pipeline.
So I’d say there are valid arguments on both sides. My biggest gripe, however, is not about having a plurality of update cycles for stable and testing, but rather that distros generally make it all or nothing, which isn’t helpful at all. The granularity is bad. It would be very useful to be able to keep different packages at different update levels, but this is not something most distros handle well.
In principle they could, but in practice adding “testing” packages to a “stable” system often results in dependency chaos, prompting unwanted upgrades of stable packages to testing versions. As a result I’ve often needed to build packages from source manually whenever I needed something newer than what the repos have. This is doable, but tedious and not ideal.
Anyways, someone like you should probably start with the “testing” repos on Debian, which lag less far behind. You could also use one of the container/VM-based package managers; those provide strong isolation between applications (and their versions). The problem of course is that most developers aren’t using those tools to distribute software, so the lack of universal standards is another hurdle.
Debian Testing used to be my fave actually. I gave up on it for two reasons:
a) At least at the time I stopped using it, it still suffered badly from a mix of outdated packages (with unfixed bugs) and surprise breakage from upgrades. Fedora had a similar frequency of breakage, but at least kept everything up to date.
b) More important now, it doesn’t include nonfree firmware out of the box, which in this day and age is just garbage. I’m glad to see that including nonfree firmware is finally getting serious discussion though, so hopefully that situation will change soon.
For nerdier distros like Arch and Void, the main issue for me is lack of OotB Secure Boot support. Void was totally my fave before Debian Testing though 🙂
RH is attempting to fix this with Streams. There is the base system, and then there are streams like Postgres 10, Node.js 14, Perl 5.28, etc. for developers which can be enabled and used like a part of the system.
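For concreteness, this is the AppStream “module” mechanism on RHEL/CentOS 8; a sketch of the workflow (postgresql:10 is just an example, and available streams vary by release):

```shell
# See which streams exist for a component
dnf module list postgresql

# Pin a specific stream and install it; later updates stay within it
sudo dnf module enable postgresql:10
sudo dnf module install postgresql:10
```

Switching streams later is a `dnf module reset postgresql` followed by enabling the new one.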
Then there are containers, and I have to say that containers plus btrfs do make things easier. I can start lots of different versions with different libraries easily.
There are still problems. Most container work is focused on application containers, while system containers are an afterthought. Container networking is incomplete compared to VMs. Systemd doesn’t handle container upgrades very well when the container is started as a service.
Flatland_Spider,
You know, that type of feature could be a real motivator for me to switch distros for the first time in a long time. Although I’m hesitant to do so because of familiarity and habit.
Containers are cool especially if I wanted to break things down by application. But they don’t quite fit my needs at the moment and they still share some of the same problems when it comes to complex systems.
Systemd broke my home grown containers because it monopolizes cgroups unfortunately. I don’t like making systemd a dependency for my software because my distro doesn’t even support systemd, but whether I like it or not systemd and its tight coupling are the future of linux.
IBM has nothing to do with Fedora being a good distro.
That’s weird. PackageKit is dumb, and it regularly doesn’t work. I haven’t seen this problem though. Usually, one reboot is enough, when PackageKit does work.
Are you using Silverblue?
Nah, not using Silverblue, just the normal KDE spin. Though I love the concept of Silverblue and will probably switch to it at some point.
And yeah, IME upgrading through the KDE Software Center quite frequently requires multiple reboots in a row for library updates. I’d *like* to be able to use the Software Center since it also wraps Flatpak and fwupd, but whatevs, it’s not a big deal.
My own preferred update method has been just a shell function that runs dnf in a tmux session, so that the upgrade doesn’t bork out of existence if the desktop crashes. If I see more than a few libraries or desktop components get upgraded, I reboot after. It’s hacky and not ideal, but it’s safer than without tmux, and I’ve yet to see it permanently mess anything up.
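A minimal sketch of such a wrapper (the function name is mine; assumes tmux is installed):

```shell
# Run the upgrade inside a detached tmux session so the dnf transaction
# survives a desktop or terminal crash; reattach with: tmux attach -t upgrade
safe_upgrade() {
    tmux new-session -d -s upgrade \
        'sudo dnf -y upgrade --refresh; echo "upgrade finished"; sleep 60'
    tmux attach -t upgrade
}
```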
I’m running normal Gnome Fedora.
I’m getting closer to using Silverblue. I’ve gotten used to using toolbox, and I’m working on getting comfortable with building images with buildah and podman.
Yeah. It’s a good idea, and it’s very handy when it works. 🙂
I’ve switched to using the offline-upgrade plugin to replicate the behavior. “dnf -y offline-upgrade download && dnf -y offline-upgrade reboot”