Ubuntu 20.04 LTS on the desktop is shipping with GNOME 3.36 and its plethora of improvements, improved OpenZFS support as an experimental option, the Linux 5.4 LTS kernel and the many improvements the new kernel brings, WireGuard VPN support, and a wealth of other package updates.
I’ve been running it on my laptop since the beta, upgraded from 19.10, and it’s been smooth sailing.
I must be getting old; I used to follow Ubuntu releases like a hawk and test every big milestone, either in a virtual machine or on bare metal. But now I don’t. Don’t get me wrong, I love Ubuntu, and I’m happy for this great release. I will be updating both my Ubuntu and Kubuntu systems, just going to wait until the updates hit my system. Too busy these days to sit at a computer installing a new OS. I just need my stuff to work.
Sounds just like me motang. Exactly. I may be further along than you though. I’m on Debian Stable. Just security updates. Works the same yesterday, today and tomorrow.
Congratulations as always to the Ubuntu team. It’s probably a great release. Don’t ask me, read the reviews!
Yep, same here for my Linux based stuff, I’m well off the cutting edge with just updates to the essential stuff as and when required. With Linux it’s the only way to fly!
Desktop operating systems reached a “good enough” level quite some time ago. Of course there is always room for improvement: new/better hardware support, cool extra features, more security, removal of annoying bugs, etc., but none of that is something most people will notice from one version to the next. Those smaller improvements add up over time, so eventually it is worth the time, effort, and some risk to install an upgrade anyway, but you don’t have to be in a hurry if you don’t have a headache-causing bug at the moment.
This has more to do with the maturity of current OSes than with your personal maturity.
I must agree, avgalen; maybe it’s a little of both personal and OS maturity. 20 years ago Linux for me was a heap of work, and I made a heap of compromises. 10 years ago things were much better, but there was still a lot of tinkering and working at the pointy end, hoping to fill the gaps in functionality. Today, for my use case, mainstream Debian Linux covers all bases. Windows-free for 16 years now, uninterrupted, for work and home computing.
I’ve been on Debian Stable since Debian Hamm. For some devices it was Ubuntu LTS, or even ChromeOS and Android for my phone, but my main system has always been Debian Stable, same for servers. I’ve almost always used containers for applications/separate environments; it used to be Linux-VServer with a patched kernel back in 2013, which later became LXC.
Never had huge problems with it, always had a way to solve issues.
Lennie,
When I use Ubuntu I go with LTS; it’s requested by quite a few clients. But I prefer Debian Stable too. It’s been mostly reliable for me.
I should be using containers more. I’ve played around with them a few times but haven’t incorporated them into my production environments.
I mostly run VMs with normal distros for clients (clients can run/install whatever they need inside their VM). LXC is extremely cool, but it wasn’t clear to me that the investment of time and energy to move daemons over to containers would be appreciated (i.e. billable to clients).
Well, a simple example: let’s say you have a whole bunch of customers with PHP websites, all on the same LXC container, possibly using the same PHP version and Apache version, etc. Now you have one customer with an issue that might be solved by a newer Debian install, because it comes with a newer PHP version. Let’s say you want to move that customer from Debian oldstable to the new Debian stable.

You create an LXC container with the newer Debian version. You copy the website files, the Apache configuration file, and the user entry in /etc/passwd (it’s just some files on the same machine, because it’s containers). Create a new DNS entry, test.customer-domain (or dump the website domain in /etc/hosts on your desktop), and you can test the customer on the new version. Happy with the testing? Change the www address of the customer domain (or maybe you have something else, like a load balancer, in front) to point at the new container, and done.

The resource usage of doing this is very low. Depending on what you are doing, you might also just be able to move the website files instead of copying them. After you’ve done it for a few sites, move all sites. Does a customer complain about something broken? Move that one customer back to the old container to ‘solve’ their problem in seconds, giving you time to schedule a proper fix. So there’s no need to run multiple VMs for maybe months if you don’t get around to working on this upgrade. It’s low on resources and on your time because of the added flexibility.
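The workflow above can be sketched with classic lxc-* commands. All container names, customer names, paths, and the test IP below are made-up examples for a Debian 9 (“stretch”) to Debian 10 (“buster”) move, not anything from the original setup:

```shell
# Create a container with the newer Debian release
lxc-create -n web-buster -t download -- -d debian -r buster -a amd64
lxc-start -n web-buster

# It's all just files on the same machine: copy the site files, the
# Apache vhost, and the user's passwd entry from the old container's rootfs
rsync -a /var/lib/lxc/web-stretch/rootfs/var/www/customer1/ \
         /var/lib/lxc/web-buster/rootfs/var/www/customer1/
cp /var/lib/lxc/web-stretch/rootfs/etc/apache2/sites-available/customer1.conf \
   /var/lib/lxc/web-buster/rootfs/etc/apache2/sites-available/
grep '^customer1:' /var/lib/lxc/web-stretch/rootfs/etc/passwd \
   >> /var/lib/lxc/web-buster/rootfs/etc/passwd

# Point a test name at the new container from your desktop to try it out
echo '10.0.3.20  test.customer1.example' | sudo tee -a /etc/hosts
```

Once testing looks good, flipping the real DNS record (or the load-balancer backend) to the new container is the whole cutover, and moving back is just pointing it at the old container again.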
Another way: upgrade testing (obviously it’s all possible with VMs too, it’s just a bit more flexible with containers). Copy the directory of the LXC container, change the IP, start the container, do an upgrade, and see if something breaks. Sometimes you might actually have two containers with two different Debian versions sharing the same directory with customer data/websites, etc. on the same machine (or VM), because an LXC container can do a bind-mount. With two VMs you’d need something like NFS instead, which can matter a lot with a very large amount of data: no time needed for setting up an NFS server, copying data and waiting, etc.
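A minimal sketch of that upgrade test, assuming classic LXC with containers under /var/lib/lxc (names, paths, and the exact config keys vary between LXC versions; everything here is illustrative):

```shell
# Clone the container's directory and give the copy its own identity
cp -a /var/lib/lxc/web /var/lib/lxc/web-test
sed -i 's/lxc.utsname = web/lxc.utsname = web-test/' /var/lib/lxc/web-test/config

# Share the customer data via a bind-mount instead of copying or NFS;
# this line goes into /var/lib/lxc/web-test/config:
#   lxc.mount.entry = /srv/www srv/www none bind 0 0

# Start the clone (with its own IP set in the config) and try the upgrade
lxc-start -n web-test -d
lxc-attach -n web-test -- apt-get update
lxc-attach -n web-test -- apt-get dist-upgrade -y
```

If the upgrade breaks something, you throw the clone away; the real container and the shared data directory were never touched (assuming the test only reads the data).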
I’ve also done it like this: on a machine without LXC, create an LXC container with the newer distro version and test the applications, etc. with the same data directory, then upgrade the LXC host machine and remove the container that was used for testing the upgrade.
Also, we run Ubuntu in LXC on Debian hosts, so everything that touches the (a lot of the time virtual) hardware is all the same. The other way around is also possible: you prefer Debian, but you want to use the Ubuntu Livepatch service. Put Ubuntu on all your machines and use Debian in containers. 🙂
I should add that for newer applications it’s not the VM-light LXC containers, but:
Docker/Kubernetes, with a microservice/app/daemon per container.
Lennie,
If I were to migrate, I would give each customer their own container (they all have their own VM today). While it’s potentially a lot more efficient to have each customer sharing containers, troubleshooting individual website compatibility after shared updates becomes complicated. Ideally it wouldn’t be necessary, but PHP in particular has broken so many sites/web platforms/scripts over the years, often in subtle ways that aren’t immediately apparent until you get an angry call that something’s broken. I’ve gotten bitten by this too many times, so now I treat all websites as PHP version dependent.
Obviously LXC can be used to create this separation like a VM, and if I had an opportunity to rebuild I’d probably do it differently today. I went with KVM because LXC and cgroups were immature back when I was evaluating them. There are some differences in how I’d need to deploy them if I were to switch. I’m not sure if I would still give each customer their own virtual disk (via LVM) or have them go into a shared filesystem. A shared filesystem is easier, but I’d have to think about quotas, and I’m not sure if running one huge volume could be a disadvantage compared to smaller volumes in terms of maintenance (aka fsck). LXC’s shared memory could reap efficiency benefits over the VM. With LXC/cgroups, managing CPU priority is different as well; right now I can “nice” an entire VM as needed and limit the CPU count.
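To illustrate the difference: with KVM the whole VM is one qemu process you can renice, while with LXC the rough equivalent is cgroup settings in the container config. The process pattern, VM name, and values below are made-up examples, and the cgroup v1 key names shown are for classic LXC:

```shell
# KVM: lower the priority of an entire VM by renicing its qemu process
renice +10 -p "$(pgrep -f 'qemu.*customer1-vm')"

# LXC: the equivalent knobs live in the container's config file, e.g.
#   lxc.cgroup.cpu.shares  = 512    # half the default relative CPU weight
#   lxc.cgroup.cpuset.cpus = 0-1    # restrict to two cores, like a 2-vCPU VM
```

The cgroup weight is relative (shares only matter under contention), which behaves differently from a hard vCPU count, so the two approaches aren’t exactly interchangeable.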
I also looked at Virtuozzo/OpenVZ, but it wasn’t mainlined at the time and I didn’t want to mess too much with maintaining kernel patches.
About PHP and settings: I put PHP under PHP-FPM, with each customer/website getting its own user.
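A per-customer pool then looks something like this; the file path, socket name, and limits are illustrative assumptions, not the original setup:

```ini
; e.g. /etc/php/7.4/fpm/pool.d/customer1.conf
[customer1]
user = customer1
group = customer1
listen = /run/php/php7.4-fpm-customer1.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 5
; per-customer PHP settings go here too, e.g. keeping
; the site from reading other customers' files:
php_admin_value[open_basedir] = /var/www/customer1:/tmp
```

Each pool runs as its own user with its own socket, so Apache/nginx just proxies each vhost to the matching socket.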
Resource usage was obviously a much bigger problem 10 or 15 years ago. 🙂
Some people play around with LVM volumes instead of quotas…?
It all depends of course, your environment isn’t my environment, etc. 🙂
Lennie,
That’s what I do as well. Obviously it’s technically possible to install several versions of PHP but I’d rather not have to deal with the hassle. A lot of code maintenance/jobs are run from the command line. Some of my clients use inexperienced web developers who are easily confused by non-standard setups, and they’ll blame me when they can’t figure it out. So politically I find it’s best to let them run non-customized distros and not give them that excuse… anyways, this is getting off topic, haha.
That’s what I do today, but I’m not sure if I would continue to do it with LXC. It would work either way, LVM is nice to use for snapshots, especially since I still use extfs, but I’m not sure if I would migrate to btrfs at the same time, that’s a whole other thing. So many choices 🙂
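For reference, the LVM snapshot workflow mentioned here is short in practice; the volume group and LV names are examples:

```shell
# Take a snapshot of the LV backing the customer's filesystem before an upgrade
lvcreate --snapshot --size 5G --name customer1-snap /dev/vg0/customer1

# If the upgrade goes wrong, merge the snapshot back into the origin
# (the volume must be unmounted / the VM stopped for the merge to start)
lvconvert --merge /dev/vg0/customer1-snap

# Or, if all went well, just drop the snapshot instead
lvremove /dev/vg0/customer1-snap
```

The snapshot size only needs to cover the changes written while it exists, not the whole volume, which is part of why this stays cheap on extfs.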
“That’s what I do as well. Obviously it’s technically possible to install several versions of PHP but I’d rather not have to deal with the hassle.”
I actually meant that I run separate PHP versions in separate LXC containers, but to separate customers on the same version it’s PHP-FPM, which is fine for changing per-customer settings too. It’s actually kind of strict and, as mentioned, always a separate user account.
“It would work either way, LVM is nice to use for snapshots, especially since I still use extfs, but I’m not sure if I would migrate to btrfs at the same time, that’s a whole other thing. So many choices ”
Yeah, I’m also still on extfs; it works. You might be right that when filesystems get too large something else might be needed. Possibly XFS isn’t a bad choice for large filesystems, or so I’ve heard.
I use ZFS for long term storage/backup though (those can be very large).
How well is it conserving power usage out of the box? Are there any tweaking thingies one would need to install to improve it on laptops?
I installed Ubuntu MATE 20.04 (which for some “integration” reason no longer ships Thunderbird as standard – Evolution takes its place) in a VM, went “meh, just like every other MATE-based distro out there” and went back to the 10-years-of-support CentOS 8 (with MATE from a third-party repo of course – only way to stay sane!) on my base machine.
I used to try out distros like crazy (on bare metal originally and then on VMs), but like many people have settled down to their preferred distro on bare metal and only try out a few of the major distros in a VM once in a while. I admin many CentOS (and a few Ubuntu) servers at work, so using the same OS at home/work for my desktop is very handy.
I’ve always preferred Debian-family distros. I’ve played with Aurox, Mandrake, Fedora, Slackware and Gentoo, but always came back. A few weeks ago I tried CentOS; during installation I chose GNOME Desktop, but after installation I had just the CLI, without X, nothing installed, and I had to do it all manually. Well, after 2 hours I was back on Ubuntu. Even the install process was strange and not user-friendly for me. Well, 15 years with Debian/Ubuntu has made me too comfortable.
I only ever update when the LTS hits. Well, I guess it’s update time again!
Interesting that it has been smooth sailing. I tried it out on my Linux laptop (currently running Ubuntu 19.10) and hit bug after bug. Screen rotation on a 2-in-1 still has random rotation problems until you disable iio-sensor-proxy, I could not get Wayland to do fractional scaling despite editing the dconf settings, and for some strange reason the “favorites” bar (for lack of the precise term) was sluggish as heck: changing the order of app icons wouldn’t update unless I reset the display manager (gdm? lightdm…), and alt-clicking a favorites-bar app icon would take up to a minute to open options.