I’ve been using Ubuntu as my workstation OS for several months now. Ubuntu Server with the i3 window manager to be specific. I love it, and I’ve had to change my workflow a lot to make it work for me. But now that I’ve made the switch to it from Mac and Windows, I’m very happy with it.
I’ll be honest, there’s not a ton of hard evidence that working on a Linux distro is objectively better than working on Windows or Mac. I have almost equal amounts of time spent working on each of these platforms, and I think each one excels at something different. With that in mind, I think Ubuntu just feels right for the priorities I have now.
So what have I gained, what have I lost, and what did I learn along the way?
While switching platforms is not always an easy task – especially for people with very specific platform-bound software needs like, say, Xcode – I'm convinced people talk themselves into thinking it's harder than it really is. You can learn a lot from switching platforms, and test runs can teach you where your dependencies lie and how to overcome them. That's a wise exercise, especially when you're relying on proprietary tools that have turned into single points of failure beyond your control.
I enjoyed reading this. It’s not really about Ubuntu at all, or even Linux, but rather about the shift away from personal computers brought by commodity hardware, freely available software, easy access to data, and the wonderful flexibility of text files! Ultimately it’s all about developing a personal workflow that works.
That’s what I took away from it anyway.
It’s gotten easier over the years, especially moving between the big 3, and particularly for developers who are going to be deploying on Linux or other Unix-like OSs.
Data formats have gotten more open and less proprietary, the biggest example being PDF. Everyone now accepts PDF files, usually in addition to MS Word files.
Most software is web based, or has a web component which is useful. Electron has been helpful too since it’s just a web browser. I mean, it’s awful, and Chromium needs to be ported to the platform first, which is a giant task. However, I wouldn’t have several of my daily apps available on Linux without it.
Make no mistake though, it’s still tough when working with other people or with/in organizations who only focus on MS or Apple products. MS Office and Office 365 are still tightly locked to Windows, with reasonable support for MacOS and no support for Linux. Apple’s iWork suite is tied to MacOS with no support for Linux or Windows. G Suite has good support for the big 3 plus APIs for everything else. Let’s not talk about the mess that is chat software. The FAT filesystems are the only thing compatible between all 3 OSs out of the box, so say goodbye to encrypting shareable media. 1Password doesn’t support Linux, and so on and so forth. I have a MacBook Pro and a Windows VM to cover these special cases.
You still have to choose your hardware carefully. It’s better than it used to be, but it’s still too easy to get something that doesn’t support Linux. SANE-supported scanners are a must. Most printers are okay thanks to Apple owning CUPS. USB dongles (wifi, bluetooth, etc.) are a mess due to cheap chipsets. Video cards have their caveats. Never: Nvidia. Sometimes: AMD. Usually: Intel.
Overall, this is the best it’s been since I started messing with alternative OSs in ~’95. Back then you were SOL most of the time, but now we can get around problems for the most part. There are even some major OEMs who will support Linux on their machines.
Anyway, about the article…
The author would have avoided a lot of trouble by using a full DE like Gnome or KDE. Most of the problems outlined are only problems because of the incredibly minimalist setup.
I use Fedora Workstation with Gnome as my daily driver, and these are not problems I have experienced in the last decade.
Indeed. I’ve moved to renting servers, and doing all of my work on those. Yes, it’s an incredibly privileged thing, and it does have the drawback of needing a stable network connection. However, it’s really reduced the amount of processing power I need to carry with me, and I just need to make sure everything gets synced to the repos.
“so say goodbye to encrypting shareable media.”
I’m pretty sure the purpose of full drive encryption is so thieves can’t read your hard drive after they’ve taken the whole computer. Encrypting individual files using things like 7zip or pgp is still more than possible.
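For example, a quick round trip with GnuPG’s symmetric (passphrase) mode. This is just a sketch assuming GnuPG 2.x is installed; the passphrase is on the command line purely to keep the demo non-interactive – don’t do that with real secrets.

```shell
# Passphrase-based file encryption with GnuPG -- works the same on
# Linux, MacOS, and Windows (via Gpg4win), so the .gpg file is shareable.
echo "sensitive data" > secrets.txt

# Encrypt: writes secrets.txt.gpg. --batch and --pinentry-mode loopback
# make it non-interactive; normally gpg would prompt for the passphrase.
gpg --batch --yes --pinentry-mode loopback --passphrase "demo only" \
    --symmetric --cipher-algo AES256 secrets.txt

# Decrypt to a new file and verify the round trip.
gpg --batch --yes --pinentry-mode loopback --passphrase "demo only" \
    --output recovered.txt --decrypt secrets.txt.gpg
cmp secrets.txt recovered.txt && echo "round trip OK"
```

The recipient only needs any gpg build and the passphrase, no shared encrypted filesystem required.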
Same concept except with flash drives which are more likely to disappear. The threat is anyone who picks up the flash drive randomly or maliciously.
I have a LUKS encrypted flash drive which I use for sensitive things like private keys, password backups, other things that I store offline in case of a catastrophic failure of some sort.
It’s also in a locked safe for defense in depth.
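For anyone curious, the whole setup is only a few cryptsetup commands. This is a sketch, not something to paste blindly: `/dev/sdX` stands in for the actual stick (check `lsblk` first), the formatting step destroys whatever is on it, and everything here needs root, so there’s no runnable demo.

```shell
# One-time: turn the stick into a LUKS container (DESTROYS its contents).
sudo cryptsetup luksFormat /dev/sdX

# Each use: unlock, mount, copy files, then lock again.
sudo cryptsetup open /dev/sdX secretstick
sudo mkfs.ext4 /dev/mapper/secretstick   # first time only
sudo mount /dev/mapper/secretstick /mnt
# ... copy private keys, password backups, etc. ...
sudo umount /mnt
sudo cryptsetup close secretstick
```

Most desktop file managers will also prompt for the passphrase and mount it automatically when you plug it in.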
It’s very possible. Or you could go the encrypted volume route. Veracrypt is probably the best option for sharing encrypted media.
I’d just rather have cross-platform FDE that I don’t have to set up additional software to use or really think about. Yes, it would probably be backdoored and incredibly broken, but state-level actors aren’t in my threat model.
Flatland_Spider,
No disagreements generally, but just today I encountered a counterexample. I do tech support for a product, a client of which uses WebEx. Many years ago I never used to have a problem when it was a local fat application. However, once they transitioned to a web client it’s been nothing but a nightmare. It requires browser plugins which in turn require additional local executables. WebEx just hangs on Windows 7. Audio is broken for me in Windows 10. It won’t run on Linux. You can dial in, but that sucks, and too often meetings are delayed because of WebEx issues, especially if someone emails you a link at the last minute and you have to install everything with no time. It’s not cheap either. I really don’t know how they stay in business given how problematic the service is. I want to fire them and I’m not even the one paying, haha.
I agree with this. I’m personally attracted to lightweight desktop environments (e.g. Xfce) because I find myself in the “less is more” camp and I don’t need or want all the bloat. However, in my experience I’ve encountered more issues under Xfce than Gnome and KDE, which seem to be more refined in general. I always find things that don’t quite work out of the box in Xfce and face this dilemma: do I install the lightweight DE that I prefer, or a full-blown DE with fewer bugs? I’ll probably go with KDE next time.
I took a similar route years ago, but I own my own servers, I never really took to renting them.
I’m not saying software has gotten better because of the rise of web everywhere. I’m just saying we have more options than we used to. XD I like real desktop applications much better.
Zoom is much better, and while it’s not perfect, it works for the most part. I could participate easily on Linux and MacOS, but presenting worked better on MacOS.
I understand that. I just got a refurb Dell USFF to use as an OpenBSD desktop, and I’m going to set it up with Openbox to keep it minimal. I’m just going to do a little development on it, so I don’t need much.
Xfce is a great project, and it was my go-to DE back before Gnome 3. It supported the BSDs well, it had a CDE mode, and it was simple yet still full-featured. It just doesn’t have the dev power to keep up with the big two. They’re plugging along, but they’re behind.
I found I really like the MacOS and Blackbox hybrid Gnome 3 has going on. It’s super keyboard centric, and I like that. I couldn’t stand Gnome 2, but I really like 3… Now that it’s stable and video drivers have matured. If I could get all the features I like about Gnome 3 in something lightweight, preferably Qt based, I would probably switch to that. There just isn’t anything out there with the same feature set and paradigm.
I’d like to own my own servers. It’s just expensive.
Flatland_Spider,
Compared to shared hosting, yes, it’s more expensive. For smaller low-usage servers it might not be worth it, but on the other hand, if you’re looking at mid to high specs and/or hosting many VMs, I’ve found it significantly cheaper to do yourself in the long run. The monthly fees for high-end dedicated servers are really high, but for about the same price as cheaper dedicated hosting, I can have my own higher-end server running several client VMs even better than the cheap dedicated hardware. Granted, you’ve got to pay the hardware costs up front, but compared to high rental costs eating profits every month, it’s a bargain IMHO.
Everyone’s different though, some prefer to rent.
Oh, yeah. At a point it makes total sense. People are very proud of their servers.
I’m just not to the point where that makes sense. I’ve done the math, and it’s close. Just not quite there, and a little $6 FreeBSD VPS for dev work and shared hosting makes the most sense.
If I could get the thing to make some money, I probably would make the switch. 🙂
As with everything in computing, it comes down to how much time and effort you’re willing to put into ops work. If you’re fine phoning up a datacenter technician to say “hey, take a new drive of capacity and form factor X out of my pile and put it in slot Y of machine Z”, or building a datacenter and hiring people to do that for you, then owning your own hardware is by far the best computing bang for your buck. With some of the new AMD EPYC chips you can get 64c/128t and hundreds of gigabytes of RAM per U of rack height, so you could run quite a large business out of a 40U rack of them… but at the same time that requires competent operations people both at the datacenter and wherever your offices are. One thing I have learned in my time in industry is that those people are finite. Computing as an industry is growing faster than the supply of sysadmins, and a CS degree is thoroughly useless in preparing people for being a sysadmin. That’s what drives businesses to use managed services like shared hosting or public cloud or a desktop MSP.
tidux,
I’ll try not to take that personally, haha.
Indeed, hiring a sysadmin is far more expensive than hosting, and a business with no sysadmin would clearly be at a loss. But on the other hand, if you have a business that already employs tech staff, then it tilts back to ownership. Assuming you’ve got qualified tech staff in-house, there can be other perks to the DIY approach, such as not being restricted in what you can do or install. I’ve had to deal with GoDaddy recently and their hosting environment is extremely restrictive for power users, since customers don’t have root access on VMs and a lot of the standard Linux tools are butchered or missing. Consequently, someone who’s proficient in Linux can actually be less proficient on a hosted service than on their own installation. It’s worth noting that not all hosting providers are this restrictive – I for one don’t impose artificial restrictions on my clients’ packages – but the cheap providers tend to be the most restrictive, even limiting customers to a fixed set of applications, since it minimizes their support costs.
“The author would have avoided a lot of trouble by using a full DE like Gnome or KDE.”
Or start enough of Gnome when running a minimalist setup – my laptop i3 config includes:
exec --no-startup-id gnome-flashback
… which is (at present) enough to get Gnomish things working.
Good to know.
While I’m generally an advocate for this stuff – I’m a cross-platform user, and also for the disaggregation of hardware, making stuff ubiquitous and utilitarian – I’m not a fan of the solutions here. For example, using Krita is basically a political statement.
If you are going to make software free it has to be free like the air we breathe; humans do not get to pick and choose which other humans get stuff free and who must pay – that’s discrimination. I have no problem requiring commercial entities to pay while individuals get stuff free, but if you make a solution free to one human user it has to be free to all human users without discrimination. And I won’t be drawn into a debate about the humanity of corporations; regardless of whether they are the UN, WWF, BP or Exxon, they are all commercial!
What are you talking about? Krita is a wonderful piece of software.
If there’s one thing that resonated with me is this line:
“My physical computer now feels entirely replaceable”
I’d been upgrading my PC over the years, so my setup was a RAID mirror running Windows 10.
I went through a few hard drive replacements, and the RAID was amazing.
One day I had the bright idea of trying to dual boot Ubuntu. I ended up messing up my Windows partition. I even tried a Windows boot recovery reset and it didn’t work. The licensing service wouldn’t start up. That really annoyed me. I tried to fix it for a bit but gave up. Probably the service permissions got screwed up.
But of course… herein lies the downside of a RAID mirror: if you screw up, it screws up both hard drives.
But it wasn’t my backup strategy… for that I had my NAS.
Anyways, I decided to start from scratch. I had backed up the files I needed to my NAS.
I started clean.
1 HD: Ubuntu
1 HD: Windows 10
What I was blown away by was that I didn’t really have a lot of local stuff to worry about. I didn’t lose anything. Sure, I had the huge NAS backup. But the reality was most of my work in Windows 10 was stored on OneDrive, which I managed to get synced onto Ubuntu. Really the only thing I needed from my old setup were a few VMs.
Probably my biggest surprise was gaming. I had assumed I’d need to boot to Windows for games.
I’m way out of date. I hadn’t played with linux in years. I installed steam. It installed flawlessly and looked the same as in Windows. I’d say about half my game library was Linux supported and all the ones I’ve tested worked fine.
Was Ubuntu painless? Nope, from the first boot I had to play with things. My Nvidia graphics card wouldn’t let me log in. I had to google how to install other drivers before it would allow me to log in. Even the install was silly. Windows 10 on a fresh install let me ‘pick’ a hard drive to install on. Ubuntu only wanted to install on the first hard drive; anything else and I had to go into the advanced partitioning. I found that a bit silly. Gnome tricked me into thinking it supported OneDrive, as you could enter your online Microsoft credentials to sync things. But I guess OneDrive is not part of that. Some quick googling got me a syncing tool up and running pretty quickly. Always lots of googling to solve issues, but I’ve managed so far.
GOG wasn’t as good an experience. The GOG client doesn’t work for me.
But I could easily log onto the GOG store and download Linux versions of a few of my games. Again, about 50% had an easy-to-install Linux version.
I like the article’s idea of storing dotfiles in Git. I’m not sure how much I need that, but it’s definitely something to consider. At the moment, I really haven’t customized things to any great degree that I’d care if they got reset.
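One common way to do this (a sketch only – the `cfg` name is a made-up alias, and the demo sandboxes $HOME so it won’t touch your real dotfiles) is a bare git repo with the work tree pointed at your home directory, so no symlinks are needed:

```shell
# Keep dotfiles in a bare git repo whose work tree is $HOME.
export HOME="$(mktemp -d)"   # sandboxed HOME, just for the demo

git init --bare "$HOME/.cfg"
# "cfg" is an arbitrary alias; put it in your shell rc for real use.
cfg() { git --git-dir="$HOME/.cfg" --work-tree="$HOME" "$@"; }
cfg config status.showUntrackedFiles no   # keep `cfg status` quiet

# Track a dotfile like any other git-managed file.
printf 'set -o vi\n' > "$HOME/.bashrc"
cfg add "$HOME/.bashrc"
cfg -c user.name=demo -c user.email=demo@example.com commit -m "track .bashrc"
cfg ls-files   # shows .bashrc
```

On a new machine you’d clone the bare repo to `~/.cfg` and run `cfg checkout` to restore everything in place.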
Even Windows-only software is ‘less essential’ than it used to be. Like, I thought I’d miss MS Office. The reality is, I really don’t use it except for work… and I have my work laptop for that. I’m online (G Suite) for pretty much everything else, and most places these days want PDFs for documents, not Word files directly. I uninstalled the mail client that came with Ubuntu because, well… all my email is online now as well.