Mark Shuttleworth, dropping a bombshell on a boring Wednesday:
We are wrapping up an excellent quarter and an excellent year for the company, with performance in many teams and products that we can be proud of. As we head into the new fiscal year, it’s appropriate to reassess each of our initiatives. I’m writing to let you know that we will end our investment in Unity8, the phone and convergence shell. We will shift our default Ubuntu desktop back to GNOME for Ubuntu 18.04 LTS.
[…]
I took the view that, if convergence was the future and we could deliver it as free software, that would be widely appreciated both in the free software community and in the technology industry, where there is substantial frustration with the existing, closed, alternatives available to manufacturers. I was wrong on both counts. In the community, our efforts were seen as fragmentation, not innovation. And industry has not rallied to the possibility, instead taking a ‘better the devil you know’ approach to those form factors, or investing in home-grown platforms. What the Unity8 team has delivered so far is beautiful, usable and solid, but I respect that markets, and community, ultimately decide which products grow and which disappear.
That just happened.
I hated Unity from the beginning, however many years ago that was. It even partly drove me away (back to Windows, then OSX). I’m very happy to hear they are going back to Gnome!
I should say why I disliked it: mostly because it was ugly, the UX paradigms were half-baked, and it was hard to navigate compared to Gnome’s simple menus. I have similar issues with Windows 10 (actually, MS Windows has been creeping toward greater UX complexity in the name of simplicity for decades – icky! Don’t get me started on their overly complicated networking wizard garbage).
I, on the other hand, loved Gnome 2 but detest anything Gnome 3 (including GTK3).
I hated Gnome 2, used KDE till Gnome 3 stabilized, and have been using it since then.
It won’t make any difference.
Depends on what you mean by “make a difference” – I used Linux happily with Ubuntu and Gnome for a couple of years. Market share wasn’t the important thing – the user experience was. Unity wasn’t as good as Gnome (I think it was Gnome 3, I’m not sure).
This was a long time ago now, probably more than 10 years ago, and things have changed. At this point I’d be more excited by something that included the Android Play store. Back then the thing just needed a web browser and Flash Player. These days we need some app store (though there aren’t any great ones for desktop yet, aside from Steam for games).
I love the simplicity and aesthetics of Unity, but Gnome Shell as of today seems to be quite nice also – so I don’t mind switching.
Edited 2017-04-06 21:12 UTC
Wow, if I were on the Unity development team I would be grumpy 🙂
Too bad he didn’t see this (to me, at least, fairly obvious) conclusion from the start. This was the most in-your-face visible area, but they are generally pretty good at the fragmentation game. Now one could start wondering what they could have achieved had those resources been spent on better things.
Maybe they realized that it’s better to be part of the open source community than to just ride on its back. They should find a better way to differentiate themselves. Gnome and KDE are far better and more stable options right now.
p.s. But I kinda liked the idea of Unity’s interface. People laugh at me for saying that, but I do.
My significant other prefers Unity over gnome-shell, mainly because all instances of running applications are persistently visible in the dock. Quickly switching between 4 similar spreadsheets is not possible with gnome-shell; one needs to constantly switch to overview mode and back.
Well, I think she will have to live with Gnome Classic Mode. But even then, I think I’ll switch her computer from Ubuntu to Fedora. Unity was the only differentiator.
One of the Pros (and, honestly, Cons) of gnome-shell is the extension support.
You might find dash-to-panel to meet your needs. Persistent app icons, thumbnail app selector on hover. While I’m happy using gnome-shell as-is for the most part, I find dash-to-panel very helpful in finding windows I know I have somewhere, but not necessarily on this workspace. (plus sometimes an at-a-glance “is it running” is quite handy)
https://extensions.gnome.org/extension/1160/dash-to-panel/
Their jab at the phone market failed, so this is the natural follow-up. It makes no sense to develop a semi-proprietary shell for the desktop in the Linux world.
They realized they should just ride Red Hat’s coattails.
Upstart is dead in favor of systemd, and now Unity is dead in favor of Gnome. They don’t say what will happen with Mir, but I’m assuming that’s dead in favor of Wayland.
Apparently, they’re doing pretty well in the traditional Linux roles like servers and embedded devices. It’s unfortunate, but people seem to like them.
The only use of Mir was Unity, so I think that’s a safe assumption. And it’s the happiest part of this announcement for me: no pointless fragmentation. Canonical was an active participant in Wayland, then just stopped and announced Mir, giving all sorts of incorrect reasons why they couldn’t use Wayland. It was pretty insulting to the Wayland community. It was full of “feature Q is not possible with Wayland”, to which everyone replied, “No, that’s perfectly possible, just not implemented in the reference implementation, Weston.” They really meant that they couldn’t use Weston. I felt kind of embarrassed for them.
You do realize that Upstart preceded systemd and was actually one of the reasons behind its creation, don’t you?
I’m not an Ubuntu user (it’s openSUSE all the way for me), but I never understood all the hate toward them, even from Debian people back when Debian was not on anyone’s radar as the suggested choice for newbies.
It always smelled like a kind of prejudice and envy because, for some time, Ubuntu was the only distribution really biased toward making Linux easy for newcomers.
All of us who love Linux systems should recognize that the Ubuntu guys not only identified the problems at hand but also worked hard to fix them, even if they did not get it right sometimes.
Have a pint Ubuntu guys, best wishes from an openSUSE user.
Ah, that’s easy! Canonical has always had a weird attitude towards the rest of the Linux ecosystem, including its parent distro Debian: a strong NIH syndrome that leads to an unwillingness to cooperate at any level with others, proposing stupid things like CLAs that benefit only one side, and pushing unwanted “features” such as Mir, Unity, Upstart, libindicator, etc., even when the alternatives are tried and tested in the field and thus clearly better. They try their best to distance themselves from the “Linux brand” even though that’s what they made their fortune on.
Its users also kind of deserve some of the blame, due to naivety to the point of crediting Ubuntu and Canonical for good stuff that they clearly didn’t do, such as better hardware support – that one fuckin’ KILLS ME every time! – and don’t get me started on those articles on Linux sites that supposedly show things like an “amazing FTP client for Ubuntu”, and when you read the article, it is talking about plain old FileZilla, as if it were some sort of Ubuntu thing.
I do agree that when Ubuntu first came out, there was room for a distro tailored for newbies, with good hardware detection (even if it was essentially Debian Testing and standard GNOME repackaged) but that was a long time ago. Ubuntu is no longer above the others in this regard and it hasn’t been for a long time despite what the “experts” say.
I am happy that they managed to carve out a good niche for themselves in the cloud and IoT businesses, as they actually do a lot of good work there and deserve the credit for it, despite their weird attitude towards open source in general. But I for one will not miss some of the garbage and attrition that they unnecessarily brought to the table…
This. If there was ever a case of a new implementation of an existing spec (KStatusNotifierItem) hurting the original implementation, it’s libindicator.
For the longest time, I worked very hard to purge any DBus-based tray icons from my system because the effects of libindicator’s crippled API wrapper were all I knew.
(The protocol doesn’t force you to make your primary action a menu. That’s libindicator. KDE apps actually do what makes sense… which is often exactly what the old XEmbed tray icons did. …and because Unity bound primary action to Left AND Right and secondary to Middle, most developers didn’t know there WAS a secondary action and didn’t hook one up.)
Hmm, the CLA episode was clearly the result of them overestimating their position and underestimating the compromises developers were willing to accept. Really badly done from any perspective; blame the lawyers and PHBs for that.
About Upstart, I already said what I wanted: it was kind of needed at that time but not very well conceived. Almost the same could be said about Mir and Unity; the first iterations of Wayland and Gnome were really bad.
You see, that is what I’m talking about. They never claimed to have invented things that already existed – at least, not that I remember – but they applied the available patches (many times hurriedly) before most others would, to try to fix the issues at hand. They also tried to make the desktop beautiful, which is certainly important to attract regular users. People started to put Ubuntu on their computers when they would not give any other Linux a try. My entire university installed a version of Ubuntu on its public computers.
So, why and where did the hate come from? Someone without enough knowledge attributed to Ubuntu things that were not really developed by them, and the actual developers felt they were not recognized and praised for their hard efforts. But most distributions collect patches and apply them without crediting their origin in the release notes, just like Ubuntu does – why are developers not crying foul about that? Because Ubuntu was the first distro to really break in and attract a significant wave of newcomers, and the actual developers felt like the distros they worked on were passed over. They started to look at Ubuntu with a mix of prejudice and envy instead of pondering the human habit of misplacing credit. Do they need better examples than our politicians and regular voters when figuring out this huge problem of attribution ignorance? I will repeat what my father said to me (miss you!!): do the right thing not because of others but because of you, and remember that if something goes wrong.
They tried and failed many times and, with the exception of the horrible CLA episode, that was not a bad thing, because many people started to talk about Linux and many developers started to scratch their heads and say “Oh f., let’s do it right.”
Well, you were asking where the bad blood comes from; I merely stated some of the reasons though I am sure that there were plenty of others.
I remember one instance in particular when Mark tried a weird power play, offering financial resources, manpower and whatnot to Debian developers *if and only if* they agreed to align their release schedule with Ubuntu’s fast six-month release schedule (it should be mentioned that Canonical mostly repackages Debian Testing, and only a small subset of packages, and only on x86 – and perhaps ARM now? I dunno – whereas Debian still handles a bunch of architectures to this day and literally thousands of packages).
It should also be noted that both Unity and Mir are the result of their unwillingness to cooperate with others and/or not getting their way on major projects, GNOME and Wayland respectively. I can kind of understand it with GNOME, as they can be arseholes sometimes. But their excuses for abandoning Wayland and creating Mir, citing “problems” that did not exist, were pathetic at best.
Of course, Canonical is no stranger to controversy. It has been involved in very public licensing disputes with the Free Software Foundation. Its decision to include Amazon ads in Ubuntu’s menu system was seen as a crass attempt to cash in on users. And, there have been concerns over the company’s treatment of private data, with users’ search information transmitted to its corporate servers.
They rarely push useful patches upstream.
I feel that this – somewhat old, 2014-ish – article summarizes the issues with Canonical and Ubuntu better than I could: https://www.turnkeylinux.org/blog/ubuntu-not-invented-here-syndrome
I am not disputing that they made an important contribution in terms of bringing Linux to the masses, and that cannot be overstated. Ubuntu Server is a decent product, although I sometimes fail to see the difference between running it and plain old Debian.
But Canonical and Ubuntu have always had a troubling relationship with their brethren, only playing nice when called out on it, and have a weird, unhealthy attitude towards open source in general. Some people think it is good that they try so hard to differentiate themselves no matter what, and that’s fine; I don’t, and I think they should give something back to the pool they take their resources from.
Unity is slightly understandable. GNOME3 went to a really weird place that didn’t make a whole lot of sense.
I assumed Mir existed because they needed something they could dual-license, which would allow them to sell a proprietary version to cellphone companies. Wayland wasn’t going to play ball, so Mir.
Canonical’s biggest problem is that they haven’t figured out a way to make money. How many of those IoT and cloud installs are buying support contracts for Ubuntu?
Canonical isn’t very good at actually putting out a coherent system.
Debian users were mad Canonical didn’t contribute more to the Debian project. They were hoping Canonical would be their Red Hat, but it was a very one-sided relationship.
Ubuntu is good for a single-user desktop computer, but there are better options out there for servers and the like. Ubuntu was one of the first Linux distros I really used as a full-time desktop, but I got tired of version upgrades breaking things like clockwork every 6 months. Back in the day, they did do some good work on the desktop front, but other distros have caught up and surpassed them, like Fedora since version 10.
For servers, I’m not suggesting Debian; I dislike Debian as well. I would recommend FreeBSD, OpenBSD, CentOS/RHEL/SL, Gentoo, or Funtoo.
I run CentOS, Fedora (workstations and prototypes), and Ubuntu alongside each other at work. It never fails that it takes a lot more effort to get the Ubuntu systems working compared to CentOS and Fedora. Red Hat sweats the details, and their distros are easier and more coherent systems.
Let’s look at Upstart. Red Hat used it on RHEL 5 and 6, and Fedora used it in the same timeframe. It was an adequate init system on those distros, and RH wrote some very nice tools to work with it.
On Ubuntu, the implementation was a mess. RH has chkconfig to list runlevels and enable services; Ubuntu had three or four different ways to do the same thing, and not all of them worked for every service. They switched to systemd, and systemd forced a standard mechanism for managing services on Ubuntu, which cleaned up the mess that was Ubuntu init scripts. I don’t like systemd, but this is one good thing it did.
Red Hat should get more props than Canonical. The amount of engineering effort and work hours they put into the FOSS ecosystem is staggering. They deserve much more credit than Canonical does, and they have put much more money into FOSS than just their core business.
I have my own gripes about SUSE, mainly centered around YaST, so don’t feel like your favorite distro is left out. I just don’t have to support it in production. Although, I am watching it to see how Leap and the new Rolling release shake out. If Leap is painless to upgrade versions, I might start suggesting that for single user desktops.
I never understood why people were intensely opposed to Unity. While I would not classify it as great, it is certainly a good desktop environment: programs are easy to launch, tasks are easy to switch between, the file manager is pretty much what you would expect from a consumer oriented operating system. There may be a shortage of options to customize it, but customization generally gets in the way of usability anyhow (e.g. you cannot be guaranteed to have a consistent environment across systems). As for it being moderately different from other desktop environments, all that I can say is: try to understand the underlying principles of how things work, and you will probably find that your skills will transfer across desktop environments with little modification. There may be a few exceptions to the rule, but Unity is not radical enough to fit that category.
So pretty much Ubuntu realized that “reinventing all the things” is a great way to sap your developer pool.
Good riddance. I’m not a huge Gnome fan, but let’s unify already.
I guess this also means Mir is finally dead.
Their niche is repackaging Debian Testing. They aren’t an engineering house like Red Hat or SUSE, and it looks like they’ve finally accepted that.
Maybe now they’ll spend more time creating a coherent system rather than the mish-mash of random projects that are only partially integrated.
Except GNOME Shell is also “reinventing all the things”
Sure, gnome shell is re-inventing all the things.. and I think they make a lot of weird and super questionable design choices… however, at least gnome runs on multiple distros. Unity generally only runs on Ubuntu… fracturing the ecosystem.
Gnome as the default desktop, but allow users to change it. Easy.
Gnome might make some unpopular and odd choices, but they have some kind of vision they’re working towards.
KDE, on the other hand, seems unsure where to go and tries to be a kitchen-sink desktop. Not saying that’s a bad thing, but Gnome 3 really nails it as a desktop for someone not very savvy with computers, or Linux in particular.
True. I was personally against a lot of Gnome 3’s UX choices early on, however, things seemed to work themselves out in the long run when they were polished.
There have been a lot of odd bumps in the Gnome road, however. Things like the Delete key not doing anything in the file manager (you had to press Alt+Delete so people wouldn’t “accidentally” delete things) took an act of congress and scores of users complaining in bug tickets (and submitting multiple patches) to get addressed.
The ugly ‘wobble-out legacy icon tray’ in the bottom left haunts me to this day (thank god for the ‘TopIcons’ extension).
The fixed top Gnome bar looks good, but it was a slightly odd choice since most displays are widescreen. (I’d rather give up horizontal space before vertical.)
GNOME Shell is still ugly, with its wasted space in the top bar, and the constant switching to the overview just to switch applications.
I hope it will improve over time.
I am still hoping that Ubuntu will stick to its user interface design, rename Unity and base it on GNOME technology. It is plain and simple and the only differentiator from the rest of the Linux desktops. KDE is overloaded with unused features.
Yeah, but everyone participated in it. And then, when everyone complained, forked, and split off to recreate Gnome 2 in GTK 2/3 (MATE/Cinnamon), they finally capitulated and created Gnome Classic.
No mention of Mir? I assume they’re also ditching it?
To be honest, Mir, Upstart, Unity, the Canonical CLA, and weird GTK patches were what moved me to Fedora years ago. I’m more than happy here, and unlikely to return to Ubuntu, but it is nice to see progress toward resolving their weirdness.
If this were last Saturday, I’d be doing a double-take. Of all the announcements I might have expected, this was not one of them!
I don’t know what to think. On one hand, it was refreshing to see them try to develop something new. On the other hand, for me at least, it always seemed unpolished and half-baked. It felt as if they never were able to hit their stride with Unity, and it showed. Truth be told, Unity and Windows 10 give me the same vibe: something that could be great but is too unpolished and desperate to monetize (Unity even included online search and ads before Microsoft did).
So the whole thing was an attempt to port the Linux desktop to phones? Gotta say, there is a reason Android succeeded; and it was by building a system from the ground up that would never need package managers and command line interfaces. Trying to get desktop Linux on phones never made any sense in the first place. It’s more likely we’ll see a desktop version of Android become popular than any of the current desktop Linux distros.
I’d love to run Ubuntu Mobile on my phone; they just don’t support my phone’s hardware. All my phones are hand-me-downs, which means they’re at the very least 2-3 years older than current models. All the more reason why continued support for an Ubuntu Mobile-like OS would be needed for older phones, just like older desktops benefit from the various Unices.
I absolutely loathe Android: the permissions, the lack of security updates (more a phone-manufacturer limitation than Google’s, but still). Yes, Android does use a package manager, which sucks for advanced usage. As for Android as a better desktop? Wow, someone’s been hitting the crack pipe a little too much today.
Better or not, it has an actual chance. Desktop Linux does not. The store is not a package manager; package managers require dependencies, which don’t exist in the Android world because 3rd-party software was done correctly. This is capitalism; your personal feelings are irrelevant. Start championing the complaints of those who left Linux if you want it to succeed.
I wasn’t referring to the store. I think you have “package manager” confused with something else entirely. A package manager allows a user to install a precompiled package (in most instances; source files in less common instances), while being able to manage said installed package for possible upgrades, uninstalls, or what have you. The package manager does not – and I’ll repeat, does not – need to do dependency tracking; some don’t. Apple’s comes to mind, some RPM-based package managers come to mind, and finally Android’s APK (Android Package Kit) comes to mind.
Android still uses a package manager.
Here is an example of an installed app that is not managed by a package manager but needs to be updated:
1) Download new app
2) Move app archive to a certain root point
3) open command line if not already done
4) unzip/unrar/untar the newly moved app archive
There you go. You just updated your previous app with the new version, without a package manager. That is what life is like without a “package manager”.
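The four steps above can be sketched in code; this is a rough illustration only, and every path and file name here is invented:

```python
# Hypothetical sketch of a manual, package-manager-free "update".
# All paths and names are made up for illustration.
import tarfile, shutil, pathlib

root = pathlib.Path("/tmp/manual-update-demo")
shutil.rmtree(root, ignore_errors=True)
(root / "app").mkdir(parents=True)

# 1) "Download" the new version (simulated with a locally built archive)
staging = root / "staging"
staging.mkdir()
(staging / "myapp.bin").write_text("v2")
with tarfile.open(root / "myapp-2.0.tar.gz", "w:gz") as tar:
    tar.add(staging / "myapp.bin", arcname="myapp.bin")

# 2) Move the archive into the install root
shutil.move(str(root / "myapp-2.0.tar.gz"), str(root / "app"))

# 3) + 4) Unpack it over the old files -- the manual equivalent of an upgrade
with tarfile.open(root / "app" / "myapp-2.0.tar.gz") as tar:
    tar.extractall(root / "app")

# Nothing recorded which files belong to the app, so nothing can
# cleanly upgrade or uninstall it later.
print((root / "app" / "myapp.bin").read_text())  # v2
```

Note what’s missing: no record of which files were written, so there is nothing a later upgrade or uninstall can consult.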
With those steps, you just alienated 98% of the population. But then, maybe you don’t care… I’m not sure what point you’re trying to make.
As a semi-advanced user myself, if I have to format and reinstall the OS (or install the OS on a new computer), after the OS install is done, I’d rather just be able to click a button, and have all my apps and settings restored, just like they were before. The closest thing I’ve seen to this is Titanium Backup on Android. But that requires root, and a bit of fiddling. Personally, I don’t like fiddling just for the sake of it. If there’s any way I can avoid that shit, that would be the optimal solution.
WorknMan, that was just a poke at his or her anti-CLI stance. One could do all that from a file manager, without the need for the CLI.
The real task would be to uninstall a large app, with lots of files spread throughout the filesystem, without a package manager – especially after countless updates have been applied. I’ve done it many times, in the olden days. No fun at all.
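As a toy illustration of what a package manager buys you here – every name below is invented – a recorded manifest of installed files is all it takes to make that uninstall trivial:

```python
# Toy sketch: a manifest of owned files makes clean uninstalls trivial.
# All package and file names are invented for illustration.
import json, pathlib, tempfile

root = pathlib.Path(tempfile.mkdtemp())

def install(pkg, files):
    """Write the package's files and record every path in a manifest."""
    for rel, content in files.items():
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
    (root / f"{pkg}.manifest.json").write_text(json.dumps(sorted(files)))

def uninstall(pkg):
    """Remove exactly the files the manifest says the package owns."""
    manifest = root / f"{pkg}.manifest.json"
    for rel in json.loads(manifest.read_text()):
        (root / rel).unlink()
    manifest.unlink()

install("bigapp", {
    "bin/bigapp": "#!/bin/sh ...",
    "share/doc/bigapp.txt": "docs",
    "etc/bigapp.conf": "key=value",
})
uninstall("bigapp")
print(list(root.rglob("bigapp*")))  # [] -- no scattered leftovers to hunt down
```

Real package managers add versioning, checksums, and (often) dependency tracking on top, but the manifest is the part that saves you from hunting files by hand.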
I also get your point about new system re/installs. For Android, it’s a little easier to restore files, than for a desktop system. The ecosystem on the Android device is much smaller. Desktops require a more elaborate backup.
I just don’t trust my data on someone else’s server, though. So I’ll stick with the old-fashioned /home and app-installer backups.
Doesn’t necessarily have to be on somebody else’s servers – just let me do a ‘snapshot’ of my app settings on my own hard drive before reformatting.
Of course, I doubt any modern desktop OS is up to this task (except for ChromeOS, which is mainly web-based, so that doesn’t really count), but it’s a cool idea if anybody designs one in the future.
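That kind of local snapshot isn’t hard to sketch; here is a hypothetical example (all paths invented, using a temp directory as a stand-in for a real home directory) with a plain tarball as the “snapshot”:

```python
# Hypothetical settings-snapshot sketch: archive config dirs to one local
# file before a reformat, then extract them into the fresh home directory.
import tarfile, pathlib, tempfile

# Stand-in for the real $HOME, with one made-up app's settings
home = pathlib.Path(tempfile.mkdtemp())
(home / ".config" / "someapp").mkdir(parents=True)
(home / ".config" / "someapp" / "settings.ini").write_text("theme=dark")

# Snapshot: archive the config tree into a single local file
snapshot = pathlib.Path(tempfile.mkdtemp()) / "settings-snapshot.tar.gz"
with tarfile.open(snapshot, "w:gz") as tar:
    tar.add(home / ".config", arcname=".config")  # add more dotdirs as needed

# ...reformat/reinstall happens...

# Restore: extract the snapshot into the freshly installed home
fresh_home = pathlib.Path(tempfile.mkdtemp())
with tarfile.open(snapshot) as tar:
    tar.extractall(fresh_home)
print((fresh_home / ".config" / "someapp" / "settings.ini").read_text())
```

The hard part in practice isn’t the archiving; it’s that desktop apps scatter state outside `~/.config`, which is exactly why nobody has shipped the one-click version.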
He doesn’t care. The Linux community is a toxic community that believes Linux is already good enough and requires no change on their end, so they only listen to positive news. They are like American Republicans listening to news only from sources that parrot their own opinions, thus living in an echo chamber of positive reinforcement of their own viewpoints. They will think it is good enough, but the fact remains that if I go to any Linux distro help forum, chances are the first response will be “open the command line” and/or “update these config files” – which will always be unacceptable compared to the other systems on the market.
As for capitalism. Again you are confusing things. Marketing is not dependent on capitalism. Marketing has been around eons before capitalism.
I would like to know in what reality matrix one would find an Android UX doing better as a desktop system vs any of the different Desktop Environments out there?
There are maybe 20% (number pulled out of thin air) of apps that even support proper scaling/rotation on larger screens.
Well, it might not be android, but ChromeOS is getting support for the play store and Android applications.
In addition, the changes made with Nougat (including the 7.1.2 changes for the Pixel C) indicate that Google is making Android more capable of running as a laptop/desktop alternative.
It’s not going to be too long before there’s a google-backed alternative to http://www.jide.com/remixos
Especially given what Samsung has done with the S8 and the Dex.
What makes desktop linux, desktop linux? GNU userland? Something else?
Why did making desktop linux on phones not make sense?
I think a GNU userland and wayland compositor does make sense on a phone. Kind of what Nokia was trying to do at one point. I think with sufficient financial resources something like kde active could make sense on a phone. Would it overtake android or ios, well no it would not. But I would appreciate it.
Any one willing to lose millions and millions of dollars for my appreciation? Anyone? Bueller?
Bill Shooter of Bul,
There are times I’d find this useful too, but as a niche we are worthless to mass phone manufacturers, haha.
Canonical underfunded Ubuntu Touch development for years and underestimated just how much effort it takes to create a production-ready mobile/converged interface and device integrations.
They couldn’t sell it because it wasn’t ready to be sold.
Even if it was ready to be sold, would OEMs and customers line up behind an OS with a non-existent app ecosystem?
Just try to explain to Average Joe and Jane that the phone you’re trying to sell them doesn’t have official apps for WhatsApp, Viber, Tinder, Candy Crush, or some other game they like. It’s not 2008 anymore.
Even I, a person who uses smartphones mostly as internet devices, like to play a good game like Horizon Chase or Ridge Racer Slipstream once in a while.
Apple did not have an app ecosystem when they started, Symbian did.
I maintain the only reason Apple succeeded with the iPhone was timing, and the fact that they hit the US market first. Symbian was never big here because data plans cost too much, and people in the US (and I am referring to the average person, not the cutting-edge people who tend to visit this site) tend to be cheap and didn’t really have access to Symbian phones. Then along came the iPhone, which most Apple fans went out and bought; then the “normals” thought they were cool-looking and bought into them. They were different from the standard flip phones everyone had. So trendy people bought them, and still buy them. Then they spread to other countries (with mixed results). Hell, 3G video calling was supported practically everywhere except the US for years… yeah, video calling before FaceTime…
I agree that Nokia did indeed sell smartphones way before Apple, and there was an app store where you could go. The screens were small and lo-res, and navigation was based on buttons and keys, or resistive, stylus operated touch sensors at best, but the phones were very nice nonetheless.
Still, hardly anybody I know ever used a Symbian phone to connect to the internet or download apps from the store. Data access was ludicrously expensive, and wired sync to PCs was (very) cumbersome.
Apple’s greatest innovation, and its trump card, was forcing operators to sell the iPhone with a data plan. This is what turned the smartphone into the most popular device for accessing the web. The nice GUI, the capacitive sensor, and the high-res screen were icing on the cake.
Lobotomik,
+1!
Nokia doesn’t get enough credit.
IMHO the iPhone Apple originally introduced wasn’t that appealing on its own. The Cydia app store made it far more interesting than what it was capable of out of the box. Ironically, I think this helped Apple considerably, since Steve Jobs had not envisioned the iPhone supporting 3rd-party apps until he saw just how popular they were becoming with jailbroken iPhone users.
I was envious of the unlimited data plan, though; that really was something you could not get anywhere else at a time when data plans cost a fortune. My brother had it, and it really did make a difference in how he used his phone. Apple users were very fortunate to get that plan from AT&T, because the CEO of AT&T is on record saying it was a mistake for them in hindsight.
http://www.tomsguide.com/us/AT-T-Unlimited-Data-Randall-Stephenson-…
Apple succeeded because they were the first to introduce a smartphone that worked.
I owned a Sony P910i, Nokia E-series phones, Windows Mobile smartphones, and Palm PDAs. They were unreliable and the software was expensive. All of my smartphones would hang or crash; the worst were the Windows Mobile phones, which used to crash when making or receiving calls.
My Sony P910i was reliable at the basic stuff like calls and texts but unreliable for apps; my Nokia E61 was a good phone that was reliable at the basics, but apps were very, very slow and pretty crap.
Yes, there was video calling, but it was a blocky, unreliable mess. FaceTime is 1000x better than the video calling that came before it: it is seamless, reliable, and easy for any user.
The iPhone, when released, was very reliable; it never locked up, and it did things in a very natural way. On my P910 you used a fingernail or the stupid little stylus; on the E61 you used a directional nub/pad, which was stupidly crap, as was browsing the net or getting emails.
The iPhone was the first for me to offer a true desktop experience when browsing the net; it was fluid and easy to use. I never owned a Nokia Communicator, but I have a hunch it was the only other phone that could offer a decent browsing experience.
Emails, calendars, etc. were so much better on the iPhone. The only thing it didn’t have was apps, but that soon changed, and that in itself was so much better: the apps were a lot cheaper and much better than the crap they had on Handango, which was a slow and clunky experience, after making sure you got exactly the right app for your phone.
Whether you love or loathe the iPhone, it did force the industry to produce more reliable phones with a better user experience, which has played a major part in the current global change akin to the industrial revolution of the 1800s.
Most of the people I know do not use any of the apps you mentioned. They use their phones for communication and taking pictures. For those people, Ubuntu phone works great. I think Ubuntu on mobile only faces a challenge in situations where people are regarding mobile devices as portable app platforms rather than communication tools.
Grandparent comment mentioned WhatsApp.
That application alone has over a billion unique users every month, sending upwards of 30 billion unique messages.
I am willing to bet most of the people you know do have WhatsApp installed.
Without WhatsApp, any mobile operating system is not viable.
It is the single application that will kill BlackBerry OS 10, and was the major factor in USG’s migration from that platform to iOS.
So now the only open source platform to bring convergence across all devices is KDE where Plasma scales from small devices to large.
And I think Microsoft is the only closed-source platform still pursuing this; and even then, they didn't start until Windows Phone 7/Windows 8, long after KDE (KDE 4) had the ability to.
And yes, I've used KDE 4 in Netbook Mode and Desktop Mode; Plasma Active (targeting tablets and phones) is also still under development. I doubt they'll ever actually release a device, but they still have the potential, and without having to change the code to do so.
Canonical’s Unity, however, was more about Canonical than it was about the community. Same with Mir and several other efforts of a similar nature – Canonical just trying to take over instead of letting the community build around them. It’s one reason why Unity caused fragmentation instead of solidifying Canonical in that space as Shuttleworth had hoped.
Android is too, and they are very close. There are multiple efforts: two or three companies are trying to create an Android version suitable for the desktop, and Google itself is merging Chrome OS with Android and adding features to make Android more desktop- and tablet-friendly.
It’s all very bad news: Android has won, and Google will happily continue gathering tons of data to put ads in our faces…
True – Google is pursuing convergence that way with Android/Fuchsia/Magenta; however, IIRC they are more out in the open – you can download the code, build it yourself, etc.
Microsoft is the only one still doing a *closed* source convergence.
Now, KDE isn’t likely to take off that way; they’ll be a small player, assuming they release devices at all. Most likely KDE will be the enabler for open source folks to run an open source converged platform across devices if they so choose.
However, Google and Microsoft will battle it out for the phone/desktop space, and most likely Google will win. Microsoft has too many ties to legacy technologies that make it hard for them to break APIs in ways that would allow them to make the jump (see the whole WinRT debacle), while Google will be able to build off its massive success with Android.
KDE is superior at a technological level. They can customize it to their liking and improve it in a shorter timeframe than GNOME, while also having more maintainable code (admittedly not a guarantee).
More importantly, Qt is developed as an open source project by a separate company whose sole goal is improving it. Far more resources are poured into Qt’s progress, and its industry support is way higher (especially outside of Linux). Combine that with an arguably better architecture, as well as language, and you’re going to be far more attractive to developers.
If they’re going to do a course correction, I think switching to Qt would be a good move to avoid a lot of future technical debt.
Unity8 has a Qt foundation. There is definitely an ‘anti-Qt’ contingent inside Canonical.
I’d love to see more Qt. However, the dual license can be tricky. From what I understand (me, not a lawyer), you can ship commercial apps/systems by dynamically linking the Qt library portions that are LGPL, without needing to pay Qt license fees. Which means, again from my understanding, that if there is a feature in the library that is not LGPL, and you use that non-LGPL feature in a commercial release, you have to pay for the license.
How commercial Ubuntu really is, I don’t know. It’s like trying to understand Qt’s dual license.
I could be, and hope I am, completely wrong. The license is very difficult for a layman to understand. I’d like to hear from other commercial developers who use Qt; input would be appreciated.
AFAIK, there is nothing in the Qt libraries that is not LGPL licensed. There are no limitations on calling Qt from free or proprietary code.
The non-LGPL license is offered (I believe) to companies that want to package Qt code within their apps, and/or modify it while keeping the changes proprietary. How large and lucrative that market is, I don’t know, but I’d venture not much. Really, there are not very many commercial programs that openly use Qt.
Digia is the latest owner, but I wouldn’t be surprised if they kicked the bucket sometime. Still, if that happened, the code would remain forever free under the LGPL license.
Qt looks rather more elegant and easier to use than GTK, but for some reason GTK seems to have more traction. Maybe it is because GNOME looks rather more elegant and easier to use than KDE.
It has to do with history. GTK was built out of GIMP to become a competitor to Qt, out of fears by the FSF/GNU that Qt would be turned against the community should its owners want to close its source.
KDE extracted a nice protection agreement that survives any sale of Qt to the benefit of the community, thus negating any real or potential issue. However, FSF/GNU still promotes their own GTK suite despite it being massively inferior to Qt.
I prefer Qt; I think it’s very well designed and nice and simple to use.
However… I do think that GNOME development (assuming you use vanilla GNOME and no extensions) has been much more stable and reliable as a desktop, which isn’t surprising given the differing resources poured into each.
I don’t think Ubuntu is going to customize much at all. I think they are looking to invest as little as possible in a DE shell, which makes GNOME a great choice.
I still don’t have a smartphone but I was considering buying one, an Ubuntu one.
So, that’s that.
Any suggestions for turning an Android unit into something that I really own (i.e. one that isn’t going to transmit any and all of my data to whichever app maker feels like it)?
It’s hit or miss. Some devices are easier to migrate to Ubuntu Mobile or Sailfish than others. It really depends on how rootable your future phone really is.
I’ve bricked three phones in the past couple of years, and was only able to salvage one of them by JTAG’ing it.
Trenien,
I second Ibrahim, don’t go buy an arbitrary android phone and expect it to be easily rootable. I’ve tried rooting a couple devices, some successfully and others not. All were purportedly root-able on public forums.
My most recent tribulation was with a BLU phone: I had to make several attempts using a procedure that was for a non-identical model. To make things even scarier, Google has thrown a new wrench into the works with Android 5.1. In the process of rooting the phone, Google’s new FRP (“factory reset protection”) triggered, which is essentially a kill switch for the phone that Google controls:
http://www.androidauthority.com/lollipops-factory-reset-protection-…
During one attempt I bricked the phone, and resetting it resulted in being stuck at a prompt asking me to log in to a Google account to unlock the phone I was trying to root. The caveat was that I had not associated the phone with a Google account, so Google was holding my brand new phone hostage and I had no recourse other than possibly to send it back (and unlike computers, changing the bootloader on a phone to get root access sets a flag that voids the warranty). I had never experienced Google’s FRP kill switch before, so I was extremely fortunate to find a vulnerability and bypass Android FRP on this phone (don’t count on a vulnerability being present on newer phones, though).
In summation:
1. Only select phones that several other people have had an easy time rooting; just because someone says rooting is possible doesn’t mean it will be hassle-free.
2. As much as I understand that you may object to it, it would be prudent to register your phone with a Google account in order to be able to unlock it should you accidentally activate Android’s Factory Reset Protection kill switch like I did.
I share your disappointment with the lack of freedom and control owners get with the top three mobile platforms, but IMHO Android is still the least restrictive of these if you select the right phone. Many people don’t demand their own freedoms, and that’s their call, but what’s a shame is that their indifference also ends up affecting those of us who do want more freedom.
Rest in peace, Linux; it was nice to know you.
Sure, Unity was rotting because of all the mistakes Canonical made, but sadly it also was the only somewhat usable desktop Linux had to offer. GNOME 3, KDE – utter shit. MATE, XFCE, LXQt – conservative, useless shit that never left 1995.
I feel homeless now.
Agreed. I can’t believe people are cheering this. The last desktop I could conceivably recommend to a non-Linux geek is Unity. GNOME 2 back in the day, maybe, but please… GNOME 3???
I’m really shocked by most of the responses here.
Unity is an OS X clone, but people seem to love OS X. Admittedly it is a bit buggy, but I think the dislike is broader than that.
Regarding GNOME 2 as an alternative: I honestly haven’t used it in like three years, but it was god-awful back then. Is it really better, or are people just glad to be rid of Unity? If so, you are jumping from the frying pan into the fire, IMO.
Unity wasn’t a true OS X clone. It acted differently in many ways, and while I always put my dock on the left with auto-hide, most people don’t. The default layout was irritating to some.
As for GNOME 3, some of the really bad problems with it have been fixed. It still leaks memory pretty badly on some operating systems, but then again the developers only care about Linux (another bad thing). For Ubuntu, that won’t be a problem.
Last I knew, part of the desktop used a web view control under the hood, and like Microsoft’s Active Desktop from the ’90s, it leaks memory pretty badly.
I’ve bounced between GNOME 2, KDE 3, KDE 4, Window Maker, and Xfce 4 over the years, with some experiments with Étoilé and GNOME 3. I can tell you that all of them suck.
I use OS X a lot, and used Ubuntu in the past (I have SUSE on the only computer I have that runs Linux now, with the KDE desktop). I think Unity did quite a good job of bringing OS X user interface elements to X11 (as much as you can on X11), while maintaining their own look and feel. I remember the early days of GNOME 3 Shell, how different it was from GNOME 2 and how unreliable — there was a reason why Cinnamon, MATE and Unity appeared at that time — and having seen the screenshots of distros that still use it, I’m not hopeful for Ubuntu’s future on the desktop, unless Canonical puts *a lot* of effort into making GNOME look and work like Ubuntu.
It’s about time, but I prefer Ubuntu MATE. GNOME 3 is only marginally better than Unity, and far less usable than GNOME 2, which was and still is precisely perfect.
Same here. I like a DE that doesn’t stand in my way and isn’t a hog, but also isn’t so barebones as to hinder me, so I settled on MATE, for years now. I don’t ever use the app menu either, launching everything with Alt+F2, so I don’t much care how candy-store – or not – it looks. I never especially liked Unity, so I don’t really care what happens with it. However, lots of people liked it, and it seemed to manage at least some semblance of a mainstream Linux DE, but you always have to keep in mind that Linux and FOSS have always been about choice, so there’ll never be just one DE. Unfortunately that also means that sometimes something dies and disappears.
It looks like in the Linux world, when a corporate entity goes a different way than Red Hat, sooner or later they will fail. A few examples: video acceleration (Xgl vs. AIGLX), security (AppArmor vs. SELinux), init systems (anything vs. systemd), desktops (Unity vs. GNOME Shell), and so on.
Myself, I am in the camp hoping for both Unity and GNOME Shell to die a horrible death, but while the former seems to be on that path, the latter has some strong life support.
Init systems are not that straightforward. Upstart died for two reasons: one, the Ubuntu CLA, which companies could not allow their staff to sign; two, it attempted to trace which processes had been started by services using ptrace, which broke more services than it tracked.
There are only two cgroup/namespace-using init systems: systemd and OpenRC. Cgroups are the only way to effectively track what has been started by a service under Linux.
The elephant in the room is that most init systems on Linux were:
1) written for an OS that no longer exists. Sysvinit was written for System V and presumes PID values are never recycled.
2) written strictly against POSIX-defined functions – like runit – on the idea of being cross-platform, only to end up not supporting how Linux actually works. Linux is not 100 percent POSIX compatible, and even on a fully POSIX-compatible system there are not enough POSIX functions to properly track what started what; the POSIX standard is based on what the operating systems had in common.
So once you strip away all the init systems that cannot by design work, you are left with OpenRC, which is not functionally complete, and systemd. Now, if there were more real functional choice, things would be different. We will not get functional choice while people keep running back to broken designs.
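To make the cgroup-tracking point concrete, here is a minimal sketch of a systemd service unit (the unit and binary names are hypothetical, purely for illustration). Because systemd places every service in its own control group, any process the service spawns – even a double-forking daemon that a PID-based init would lose track of – stays attributed to the unit:

```ini
# hypothetical-daemon.service -- illustrative sketch, not from the thread
[Unit]
Description=Example daemon supervised via its control group

[Service]
ExecStart=/usr/bin/hypothetical-daemon
# Children of the daemon remain in this unit's cgroup even after
# double-forking; "systemctl status hypothetical-daemon" lists every
# PID in it, and stopping the unit signals the whole group
# (KillMode=control-group is systemd's default, shown here explicitly).
KillMode=control-group

[Install]
WantedBy=multi-user.target
```

This is exactly what ptrace-based tracing in Upstart tried and failed to do reliably: the kernel, via cgroup membership, answers "which service started this PID?" without interfering with the processes themselves.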
https://en.wikipedia.org/wiki/Xgl is a good read. Xgl had known problems before AIGLX appeared, and Xgl being developed as a closed project was a dooming factor for open source driver development – and this was before AIGLX. Please note that Xgl’s fate was set before Red Hat wrote the first line of AIGLX; AIGLX was just Red Hat being responsive – “hey, this is broken; OK, we will put forward a possible fix.”
I will not say Red Hat is good in all cases; they are not. But you do better when you don’t have something like the Ubuntu CLAs. Red Hat did away with their CLA on Fedora and CentOS so that people could submit; instead, they demand that all submissions be either under a proper open source license or, if unlicensed, that you legally give Red Hat the right to license them under MIT.
A Contributor License Agreement (CLA) is a very good way to block yourself from getting developers.
https://www.ubuntu.com/legal/contributors
There is a nightmare here. People working for the USA government are required to release their source code under a public license and can never own the source code.
(b) To the maximum extent permitted by the relevant law,
You grant to Us a perpetual, worldwide, non-exclusive,
transferable, royalty-free, irrevocable license under the
Copyright covering the Contribution, with the right to
sublicense such rights through multiple tiers of
sublicensees, to reproduce, modify, display, perform and
distribute the Contribution as part of the Material; provided
that this license is conditioned upon compliance with
Section 2.3.
This is out of the Ubuntu contributor agreement. There are many parties who cannot agree to this.
So Ubuntu is on the road to doom. The Ubuntu CLA needs to change to be like the Fedora solution.
There are many corporate entities going a different way from Red Hat – HP’s support of Debian, for example. Debian does not seem to be under any long-term threat from Red Hat, and on package management Debian has gone a different way from Red Hat pretty much forever.
There are quite a few corporate entities that go different ways from Red Hat and do quite well. Most don’t have stupid contributor agreements whose legal terms lock people out of being able to contribute.
Red Hat has been the best example of a company-backed distro that managed to stay relevant, and Debian is the very best example of a real community-made distro that has stayed relevant since its inception. Other distros, while some have been around for a long time, tried to replicate one way or the other, but in the end, in my book, these two are the ones you can – and always could – rely on.
For years I was happily tooling along in the default desktop paradigm of the late ’80s and early ’90s. Then one day Microsoft, Ubuntu, GNOME, and KDE – the desktop interfaces I used the most – decided they would drop acid and make a new way to interface with the desktop. GNOME Shell was the most God-awful interface I ever used. KDE 4 appeared to be designed, like Ubuntu’s Unity, to give a unified interface for phone, tablet and desktop/laptop. What I want to know is: WHO THE HELL ASKED FOR THIS?? Was it you, or you, or perhaps the guy with red curly hair standing by the light pole? Were you the ones who asked for this? I didn’t, and no one I know did. The only popular desktop that seems to have stayed sane is the Mac. I now have a Mac. I also have a Linux laptop (from System76), a Windows 10 laptop, a Chromebook, and many mobile devices. Yay – so far, Chromebooks have not taken acid. On the Linux laptop, I have a choice of other sane window managers/desktops. On Windows 10, I limp by, keeping shortcuts to the REAL Control Panel, etc. nearby. To me, one big lesson in all of this is… DON’T LET DEVELOPERS DESIGN THINGS! For the most part, they are awful at it. I am a developer, but I am smart enough to let non-developers have a bigger say in the design of interfaces.
I imagine once the voice interface takes root, this may all be moot. For now, STOP IT! Put that mouse down, walk away from the keyboard, and don’t come back until the drugs wear off. Geesh!
The sad thing is that, while it might have been the developers’ fault on GNU/Linux (since no one else cares about it), I’d bet on it being the upper level marketing execs and designers at Microsoft who thought this was a good idea. Fwiw, I agree completely. A unified interface just doesn’t make any sense, since the input requirements between keyboard/mouse and touch-based gestures are completely different from the start. They could have still served the same content, used the same core codebase, and just layered a proper user interface for the desired input method on top. Apparently though, that wasn’t shiny enough.
A good OS is like a good democracy: if you want it to work half respectably, different groups must have a proportional say. Too much power in developers’ hands and GNOME 3 and KDE 4 happen; too much power in designers’ hands and Windows 10 kicks in. Even OS X was compromised because of Ive’s superpowers since Jobs’s departure.
Unexpected. Unity drove me from Ubuntu into the arms of Kubuntu, but it almost drove me from Linux altogether. I had several old laptops running Kubuntu – “kitchen” devices. When Unity arrived I hated it so much that it made me search hard for another Linux distro that gave a desktop experience, one that also supported the mix of hardware we then owned. In the end I gave up, scrapped the old lappies and reverted to new Windows devices, retaining Kubuntu as a virtual OS on one host system. Ubuntu’s user base must have shrunk during this time, and everyone of note realised it was a big mistake. Why do people have to convince themselves that change is a good thing? Only improvement is a good thing. Change that dumps the experience of the past is almost always a mistake. Build on what you have and you are unlikely to go very wrong.
The Unity/space shuttle comparison? It was shit too.
Unity was pure trash
The concept was good, but it brought with it several problems for applications running on top of it.
Support from commercial products (IBM, Oracle, etc.) ended with 8.04, that being the very last LTS release that did not come with Unity.
For the small team that they have (compared to Red Hat, SUSE, Debian), they should have kept Unity as a side project and focused on improving their ecosystem.
Although I try not to use Unity, I wonder how fast it will be forked – because it should be. It is open source; we fork everything.
GNOME 2 became MATE (GNOME 3 was cloned as Cinnamon), OpenOffice/LibreOffice, ownCloud/Nextcloud, and we can go on forever.
So, who has some spare time to take Unity to a new level? I won’t use it, of course.
That didn’t take long:
https://yunit.io/
I feel that Ubuntu has lost a lot of its momentum. They tried very hard to compete with more established desktop operating systems by releasing a modified version of Debian. I always felt that they added a lot of differentiation between themselves and other Linux distros, but ultimately they only ended up competing with other Linux desktops. When I first saw Unity, it immediately reminded me of macOS with the dock pinned to the left-hand side. This was certainly appreciated by Mac users, but I hardly think it won anyone over.
It appears, however, that Ubuntu made a good splash as a server OS. I think that’s what they should focus on, getting rid of the things that nobody wants or uses.
I’ll stick with Slackware, using FVWM as WM.