“Gnome 3 has received a lot of disapproval of late, from the Gnome foundation being charged with not taking care of its users, or losing mindshare, to Gnome 3 itself being an unusable mess. I’ve been using Gnome 3 myself for a few months to sort the truth from the fiction, and to try and understand just how the Gnome foundation expects their newest shell to be used. I will end with some thoughts on how Gnome 3 can be improved. The review will require a fairly lengthy preface, however.”
As a GNOME 3 user, I agree completely. GNOME 3 is not supposed to be the evolution of GNOME 2: making people realize that is its biggest challenge. In that regard, GNOME 3 is fine, albeit still young and with a long way to go, just like young GNOME 2. I still remember cursing about GNOME 2 and longing for my beloved departed GNOME 1.4.
But there is a lot of valid criticism. GNOME 2 is seen by the GNOME 3 designers as a failure, which is pure nonsense. And the GNOME 3 designers seem to be completely oblivious to users’ criticism. The GNOME 3 platform is wonderful, but its potential might be completely ruined by the designers’ seemingly “blind and deaf” attitude.
I want GNOME 3 to grow. But I don’t want it to grow alone, as a designer’s experiment, out of everyone’s desktops, hearts and minds.
Maybe they shouldn’t have called it Gnome 3 then? I’m just sayin’.
You mean like how Mac OS 10 is actually Mac OS 9 with prettier window borders? I thought it was a huge difference. I don’t recall seeing the Dock and the Aqua interface in OS 9. People did not seem to mind the differences either.
Yeah, they should have called Gnome 3 the Garden Gnome XL Extremist Edition. That way people would get that it is different and yet still a product of the GNOME Foundation?
Are you kidding?
http://en.wikipedia.org/wiki/File:MacOS922.png
There is a dock (it was introduced in Mac OS _7_ as the “Launcher”), there is a top window menu, even the Finder icon is the same.
And now compare it to:
http://en.wikipedia.org/wiki/File:OS_X_Mountain_Lion_Screenshot.jpg
So yes, for the user, “Mac OS 10 is actually Mac OS 9 with prettier Window borders” (btw, Aqua is primarily a GUI theme). Yes, there is a huge difference between OS X and OS 9: the kernel, base systems, APIs and a more modern look. But an OS 9 user felt at home right away in OS X; all the GUI elements were rather similar.
Now, as an exercise, try to count how many things are similar between Gnome 2 and 3.
3?
1) There’s a bar.
2) GTK
3) Nautilus
?
Nice try, but no, the Launcher is not the same as the Dock. This is how the Launcher functions and looks in Mac OS 9:
http://www.cbtcafe.com/mactutorials/launcher/launcher.html
The Control Strip for control panel items, which is what’s in your pic, behaves nothing like the OS X Dock:
http://www.macoptions.com/tips/os/csm.html
The Aqua interface has the application icon, maximise, minimise and close buttons located differently on the titlebar, and removes the window borders. So it is not just a reskin; the placement and behaviour have changed. For example, OS 9 did not support right-click menus on the title bar.
MacOS and MacOS X aren’t even the same OS.
Also MacOS was old crap with no future.
I don’t think the problem is people thinking Gnome 3 is an evolution of Gnome 2. It clearly isn’t. The problem many people have with Gnome 3 is that it simply isn’t what they want.
It would be good if the people who don’t like Gnome 3 (Shell) stopped paying attention to it. It’s never going to be anything like Gnome 2. That part is dead and gone. It would also be good if the Gnome 3 (Shell) users stopped telling those who don’t like it to just give it a chance. They did, and it’s like Brussels sprouts: you either like it or you don’t.
But this will become one of those eternal fights, where the pro-Gnome 3 people accuse the naysayers of being afraid of change and living in the past, and where the anti-Gnome 3 people accuse the opposition of adopting change for the sake of change and chasing the mobile paradigm…
People want to paint the GNOME 1 to GNOME 2 transition as the same thing as the GNOME 2 to GNOME 3 transition. It’s a real mistake to stand by this argument.
As far as I remember, GNOME 1.4 was basically a bottom panel with traditional menus, a tasklist, a workspace switcher & a clock. Going to GNOME 2.x was just a matter of adapting to the new menu layout (still the same classic menu) and the position of the panels, which then numbered two, with the applets in them. You had the same workflow.
With GNOME 3 it’s totally different. You need to push into the Activities corner every frigging 10 seconds, and switching applications is annoying. The workflow is totally different despite the “better technology behind it”.
So I don’t buy GNOME’s argument that those transitions are the same. They have never been the same. I was alive and well, and I remember it was a bit annoying, but it was nothing like what we have now – head up the arse.
There is THIS DESIGNER who told people that every change is naturally uncomfortable, because of course it was the same reaction when GNOME 2 was released. From what I know, this guy was in diapers when GNOME 2 was released. Nothing has irritated me more than his comments.
I’ve recently been playing around with a MacBook Pro and OS X, and personally, I found OS X to be dull compared to GNOME 3. I’m one of those who enjoy GNOME Shell with all its defaults.
From playing around with the Gnome 3.6 preview in Fedora 18 alpha, it seems to me that the perceived shortcomings of Gnome 3 are slowly but surely being addressed.
3.6 in places is looking to turn out quite well.
I still think however that the idea to get rid of the taskbar by default was a bad one and the icons on my 20″ monitor are far too large (and the tooltip text far too small, but better than before).
Overall though, Gnome 3 is getting into a very usable position and could become a better product than Gnome 2.
I do beg to differ. Now that we’ve all got huge screens, the latest fashion is to hide important stuff by default.
I mean, even my laptop has a 1024p screen, and my second screen is a 24″ 1600×1200; why should there be only one mighty fullscreen app on it? There is not a single website that can fill this 16/9 or 16/10 space with meaningful AND readable stuff. I won’t even talk about applications in such a fullscreen environment. Give me back my tiled or tileable window manager.
I do like the workspaces concept, but why this hate for hotspots? Do they mean to make every regular user (or the average grandmother) flee while shouting about what a mess this is? Make the interface explicit, stable, consistent over time. No: having an exposé mode, while practical, is in no way consistent; it’s a hack, and a good ole icons-only taskbar does the trick. The only advantage of an exposé or alt-tab interface is that I can have a quick look at the content.
Repeat after me: people do not like it when stuff pops out of nowhere. It’s confusing! And moving the cursor to a magical land should not be the way of popping up an event on a desktop. The start menu button is easy to understand ’cause when you push a button, you expect something to happen. Please don’t change that single simple and efficient expectation for the sake of desktop visual purity.
About the lack of separate application contexts, I do totally agree.
“my second screen is a 24″ 1600×1200; why should there be only one mighty fullscreen app on it?”
I completely agree. I was fully expecting to hate fullscreen apps, but I didn’t. An “app” used to mean something relatively simple with small windows (think “terminal”), but the new idea of an “app” is something integrated (think of an IDE, like Eclipse).
As for “the fashion”, I think it is perhaps because of these gigantic screens that people are looking to simplify the desktop. The “problem” is that you have these constant distractions pulling your eye and making you subconsciously think about them. Originally this was a UI matter of other applications “pulling focus” from the one you’re working on, but it’s going deeper, into not even showing you things that you shouldn’t care about right now, so you don’t get into addictive behaviour patterns like constantly checking your email.
I had some trouble with your second-to-last paragraph, so forgive me if I’ve misunderstood you.
I care about productivity above all. In the end, you could “it’s confusing” your way to skeuomorphism, and you’d probably end up with something far more like iOS. The thing to remember is, there is no button.
And you know Kung Fu.
IANAD, but if you have a problem like that you could be suffering from OCD or, worse, Tourette’s. While I doubt fullscreen apps will help with your condition, please seek professional help.
Just to be clear, I was not attempting to attack the parent comment. I have mild OCD and have suffered from it without knowing or understanding what it was. I have seen worse cases in other people at work and would always advise anyone having such behavioural problems to seek help.
It was about implicit behavior which is definitely not a desired pattern.
The hotspots are by nature invisible: that is fundamentally bad design. An application, even a desktop, should expose its behavior, make it explicit and consistent. A desktop interface is widget driven: you input text in text areas, click on buttons, drag icons around; it’s visible, it’s clickable, it’s interactive. I expect something to happen when I interact with a visible widget. Even if my hands are not incredibly agile I can move the cursor around and then stop and click. First, choose the target interactive widget, then willfully click or type to achieve interaction. I choose, then validate and trigger something. An invisible hotspot is absolutely not consistent with these concepts: I move the cursor to a zone, then an unvalidated intrusive event happens. It’s not a flyover tooltip; it’s my entire screen content that gets replaced with another, unrelated context.
They could have at least used a workaround, like the “show desktop zone” of Windows 7 (bottom right) which is explicit.
Consistency is why Macs have only one button. The right button is the “mystery contextual menu”; you can hardly guess what the menu contains in an application. Of course, two or three or even more buttons (and keyboard shortcuts) are better for advanced users, but the point is that it leads to confusing, inconsistent and unexpected behaviors.
“that is fundamentally bad design”
Design is a matter of opinion. Unfortunately when you have designers and programmers talking to one another, the programmers internalise what the designers say as hard law. In reality it’s more of a guide.
IMHO having these two modes — one for “using” the desktop, and the other for “controlling” it, seem fairly natural, but then I’m a vi guy. I’ve never understood Macs, they’ve always been strange and mysterious and I couldn’t ever get my work done on one, so your proclamations of “the right way” don’t really sit well with me.
Except you’re so wrong about Mac mouses… they have exactly the kind of “invisible”, “fundamentally bad design”, behavior-hiding, non-explicit and inconsistent behavior you criticize just a few lines earlier…
The Apple Mighty Mouse does have more than one button, four of them actually. Or at least four-button behavior – thing is, they are made invisible (and so on). A mouse, a device made for clicking at things, hiding its buttons from view…
Worse, it forces you to lift the left finger before right-clicking.
Magic Mouse and touchpad gestures are similarly bad – they are inherently invisible and non-discoverable.
And with the earlier Apple mouses which did have only one button, the OS was adapted to utilize it in combination with some keyboard keys, to recreate the behavior desired from the original multi-button Xerox mouse… not very discoverable (plus, it was likely largely a cost-cutting measure – back then, one vs two+ mouse buttons could make a noticeable difference in this regard).
Funny. There is indeed no button, but your reference to The Matrix is deliciously ironic. Why is there a Matrix? So real human beings can be interfaced to the digital world without mentally overburdening them with incomprehensible digital representations. The Matrix is the ultimate in skeuomorphism.
It’s the same with our UI conventions. People need crutches to be able to communicate with digital entities. The more relatable something is, the easier it is to work with. So don’t discount “the button that wasn’t”. It helps tremendously with finding our way in the endless patterns of on and off.
Guess it depends if you want to be Neo or Cypher.
That is very black and white. Either you are the superhuman “savior” or you are basically “Judas”.
How about being one of the billions of people who go about their day completely oblivious to the real structure of their reality? If it works and one is happy, what is the harm? (The other alternative in The Matrix would have been total annihilation for both species. Machines can’t “live” without energy. Humans can’t live on the scorched Earth they left behind.)
I’m getting too deep into this metaphor to make it make sense any more. I just have to get out before I get myself into an Inception metaphor.
My point was really simple actually, and it’s one used by detractors of skeuomorphism everywhere: that if a computer models something in the real world, it is then limited by that model, which doesn’t need to be the case. For example, having a “book” with “pages”, but the pages can scroll. It all gets a little silly, so pushing for a model in the real world at all is fruitless.
Computers are limited because of humans. If there were no humans and computers did exist, I doubt there would be such things as GUIs and metaphors.
The limitations (including skeuomorphism) are not there for the computers, they are there to let their digitally inferior makers interface with them. So basically anything that makes interacting with these machines easier is a go, technically silly or not.
Which was stupid, depicting humans as if we were a perpetuum mobile…
I was kinda hoping for some major twist in the 2nd or 3rd part, one that would make sense of it – say, that the humans did in fact win the war (which is what you’d expect – note how vulnerable the sentinels were to EMP, and nukes generate a lot of EMP), but we were too dependent on and/or too fond of machine tech to discard it… so we enslaved the machines in the matrix – and “convincing” them that they are humans would be an even better method than the matrix as depicted. Plus it would certainly explain some real-world abilities of Neo (after all, there’s virtually no way for humans to generate EMP, or to remain connected to the matrix while unplugged), how knowledge, skills (fighting ones, usually), or Smith can just be uploaded (and “he’s a machine” while Cypher does this to Neo could be seen as a hint…), and how the Architect, the Oracle and the Merovingian seem the most, well, human.
Sure, they looked human also in the real world – which would be just a matter of properly modified software in sensors or in part of “brain” interpreting them.
But we just got more of the perpetuum mobile silliness…
Hmmm… I think technically the “matrix” would be more simulation than skeuomorphism. Though I guess you could consider the matrix skeuomorphic from the point of view of the agents, in that killing someone in the matrix is basically a skeuomorphic interface for killing someone in the real world.
Hm, or maybe it’s especially not skeuomorphic for the agents – for them, the matrix was the only reality. (But generally, consider the silliness of “you die in the matrix, you die in the real world” – our minds routinely survive such things in dreams; yes, there might be some mechanism enhancing the effects… but it would be stupid not to block it on the vessels of freed humans.)
I’m not following current trends in Linux land, sticking with two-year-old installs, but I’m a bit sad that the app-centric view is eventually dominating.
GUIs started with Lisp, Smalltalk and document-centric views (from “The Document Company”, sic…). In that perspective, applications should not exist as monolithic binaries but merely as dynamically loaded tools for tweaking documents.
Commercial software has always favoured app-centric designs and proprietary file formats. I have the impression that open source developers eventually gave up this fight.
TBH I agree. I wanted to explain the models and not endorse one or the other. I really enjoyed spatial nautilus and was a fan overall of the document oriented workspace. It gave you lots of tiny windows and you generally felt like a multi-tasking king surveying your domain.
But people are less effective when multi-tasking, and the big issue with the document oriented desktop is organisation. Similar to how people used to stick all their email into well organised folders, but now it’s all in one big chunk and you just use search (which is faster anyway). That sort of “all my stuff is in a database” fits better with the application model, so I conceded.
The real evolution of the “document-oriented” desktop is the “object-oriented” document desktop. Something very hard to achieve due to the “hyper-high” level of software integration needed. Even Apple (OpenDoc) and Microsoft (OLE) failed at this task. The “application-oriented” desktop is just a simplistic way of doing things, only useful for cell phones and 7″ tablets, and it is just an insult for a user trying to get some work done on a desktop computer. Even if it is the Apple way (sandboxing) and the Microsoft way (Metro). It’s a shame that Gnome 3 does the same thing to the user.
After Gnome 2, Xfce is “The Desktop”. Gnome no more.
I think a lot of people have gotten insulted before really trying the desktop, and I did as well to begin with, which is why I wrote the review. The review itself was actually meant to criticise the Gnome 3 desktop, a sort of “I used Gnome 3 and honestly tried to make the best of it for 3 months and here’s why it sucks”. The situation isn’t so clear, however.
I have some criticisms in the review, but I think the Gnome 3 haters really need to get their hands dirty and give in depth criticisms: either stuff that can be improved or solid reasons why this approach should be abandoned altogether.
I’m pretty sure it went something like this for most of these haters (gnome shell & unity alike):
“Oh holy balls! This is different from what I’m used to. I’m gonna try it for 20 minutes and then give up.”
…20 minutes later…
“Well, I still don’t like this so it must be the worst atrocity in computing since Microsoft Bob.”
“Granted, I’ve been saying that the Linux desktop should be different from Windows but what I really mean is that it should be the same but with themes.”
You have a gnome desktop a lot of people like. You throw that away and put a different desktop in front of them. When they complain about the change you claim they are wrong.
I don’t know about anyone else, but why should I bend my established workflow to whatever some guy developing Gnome 3 or Unity has decided my workflow should be? I have a workflow, I’m happy with it; if your “platform” doesn’t support that workflow, it’s far easier for me to find one that does instead of changing my workflow.
Here is where you are wrong. Nobody needs to convince the Gnome developers that they have to adopt another vision than they have for Gnome 3. People either adopt it or they don’t.
The people who don’t adopt it should stop leveling criticism at Gnome 3. This desktop wasn’t written for them. Gnome 2 is dead, long live the myriad of alternatives.
The other side of the coin might be that Gnome 3 shouldn’t expect to take the same leading position with the distros that it had with Gnome 2. After all, nobody is obligated to adopt Gnome 3. (The jury is still out on the question of whether Gnome 3, including Shell, is loved by the majority.)
Personally, I like most parts of Gnome 3 and I think those parts are substantial improvements. I even like the infrastructure that was written for and is underpinning Gnome Shell, but Gnome Shell itself… My personal opinion is that Gnome Shell should be incinerated at 5778 K, but I don’t have to use Shell to use the rest of Gnome 3, so I’ve come to terms with Shell’s existence and I can leave it to the people that do like this interface.
Fair cop, I guess I do agree with that.
No we don’t. It’s BECAUSE of people like yourself that Gnome 3 became the God-forsaken mess it is.
I use Gnome Shell at work and think it’s great, but it all depends how you use it.
In general I don’t have any overlapping windows, and a consistent first 4 workspaces:
1. IDE
2. 2x Terminals (for compiling, git etc.), File Manager & Text Editor, tiled to use 1/4 of the screen each
3. Web browser
4. Email client
Temporary windows on new workspaces as needed
Fits my workflow very well, with very few distractions; I never have to go clicking through a tiny taskbar looking for the window I want.
How do you maintain those 4 consistent desktops? Dynamic desktops are one of the things I found intolerable about Gnome 3. I would put certain things on certain desktops, and if I accidentally closed the last window on one of them, Gnome would delete the desktop and I’d have to rearrange everything again.
That, and their crazy dual-screen handling, where one screen changes per desktop but the other is static, just didn’t work with what I, as a developer, need daily out of a desktop.
Um, it isn’t often a problem for me, but something that helps is that you can drag a window into the space between two workspaces in the Activities overview and it will create a new workspace in the middle.
As for the second monitor, I actually find it quite useful that it doesn’t change, as I have a fullscreen terminal application connected to the hardware I’m working on shown on my second screen. This way I can always see output from the device on one screen and compare it to source code, a bug report, a text file, or an email on the other, all without ever moving a window. It is, however, pretty easy to change that particular behaviour.
You could still do that by making the terminal window sticky. If you wanted you could make your terminal window smaller, and put other windows on your second monitor which would change with workspaces. This could give you “half” a non-changing monitor (if you think of it that way).
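For what it’s worth, both tricks can be scripted. A minimal sketch, assuming the 3.x-era gsettings schema keys (the overrides schema moved in later releases) and that the wmctrl utility is installed:

    # Keep a fixed number of workspaces, so closing the last window
    # on one doesn't make the shell delete it
    gsettings set org.gnome.shell.overrides dynamic-workspaces false
    gsettings set org.gnome.desktop.wm.preferences num-workspaces 4

    # Make the currently focused window (e.g. your terminal) sticky,
    # i.e. visible on every workspace
    wmctrl -r :ACTIVE: -b add,sticky

The first two lines give the static, Gnome 2-style workspace behaviour the grandparent was asking about; the wmctrl line is the sticky trick from the comment above.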
it seems to me like gnome 3 and unity were made by people unaware of all the battles and failures that resulted in the modern windows and mac os desktop designs. it is like they forked their ideas from a time before windows 95 was a success, and so they’re like “well what about this?” they don’t realize they’re standing on a pile of corpses. they’re ignorant.
for comparison, the idiots at microsoft who tried to make the windows phone interface the standard windows desktop are just stupid
My considerations as a Gnome Shell notebook user:
1. Gnome Shell has been stable and fast since 3.0; I’m still on 3.4;
2. Dynamic desktops are a great idea and they work;
3. Hiding the tray and controlling the alerts in a centralized manner really works for productivity;
4. The second monitor works really well for me. I only develop on virtual machines (why people still write code on a host is something someone still has to explain to me). When presenting, I just put my VBox on the other monitor and can still use all the power of my screen without showing the audience my behind-the-scenes work.
Today I find Gnome to be perfect for notebooks and KDE 4.9 for desktops with a big screen (or even more than one).
Object oriented? Document centric? I guess I don’t understand. I mean, whatever advances the underpinnings of 3 may have over 2 are a completely separate issue from the UI, no? And the problems I see are really all related to the UI. And the designers are trying to rework the UI to be more tablet/finger friendly. And this is what baffles me.
Are the designers so delusional as to think that climbing aboard the tablet train early (and abandoning its loyal desktop users in the process) will gain Linux the market penetration that has eluded it on the desktop? If that were likely, a minor tweak of Mythbuntu would have already done that.
Linux has always been about work, which is why it’s been so successful on servers. Tablets have been about providing an appliance for browsing and reading, which is why the Kindle has been good enough for so many. Why on earth would we want to win that battle?
But, even more baffling to me is the conflict between paradigms (menus vs. icons) in the same shell, and the reinvention of the wheel. I mean, not that I care for them, but, since we already have them, what’s wrong with desktop icons for tablet users? Why does the entire interface need to be overhauled to accommodate them? Most of what tablet users need was already there. A simple theme could have taken care of this without depriving the rest of us. Even if they’re too small for fat fingers, things like scroll-bars still serve a purpose in a touch interface by informing the user of where in the document they are, and how much of that document is on-screen. One need not throw out the baby with the bathwater. All the desktop elements can happily remain for familiarity’s sake. The context menu need only be expanded on to make access to those features easier for touch screen users.
Maybe the Gnome crew are just better at designing code than they are at designing UIs. Or maybe there’s some grander plan that’s yet to be revealed, and we’re all missing the point. Or am I just not getting it?
Document oriented or Application oriented desktops are competing ideas. Neither is more advanced. However, there have been advances within each paradigm that make one more palatable (or fashionable) than the other.
I think you’re assuming that Gnome 3 is for the tablet (AFAIK, Gnome 3 isn’t designed for tablets) and then claiming that you could’ve used Gnome 2’s ideas for a tablet anyway. I don’t think the Gnome guys are trying to win in the tablet space. At least, the idea of Gnome 3 having to evolve to tablets only recently came up, and there probably won’t be any real code along that line until past Gnome 3.6.
I tried to explain in the article how one could go about using Gnome 3. Perhaps I didn’t do a very good job, but basically, if you can’t “do work” in Gnome 3, maybe it would be worth writing down what doesn’t work for you in a very specific way, and using that as a basis for criticism. Gnome 3 is minimalist, to be sure, but the idea is to make the user more productive. If this isn’t true, the Gnome devs need to know exactly why.
As for why Gnome 3 has changed compared to Gnome 2, I guess I can’t really answer that. Why does any free and open source software change? To scratch an itch.
Lulz, when I hear anything about Gnome 3, I remember Vanessa Carlton singing:
Cause everything’s so wrong
And I don’t belong
Living in your
Precious memories.
So Gnome 3 still has a thousand miles to go, is that right?
This feature, which has been described as incomplete but with the potential to become a killer feature, has been in KDE 4 for a couple of years now.
Activities allow one to group apps and documents, save and pause the activities, stop the activity and relaunch it later, etc…
I’m unsure if GNOME knew about this very prominent KDE feature, but it seems as though they are not developing a feature which will differentiate them and allow them to be seen as unique and excellent for having it, but rather making the feature common to the two largest desktops, so that anything without it seems lacking.
KDE also has the history by default, through KRunner and Nepomuk/Strigi, as well as more advanced monitor and window management.
It seems as though everything they are moving towards is already possible with KDE, and KDE has done it without removing features, but rather by adding them.
That sounds pretty neat. Before I wrote this review I was actually in the process of moving to KDE from Unity. Then I thought I should use Gnome 3 in earnest to try and explain to myself exactly why I didn’t like it.
KDE has always sounded better technically, but it always felt like a steep learning curve. Coupled with the fact that their desktop themes always look just slightly askew, I never really took the plunge (except in the really early days, on a Slackware distro). Kubuntu has also been a bit of a bastard child of Ubuntu, not receiving enough love. I might want to try using KDE for a few months and see how it is.
“Not bad” is damning with faint praise at best. I could probably put a square steering wheel on my car and it would be “not bad”, i.e. I could still steer the car, but why bother?
So it might not be “that bad”, but exactly like Windows 8, it is a solution looking for a problem. There was and is nothing wrong with the document-oriented desktop paradigm. Nothing has fundamentally changed about how most computer users manipulate data or use applications. The desktop metaphor and the window management involved are solved problems.
Modelling UIs after the flavour of the day – today’s flavour being the concessions made to phone and tablet operating systems to make them work with the very limited input and output devices of that hardware – is pandering to a fleeting fashion that we will eventually look back at and roll our eyes over in embarrassment.
I understand the need for trying to innovate by throwing out old ideas, and good things will probably eventually come from these efforts, once things calm down. However, making this highly experimental and volatile system the default and effectively forcing it on many users does little but create unnecessary resentment and confusion.
Not to mention that the “application-oriented” model, as outlined in the article, is so absolutely diametrically opposed to the traditional UNIX model that any GUI that tries to realize it fully while running on an underlying UNIX-like OS will become a lamentable, undiscoverable, opaque Frankenstein monster of a system. I wish folks would try to work with the OS instead of against it.
I tried Gnome 3 for up to 2 months… more total freezes & reboots than I care to think about. These days I’m on KDE… with bi-weekly freezes… sigh, better? Not really… anyone who can recommend a Linux desktop environment that is actually reliable will get my genuine thanks :/ (and a lolly)
You could always use CentOS. It’s definitely stable even if it lacks bling.