It’s been one of my major pet peeves on both Android and iOS: the total and utter lack of consistency. Applications – whether first party or third party – all seem to live on islands, doing their own thing, making their own design choices regarding basic UI interactions, developing their own non-standard buttons and controls. Consistency died five years ago, and nobody seems to care but me.
As a proponent of what is now called the old school of UI design, I believe consistency is one of the most important aspects of proper user interface design. A consistent interface requires less learning, because users can carry over their experience from one application to the next, making the whole experience of using a UI more fluid. Applications should be the means to an end – not the end itself.
The release of the iPhone, and more specifically of the App Store, changed user interface design practically overnight. A lot of people focus on the shift to finger input as the biggest change the iPhone caused in UI design, but in reality, the move from thin stylus to fat stylus (the finger) isn’t that big a shift at all. No, for me, the biggest change in UI design caused by the iPhone, and later Android, is that ‘consistency’ lost its status as one of the main pillars of proper UI design.
Three reasons lie at the root of this underreported, massive shift. The first is conceptual, the second practical, and the third consequential.
Conceptual
iOS popularised a single-tasking interface, where only one application is visible at any given time – quite the far cry from desktops and laptops where we have lots of different applications running at once (or at least have the ability to do so). iOS obviously didn’t invent the concept; in fact, it’s as old as computing itself. Even in the mobile world, iOS just built on that which came before, since both PalmOS and Windows Mobile used the exact same single-tasking focus.
The consequence is that the application pretty much becomes the device. It’s the sole focus, the sole star of the show, with the operating system reduced to a status bar at the top of the screen. It’s quite similar to how game consoles have been operating for ages; in fact, older consoles generally couldn’t boot into a stand-alone, non-game operating system at all.*
As with anything under a spotlight, this has consequences. If all the user sees is a single application, deviations from UI standards and conventions don’t jump out as much, and developers can more easily experiment with changing the appearance or – worse yet – the behaviour of UI elements, or even create entirely new ones that have no equivalents in the rest of the operating system or application base. Since the user will never see applications side-by-side, these deviations don’t stand out visually (but they do stand out behaviourally).
Given this level of freedom to design the appearance of an application, the application itself becomes the focal point, instead of the task it is supposed to accomplish. We have entire websites dedicated to how an application looks, instead of to how it works. It is, perhaps, an analogy of how computer technology is perceived these days – style over substance, ‘it looks good so it must work well’. If some website reports on a new application for iOS or Android, the first few comments will almost inevitably be about how the application looks – not about whether the application works.
We’ve reached a point where it’s entirely acceptable to reduce functionality just to make an application look good. We give more accolades to a developer who designs a pretty but functionally crippled application than to a developer who creates a highly functional and useful, but less pretty application. In my view, removing functionality because you don’t know how to properly integrate it into your UI is a massive cop-out, an admission of failure, the equivalent of throwing your hands in the air, shouting ‘I give up, I know my application needs this functionality but because I don’t know how to integrate it, I’ll just claim I didn’t include it because users don’t need it’.
Because the application itself has become the focal point, the designers have taken over, and they will inevitably feel constrained by the limits imposed upon them by human interface guidelines and commonly accepted user interface standards. The end result is that every application looks and works differently, making it very hard to carry over the experience from one application to the next.
Consistency suffers.
Practical
The smartphone market (and to a lesser degree, the tablet market) is divided into two segments. iOS and Android are both highly desirable targets for mobile application developers, and as such, it’s becoming very hard to ignore one of them and focus solely on the other. Instagram, Flipboard, Instapaper – even platform staples had to face the music and move to Android, sometimes kicking and screaming.
This has had a crucial effect on application development. I can’t count how many times I downloaded an Android application, only to realise it was a straight iOS port without any effort put into properly integrating it with Android UI conventions and standards. This is a logical consequence of the mobile application business not being as profitable as some make it out to be; most developers simply don’t have the time and money to do it properly.
Some applications take an entirely different approach to ‘solve’ this problem, by using lowest common denominator technologies. The official Google Gmail application for iOS is basically just a web page, and the Facebook application relies on HTML as well to display timelines. Both use entirely non-standard UI elements and conventions, of course (in addition, performance suffers for it, but that’s another story altogether).
Whatever the case – straight UI port or lowest common denominator technologies – consistency suffers.
Consequential
The third and final cause of the death of consistency is the sheer size of the App Store and the Google Play Store. Each of these is now populated by literally hundreds of thousands of applications, with every type of application having dozens, if not hundreds, of similar alternatives. In order to not drown in this highly competitive tidal wave of applications, you need to stand out from the crowd.
A highly distinctive interface is the best way to do this. If you were to follow all the standard UI conventions of a platform, you wouldn’t stand out at all, and would risk not being chosen among your flashier – but potentially less functional – competitors. It’s the equivalent of television commercials and web advertisements trying to stand out through motion, sound, pop-ups, screen-covers, flashing, and so on. “Hello, you there! Notice me! Notice me!”
If custom UI elements are required to stand out, they are added. If UI conventions need to be broken in order to differentiate from the crowd, so be it. If we lose functionality in the process – who cares, reviews will focus on how good we look anyway. Again – consistency suffers.
Mourning
In the smartphone and tablet age, the application has become the star. The days of yore, where the goal of an application was to disappear and blend in with the rest of the system as much as possible so that users could focus on getting the task done instead of on the application itself, are gone. Instead, applications have become goals in and of themselves, rather than just being the means to an end.
My ideal application is one that I don’t care about because it’s so non-distinctive, invisible and integrated into the system I barely notice it’s even there in the first place. During its heyday, GNOME 2.x represented this ideal better than anything else (in my view). GNOME 2.x sported an almost perfect behavioural and visual integration across the entire desktop environment, making it one of my personal favourite UIs of all time. KDE 3.x had incredibly tight behavioural integration, but, in my opinion, failed a bit on the visual side. Windows has been an utter mess for a long time, and Mac OS X started out okay, but once brushed metal and the wood panelling were introduced, it pretty much went downhill from there – and is still going down.
And my desire for applications to be invisible is, of course, the exact problem. A highly consistent interface is one where applications do not stand out, where they are designed specifically to blend in instead of drawing attention. This goes against the very fibres of many designers, who, understandably, want to make a statement, a demonstration of their abilities. On top of that, they need to stand out among the loads and loads of applications in the application stores.
Sadly – I, as a user, suffer from it. I don’t like using iOS. I don’t like using Android. Almost every application does things just a little bit differently, has elements in just a slightly different place, and looks just a bit different from everything else. I have to think too much about the application itself, when I should be dedicating all my attention to the task at hand.
I know this is a lost battle. I know I’m not even in the majority anymore – consistency lost its status as one of the main pillars of proper UI design almost instantly after the release of the iPhone. People who stood next to me on the barricades, demanding proper UI design, people who blasted Apple for brushed metal, people who blasted Windows for its lack of consistency, those same people smiled nervously while they stopped advocating consistency virtually overnight.
Consistency became a casualty almost nobody ever talked about. A dead body we silently buried in the forest, with an unwritten and unmentioned pact never to talk about the incident ever again. Consistency is now a dirty word, something that is seen as a burden, a yoke, a nuisance, a restriction. It has to be avoided at all costs to ensure that your application stands out; it has to be avoided at all costs to let the designer have free rein.
I mourn the death of consistency. I may be alone in doing so, but every death deserves a tear.
* Although older consoles/computers sometimes blurred the line between computer and console, I think the PlayStation was the first true console that could launch into a non-game GUI to be able to organise files on memory cards and such. Please correct me if I’m wrong – my knowledge of the history of gaming consoles isn’t particularly extensive.
Technically it was beaten to market in that respect by both the Sega Saturn (one month earlier in ’94) and the 3DO (’93).
Also, completely agreed with respect to mourning the death of consistency in UI.
The Amiga CD32 (September 1993) was basically an Amiga 1200 dressed up as a game console, and it could easily be turned into one.
And the Commodore CDTV came even earlier, March 1991. It was an Amiga 500 turned into an interactive CD-playing device.
Thanks, I’d completely forgotten about the CD32 (and come to think of it, the CD-I). Either way though, the point stands that the PSX is far from being the first to offer that functionality, though it may well have been the first to achieve any real market penetration.
The CD-I is from Philips; the Commodore CDTV was a comparable device, both in function and in failure.
But Amiga CDTV and CD32 don’t really boot into anything if there’s no media, IIRC (only one buddy with CDTV, one with CD32, everybody else had 600, and it’s been some time). They just display an intro of sorts, encouraging you to put the CD in (kinda like all the other Amigas & their floppy animation)
Yeah, they can play audio CDs, but I imagine that NEC TurboGrafx-CD (1988) also does that – and anyway, PS1 added nice audio visualisations (then there’s still that memory card manager)
I think you can hold both mouse buttons down to enter some kind of boot menu on the CD32, but I don’t think the CDTV can do this too.
But, see, CD32 didn’t include a mouse in the retail package (a boot menu to what?)
BTW, all this talk reminded me about one game (most likely also with floppy release at least for 1200, like virtually all CD32 games), a platformer in which a player-controlled dog is tasked with saving his sleepwalking boy master – does it ring a bell WRT its title? (you are into retro after all…)
Well, just a boot menu. It doesn’t make much sense on a stock CD32, but a CD32 is really an Amiga 1200. During the loading of some games or demos you could see a flash of the Workbench screen. Hell, some programs even allowed you to exit, and it would dump you on a WB screen. These were “normal” Amiga programs someone decided to put on a CD.
The CD32 also had one kB(!) of memory for save games, but I’m not sure you could manage this. I wanted to check this on my CD32, but for some reason the video cable is missing.
The game you refer to is most likely… Sleepwalker.
You can hold down the first and second (red and blue?) buttons on the controller to simulate the two mouse buttons.
You could also enter an interface which let you manage the built-in battery-backed memory (all 1 kB of it) that was used for storing save game information…
You are not alone.
He surely is not.
I have devoted my whole project’s main focus to consistency and a good interface. I am sure that users will come as I start to get enough features to be able to compete with the “big guys”.
I’m absolutely confident that a user will choose a pretty and consistent UI over vomit if the app provides roughly the same features/stability/speed.
If I just had more time I wouldn’t stop at a music player, but would create great and good-looking apps in ALL areas where a consistent UI is missing.
So… maybe lonely but not alone.
Indeed, you are not alone.
Even command line apps can’t find consistency. Hell, here’s one of my favorite pet peeves. Does “--help” go to STDERR or STDOUT?
I totally agree. In fact, I often prefer missing some functionality over installing another application that would look, feel and behave alien. The unholy mess of custom UI elements has several times even forced me to uninstall an app right after the first run.
In fact, I even considered building custom GNOME-based firmware for my phone and tablet, though I’m quite confident that I don’t have enough time and knowledge to make it usable.
You might have better luck building on top of (or just using) other ~Gnome-based phone and tablet efforts…
http://en.wikipedia.org/wiki/GPE
http://en.wikipedia.org/wiki/SHR_(operating_system)
plus, the original Maemo lineage is ~Gnome-related.
Form should follow function, not the other way around.
I care. What’s worse, this lazy, half-assed design is seeping into desktop operating systems. It’s a big step backwards.
The faddish trend to push flat, grey touch-style interface elements into desktop apps is also bugging me quite a lot.
– chrish
Nokia designers managed to nail this problem well in Harmattan. They produced integrated experiences, where the user focuses on the task, rather than on the application. Unfortunately the whole effort was wasted as Nokia management sabotaged their own project.
Metro UI also has such focus – actually, it can be quite safely described as pushing it much more than N9 UI.
(Thom doesn’t complain about, doesn’t mention Windows Phone UI at all in the above article)
But I don’t suppose you can give it that, considering you think Harmattan was so dandy that only sabotage explains its demise…
UI consistency died much earlier. Web apps…
Much earlier than that, actually….. Widgets
Then again, anything outside a terminal session is a mess in my book.
Not even the terminal is consistent:
Parameters are non-standardised (most annoying of all, the -h / -? / --help switches). And the way in which commands are chained can vary too (e.g. GNU tools tend to allow single-char switches which can be linked as one parameter, whereas Sun (RIP) preferred full descriptive terms that had to be individually specified).
You also have some apps abusing STDERR (most notably outputting command usage to STDERR instead of STDOUT).
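For what it’s worth, here’s a minimal, hypothetical Python sketch of the convention many tools settle on (Python’s argparse happens to follow it): help the user explicitly asked for goes to STDOUT with exit status 0, while a usage error goes to STDERR with exit status 2. The script name and arguments are invented purely for illustration.

```python
import argparse

# Help requested with -h/--help is printed to STDOUT (exit 0), so it can
# be piped to a pager; a usage error is printed to STDERR (exit 2), so it
# won't pollute a pipeline. argparse implements both behaviours by default.
parser = argparse.ArgumentParser(description="demo of --help conventions")
parser.add_argument("path", help="file to process")
args = parser.parse_args()
print(f"processing {args.path}")
```

Run it as python demo.py --help | less and the help text flows cleanly through the pipe; run it with no arguments at all and the usage complaint lands on STDERR instead.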
There’s the mismatch of true CLIs which are optimised for shell scripting and text UIs (such as ncurses) which can’t be piped at all.
And sadly the problems aren’t limited to execution either, as app config files can vary massively too: from basic Windows INI-style configs to shell-parsable envs, XML docs, JSON structures and even completely bespoke layouts.
And don’t get me started on incompatibilities that can arise from using the wrong character set or even $TERM.
Going back to the wider problem though – the one Thom raised – I do miss consistency in design elements. But sadly I think it’s an inescapable situation. The real problem is as long as you allow 3rd parties to develop software, those developers will have their own ideas about usability. So you either have to accept that inconsistencies will arise, or write the entire stack inhouse. Neither solution is optimal, but such is life.
I agree with all your points about handling and have experienced many of those annoyances myself. Even with most basic usage, having to alternate between “up,left,right,down” Vs “h,j,k,l”, or emacs Vs vi bindings can be a mindf*ck (though thankfully I could change that with most applications I’ve run), though your point about different config ‘styles’ will come into play then.
I was talking more about appearance though, in that they’re all limited to 16/88/256 colors, character map and typeface. With the exception of some apps preferring *color0 over *backgroundColour, I haven’t really seen much variation within the same userland/shell, though obviously my experiences (I’ve never run Sun or NT before) are limited compared to yours.
Either way, I apologize for straying somewhat off topic.
Websites in general (yeah, that kinda includes OSNews).
A large part of those Android or iOS applications are basically “web 3.0” – they are often little more than a custom UI to a single website or feed.
And the majority of what Thom wrote in the Conceptual part applies to web pages. Plus, WRT one other bit in the article…
Maybe that’s not necessarily perceived as a good thing for app makers (or website owners), users not being very used to particular app UI, and able to effortlessly move to others. As far as those who make them are concerned, apps are the end itself.
Overall, really, I’d say UI consistency never was very strictly followed; a bit stillborn.
I think there’s often more consistency on the web than there is between different applications in an OS.
Every website is running within the same browser, using the same window controls, keyboard shortcuts, toolbars, menus, mouse gestures and so on. If I right click on a link, or highlight and drag a block of text, or open a file dialog, or perform loads of other interactions, they’ll all work pretty consistently across different websites.
Even when it comes to the websites themselves, there are certain design elements that are relatively consistent between similar sites.
Most of the forums and blogs I read have a similar layout and comment system. If I go to one I haven’t visited before it’s highly unlikely that there’s anything new I’ll have to learn.
I’ve been shopping around for some PC components recently. Before even visiting a new online store I can guess where navigation elements will be placed, and how product listings will be laid out. The vast majority follow roughly the same template.
In my experience, websites that go their own way and break conventions with a radically different design have to be very well thought out, otherwise the inconsistency will really annoy users.
WinAmp anyone? 1997: one of the earliest extremely widespread applications to use a “skin”. That was 15 years ago…
Been watching sparkly vampires again, eh?
——————————————-
PS. No, I actually just recently lambasted Microsoft for completely ignoring UI consistency and design on my Google+, noting that they themselves are one of the worst offenders on the Windows platform. On Linux, if you stick to only GNOME applications or only KDE applications you actually get a whole lot more consistent look and feel, but that obviously does not help people using other OSes.
The thing is that UI consistency cannot really be fixed by random people; the push for consistency must come from the OS/desktop environment/hardware manufacturer/developer. The people in charge just think “let’s make our app stand out like a stick in the eye so that people will remember it!” unless there is a real disincentive for doing that, so something like e.g. levying an extra tax on Windows Store/App Store/etc. applications that break UI consistency could possibly work. Another approach would be to push such applications off the front pages and to favor applications that stick to UI consistency guidelines.
I never really paid attention to UI design until I started reading OSNews. Thom, your passion for UI design has inspired me to think more about user experience as a developer, and I thank you for that. I think there are a lot of real-world issues that indirectly affect UI design in addition to what you are describing. I am sure a lot of these issues are inferred from your article, but I would like to put it out there to see what any other developers might say.
There are a lot of designers out there… but very few UI designers. I think when companies hire a designer they are looking for someone to make a pretty app, not to provide a good UI experience. So instead of looking for someone who is qualified in UI design, they look for someone who has an art degree, a pretty portfolio, and MAYBE knows something about UI design in general. Nearly all the designers/UI designers I have worked with were never trained specifically in UI design, but in Advertising Design. There are also a lot of times when the Designer is just the person who came up with the idea for the app, or the person who has the most say in the design.
I have not done much research, so this is all anecdotal, but it also seems to me UI Design education programs (or students of them) are very rare relative to the number of apps being marketed. It’s very rare to find designers whose sole focus is UI Design, ‘Human Interaction Design’, UX Design, etc. Perhaps it’s not so much that they’re rare as that the ratio of real UI designers to app developers is very low.
Another big fault is that in a lot of development shops, the ‘Designer’ is/was a very experienced developer who is now doing design and/or architecting. Again, in my experience, this has been very black and white; either the ‘Designer’ is very good or not so good. Regardless, it just seems to me that once you have a deep understanding of what goes on behind the curtain, you have an even more challenging time designing the whole entire show. I just think your focus is more on engineering with parts of a program than interacting with a person. Some developers get over this hurdle well whereas others don’t.
Although I have only developed a handful of mobile apps, reading your article made me realize something that I feel every mobile developer probably struggles with. When we are coding our app, we are going back and forth between our code and the app on the device (virtual or physical). That’s it. We’re not spending time in any other area of the OS worrying about integration. When QA gets it, they spend all of their time inside the app only. When the designer is designing, he/she is looking at a screen of solely that app and nothing else. It’s like their canvas. They don’t think about how it might look in the gallery, so to speak.
I guess what I am trying to say is when developing an app, the app development process is never thought on the terms of “Ok we are developing an iOS app” and really absorbing that idea. It’s more like “Ok we’ve got this app and we have to get it out on the most popular platform that’s out there right now. Today that platform is: [insert OS/web here] and this app needs to be out the door yesterday.”
Anyhow, this is all my anecdotal experience/opinion. I appreciate all the designers I’ve worked with and please know this is just from the point of view of a lowly developer. I have never been in the position of designer and don’t know what are all the challenges they must face. I am curious to hear from any other developer. Overall though I really appreciate your passion for UI Design and hope more people pick it up.
Developers are generally the people who write the actual functionality of the software, whereas designers are the people whose job it is to figure out which functionality should be exposed where and how. Some developers make for good designers too, but more often than not coders just don’t have the required eye for visual design. That’s why there are actual UI guidelines written for e.g. both KDE and GNOME that go to quite extreme lengths to explain how an application should present itself and its functionality in order not to feel out of place on the desktop. Such UI guidelines are what I personally wish every platform would mandate developers to adhere to.
As a developer you might not always have a designer at hand, so there are a few things to keep in mind. Separate all – or at least most – of your functions into ones that obtain or generate content/data and ones that display the said content/data. Then try to imagine flowcharts of how one would accomplish this or that task: if a user is in the ‘default’ view, what are the steps needed, can these steps be simplified, are these steps often needed and should they therefore be prominently present in the UI, what if the user is not in the ‘default’ view, and so on. It may feel boring and tiresome at first if you haven’t done it before, but you’ll get used to it, and eventually you’ll notice you’re already doing these flowcharts in your head at the same time as you’re writing the code itself.
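As a rough sketch of that separation in Python – all the names and the feed URL here are hypothetical, purely for illustration, not from any real project:

```python
import json
from urllib.request import urlopen

def fetch_headlines(feed_url):
    """Obtain/generate data: no printing or layout decisions in here."""
    with urlopen(feed_url) as response:
        items = json.load(response)  # assume the feed returns a JSON list
    return [item["title"] for item in items]

def render_headlines(titles):
    """Display data: no network access or parsing in here."""
    return "\n".join("* " + title for title in titles)

if __name__ == "__main__":
    # The 'default' view is just one pairing of the two halves; a different
    # view can reuse fetch_headlines() unchanged.
    print(render_headlines(fetch_headlines("https://example.com/feed.json")))
```

Once fetching and rendering are split like this, the flowchart exercise becomes concrete: each task flow is just a different arrangement of display functions over the same data functions.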
I suggest reading The Inmates Are Running the Asylum by Alan Cooper. It gives a lot of in-depth insight into why UIs are so bad and how that happens.
Even though it was ugly as Hell, at least NeXTSTEP or OpenStep had a consistent UI for the OS and its apps. GNUstep followed the same model, but few embraced the concept on Linux, instead opting for the “other” WMs like KDE and GNOME.
Developing apps for GNUstep is relatively simple since you don’t have to reinvent the UI in each new app.
Sorry, but I disagree and I wrote a long answer…
Is consistency really necessary to fluidity and ease of use? I do not think so.
-Because the learning phase is short anyway, the cost is not very high.
-We are still inventing new, original ways to make an application work. Consistency kills a lot of potential inventiveness in UI interaction, especially moving to the rather recent and unexploited touch-based input (consider that trackpads did not have multi-touch before! Two-finger scrolling should have been there earlier, instead of the shitty invisible “zones” on the sides of the trackpad).
-I have a really hard time accepting what is implied: that there is out there a “one-size-fits-all” set of UI guidelines. That seems to me a belief without grounds. Sure, it does look sensible; is it? Another illustration that the difference between theory and practice is practice.
-There is another fundamental difference between desktop and mobile computing you overlooked: mobile apps are “glanced at” in a hurry, in a time-out, in between real-life activities or in the middle of one. The large differences between apps help me grasp instantly which one it is. I am sure I launched the right one, and if I come back to my device, I know which one I am looking at or switching to.
Nevertheless, I see the interest in having some core functions of a device being consistent. That is already the case, as Apple and Google usually develop those applications themselves: email, text, contacts, maps, app management, etc.
But would you recriminate against the fact that an FPS does not have the same controls as a flight sim or an RTS? Those are all games, aren’t they? Why should I relearn how to do things? (Do you see the flaw?)
You just proved my point. Equating applications to games is the very problem! A game is entertainment, an experience – it doesn’t help you accomplish a task. The game itself is the goal.
A game in and of itself is an end – not the means to anything.
An application, on the other hand, should be the means to an end – not the end itself.
See the flaw?
No, I don’t. The fact that a game defines its own goal(s) is irrelevant. And are you sure a game UI “doesn’t help you accomplish a task”? You must have played only really bad games 😛
More to the point, the “entertainment” or “experience” is more related to the content of the game, not to its UI. You could argue the opposite since the appearance of Kinect/Move, but for the vast majority of games, it is just keyboard+mouse and really diverse, genre-specific UI and behaviours (right-click gives a different response in each game genre, and even between RTS titles, for example).
Once a task is defined – whatever it is, whoever defines it -, the best way to get it done may not be one “standard” UI, and games are an extreme proof of that.
If you think there is a standard UI which is able to encompass all possible data input and manipulation – or a set rich enough to feel “complete” -, I expect to see a proof of that. I am genuinely interested.
You’d be surprised just how similar the controls of games in the same genre are. Play one FPS, and you can play them all.
As far as actual game GUIs go (menus and such)… I hate how every game has entirely different in-game menus. Some apply when you change a setting, some require you to select an on-screen button to save settings, some require you to press a specific button on the controller. Some have HUD settings in the gameplay menu, some have it in the graphics menu. And so on.
Game UIs are often pretty terrible – crazy fonts, weird colour schemes, too small fonts, etc. etc.
But even then, you can’t just equate games with applications – especially not mobile applications. You usually play the same game for longer periods – days, maybe even weeks. Try playing Fallout for a while, then switch to Left 4 Dead. Curse the failed reloads and jumps – because the controls are different.
Now, imagine making this shift not once every week or once every few weeks (as with games), but several times per minute. Mobile applications are in-out-in-out, very rapidly. You don’t spend a lot of time on each.
But hold on, ‘games’ is way too broad a category for that. Different game styles do different things, and so different controls are a necessity. FPSs and flight sims are as different as word processors and calculators. Take FPSs though – they tend to all have similar controls and even displays. You can go from one game to another by a different publisher and just start playing, no need to re-learn everything. WASD and the mouse and you’re sorted…
I agree. On a desktop, consistency in menus and dialog boxes is about all the consistency I need, knowing where to open files, print, create a new document, how to navigate to directories … etc.
On a tablet running iOS where “only one application is visible at any given time”, isn’t that a virtue for a limited screen?
What would a good alternative look like? That’s not a rhetorical question, I would really like to know.
Thom, the MegaCD had a non-game BIOS you could boot to if there were no disc in the drive. You could use it to set the clock and play audio CDs.
I can’t think of a non-CD games console with a ‘BIOS’ though, that seems to be the start of it all.
I don’t see it as a problem. The only thing people need to know is that you can touch or click on stuff. It seems millions can quickly figure out how stuff works. Even kids can.
Personally I like the variety of looks, feels and controls.
Once it’s been set up and properly explained, sure.
Oh, the amount of calls for help I’ve gotten over the years from people buying iPhones and iPads, asking me to set them up and explain how they work.
They called you because they have your number. If they didn’t they would have figured it out themselves.
At work people ask the most stupid questions, but at home they can figure out how to download warez, movies and the latest music albums.
I don’t believe you. I work with people with brain injuries that have never used iPhones before and they are very successful at it. The interface is not hard to learn at all.
One of the first things I do with a friend’s shiny new HDTV is turn off all the glaring edge enhancement, saturation and contrast that obliterate image quality. It’s just there to make the TV stand out in a line, not to give you a good picture. Similar to inconsistent UIs.
I would suggest that Google promote a ‘Holo’ limelight area for Android like the The Verge list: http://www.theverge.com/2012/4/8/2933271/android-holo-themed-apps-o…
Kind of the Nexus line for applications. Call them Nexus Class Apps. They would have to have the Holo UI and support all of the input types, including keyboard and mouse.
FYI I have an Ainol Novo 7 Aurora and I love Vanilla ICS.
Basing my next purchase on which phones offer the least branding. Padfone and Eluga Power are top of the list.
A key part of UI consistency for traditional operating systems is the key-binding schema. Windows, Linux, and cross-platform GUI applications tend to use CUA keybindings from MS-DOS EDIT.COM, and there’s a long *nix history of using either vi or emacs keybindings for everything, down to the very shell (set -o vi and set -o emacs).
I’ve never been one to think consistency was all that matters to usability.
Was WinAmp consistent with the windows UI? Nope, but it was very usable and a very popular music player.
Usability is very important. But usability is not dependent on consistency.
Usability is driven from many areas, one of which is consistency.
Others are:
Domain. Winamp looks like a traditional media player. Ebook readers can emulate paper to turn pages…
Tradition. You might want to comply with a new paradigm, but people are still used to the old one. It might be more usable to switch over gradually.
Application Model. Sometimes the usability of an application goes beyond the standard model. Perhaps to fit complexity. Perhaps to experiment with new UI paradigms. Kind of like the Ribbon in MS applications, or complex applications like AutoCAD…
…
The article never said so.
It didn’t really cease to be popular. On http://store.steampowered.com/hwsurvey/ (sure, certainly not the most accurate, but it gives some idea) iTunes has 30%, Winamp 20% – quite comparable.
Plus it appears to be delineated geographically (which most likely also influences the Steam ranking) – iTunes dominant in some places, Winamp in others (from what I see, it is still widely used in the former Warsaw Pact for example)
And yeah, one can argue Winamp was consistent – with the UI of typical CD player; and afterwards with itself.
I know that probably 90% of all Android apps that do not follow the standard guidelines for UI design are either lazy ports (like Instagram), have poorly thought-out UIs, or did not even think about the user experience in the first place. But, on the other side of the spectrum, we have something like Windows Phone, which is very consistent, but barely customizable. A Windows Phone is identical to every other Windows Phone on the market.
I’d rather pay the price of inconsistency for the benefit of being able to think in new ways to display information and interact with the user.
Besides the fact that the mobile UI with touch (compared to stylus) opened up the doorway for people to experiment, I have always assumed that the lack of UI consistency is due to the way the app was developed.
There are rapid application SDKs out there that allow a seasoned HTML or PHP developer, with little knowledge of UI discipline, to whip up an Android or an iOS app.
Great article, it articulated something I intuitively realized for a while but couldn’t quite put my finger on. Consistency in UI is important for ease of use and minimizing learning. Too bad so few understand that anymore. I think it’s partially because of the nature of the App Store, and partially because so many young developers want to write an app to get rich before they’ve matured in computer science (they have no sense you actually have to learn something before becoming the next famous hacker lionized by the media).
Amen to that article. I really don’t want office suites and image editors to feature gesture-sensitive fluffy bunnies and animated paperclips, no matter how pretty that looks, I want something that works well, is easy to understand, and gets the job done. The problem, I guess, is that this is incompatible with the way software is developed nowadays.
In the mobile world, developers are treated like crap. They have to sell their work for a ridiculously low price, lose a third of that meager income to the OS vendor, and drown their hard work in the invisible depths of a unique vendor-controlled repository, from which it can be banned at any moment. In these conditions, no one wants to, nor is able to, develop quality software, and there is a shortage of good developers who have more interesting stuff to work on elsewhere.
Or it is possible, but the cost outweighs the benefit. Let’s say you’re Adobe, and you want to make your software consistent with your own software, so that you have fewer issues with your users. You will invest a bit more to be consistent with the OS, but you cannot guarantee that all software looks the same. Adobe will continue to build on what it has and will want to extend it using current frameworks, so if it wants to add a new product to its list of products, it simply has to add the new functionality – it cannot extend it to all OS/platform combinations.
In the end I think that consistency is killed by the wish of companies to differentiate themselves, and this is both bad and good. Imagine cars all looking and behaving the same – why not improve usability, even at the expense of consistency?
In the end, consistency can only be achieved if the framework will not let you change your application too much. Firefox or LibreOffice does not look consistent in GNOME, but who would spend a year of his life reviewing all the dialogs and making sure they behave the same? And some things that started out as inconsistencies, like “tabs on top”, are used by all browsers today.
Cars don’t help your argument… they do, in fact, behave virtually the same – the control scheme is very standardised (pretty much optimal; and attempts at “improvement” – a side-swinging ~joystick at the front of the central column, for example – didn’t really work out; maybe that one will come with autonomous cars).
They also look pretty much the same – differences aren’t as large as marketers want us to believe: I think that, when we look at the past cars, our minds see them primarily as “20s-30s cars”, “50s-60s cars”, “70s-80s cars” or such; collectively.
I am on the fence about this…
I used to think that having a consistent UI was important, but now I am not so sure. Everything we use in the physical world has a UI specifically designed for its purpose. Do you think that all door handles should look the same no matter what house or building they are on? Do you think we should use door handles on cupboards, refrigerators, and cars too?
Our physical world is all about “apps” (i.e. discrete packages of functionality). There is very little consistency between many things we interact with on a daily basis. Why should we expect it to be any different on a “general purpose” computing device? Computers are used for so much more now than they ever were, that I don’t think we could ever come up with a consistent experience that wouldn’t suck.
OS developers have no idea what apps people will be writing. There is no way we can expect them to come up with a UI that will work for everything. We need people to be able to challenge UI assumptions so that we can continue to develop and evolve these ideas. If we all just leave it up to the OS developer to decide on how we should interact with the device, I think that we have a good chance of hindering some really cool advances in human/computer interaction.
Poor analogy. Door handles actually tend to be very consistent within the set they belong to, i.e. it would look extremely silly if every cupboard door et al. in your kitchen had differently coloured, differently placed and differently shaped handles, wouldn’t it? Similarly, in a house the doors that belong to a certain set, e.g. full-size doors meant to allow or disallow passage for humans, almost invariably share a similar set of handles unless necessitated by outside requirements/restrictions.
In other words, there is actually a lot of consistency there and you just basically shot your own argument down. Consistency does *not* mean that everything must look and feel the same even if it makes it harder for the user and/or task to accomplish a goal or a step to a goal.
I think you missed my point. The article happened to spend a lot of time talking about how things look. To quote it here:
“Almost every application does things just a little bit differently, has elements in just a slightly different place, and looks just a bit different from everything else.”
My point was that real-world UI can be very different for similar actions. Cupboard doors and car doors are very different from a UI standpoint, yet they essentially do the same thing – open a door. In some cases we twist a handle, in other cases we pull up on a lever, and yet in others we just pull on a knob. There are doors of all sizes on all kinds of things, and many of them can behave very differently. We don’t expect everything in the real world to behave the same way, that would be ridiculous, so why do we expect that from our applications?
We do expect everything in the real world to behave the same way. All kinds of doors are very similar from a UI standpoint and behave very alike. Real-world UI is very similar for similar actions. There is very high consistency between many things we interact with on a daily basis.
BTW, we describe the rules and similarities by which physical objects work in the real world under the umbrella term of physics… (and while “common sense” physics is flawed, as evidenced for example by many silly ideas before Newtonian mechanics came along, it is close enough)
OTOH, computer displays don’t have such limitations, and it often shows. Come on, look at the currently-trademark interactions on capacitive touchscreens, that of swiping things while barely exerting any pressure by your finger, enlarging them with two-finger-gesture, or grabbing and moving objects causing them to become “transparent” without much concern for any ~barriers in their path – virtually nothing works like that in the real world (yet of course we embraced those, we like them; but many other – not really)
Do you think that all door handles, should look the same no matter what house or building they are on?
Consistency is more about function than about looks. All door handles work the same, you push them down and they open the door.
I once was at an airport with round door handles on the bathroom stalls. The locking mechanism was also combined into the door handle. I couldn’t figure out how to lock or even make the door stay closed so I had to take a shit while keeping the door shut with my hand.
I would have highly appreciated that door handle and lock being consistent with other door handles and locks.
To me consistency is about guiding principles more than a standardized widget set. The designer should have specific goals in mind when designing a UI.
For example, in Metro we follow a few guidelines which state that content should be at the forefront. It should be fast, fluid, beautiful. Swiss typography playing a huge part in the influence.
So one Metro app may not look exactly like the next, but the philosophy behind it is all the same.
What is defined and set in stone are things like text spacing, margins, and font sizing. Some animations are pre-canned too but again, beyond that the canvas is yours.
If you look here: http://pocketnow.com/2012/06/17/which-looks-better-a-comparison-bet…
You’ll see no app looks really the same as the next, but they all accomplish the same goal. It’s clean, fast, fluid, and prioritizes direct interaction with the content over superfluous chrome.
This philosophy of putting consistency on this giant pedestal reminds me of the strict non-expressive philosophy of 60s modernist graphic design. They use Helvetica for everything, because Helvetica is neutral, believe that type should never be expressive because the meaning is in the content. Today many like the style, but very few share the philosophy that design should not be expressive.
Today graphic design is expressive. You can look at a poster and guess the content based on the typeface, colors, texture, etc. I think we are seeing a similar development in the world of UI design. Having the notes app actually sort of look like handwritten notes gives people visual cues to what the application is for and makes it easy to understand. I would not be surprised if such visual cues make an app disappear more than a purely consistent app would. These apps are only different visually; in behavior they are often very consistent. You also have apps that take things much further, such as Convertbot, Clear or Paper. I think breaking UI conventions is completely acceptable if it makes the experience better. Personally I find Paper to be far more invisible than the other, more consistent sketching apps for iPad.
I think there is more than enough room for both philosophies (and everything between). Vote with your wallet and buy the apps that work well for you.
Both Android and iOS have certain platform conventions that nearly ALL applications abide by – the button on iOS that moves back to the main screen, and the 4 buttons (back, home, menu, and search) on Android. These all provide a very basic consistency on the platform.
The rest of the application is as applications have always been – fluid, integrated in ways that make sense for the application, and more.
I haven’t gone over iOS programming models yet, but Android, from its basic development model, puts in place consistency in how things interact. Sure, an application’s Activity interface may differ significantly – but look at applications on OS X, Windows, and Linux; it’s no different there no matter how far back in time you look.
Consistency is not necessarily about how something looks, but how it interacts with the user. And in nearly all cases for iOS and Android that I have seen that consistency is there – from both first-party (Google, Apple) and third-party applications; from newbie developers to entrenched development houses.
Such a depressing statement… made even more so because it’s true. Just look at Firefox as it tries to rip off all of Google Chrome’s worst designs, which themselves seem to have taken Apple’s design ideas to the extreme.
I have to admit that there was one area where I didn’t care so much about consistency: audio players. Winamp, Sonique, QCD, QMP… they all looked nice and resembled, in a way, a traditional hardware audio player and were fully functional. Still, those like Foobar2000 are nice too in their own way, with their traditional UIs that fit in with the rest of the OS. Now software audio players these days… where they get rid of the stop and pause buttons, leaving you with only one big button that changes between the two based on the playback state and a back and forward button beside it… I better shut up, I’m starting to feel like I have to puke just thinking about it.
I totally agree. Browsers are over the top (and that’s not even speaking of the content they display).
This is why I value SeaMonkey so much: traditional, proven interface design with a powerful engine. And I remember when Firefox was supposed to be more lightweight than the Mozilla suite; now it is almost worse…
I agree good UIs are fewer and further between because of the lack of consistency. I submit, however, that the cause may be generational in nature.
First, you should note that “different” can cause confusion which can lead to frustration, and perhaps, eventually anger. Consistency is a mechanism that minimizes unnecessary differences, and hence, confusion and frustration, and perhaps even anger. That said….
As we age we become more and more creatures of habit. When confronted with new devices [and interfaces] we tend to expect to trigger “common” functionality in ways consistent with our prior experiences. Unnecessary differences confuse and frustrate us.
What is interesting to me is that the younger generation seems to adapt to newness better than most old folk, perhaps because they are not encumbered with a lot of prior experiences. And because today’s younger designers do not feel the confusion and frustration, consistency is not a major consideration in their designs.
But don’t give up. Have hope. The consistency pendulum will swing back as this younger generation gains experience, gets confused, and feels the frustration. Our challenge is to live long enough to see the resurgence. One can only hope.
…my 2 cents,
Larry, one old codger that appreciates consistency 😉
I can only agree. User experience is nowadays terrible.
I don’t care about nor follow the tablet/smartphone market very much, but I see its influence in how desktop applications get ruined. The Macintosh gets iPhone-ized with every new release. Some apps get absolutely horrible pieces of interface just to “mimic” touch devices. I’m thinking of Skype, for example! Or the way iTunes and the Mac Finder developed.
But there is something even worse: web applications. The “mighty cloud” makes for terrible applications with inconsistent user interfaces and poor element design. And they even need to change from release to release so that the application feels “fresh”.
Gone is the time of beautiful NeXT applications, clean Macintosh Human Interface Guidelines…
That’s why today I can just work in GNUstep. Of course other users will try it and toss it: it doesn’t have those brushed-metal windows Apple has, or other useless flashy effects.
Bauhaus design taught that Form follows Function. The user interface is part of the “form”. But today the first thing must be the looks; the function must somehow be “fit in”.
Don’t over-dramatise… user experience nowadays enables tons of people to use their ~computers quite comfortably – much more so than was the case in the past. Many (most?) even like the new ~smartphone models of interaction, new paths and experiments, which will likely help many more people use ~computers efficiently.
OTOH, you hold on to a GUI …basically from the beginning of GUIs. It’s a virtual certainty that it’s far from optimal (and it didn’t see very much uptake or strict cloning).
Movies or music are always better in the old times…
dont even get me started thom
dont even
okay, so android has a back button that works IN EVERYTHING but does different things in different places! WHAT! and it has a menu at the top and sometimes at the bottom AND THE ICON IS DOTS, WHAT DO DOTS MEAN?????? and long pressing for options is an ingenious use of a touch screen BUT IT USUALLY SUCKS and did I mention you arent allowed to close or minimize or pause apps, it all happens automatically BECAUSE WE ARE RETARDED? NO, stop it, stop it, we arent going there right now, we arent going to the bad place
Mr. Holwerda,
Why didn’t you consider the fact that the screen MEDIUM FORMAT for handhelds is NEW, as is the way we interact with handhelds? IMHO, the reason developers go for custom UI elements is basically that this is a new format for displaying information and interactivity; therefore the way it’s used hasn’t yet settled into the kind of conformity that the ‘big screen’ UIs of Windows and Mac OS have. But the main experience of constructing a usable UI comes from those exact ‘Big Screen’ devices.
The rules of interpreting and understanding what you see and do on a small handheld device used with your fingers or a stylus are rather different from ‘usual’ computing devices, meaning these rules haven’t even found a best practice yet. What UI framework aesthetic works for email apps doesn’t necessarily work for a space simulation game, etc.! So developers are busy trying to explore the boundaries of what is possible, practical and beautiful on these types of devices!
Give it time, and there will be a standardization of UI! I have faith…
You could call it Format-specific UI evolution!
Consistency is overrated.
In your ideal world an OS maker would come up with guidelines and all developers need to follow these? What happens if the OS maker has bad guidelines or stops innovating?
UI is organic. Consistency to a certain degree is very useful (especially behaving like the user expects, feed-forward if you will), but too much of it will stall innovation. Good things come from experimentation.
If people don’t like it, the app in question will not sell and therefore die a horrible death.
Also I think a UI should be consistent with itself, not too much across the board of an OS. I like different scrollbars when I’m editing a video, I like big buttons when I’m 75 years old, etc. Consistency stands in the way of giving the app personality.
I bet people even think an app is easier if it’s inconsistent with another one. You quickly recognize how you should behave and act when the app doesn’t blend in with other apps.
People get used to ‘inconsistent GUIs’ very quickly, a good thing
I kind of agree, in that on a desktop platform I always valued consistency, and Windows was a pain to use because of the lack of it. That’s why I enjoy the Mac, and formerly Linux, much more.
However, as usual, you tend towards hyperbole.
This isn’t backed up. Both platforms have extensive UI guidelines. Many things about them are more consistent than on a PC. For example, both provide a much more extensive and standard set of classes and widgets. While Windows is a hodgepodge of dozens of toolkits, there is only one major standard on iOS and Android. So 95% of the table views you’ll see are the standard platform ones. Same with 95% of the toolbars, buttons, toggle switches, etc, etc, etc.
If an application has gone out of its way to look good, then people will comment on it. But that doesn’t give the app a free pass if it doesn’t work. The thing is that the apps where people have made the effort to make it look good are also the ones that tend to have the attention paid to working well.
Name one example where an application removed functionality in exchange for looks and demonstrate how users thought that was acceptable. I don’t believe you.
Again, you just made that up. Who gives more accolades? Certainly not on this site. Show us the reviews where that is happening. I mean real reviews, not some random person posting on twitter.
One example of that happening, just one.
No. A good app is the best way to do this. Good apps look and work nicely. I have never seen a single app that looks good but doesn’t work, and yet is highly rated amongst its peers.
Sadly, you are in the minority. I love using my phone, and hundreds of millions of people do too.
And now a note about consistency from someone that actually designs user interfaces. Functional consistency is important, but visual consistency isn’t (to a point of course).
It doesn’t matter that certain apps have toolbars that are blue, and others are black, and others are green. This does not hurt usage of the app one iota and doesn’t require extra thought. If you disagree, please point to a usability study that shows that interface colours and graphics decrease interface performance.
Consistency in behaviour is important, but also not critical to a good application. As others have mentioned, creative UI can be more productive than standard UI. Winamp is one example (very compact UI that could live in a titlebar when always on top). Another example that I use every day is the Remember the Milk app. The new version uses a very cool UI with multiple cards that can be swiped into view, and it is significantly faster to work with than the old “consistent” version.
Consistency is a crutch. It’s a useful crutch, and in most cases it will make a perfectly good UI, but it is still a crutch. Most people that throw away standard components to make their own will make a worse replacement. However the 2% that are actually good designers and pour hundreds of hours into designing their own table view that perfectly matches their app will make brilliant things by smashing through the requirement for consistency.
The release of the iPhone, and more specifically of the App Store, changed user interface design practically overnight. A lot of people focus on the shift to finger input as the biggest change the iPhone caused in UI design, but in reality, the move from thin stylus to fat stylus (the finger) isn’t that big a shift at all. No, for me, the biggest change in UI design caused by the iPhone, and later Android, is that ‘consistency’ lost its status as one of the main pillars of proper UI design.
Thom, the shift was not from stylus to finger. It was from pointer to multitouch. Instead of dragging on a scroll bar with a stylus, you just flick the content up or down. This directness in interaction makes a huge difference. It seems obvious now, but no one was doing it at the time.
Also, consistency is not just about how widgets look. Apps are more immersive than ever before. While people used to talk about interface, they now talk about interaction. What used to be merely UI is now UX. Old school UI, like you said, is just that – old school. UI by itself is not enough.
You are focusing on the half-empty glass. I see an overflowing glass. Developers have never before put so much emphasis on UX as they do now. People are experimenting with the new media. And there’s a lot of them. Think about it. We used to have a keyboard and a pointer. We now have gestures on a 2D plane (multitouch), gestures in 3D (Kinect, accelerometers, gyroscopes), voice assistants (Siri), locality (GPS, NFC). Sure, access to all this is new, and people will make mistakes. But believe me, developers have never been more acutely aware of interaction design than now. That’s what I’m seeing in the trenches.
I remember back in the day when people used to give Apple a hard time for encouraging everyone to adhere to a set of UI guidelines.
On the DOS (then Windows) side of the road, you could code your UI pretty much the way you wanted.
Freedom was the cry. Why be limited to a company’s perceived ideals for an interface?
Now I hear we have too much freedom in this arena.
Actually, Apple still encourages us to adhere to guidelines for both OS X and iOS. When you use Xcode, it helps encourage you to create the UI in the “Apple” way, giving you (in most cases) guides right down to the pixel.
The problem is, the “people” expect something a little more (sorry for the word) “sexy”. If you have two apps, one a boring stock-standard app and one with a sexy (sorry again) UI that just smokes and sparkles, then the nicer-looking one will get the $$, not the boring one, even if the boring one is easier to use, has more features and so forth. Well, geeks and some users work out which is better, but a lot of $$ is tied up betting most people won’t.
I don’t have an answer. Personally, I prefer the boring guidelines way of doing things, but I really appreciate sparkle at times. GarageBand is a good example: dials, foot pedals and so on make sense.
I guess in the end, keep as much consistency as you can, but have fun with your apps too, and where appropriate, even create something new.
Maybe the “real world” is the best UI guideline of all…
The same happened with movies. Old movies are slow and simple. Modern movies use many symbols (like using blur to represent dreams) that we know because we grew up with them.
The same goes for applications. The repertoire is bigger now as users have more GUI experience. However, what is productive depends on context. A DAW can be less intuitive, but still provide a better workflow. You will be able to learn Ableton or Cubase in 2 evenings. That is OK.
For a single-visit website, immediate productivity is imperative and you have to play up to the sites or GUIs the target audience is used to.
It does annoy me, however, when iOS designers don’t use swipes for page turning. Or when iOS Safari doesn’t seem to have undo. Or when websites send me to an inferior mobile version of their site. Or when iOS and other touch GUIs require precision touch, like getting rid of autocorrection does. So some consistency with external use is important, but not at the widget level.
…comes not from designers… it comes from people higher up in the company hierarchy who think that because they are project leaders and CEOs they automatically have design knowledge and know more about UI design than their designers. They insist that their ideas (make it pink – I like pink; make that bigger, no one will see it… hmm, that makes everything else look less important, so make those other things bigger too, and add icons so that people know which bits they should be looking at…) make it into the design, until what you have is a horrible mess. Sadly, their ideas are, more often than not, based on their personal preferences.
I remember the Amiga Style Guide, which came with Kickstart/Workbench 2.0 (1990), and most serious app developers adhering to it. And I remember switching to Windows 95 when C= bit the dust and being very annoyed with the lack of consistency (amongst a lot of other things).
Some of the first apps to annoy me with inconsistency were CD and MP3 player apps. For some reason it was unquestionable that such apps should try to look like a Hi-Fi system rather than a computer program. They weren’t even allowed to have normal window frames.
Sorry, but this entire article could have been summed up as “I have OCD and I fear change”. Just because you have problems with things being different or out of place doesn’t make them worse – these inconsistencies have finally made computing open to the masses. Consistency never helped “normal” people use a computer – iOS has, along with its apps.
These things irk me as well, but I can see that I am wrong, as I am sure can you. I suspect that your cutlery is all the same way up in the drawer, and a slightly open drawer must be closed.