Nowadays smartphones, tablets and desktop/laptop computers are all siblings. They use the same UI paradigms and follow the same idea of a programmable, flexible machine that’s available to everyone. Only their hardware feature set and form factor differentiate them. In this context, does it still make sense to treat them as separate devices as far as software development is concerned? Wouldn’t it be a much better idea to consider them as multiple variations of the same concept, and release a unified software platform which spreads across all of them? This article aims to describe what has already been done in this area, and what’s left to do.
Operating systems spreading across personal computers
The first step in that direction would be to create an operating system which works well on all platforms. As it turns out, major actors of the OS market have already taken steps in that direction, probably realizing the benefits of this approach in terms of development resource usage, cost, and UI consistency.
Apple’s iOS is the most blatant example of this. First because it reuses a lot of Mac OS X code at the core. Second because the iPad version is only a slightly tweaked variant of the handheld version (and is even able to run the exact same applications, just zooming them in). Third because what the upcoming Mac OS X “Lion” brings to the table is essentially a big copy+paste of features from iOS, with the Mac App Store even including several ported iOS applications, clearly showing that both OSs have a common fate.
On their side, Microsoft are also working on this, although more quietly. Their various OSs don’t share a common kernel for the time being, but the recently showcased port of Windows to ARM is a step in that direction. .Net is already a universal development platform across all Microsoft devices. Windows 7 comes with better touchscreen and pen input support, and the yet-to-be-released Windows 8 is rumored to take one step toward smartphone look & feel by including an application store.
Google is a much younger player in the OS market, but as a first-class citizen of the mobile OS space they have decided to follow Apple’s lead by porting their smartphone OS, Android, to tablets in its 3.0 “Honeycomb” release.
The sad state of third-party software
So the “unified operating system across all personal computers” concept has clearly made its way into the heads of the big players, and they are obviously all working on it. Now, one of the core points of a personal computer is that it’s a flexible machine which adapts itself to the needs of its users, as long as its hardware is up to what they’re looking for. So having a universal operating system (or at least a universal development platform) is not enough. Creating a universal personal computing platform is also about having third-party software which works well everywhere, without needing to have some of its parts (like, say, its UI) completely redone each time a new sort of personal computer comes out…
And this is where current designs fall short.
Let’s first consider what is a priori the easiest path: making smartphone-oriented applications run on a more heavy-duty device, like a netbook or a tablet. You have more hardware resources than before, so it should be a trivial thing to do, right? Well, as it turns out, it’s not. Applications are designed with a specific form factor and fixed-size controls in mind. Positions and sizes are hard-coded in the code, either in centimeters/inches, or even worse in pixels. So the only way to make a phone application use more screen estate without completely re-writing its UI is to use zooming and blurring, blindly multiplying positions and sizes by a factor without knowing what they represent, kind of like what Apple does when running iPhone applications on the iPad.
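To make the mechanics concrete, here is a minimal sketch (in TypeScript, with made-up names; no real toolkit API is implied) of the kind of blind upscaling described above: the system only knows the two screen widths, not what any rectangle represents, so everything simply gets multiplied.

```typescript
// Hypothetical illustration of blind upscaling; no real toolkit API is implied.
interface Rect { x: number; y: number; width: number; height: number; }

function upscaleLayout(layout: Rect[], phoneWidth: number, targetWidth: number): Rect[] {
  const factor = targetWidth / phoneWidth; // e.g. 1024 / 480, roughly 2.1
  return layout.map(r => ({
    x: r.x * factor,
    y: r.y * factor,
    width: r.width * factor,   // buttons and text become gigantic...
    height: r.height * factor, // ...but no new functionality appears
  }));
}

// Example: a 100x40 button at (10, 20) on a 480 px-wide phone screen
const phoneLayout: Rect[] = [{ x: 10, y: 20, width: 100, height: 40 }];
const tabletLayout = upscaleLayout(phoneLayout, 480, 1024); // everything about 2.1x bigger
```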
This kind of blind upscaling by the operating system, without any knowledge of what’s actually happening, is not a good idea at all. First, because you waste screen estate on gigantic buttons, menus, and text, while (hopefully) people’s fingers and eyes remain the same. Second, because you destroy usability by making people perform gigantic gestures to go from one button to another, where a small thumb movement was enough on their phone. Third, and maybe most important of all, because you keep using a phone-oriented UI, simplistic to the point of being cumbersome in places because it had to fit on a 4″ screen, on a much larger screen where that space constraint no longer applies, making people wonder what the point of the larger screen is at all if applications don’t benefit from it.
The last issue in particular is interesting, in that it makes us realize that limited hardware capabilities cramp a developer’s creativity, by forcing him to adapt his software to the technical constraints of the hardware he’s writing it for. So while we’d spontaneously say that upscaling is the easiest process, that is not necessarily true. In order to fully use some hardware capabilities, software must have been designed with hardware that has those capabilities (or more) in mind. When put on more capable hardware, phone applications can’t invent new functions which weren’t useful or usable on a 4″ screen. On the other hand, it is easy to imagine hiding some complexity when porting a tablet application to a smaller device, so that it still works.
Since upscaling is not such a good idea after all, we must find a real-world example of downscaling in order to see if it actually works better. Thankfully, we have one: Microsoft have announced that Windows 7 should work properly on netbooks and touchscreen-based devices. However, in practice, it does not work that well either. While Windows itself and the applications bundled with it may work acceptably well (and even they already feel a bit clunky), third-party apps are simply a usability disaster. Most controls (buttons, edit boxes, etc.) are way too small to be targeted with a finger, and still hard to target with a stylus. On a small screen, like that of a netbook or a tablet, toolbars and menus run off the screen, requiring constant scrolling and digging through menus before finding the most common items. In short: everything feels messy, overly compact, and extremely complicated. These applications simply try to do too much on that hardware, and end up being nearly unusable.
Who’s the culprit there? Again, the problem is that the OS (or, more exactly, its UI toolkit) is supposed to help applications adapt themselves to the device they run on, without having a single clue about what they’re doing, and without being able to do anything that would violate the set of specifications given by the applications. Controls have hard-coded sizes which cannot be overridden without completely messing up an app’s layout. Toolbars and menus are designed without prioritizing some features over others, on the assumption that everything will fit on the screen anyway. In short, applications base themselves on a very strong set of assumptions about the hardware they run on. If those assumptions do not hold, the result is a usability failure.
One possible way to solve this problem
So let’s sum up what we’ve concluded so far…
OSs are slowly starting to work on multiple kinds of personal computers, but third-party applications are lagging behind, as their user interface must still be redesigned on a per-device basis. This redesign must take place because they can’t use extra screen estate wisely, nor adapt themselves well to a reduced screen size.
Fundamentally, UIs can hardly make use of hardware capabilities that weren’t there when they were designed. The only way applications can adapt themselves to a wide range of hardware is to be designed for powerful machines first, and then somehow shed some UI functionality as they run on less and less powerful computers, so that they remain easy and pleasant to use. Compromises related to reduced hardware capabilities should be handled at runtime by the operating system, not at design time by the developer.
Now, how could this work in practice?
In order to adapt user interfaces to the hardware they run on, the operating system first needs the application developer to give it some freedom to do so. The developer should only specify the constraints which actually matter for the UI, and leave the rest to the operating system. As an example, he wouldn’t specify button positions and sizes in pixels by hand, but rather say “There’s a Cancel button in the bottom-right corner of this window and an OK button to the left of it”. That’s all. It’s up to the operating system’s UI toolkit to decide how big the buttons are and where exactly they go, based on these constraints. When designing a game for touchscreen devices as a whole (and not for tablets or phones in particular), a possible constraint could be “these buttons should be on the edge of the screen (for finger accessibility), and close to each other (to quickly move the thumb from one to the other)”.
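As an illustration of that idea (not any existing toolkit’s API), a constraint-based declaration might look something like the following sketch, where every name and field is hypothetical:

```typescript
// Hypothetical constraint-based UI declaration: the developer states what
// matters and leaves concrete pixel geometry to the OS toolkit.
interface ControlSpec {
  id: string;
  label: string;
  corner?: "bottom-right" | "bottom-left" | "top-right" | "top-left";
  leftOf?: string;           // relative placement instead of hard-coded pixels
  onScreenEdge?: boolean;    // e.g. for finger reachability in a game
  closeTo?: string[];        // controls that should stay one quick thumb move away
}

// "There's a Cancel button in the bottom-right corner and an OK button to the left of it."
const dialogControls: ControlSpec[] = [
  { id: "cancel", label: "Cancel", corner: "bottom-right" },
  { id: "ok", label: "OK", leftOf: "cancel" },
];

// Game example: two action buttons on the screen edge, close to each other.
const gameControls: ControlSpec[] = [
  { id: "jump", label: "Jump", onScreenEdge: true, closeTo: ["shoot"] },
  { id: "shoot", label: "Shoot", onScreenEdge: true, closeTo: ["jump"] },
];
```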
Now that it has some power over how the UI is rendered, the OS must be able to use that power wisely. Since we’re talking about moving applications from a bigger screen to a smaller one, the main task of the OS will be to remove controls from the application’s UI when their screen estate cost is higher than the usability benefit of having them at hand. The question is: which controls should it remove, when, and how?
This is all a matter of having the UI designer define some priorities. We all do this unconsciously when we design a toolbar for a desktop application: the rightmost buttons of the toolbar are the first to disappear when the window is shrunk, and the user only notices them after examining the leftmost buttons (assuming he reads from left to right), so we only put the most minor functions there. But when we’re talking about an app which is supposed to work on everything from a desktop PC to a touchscreen phone, that simple rule alone is not enough anymore. Put a modern office suite on a phone without modifying it, and all you’ll get is a screen covered by a mess of menus, truncated toolbars and ribbons.
To avoid this, the OS must be able to hide whole toolbars, menus, and status bars. To collapse groups of buttons into a pop-up menu so that they take less screen space, giving direct access only to the most frequently used functions. To not only hide, but sometimes even completely disable functionality so that menus become shorter. All these ways of freeing up screen space and simplifying the app affect a range of controls and make our desktop app less discoverable in some way, so it’s a compromise between screen estate consumption and usability. Of course, there’s no way a computer program alone can make such decisions, so it must receive some help from the developers, describing in some way which controls can be dropped first and what should be kept at all costs.
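One way to picture the decision the toolkit would have to make is a simple priority-versus-space trade-off. The following sketch is purely illustrative: the fields, the numbers, and the greedy strategy are assumptions for the sake of the example, not a worked-out design.

```typescript
// Each UI element carries a designer-assigned priority, an estimated screen
// cost, and a fallback (collapse into a pop-up, hide, or disable entirely).
interface UiElement {
  id: string;
  priority: number;                      // higher = more important to keep visible
  cost: number;                          // screen estate consumed, arbitrary units
  fallback: "collapse" | "hide" | "disable";
}

// Greedy sketch: the most important elements claim space first; whatever does
// not fit in the device's budget is demoted to its fallback representation.
function adaptToScreen(elements: UiElement[], spaceBudget: number): Map<string, string> {
  const decisions = new Map<string, string>();
  let used = 0;
  for (const el of [...elements].sort((a, b) => b.priority - a.priority)) {
    if (used + el.cost <= spaceBudget) {
      decisions.set(el.id, "show");
      used += el.cost;
    } else {
      decisions.set(el.id, el.fallback);
    }
  }
  return decisions;
}

// Toy run: on a phone-sized budget the formatting toolbar stays visible,
// the menubar collapses into a single button, and the style combobox is hidden.
const controls: UiElement[] = [
  { id: "formattingToolbar", priority: 80, cost: 12, fallback: "collapse" },
  { id: "menubar",           priority: 50, cost: 6,  fallback: "collapse" },
  { id: "styleCombobox",     priority: 30, cost: 8,  fallback: "hide" },
];
console.log(adaptToScreen(controls, 30)); // desktop-ish budget: everything shown
console.log(adaptToScreen(controls, 15)); // phone-ish budget: only the toolbar shown
```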
Let’s take a word processor, as an example.
Obviously, the area where the actual document lives is the most important, and shouldn’t be hidden. Now, in terms of controls, let’s examine the various things which a user has at hand but could do without if necessary:
- On a small device where all applications are full screen, it’s not necessary to display the name of the open application in the titlebar. The menu in the top-left corner is also quite unnecessary.
- The menubar is used for functions which are only invoked infrequently, like page breaks. If needed, it could just as well be collapsed into a single button, like, say, this arrow button in the top-left corner.
- Most people will only use the formatting toolbar in everyday use. We could ditch the other two if we need to save some space.
- In that toolbar itself, not everything is equal. Only advanced word processor users use styles, so we could reconsider dedicating a whole big combobox to them when we lack screen estate, and just leave the button instead. Or even ditch it altogether if we really don’t have much screen estate left.
- Then, if that is still not sufficient, we can consider dropping the comboboxes for fonts and font sizes, and hiding them behind a “font settings” button.
- Then we can hide the rulers and the increase/decrease indent buttons.
- Then we can hide scrollbars and zooming controls, considering that small devices are equipped with multitouch screens and that additional scrolling/zooming controls are thus superfluous. We could leave only a small indicator during scrolling, showing where we are in the document.
- Then we can merge the align left/right/center/justify buttons into a pop-up menu, only displaying the currently applied alignment.
- Then we can remove the highlight and background color settings.
At this point, this is what we get:
This would be an acceptable smartphone office suite UI, so our software would, in the end, be capable of adapting itself to everything from a desktop computer to a smartphone without any UI rewrite.
So what is the key to cross-device software portability without modification? Defining numerically how important each feature is compared to the amount of space it takes on screen, then letting the OS do its auto-removal job based on those priorities and the screen estate constraints. Though UIs would take more time to design (because of the extra work of assigning priorities), they would adapt themselves to a much wider range of devices once that design process is completed. And we could finally have a universal personal computing platform, definitely putting those silly “my smartphone is bigger than your desktop” debates to rest.
I found it hard to continue to take the article seriously after this. Surely Linux is a better example, or if you want to argue it’s “just a kernel” you could say Maemo/Meego.
Would you call Linux (alone) a major actor of the personal computer market, comparable in size to Windows, iOS, Android, or even Mac OS X, without a smile? Would you say that usual Linux distros adapt themselves well to tablet or smartphone use, or that they do anything in the realm of cross-device portability?
Linux has its place in this article, but only in its Android fork, in my opinion. That’s why I mentioned it. The “vanilla” Linux world remains a minor actor, and most distros are desktop/laptop-only. Meego is not even released, and Maemo’s market is even smaller than desktop Linux’s.
Yes. Very yes. What is a smart phone if not a personal computer? If by “Linux” as distinct from “Android” you mean the traditional userland stack on top of Linux, then the answer is still yes.
The “usual distro” of OS X isn’t used on the iPhone, either, nor is the “usual” Windows stack. I’m not talking about “usual” desktop UIs, and neither are you, I’m talking about mobile UIs. Have you been following the UI work being done for Meego? Have you been following recent KDE UI work?
I know what you mean is “Cross-device runtime UI portability“, but you don’t say it. If you want portability Linux is certainly worth mentioning since it is (arguably) the king of portability. If you mean “UI portability,” interfaces that dynamically adapt to the current screen and input method, then even there some good work has lately been done.
Android certainly deserves mention, but you leave it as an afterthought following a thick paragraph about Windows, of all things, which is all rumor and maybes. And, a quibble: All Microsoft OSes do share a common kernel, easily as much as iOS and OS X do. Windows deserves far less mention here than does Meego, much less Android.
By “Linux”, I mean “operating systems using the Linux kernel”, as opposed to the Android kernel, which is considered a fork since they decided to completely reinvent some parts of it and thus got their patches refused.
If the answer is still yes, can you show some numbers proving it?
What I was thinking about is a single distribution which can run on a variety of devices with only a recompilation with different flags, or something similar along those lines.
As far as I know, iOS on the iPad is exactly that: you take iOS for iPhone, you change the hardcoded screen size somewhere, and you get the end result. Now, on Linux, I know of netbook-oriented distros and desktop/laptop-oriented distros, each with a highly different UI (in fact both UIs are managed by different software), and that’s about all. Well, there’s Meego, sure, but the handset part is still far from being stable and release-ready, let alone being a major player in the personal computing market, though I’d sure like to see that happen.
Again, Meego for anything but a desktop/laptop is not even out of the door yet. I was talking about major players. Android’s paragraph is shorter because as far as I know they do less and are late. Though again, Google are relatively new on that OS market, so it’s normal that they have less developer power and do less.
There is not even a comparison. iOS runs only on Apple’s devices. Period. Porting is not even an option. Linux runs on a whole bunch of embedded and portable platforms, and porting is a goal.
I agree with the above – Linux is the king of portability. iOS is just what was called here a walled garden.
Android did not get their patches refused because they completely re-invented some parts, but rather because they (i) did not submit proper patches, (ii) ignored what the devs said, and (iii) chose a different path from the mainline kernel. They were invited to submit proper patches against the mainline kernel that were within the realm of what was going on with the kernel.
Again, look at KDE. They are targeting Desktop, Netbook, and phones all with the same code-base and widgets. Write a Plasmoid for KDE Desktop and it’ll run on KDE Netbook too – likely without changes or even re-compile.
Can’t do that with Mac, and certainly not Windows.
You mean like Gentoo? Which supports x86, x86-64, PPC, Sparc, and several other processors. Plus you have Desktop, Server, Hardened Server, and even Embedded targets. No other distribution matches Gentoo for range of devices. Nor does Windows or Mac.
Again, look at what KDE is doing. At most you’ll need a re-compile. But KDE, and Linux in general are far more varied than iOS in what is supported.
You don’t even have to go to Meego. Gentoo Embedded is already out the door. For that matter so is OpenEmbedded (http://www.openembedded.org) – again, numerous devices and you just have to set up your environment to support the device you want.
Although the heading talks only about “personal computers”, the first sentence of the article expands the scope of discussion considerably to include “smartphones, tablets and desktop/laptop computers”. In the latter context, Linux is a significant player.
If one also considers a further ambition beyond mere “cross-device compatibility” within one OS family, one might also talk about “cross-platform compatibility”.
Your determination to try to dismiss Linux/OSS from the main discussion has IMO caused you to miss an interesting technology in the very arena of the topic.
http://en.wikipedia.org/wiki/Qt_Quick
http://en.wikipedia.org/wiki/QML
http://qt.nokia.com/products/qt-quick/
When I say “personal computer”, I mean a computer which is designed to be owned by an unskilled individual. Desktops, laptops, tablets, smartphones, and netbooks are all personal computers.
As I said previously, if you’re going to say that Linux (not Android) is a significant player of this market, please show how.
Cross-device compatibility means cross-platform nowadays. Most mobile devices are based on ARM, while most desktops, laptops, and netbooks are based on x86(_64). Cross-device is a superset of cross-platform, in that you not only have to adapt to various CPU architectures and internals with the same peripherals plugged in, but also to various displays and human interface devices (which is at least as tricky).
Well, I’ve yet to find a proper, clear, and concise introduction to the subject, but each time I read about it it sounds like some kind of CSS for desktop apps, with mandatory pixel-based control positioning as an ugly bonus.
CSS is a step in the right direction, in that it forces separation of user interface from the program’s internals. But it still doesn’t make websites or applications magically adapt themselves well to a big change of screen size. Website developers still have to work around that all by themselves using some ugly javascript to say that if screen size is smaller than x then you must hide feature y. They have to do that design process by hand. This is not the same as true cross-device portability, where the UI toolkit does that job for you, only given some data regarding how important each element is.
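For illustration only, that hand-rolled approach might look like the TypeScript sketch below; the element ID and the 600 px breakpoint are invented, and the point is precisely that the developer, not the toolkit, decides what disappears:

```typescript
// Hand-written, per-site adaptation: the developer hard-codes which feature
// vanishes below which width, with no notion of relative importance.
function adaptLayout(): void {
  const advancedToolbar = document.getElementById("advanced-toolbar");
  if (!advancedToolbar) return;
  // "If the screen is smaller than x, hide feature y" -- decided by hand.
  advancedToolbar.style.display = window.innerWidth < 600 ? "none" : "block";
}

window.addEventListener("resize", adaptLayout);
adaptLayout();
```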
Qt Quick is Qt Quick, Qt Stylesheets is Qt Stylesheets. One is largely inspired by CSS, the other is for building application UIs.
I.e., you have conflated those two technologies.
As I said, I’ve yet to find a good introduction to the subject to clear up my vision of it. What I’m looking for is something written by a Qt developer or enthusiast which explains in a few paragraphs, without going into technicalities…
* What’s the point of those new Qt technologies, and what Nokia designed them for.
* Why I should use them as a developer, what they bring to the table, and how they solve the problem they were designed to solve.
If you have some links which do exactly that, please share them!
– Easier coding of “nice” user interfaces with free-form animations. You can translate the “designer” angle quite directly into code (“this button is to the right of this image, at the bottom of the view”)
– Need for speed. Unless you get a great framerate (preferably the magical 60 fps) on a phone these days, you fail to attract users. Most of the iPhone’s attraction comes from framerate; users just don’t know it ;-).
Future (internal) Nokia UI innovation will all happen on top of QML, for a good reason. It’s also gathering momentum outside Nokia, e.g. KDE community.
QML is also the *only* UI technology we fully support for external developers in coming devices (both MeeGo and Symbian). Unless you count OpenGL, but that is targeted at an entirely different class of programmers (3D games).
If you are a desktop application developer, old-style QWidgets are the easiest way forward for now. Desktop applications don’t need to be too flashy, and the usage paradigm is quite predictable.
How:
– Declarative programming style endorsed (property binding, anchors)
– Only support features that can be made fast on the GPU. Complex stuff is composed from these fast elementary features
– Nice syntax
Thank you very much for the explanation!
What about GTK+ on Meego? It’s supposed to be supported too.
It’s “community supported”. Better supported on the Netbook edition, but on MeeGo phones it’s anyone’s guess at this point how well Gtk+ will work. Of course you can compile Gtk+ and run apps using it (hey, it’s Linux), but they will probably look like crap.
If you are interested in Gtk/Hildon on MeeGo, follow the Maego project:
http://talk.maemo.org/showthread.php?t=56822
Perhaps you could put aside some time to watch some videos:
http://qt.nokia.com/developer/learning/online/talks/developerdays20…
Possibly this:
http://qt.nokia.com/developer/learning/online/talks/developerdays20…
Well, I recently got a Flash plugin issue which makes every sound coming from it horribly painful to hear (32 kbps MP3 is what comes to mind), so I can’t listen to all of this at the moment, but having arrived at 17 minutes (when he’s done with his Edit example and starts to move on to other interface components), I think I get the overall idea.
As I learned programming with Delphi, I still feel a bit nostalgic about its GUI designer, but I must admit that this looks as fun as a text-based UI design tool can get. Also, I love the auto-completion features of Qt Creator.
On the other hand, you also confirmed to me that this likely won’t solve the problem I’m talking about in this article, despite what lemur2 was implying. You can write UIs for various devices using this same tool, right, but if I’m not mistaken you still have to design the UIs on a per-device basis.
“CSS is a step in the right direction, in that it forces separation of user interface from the program’s internals. But it still doesn’t make websites or applications magically adapt themselves well to a big change of screen size. Website developers still have to work around that all by themselves using some ugly javascript to say that if screen size is smaller than x then you must hide feature y. They have to do that design process by hand. This is not the same as true cross-device portability, where the UI toolkit does that job for you, only given some data regarding how important each element is.”
Actually, web developers are starting to understand how to do this.
Websites can show less, or use a different layout, on devices with different form factors using just CSS.
Have a look at the blog from this designer:
http://www.hicksdesign.co.uk/journal/
Just resize your window from smaller to larger in Firefox, Chrome, Opera or Safari; it even kind of works in IE8.
Very impressive indeed, and I must admit that it didn’t take him a single line of JS to do that, contrary to my expectations. It still takes a lot of if-then work to achieve this effect, though…
The if-then work is mostly just the implementation of the different ‘designs’. Every form factor has a different design to hopefully fit the screen in the most usable way.
Also it is his first project where he did this. 🙂
I would like to add it is also possible to load large images for large screens with these kinds of tricks.
I don’t know why people don’t get it, HTML/JS/CSS is the new API/SDK 😉
To give you folks an idea of what is already possible, here are some demos by Paul Rouget from Mozilla:
HTML5/CSS3, WebGL, the video tag, canvas pixel manipulation, hardware acceleration, and a little bit of File API demos:
http://www.youtube.com/watch?v=gFmuNApHFec
Why not add (multi)touch?:
http://www.youtube.com/watch?v=GL2dwXa1_gw
Here is another demo, which includes device APIs that allow access to, for example, your webcam or USB stick (so you can drag files onto web pages):
http://www.youtube.com/watch?v=nbSFvb9dWtg
There are many other things in HTML5: built-in color pickers, better font support, offline use (!), and so on.
But who says you need a server/website anyway? You can also write your app-store applications in HTML5/JS/CSS and get direct native API access, as with the http://www.phonegap.com/ project.
Given that OSX and iOS are both basically versions of NeXTSTEP from the late 80’s, that phones are higher-powered than the workstations that OS ran on originally, and that the development frameworks differ almost exclusively by the UI widgets… what’s your problem with iOS as an example? C/Obj-C based, Unix, nice UI… what, it’s not X or open source? I guess I didn’t realize that was a requirement for OS convergence.
UI differences between ‘hide this menu’ vs. ‘re-think how to use screen space’ are the difference between getting a Windows Mobile type of app (what the example looks like) or a clean one.
You’re going to be very busy building the ultimate rules framework and tons of code to support it, instead of making the best app you can, if you decide to ‘save time’ by avoiding spending time on the UI. Or it will be second-rate.
But then you will have to re-code your app’s UI and users will have to re-learn it each time you move to a new device. It’s the good old single-platform vs multi-platform debate, really… Only this time, multi-platform is something more interesting than just a way of supporting niche OSs.
So far, it has not been proven that the concept of multi-platform apps is fundamentally wrong. Only that Windows Mobile sucks on a touchscreen. Which is not relevant in this context, considering that (1) WM was designed for stylus use to begin with and (2) WM apps are not ported Windows apps, contrary to popular belief.
“This would be an acceptable smartphone office suite UI”. (Not your quote, from the original article.) That sums it up for me. Acceptable. Yep, it will pretty much work, but that’s it.
I’m definitely aware that the WM apps are meant for stylus and that they’re not just re-compiles of their desktop counterparts – that’s why I’d leave WM out of the convergence discussion. I mention it purely as a point of UI – the reduced UI in the example looks like a WM app (drop some menu items here and there, call it a mobile app), not like a well thought out portable version. I’ll totally ignore that the backend app was a rewrite because I don’t think that users know/care for the most part. Look at a Word for Windows Mobile screenshot – fine, they put the buttons on the bottom at some point, but… and?
The evidence of multi-platform apps being ‘wrong’ is in iOS + Android market share. I’m talking UI here, not the idea that you could compile the same app code to work across devices – I agree that we’re pretty much there already and it’s where things are heading.
FWIW, you don’t have to “re-code” your app’s UI at this point. At least on iOS you’re going to be doing more re-layout work and re-thinking what it means for the user experience, with maybe a few code path tweaks, but you’ll spend more time making the layout nice, doing other artwork in some cases, etc., than re-coding anything.
Does a user have to re-learn the app? Maybe slightly, but not at any deep level, just enough to let the device bring its best capabilities to bear and mitigate any shortcomings (CPU/screen space).
We’d be looking at zoom sliders instead of pinch-to-zoom, next/previous buttons instead of swiping, etc. if we tried to do the suggested UIs. Sadly, it may be acceptable for most users, but it’s definitely un-interesting from a capabilities perspective. And it’s really what most of the industry was bringing to the table until Apple lit a flame under their collective ass. Code convergence on the back end shouldn’t be seen as a reason to avoid improving the user experience, and looking to save time across device variants has that feel.
And I really hope I don’t come across as combative or anything – I just don’t agree with the conclusion, though it’s certainly a question worth asking. I’m drawing my conclusions from my own development and from industry trends.
No offence. I just wonder if we have tried cross-device UIs hard enough before dismissing them, and whether the question wouldn’t be worth reconsidering more carefully now that cellphones, tablets, etc. all begin to try to do the same thing, making the app compatibility distinction a bit technical and artificial from a user’s point of view.
Apple clearly chose the easiest path by asking developers to re-code apps. No question about that. Getting a real cross-device UI toolkit (where, as you say, zoom sliders don’t remain on touchscreens) off the ground would be very difficult. This article is meant as a description of the core idea of a cross-device UI, but I don’t pretend to have fully solved the problem. It’d take months (years?) to get an implemented, stable, and working version of this, if it’s possible at all…
However, wouldn’t it be worth it? Like it was worth putting money into fundamental research on lasers, after all…
Happily (or sadly?) I’ve been coding since there were other cross-platform frameworks, and there’s not one instance that has worked well. And that’s just trying to bring mostly the same functionality to similar screens.
Coming up with a different set of UI frameworks certainly wasn’t the easiest path for Apple – they could have done nothing and just had you build Cocoa UIs for iOS, so I’m not sure I agree. It goes far beyond pinch-to-zoom, there’s a lot of basic and extended behavior that just doesn’t fit the mold of a desktop app.
Based on what I’ve seen over a long time, it’s never worked, or at least never worked well. You end up with a lowest common denominator. That does have a place – I can see where you can save some time – but if you extend it to the point of really having a UI that shines, you may as well have done a custom layout. And that’s having used Java/VC++/Delphi/C#/ObjC/assembly/C/…. :/
I’m not sure we are talking about the same thing.
If I’m not mistaken, you are talking about frameworks which spread across multiple OSs. Like GTK, Qt, Java MIDP…
While it’s debatable that you can’t create good applications using those frameworks (Audacity is bad neither on Windows nor on Linux), the fact is that they must struggle with various incompatible toolkits (including from a usability point of view; see the whole Qt on OS X issue) which they have no control over.
This is not the case with what I’m talking about. You only have one OS to run on. One API and one look and feel to mimic. I just want said OS to do its job: if it pretends to run on several different devices, fine, but it has to provide the same look & feel and the same applications.
Pie Menus.
Edit: mainly as a means of overcoming inconsistencies between touch- and mouse-driven interfaces.
Well, if I keep using my favorite OO example…
http://img37.imageshack.us/i/bigmenu.png/
How do I turn that into a pie menu that fits on a smartphone screen?
(PS: More seriously, a problem I have with pie menus is the absence of text in them. If you try to include text, they become gigantic. On the other hand, not everything can be explained through the use of icons. And on a touchscreen, you can’t hover over icons and read the tooltip…
For these reasons, I do think that big scrollable menus are a better fit for touchscreen devices)
Gripe one’s solution: sub-menus. From 8 main choices you can get 7 sub-options. You can also get a different menu from 3 buttons on the mouse, and you have modifier keys on the keyboard that can more than triple that.
With a multitouch screen on any handset you can support up to 3 individual menus. 1-finger, 2-finger, and 3-finger, further expandable by tap or tap-and-hold.
Many of the options in that example don’t even need to be menu items. A key command to bring up a dialogue or simply having a dialogue for some of them already visible makes much more sense to me.
Not everything can be explained with icons, but icons can be explained, and hopefully remembered.
I think too much effort is put into discoverability in UI, and not enough into actual usability.
Often I find user interfaces that are meant to be easy to navigate the first time you use them to become cumbersome as you familiarise yourself with them.
I’ve put a lot of thought into this (drawn out diagrams, thought about how they could be configured) and done research (books, papers, old articles on the subject), and I really think user interfaces have gone down the toilet due to programmers patronising and looking down on ‘users’.
It’s hurt accessibility, and widened the gap between programmers and so-called ‘users’.
“No one wants to learn a programming language to set up a program!” No one wants to use a program to accomplish a task. No one wants to do anything to get that task done. They want the end result. Effort is rewarded, and making things ‘simpler’ could be re-defined as ‘minimalising the capabilities of a system’.
People don’t want to read manuals before they start using something, but end up reading 10 websites and reading and re-reading these menu options, digging through Help pages _after_ they start…
It all seems… sideways to me. Like opening the hood of your car and tightening things until you have to read the manual and/or call a mechanic anyway.
Can you tell I live in Emacs?
If you hadn’t mentioned it, I’d have thought you used Vim.
The problem you raise is very interesting, and is one I’ve been thinking about for some time, ever since I bought a (very good) book on software usability out of curiosity. In a chapter whose title I’d translate into English as “Top ten myths about usability”, a point which I found particularly relevant was “it’s not usable if my grandmother can’t use it”.
The author pointed out that you always have one target user base in mind, and that you must optimize for *that* target user base. Not your grandmother. That overly guided interfaces are bad when you put them in the hands of a specialist audience.
That being said, for a phone/tablet OS, whose target audience is maybe 90% computer newbies and 10% computer-literate users, I do think it’s important to optimize for discoverability. Moreover, users will be used to big menus (since most of the phone’s interface is built using them), whereas pie menus will be totally new to them. By using a pie menu, you violate interface conventions.
Now, if you’re advocating designing the whole OS’ UI around pie menus, that would be interesting indeed. But I fear that we would go back to the manual age then. Users don’t read manuals these days…
I prefer to solve problems rather than hack around them ;p
I see that as a problem. The solution? People should read the manuals.
Many people don’t read books much either, and that, too, is a problem.
Well, then we don’t agree here
In my everyday life, I try to let my tools annoy me only as much as necessary, and no further. Basic operation of my phone shouldn’t require reading a manual, since current designs prove that it’s not necessary for the features I want. Only more advanced operation should require me to learn something, when it’s actually needed.
On my computer, I use Linux because it’s simpler for my usage patterns. But I don’t spend my life in a terminal, because most of the time I find the GUI alternatives, which don’t require me to learn a bunch of commands by heart, to be simpler. I use a terminal when I need it.
It’s about being lazy except for things which actually matter. And in that regard, I think that the disappearance of manuals for non-professional products is very good news ^^
*advocates using Occam’s razor for usability matters*
I find the shell to be simpler, because it _will_ do what I want, unless I do something wrong.
A turing-complete language without major bugs, and good documentation will do the same thing every time.
It’s also not doing anything when I don’t want anything happening.
I use ‘gui’ applications when there’s no other solution that makes sense or works properly. I use qBittorrent because it has more working features than the cli/curses alternatives.
I use easytag because there are no simple cli tagging applications. The GIMP and Inkscape for art for obvious reasons…
I use Conkeror for browsing because it’s precise, and I don’t need to fudge around with the mouse as much if a website is designed well.
I wish more programs used Conkeror’s UI model. Hinting and keyboard command sequences are efficient once you learn them. Once you have a series of menu options memorised there’s no real difference between those and a keyboard sequence. At a certain point it becomes muscle memory, but with a keyboard sequence you’re not dealing with window/menu position variations.
I was also about to reply: “pie menus”. I think this is the best UI configuration for menus. I am currently writing a very configurable pie menu for SWT (in Java): it has a zone below the circle that displays the text that would have been displayed in a classical contextual menu. It also has a “brief” description in a styled big balloon that uses HTML for the presentation.
Granted, I did it with the desktop in mind, but the configuration is there to cope with different settings. It isn’t finished yet, as it’s much more work than I initially thought. Moreover, I’m planning for these menus to open in an overlay on top of the parent pie menu, with the overlay being a transparent layer that darkens the background (and hence the parent).
EDIT: the only problem is finding appropriate icons.
I can also see this kind of priority-based UI management being beneficial to desktop users. It’d make tiling WMs and non-maximized windows more useful on screens smaller than about 1.5 times whatever the app was designed on.
I have two 1280×1024 LCDs, but I can’t tile all of my apps beyond “maximized, one per monitor” because the functions I use in some of the ones I use frequently are laid out to assume at least 800 pixels of screen width, almost none are OK with 480px, and I’d probably waste as much time as I’d save if I didn’t spend a week fine-tuning the tiling algorithm with a table mapping apps to minimum dimensions actually producing usable UIs.
It’s why I’m working so hard to make the desktop and mobile versions of my current web app project comfortable while differing by little more than “mouse vs. finger” for the button sizing guidelines. (Among other things, I’ve got the current testing layout down to Chrome with scrollbars at 800×600; I’m now working on a linearization scheme to bring the minimum usable width down further.)
This was actually done for MIDP (i.e. old crappy J2ME phone app) menus. You couldn’t specify the order or even tree structure of the menu, just a load of menu options and priorities.
I have to say it was inflexible and didn’t work very well, but that may have been just because MIDP was a pile of crap.
I think the biggest problem was: you did a hell of a lot of reasoning to get from your desktop UI to mobile UI example. Programming that reasoning is definitely going to be more work than just creating another UI. Besides you’re going to have to create a new UI anyway due to all the little details that are different.
I read through about half, and from some angles I can see how the reasoning goes. But the reasoning is like trying to figure out how to live on the moon, assuming it has oxygen and water and everything we have on earth, but still having 1/6th of the gravity we have here, and not taking the latter into account.
Microsoft does NOT have the correct direction when it comes to OSs or input methods. They are, as almost always, flailing about in multiple directions trying to figure out how to keep from getting left even further behind.
As for Apple, it was totally obvious for them to upscale in the way they did. Being able to run iPhone/Touch applications was NEVER considered a long-term solution but a short-term one. It was always expected that programmers would create a new interface befitting the iPad.
It’s like having a bike rack on a car and thinking that it would be very awkward to ride the bike while attached to the car instead of realizing the car would only be carrying the bike and you would be driving the car instead.
There has to be some slick way of doing it. You’d need not only a nice system of doing it, but a good implementation of that on a variety of platforms. Will respond further when thoughts are more organized. Nice thought provoking discussion topic.