Multitasking
Since I was a heavy Windows Mobile user back in the PDA era, proper multitasking on a mobile device was second nature to me. At least on my PDAs, multitasking in Windows Mobile didn’t seem to affect the battery all that much (and they had Wi-Fi and Bluetooth), so it was never a special or significant feature for me – it’s a pocket computer, of course it does multitasking.
Why was multitasking on mobile something normal back then (Windows Mobile, Symbian), while with the iPhone, it became something special? Something of a holy grail? Why did it suddenly become so hard to do multitasking without affecting the battery too much? Why, almost five years down the line, does the iPhone still not have anything other than My First Multitasking™ (which, for all intents and purposes, is barely one notch above a list of recently used applications)?
There are several reasons. First and foremost, whereas operating systems like Symbian and Windows CE were designed from the ground up to be small and efficient on constrained hardware (which explains why WP7.5 kicks Android’s and iOS’ butt in the performance department), operating systems like iOS and Android are basically comparatively heavy desktop and server-class operating systems shoehorned into mobile devices. This obviously comes at a price.
In addition, processors have progressed much faster than battery technology, and while my PDAs already had Wi-Fi (and I even had a Windows Mobile smartphone, the HTC Artemis), modern-day Wi-Fi is more powerful and probably sucks more power. I don’t believe the story about how we use our smartphones more often. In fact, I used my PDAs more often than I do my phones today, because now I also have a slim ultrabook and an iPad to take on most of the PDA’s duties.
With my history as a PDA user, I don’t consider iOS to have proper multitasking – and by proper multitasking I mean that the user has the ability to run multiple programs simultaneously. This is important, because iOS itself, of course, fully supports multitasking and all that comes with it – it just doesn’t expose it to the user. I don’t consider the capability for applications to run a few select pre-defined, Apple-approved tasks in the background to be multitasking.
In iOS, application programmers can only perform the following seven tasks in the background:
- Background audio – application continues to run in the background as long as it is playing audio or video content
- Voice over IP – application is suspended when a phone call is not in progress
- Background location – application is notified of location changes
- Push notifications
- Local notifications – application schedules local notifications to be delivered at a predetermined time
- Task completion – application asks the system for extra time to complete a given task
- Fast app switching – application does not execute any code and may be removed from memory at any time
Many would argue that this covers most use cases, and while that may be true, none of them address the fundamental issue with iOS and multitasking: it feels bolted-on. An afterthought. Something added only to shut up all the complainers. Here, you can continue to play audio while you run another application. Now shut up, that’s multitasking.
However, on iOS, applications still don’t really know what’s happening to other applications. Switching between multiple applications really means switching between applications; when you switch to another application, the previous one is killed. It can continue to do any of the above seven tasks, but that’s it. On iOS, whenever I need to switch to a different application, I feel hesitant, because I never know what’s going to happen to the previous application. Will it remember the text I just typed in? Will it remember my location inside the application?
It’s all made worse by the fact that switching between applications on iOS is a very cumbersome process. Pressing the home button twice is not a comfortable operation, because it’s very error-prone for me – I regularly end up long-pressing the button, or the second press isn’t firm enough and I end up at the home screen. It feels like Apple never intended for this functionality to be part of iOS, but after all the complaints, was forced to add it anyway.
On Android, multitasking has clearly been a design consideration from day one (mostly, but we’ll get to that in a minute), and the back button plays a major role here. The major advantage comes from the fact that applications can do a lot more while they’re not front and centre than the seven tasks applications on iOS can perform. The key to why I prefer Android’s multitasking to the seven background tasks in iOS is Android’s concept of Application Components (see Dianne Hackborn’s article on the subject for a good primer, or the more in-depth article at the Android Developers page).
Application Components “are the essential building blocks of an Android application”. There are four different types of application components, the most important of which is the activity. Each activity is a single screen with a user interface. Each application (generally) consists of multiple activities; Gmail, for instance, has an activity which displays the list of emails, an activity for composing email, and so on. Twicca, my favourite Android Twitter client, has activities for the timeline, a user profile page, compose tweet, and so on. Heck, even your homescreen is just an activity.
Second, there’s the service, which, as the name implies, is a component which runs in the background to perform tasks that take a while, and do not have a user interface. Examples are audio playing in the background, or fetching data over the network. These continue to run even if the user switches to another application.
Third, we have the content provider, which manages a shared set of application data, which can be stored in the file system, an SQLite database, or any other persistent storage location. Other applications can access or modify this data through this content provider (assuming permission has been given). Lastly, there’s the broadcast receiver, which takes care of receiving messages from other applications (e.g., telling applications a certain download has completed).
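For the non-developers, here is a rough sketch of what three of these four component types look like in Java (the content provider is omitted for brevity; the framework classes and callbacks are real Android SDK names, but all app-specific class names and comments are invented for illustration):

```java
import android.app.Activity;
import android.app.Service;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.os.IBinder;

// An activity: one screen with a user interface (e.g. a timeline view).
public class TimelineActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // build and show this screen's UI here
    }
}

// A service: background work with no UI, e.g. fetching data over the
// network; it keeps running when the user switches applications.
class SyncService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // kick off the long-running work on a worker thread here
        return START_STICKY; // ask the system to restart us if killed
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // this is a started service, not a bound one
    }
}

// A broadcast receiver: reacts to messages from the system or other
// applications, e.g. "a download has completed".
class DownloadFinishedReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // e.g. post a notification about the finished download
    }
}
```

The point of the split is that the system can start, stop, and kill each piece independently, which is what makes the cross-application behaviour described next possible.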
The unique aspect of Android multitasking is that each application can start another application’s components. You can see this at work when browsing the web and encountering a link to a tweet, or a Twitter user’s profile page. While on iOS it will simply open the Twitter.com webpage, on Android, you can tell the phone to open your Twitter application instead – however, instead of launching the application as a whole and switching to it, only the relevant activity is launched.
So, in the case of tapping on a web page link to a tweet, Twicca’s activity for showing a tweet is launched and shown to the user – without actually launching the entire Twitter client. The same happens when tapping a link to a profile page; Twicca’s activity for user profile pages is launched, again without launching the entire application. In both cases, a single tap of the back button brings you back to the browser. In addition, if you open the Task Manager, you’ll see that Twicca didn’t actually launch at all.
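Mechanically, this works through Android’s implicit intents: the browser describes the data it wants handled, and the system finds a matching activity in any installed application. A hedged sketch follows – the URL, class names, and filter values are all illustrative, not Twicca’s actual code:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public class TweetLinkExample extends Activity {
    // What the browser effectively does when you tap a link to a tweet:
    // it names no target application, it only describes the data.
    void openTweet() {
        Intent view = new Intent(Intent.ACTION_VIEW,
                Uri.parse("http://twitter.com/someuser/status/123456789"));
        startActivity(view); // a chooser appears if several activities match
    }
}

// A Twitter client opts in by declaring an intent filter on just the
// relevant activity in its AndroidManifest.xml, roughly:
//
//   <activity android:name=".ShowTweetActivity">
//     <intent-filter>
//       <action android:name="android.intent.action.VIEW" />
//       <category android:name="android.intent.category.DEFAULT" />
//       <category android:name="android.intent.category.BROWSABLE" />
//       <data android:scheme="http" android:host="twitter.com" />
//     </intent-filter>
//   </activity>
//
// Only ShowTweetActivity is started; the rest of the client stays cold.
```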
It’s precisely this ability that makes Android’s true multitasking feel so smooth and natural. Instead of hopping from silo to silo (iOS) without really knowing what the application you’re leaving behind is going to do, which can be jarring, you’re just quickly opening a single activity, and hopping back through a single tap on the back button.
It’s difficult to explain if you haven’t used Android for any extended period of time, but this is really one of the things that pleasantly surprised me. I just figured applications ran in the background, and that the concept of using multitasking on Android was effectively the same as on iOS. In reality, though, it’s far better thought out and integrated into the operating system and its user interface than is the case on iOS.
It’s not all roses and unicorns, however. While opening activities from another application and hopping back is easy and fluid, switching between applications as a whole is just as annoying on Android as it is on iOS. You long-press the home button, and a dialog pops up containing the icons of the six most recently used applications. It’s a little bit worse than iOS, even, since you’re limited to those six applications, whereas on iOS you can scroll back pretty far.
On the whole, though, multitasking on Android is far superior to the seven pre-approved background operations in iOS. I honestly hadn’t expected it would make that much of a difference, but after a few months on Android, I find my iPad cumbersome and jarring when it comes to using multiple applications.
History is written by the victors.
BTW: Preaching to the choir somewhat?
Geeks btw come in different sizes, my friend describes himself as a design geek. I describe myself as a bicycle geek before a computer geek (you wouldn’t know the stuff I do about bottom brackets … Swedish bicycle, you say??)
Some people label themselves geeks because they are into gadgets. I would say some UK football supporters are geeks because they know so much about it; it is an obsession.
I love the idea that there is some type of Geekism hierarchy and by knowing the history (that nobody except us care about) of how Smartphones evolved makes you the lord of nerds or something.
I hope you know about the No True Scotsman fallacy?
Early smartphone history will be nothing except a curiosity.
I thought it was a rather sensible approach? Looking at smartphones as PDA replacements, and dragging in the older windows CE devices as references for what has been possible in this space before, helps place his views and requirements in a context. Merely saying “I think iOS has worse multitasking than android” is less interesting than a compare/contrast with earlier useful solutions/iOS/android.
Way to miss the point. FFS, it’s like swimming through custard on here sometimes.
He was stating that other people stating themselves as geeks were wrong (in his first few sentences) because they didn’t know the history.
I was saying that he was being somewhat egotistical to say he knew what is geek and what isn’t. If he said “smartphone experts” I wouldn’t have had an issue with it, but then again I care about accuracy when it matters.
Thus me referencing the No True Scotsman fallacy.
But never mind … Carry on.
I didn’t say they were wrong (and no need to get so angry, by the way).
I merely believe that having a good sense of history is important when discussing things like this. Framing 30 years of mobile computing history as “Android vs. iOS” is idiotic.
That isn’t what you said.
It is typical elitism. I know lots of guys are Lord of the Ring Geeks, but I don’t bring myself above them because I have read the Chronicles of Conan.
BTW, someone saying “FFS” and actually being venomous doesn’t mean they are angry; it means they are passionate.
I think you were the one who missed the point of what he was saying. As I read it he was complaining about the people who call themselves ‘geeks’, particularly tech geeks, but only know things about iOS and Android. The sort of people who think ‘I am a geek because I rooted my iPhone/Galaxy’ and know next to nothing about the technology with which they deal or, like Thom said, think that the portable industry was in the proverbial dark ages before iOS/Android came to save us from it.
That’s passive-aggressive behaviour.
Well, duh. There’s no such thing as a non-elitist geek.
Have you tried the Amazon App Store? It does a much better job than the Market of finding apps. Not to mention the free app of the day is fun.
It’s only officially available in the US, so using it anywhere else requires some work. (e.g. http://www.xalate.com/blog/2011/04/use-amazon-appstore-outside-the-… )
This is the best article ever! [1]
[1] I meant not that
I wouldn’t go that far. But it certainly is great work.
I am really glad Thom found the time to finish this; the world would be a poorer place if this review hadn’t found its way to the intertubes.
So thanks for sharing and keep up the good work!
“As a final example: some applications use proper bounce-back when scrolling, while others just come to a dead stop. I have no idea why some applications decide this is a good idea; heck, even several Google applications do not implement bounce-back! There’s no excuse for not implementing bounce-back scrolling in your Android application when Android itself clearly supports it. The fact that Android even gives you the option is bad enough as it is.”
This is actually not true. Bounce-back scrolling is not in Android, which instead uses an edge glow. That is something Samsung bolted on, which is why it doesn’t work everywhere. The inconsistency you are seeing is the result of TouchWiz.
Ohhh! And here I thought edge-glow was a Gingerbread thing, since my phone’s 2.3 update ditched all the bouncebacks. I happen to prefer the edge-glow since it’s less eye-boggling.
The other reason Samsung ditched bounceback is because Apple successfully sued them over it:
http://arstechnica.com/apple/news/2011/12/us-court-denies-prelimina…
I applaud Android’s customizability. However, after seeing many people’s Android customizations (Thom’s included) it makes me think of MySpace and Facebook. iOS is kinda like Facebook in that it’s clean and no frills but that’s as far as it will go. Android reminds me of MySpace in that most people’s “designs” look very cluttered and downright ugly. Obviously, beauty is in the eye of the beholder and if someone wants that then more power to them. I used to have the same problem when I would jailbreak my iPhone and customize it (I’m no designer!). I did like Thom’s lock screen though.
Ok, I’m sorry, but “Facebook” and “clean and no frills” just don’t go together anymore. Spam is everywhere, the interface is cumbersome and there’s no way to filter out updates from pages. Quite frankly, that’s why I’ve left Facebook for the refuge of Google+ (no app spam here).
I was given a Palm Treo Pro running Windows Mobile 6.1 to use at work, and I hate it so much. Just like The Gimp, I don’t give a damn what its capabilities are; there’s just something to be said about piss-poor usability, and anything that requires a stylus as the main input method fails. Of course, that’s just my personal opinion and if yours differs, that’s ok… there’s no shame in being wrong
As for task switching, I think ice cream sandwich will address some of your complaints, as it has a dedicated task switching button. However, there’s a gesture where you can ‘swipe away’ apps to get them off the list, but that doesn’t actually close down the apps – just removes them from the list. Google seems hellbent on making sure users don’t have the ability to close down apps when they know they’re finished with them, and anybody who says apps in the background aren’t really running has obviously never used the Navigation app.
Task switching is one of the things that has bugged me the most about Honeycomb (Moto Xoom)… I end up rebooting every 3-4 days, just to clean up the list of running applications. And the task switching mechanism seems terribly clunky after using WebOS or the PlaybookOS, but that’s another rant.
The task switching in ICS is different… when you pull up the list, you can ‘swipe to remove’, and the same goes for notifications. They stole fizzy lifting drinks from WebOS
Matias reimplemented his ideas, more like it.
Only an app that registers a background service (which the navigation app obviously has to do, since you don’t want your GPS nav to stop running because you want to go check something else) continues to run in the background. Other aspects of apps get suspended/serialized when the app leaves focus. Such an app appears in the recent task list, but is most definitely not actually running any more.
BTW, you said Ice Cream Sandwich was released 2 months ago; that’s not quite true… 2 months ago it was DEMOED at the Hong Kong event. The source itself came out about a month ago.
OK, so maybe a month isn’t such a huge difference, but really, the manufacturers didn’t have early access to the code, so they’re just beginning to get hold of it and adapt it to their devices, hardware, and custom software.
My bet is ICS updates will come WAY faster than Android 2.2 or 2.3… At least for the devices that have been deemed updatable by the OEMs. And if Sony is being more forthcoming and actually letting people try early ports of ICS on their phones… I really do feel things have changed for the better. Crossing my fingers!
Spot on with the introduction; I wish more people knew a bit of history in the area.
You missed one thing: you compared the 3GS with a brand new Android.
Try comparing an iPhone 4 running iOS 5 with a new Android. iTunes is mostly out:
* You don’t need it anymore for OS upgrades (you connect to iTunes only for restores).
* You don’t need it anymore for first-time setup.
* You never needed it for installing and updating applications.
* You don’t need it to buy music. You do however need it for transferring music to the iPhone if it’s not bought from Apple. You can avoid some of that by using iTunes Match, but you still need to add the music to the iTunes library.
My iOS devices (iPhone 3GS, iPhone 4 and iPad 1) haven’t been synced with iTunes in a long time and they still get all the apps, music and stuff. I back up to iCloud, so if I need to restore or replace the device I don’t restore with iTunes.
What really bothers me about the iTunes Store is:
1) Content in some EU countries is not available in others
2) If I move to another EU country and change the iTunes store, the apps don’t show up anymore in the purchased list of apps.
Honest to God, we have the 4 fundamental freedoms in the EU which clearly imply that my account from one EU country should work in all of them. But I guess that it’s better than with the Sony PlayStation Store that isn’t even available in all EU countries. I had to forge a US account with Sony to be able to use my $600 PS3, ultimately to get screwed over with the Linux support and a few others.
I guess it’s the price for the better device, to get screwed over by the online store experience. The PS3 is, for me, 10 times better than the XBox360 or the Wii, so I’m putting up with this. Same with the iPhone.
A few months ago a friend from the States came to visit for a few weeks and gave me her Android (don’t remember the brand/model) to install a local prepaid SIM card. It took me 30 minutes to figure out where to change the APN settings from in order to give her internet. I was always able to configure any Nokia, Apple, Microsoft device before that and it puzzled me how complicated it was.
Can you create and edit more than one playlist without iTunes?
Yes
It took me 30 minutes to figure out where to change the APN settings from in order to give her internet. I was always able to configure any Nokia, Apple, Microsoft device before that and it puzzled me how complicated it was.
Complicated?
Menu button –> Settings –> Wireless and network –> Mobile networks (Set options for roaming, networks, APNs)
Either I possess mad skillz or you exaggerate.
The hardware buttons on Android devices are not consistent. It depends on both the phone manufacturer AND the phone model on which hardware buttons are included (home, menu, back; some add a fourth button), and in which order.
My wife’s LG Eve has “home, menu, back” while my SE Xperia Pro has “back, home, menu”.
And, the latest Android phone (Samsung Galaxy Nexus) doesn’t have any hardware buttons. In fact, Android 4.0 uses a row of software buttons instead of the hardware buttons, which means that more and more phones will be dropping the hardware buttons going forward.
It would have been nice if Google had mandated the order, placement, and type of hardware buttons, though.
I agree. Coming from a Samsung (M/H/B) to an LG (M/H/B) to an SE phone (B/H/M), I still find myself hitting ‘Menu’ sometimes while I wanted to go ‘back’…
I’ve noticed this as well – especially when it comes to mobile-centric blogs & news sites (the saying about “one-eyed men in the land of the blind” comes to mind). Just as with “social media experts,” there seem to be plenty of mobile “experts” who use the technology heavily, yet don’t even have a basic understanding of how it works under the hood.
While I agree with the parallel you’re making, one difference is that PalmOS came by its limitations “honestly”. In other words, they were genuine limitations of the platform & not anti-features.
PalmOS did have some multitasking ability (at least in later versions – I started with the Treo 650), it’s just that it was limited to cooperative multitasking. I remember using a 3rd-party SSH client on my Treo that would keep running/stay connected in the background.
Which is why I haven’t really been able to muster much enthusiasm for most “post-iPhone” mobile devices. Sure, the players have changed, but you’re still largely stuck choosing between polished but locked-down, or unpolished but flexible. Worse yet, people are starting to take that situation as a given and concluding that polish & openness are mutually exclusive in some fundamental way… which is one of the reasons I had high hopes for WebOS: it seemed to be almost the “golden mean” between iOS and Android, but we all know how that worked out.
This may be due to TouchWiz and not Android.
On my Xperia Pro, there’s only 1 copy/paste interface: long-press on a word, menu pops up asking if you want to select all or just word, word is highlighted in blue with triangle-thingies on either side, you drag those thingies around to select the text, touch the screen, menu pops up where you choose either cut or copy. And it works the same in all text input boxes.
Don’t recall how it works on the wife’s Eve.
That’s the same way it works on my att galaxy s2. I’m wondering what Thom means about two interfaces.
This also depends on the music application you use, the skin you use, and the lock screen you use. This is not an Android thing per se.
The default music player is pretty crappy. Try PowerAmp for a better music experience. And enable the lock screen control inside PowerAmp. Then, whenever PA is running, the lock screen is controlled by PA, which shows the player controls, playlist, cover art, etc.
I found this to be a little confusing. On my s2 you can’t drag down the notifications tray on the lock screen. If you’re playing music and you lock the phone, the music controls appear on the lock screen at the bottom. Otherwise the controls appear on the notifications tray when you pull it down.
It sounds like on Thom’s s2 you can get to the notification tray while your phone is locked.
I suppose Symbian deserves to be mentioned in all this, as it, too, was a flexible and powerful mobile OS, with interesting variants (both the Nokia developments and the stylus-oriented UIQ from Ericsson).
The att s2 is a little more square than the tmobile version. Att also has the galaxy s2 skyrocket, which is a 4g lte s2 with a 4.5in. screen.
As for crapware, on my s2 I was able to remove whatever I wanted without rooting the phone. I guess that’s another att-specific thing.
This is controlled by the launcher, and not Android itself. This “6 applications” limitation is from TouchWiz, in your case.
I’ve replaced the SE launcher on my Xperia Pro with Go Launcher Ex, and the recently used applications list shows many more than 6 icons (I’ve seen up to 12, 3 rows of 4 icons).
And, with Android 4.0, this gets even better as the recently used applications list shows icons for apps that aren’t running, and thumbnails of apps that are running, making it even more useful. Of course, the phone vendor is free to screw this up along with everything else.
Have you tried K-9 Mail (https://market.android.com/details?id=com.fsck.k9)?
Anyway, nice article, thanks.
The default e-mail app is also fairly nice, and can access GMail via IMAP.
Yeah, that stands to reason since K-9 Mail is based on the stock client’s source, and their ultimate goal is to push their improvements back upstream so that everyone has access to stuff like PGP/GPG encryption in their stock client. K-9 Mail is so good, in fact, that it even worked fine on cwhuang’s horribly unstable Honeycomb x86 builds.
Would it really have been so hard to put the English translation for that into the text? After all, the entire review is written in English.
For the benefit of the rest of the readers, here’s the Google Translate version: every advantage has its disadvantage
The best footballer also depends on the country you reside in.
In Germany many think it is Pele (although some might say Zidane, and a few Franz Beckenbauer).
I think most French will think of Zidane.
I guess people from Argentina will think it is Messi.
Etc.
One thing Android seems not to have allowed yet is customisation of the existing “system” icons/text in the notification area. I have two major gripes on this:
* Why are there no seconds displayed in the system clock? Maybe people would argue that it would use more battery power, but surely that’s up to me to decide and not Google? BTW, the Windows task bar has this very same awful “feature” – and yet apparently no-one in the world needs a task bar clock with seconds in it (hint: I do). At least Linux beats the rest with this – my GNOME desktop clock can have seconds or not and can also show the date above the time too.
* The battery indicator is completely horrid. For a start, there is no percentage figure or time remaining – both of which are present on pretty well every desktop OS going – and secondly, the poor granularity of the battery life indicator is made worse when you charge the battery (it flashes the leading edge of the battery level – arrgh!).
Yes, I know you can get other clocks and battery indicators that display in the notification area, but they don’t *replace* the system clock or battery indicator and who wants to run 2 clocks or indicators (unless you’re Microsoft and Windows 7 with its stupid clock widget 🙂 ).
As Thom says, Android’s system components need to be replaceable via Market downloads – a few of them can, but not letting you fully customise something that’s displayed all the time on your Android device is maddening.
If I recall correctly, you _can_ tweak stock Android to do what you want (using some tool I don’t remember the name of right now) _or_ install replacements.
I’m still a Maemo nostalgic user but I enjoyed your article a lot! Thank you Thom!
My Nexus S is running the official ICS ROM already.
The point still stands, though.
I wasn’t aware that George Best spoke any Dutch.
It’s wrong anyway. It’s “Ieder nadeel heeft zijn voordeel” which translates to “Every disadvantage has its advantage”.
“Elk nadeel heb z’n voordeel.” is the correct quote.
“They’re all things I simply wouldn’t expect to see in either iOS or Windows Phone 7.5, and there are indeed times when these issues break flow, like an unnecessary, comma.”
Was it necessary to actually add in the unnecessary comma to make your point? :p
This is Samsung’s addition and ICS has that well integrated.
Also fixed in ICS
This is still the case, but ICS handles that with a swipe. See Android Market as an example, but a bit better.
Fixed that in ICS.
Android supports a fading-edge effect, not bounceback. The bounceback is Samsung’s copy from iOS (or wherever).
Disabling applications in ICS works exactly like that.
If you did have the same thing as in iOS it would overload your notifications. And Android does not group anything; apps themselves choose how to show notifications.
The number of activities opened in the task list is more like webOS cards. You can scroll through a long list of activities with previews and swipe away. For you ICS will not relieve the need to long press the home button, however… As a Galaxy Nexus and Nexus S owner, the recent apps button is a great design consideration…
My Nexus S is on ICS as I write this. BTW: Source was released a month and 6 days ago (14th Nov).
PS: Your “next” smartphone will be SGS2 with ICS on it. Because ICS transforms Android into something much better than previously was available.
This is what I would’ve posted if I wasn’t lazy
Sounds like most of your problems are with either Samsung’s changes to Android, or are things that are fixed in ICS, Thom.
I’m a very happy Nexus S with ICS user. I think the SGS2 can be even better than my Nexus S if only Samsung would do an update to vanilla ICS.
Until there is better access to open drivers and a willingness from manufacturers to actually use them, Android is the best of the bunch.
I used to think that free software was enough for everyone. But as long as companies like Nvidia et al release binary/proprietary only hardware drivers, things like replicant (http://replicant.us/) are facing a real uphill battle.
While I understand where you’re coming from, there’s no way that drag & drop can be considered a major feature of Android.
The lack of drag & drop, however, can be considered _the_ major drawback of iOS and it is the main reason why I’ve never, ever, considered buying an iDevice ever since my dad bought an iPod (generation 1) and I bought an iRiver harddisk music player…
All my mp3 players since then have been UMS enabled, and I would have considered a smartphone that wasn’t to be a step back in evolution.
I don’t know about WP7 (I guess it’s also UMS/MTP), but to my knowledge iOS is the only one which still clings desperately to its proprietary software…
I agree – I would be hard pushed to buy a phone or MP3 player that didn’t support mass storage, and I was very disappointed that Nikon dropped mass storage support on some of their SLR cameras. They still have PTP but that’s not quite as useful.
As for phones, I like the Symbian system: connect USB to the phone and you get 4 options: Mass Storage, PTP, PC Suite (for their proprietary software) and tether. Nice!
Sounds to me like they wanted to make this available to legacy/existing apps as well when they added it.
@Thom
No, it is suspended. Unless you opt out of the iOS 4.x+ multitasking feature, your app is suspended, not killed.
Also, who are we kidding? True multitasking on any mobile device without a swap file?!?
Android or iOS, apps in background (as well as services, which are usually restarted whenever the system is able to do it) can be killed at any time if another app needs resources badly.
That’s the same thing you should be worried about on Android, it is ultimately up to the app developers to make sure that they save and restore the complete application state (as it makes sense for the app of course) whenever the application is paused/stopped/killed and resumed/restarted. I think you, as in Thom, have a false sense of security in Android.
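Concretely, the save/restore contract the commenter refers to is the activity lifecycle’s instance-state callbacks. A minimal sketch follows – the callbacks are real SDK methods, but the activity name, field, and bundle key are invented for illustration:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.EditText;

// Sketch: keeping a half-written draft alive across a pause/kill/resume
// cycle. If the developer skips this, the draft is simply lost.
public class ComposeActivity extends Activity {
    private EditText draftField;

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Called before the system may kill the process; stash the draft.
        outState.putString("draft", draftField.getText().toString());
    }

    @Override
    protected void onRestoreInstanceState(Bundle savedInstanceState) {
        super.onRestoreInstanceState(savedInstanceState);
        // Called when the activity is recreated after being killed.
        String draft = savedInstanceState.getString("draft");
        if (draft != null) {
            draftField.setText(draft);
        }
    }
}
```

iOS apps have an equivalent burden when they are suspended or terminated, which is the commenter’s point: on both platforms, surviving a switch is the developer’s job, not the OS’s.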
It is true that Android did go some way to make their kind of multitasking possible, but it comes with a good set of side-effects.
Unless you allocate memory in ways Google cannot yet regulate well enough (not as they would want to) via the NDK, the resources you can allocate are quite limited even on fairly beefy handsets.
Want to use more than 32 MB of RAM in your Android App on a device with 512 MB like the Xperia Play? Good luck without the NDK, as your heap is limited by design (a limit that is being raised, but considering the devices ICS is meant to be running for, that is devices with about 1 GB of RAM, …).
Having a high-level language like Java is nice, less so when you have to mix it with “native code” through JNI. Debugging applications that mix both approaches is also not my definition of fun (which is why I prefer the Objective-C approach Apple took, and how easy it is to mix C and C++ code with it… and debug the whole thing). Another approach could have been the C#/.NET way of integrating native code (P/Invoke would have a close relative in the Java world called JNA, but Google preferred to keep full old-school Java for a reason that seems academic and not practical).
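To illustrate the mixing being complained about here: a minimal, hypothetical JNI pairing (the library name `blur` and every identifier are invented), with the Java side in code and the matching C glue shown in comments:

```java
// The Java side declares a native method and loads the shared library;
// the native library must export a function whose name JNI derives from
// the (package-less) class and method names.
public class NativeBlur {
    static {
        System.loadLibrary("blur"); // expects libblur.so on the device
    }

    // Implemented in C/C++; the JNI glue maps it to
    // Java_NativeBlur_blurFrame in the native library:
    //
    //   JNIEXPORT void JNICALL
    //   Java_NativeBlur_blurFrame(JNIEnv *env, jobject self,
    //                             jbyteArray pixels, jint radius) {
    //       /* heavy pixel work, outside the Dalvik heap limit */
    //   }
    public native void blurFrame(byte[] pixels, int radius);
}
```

The two halves are compiled, deployed, and debugged with entirely different toolchains, which is the source of the pain described above.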
Technically not true: you can also open other applications, URIs are not just for webpages. Ever tried sending an e-mail through iOS’s mail client from your own app? It takes a few lines of code and works in basically the same way. How about an SMS?
What you discover using both, and coding for both, IMHO, is that Apple has been more pragmatic in the software and hardware choices it made (maybe a bit too restrained in some of them, not forward-looking enough), whereas Android has kept the Linux attitude of adding every cool-sounding new feature while sometimes forgetting to fix long-standing bugs and to improve on what the OS already delivers.
Was NFC support in the OS really the best way to spend Google’s limited Android software R&D budget (it must be limited, else I’d expect a different SDK and development-tools experience for starters: more attention to reported bugs, better tools and SDK web sites, etc.)?
How about the many Android handsets still suffering from clock drift? Because Android only uses NITZ by default (instead of also supporting NTP, for example) and a lot of mobile networks do not support it, handsets like the Xperia Play (which I own) drift by several seconds a day. Android does not let apps automatically fix the seconds component of the clock without root, and you cannot do it by hand either. That does not work too well with GTalk’s chat, for example (unless you correct the clock whenever it goes past 30 seconds of drift; that is the only granularity you get).
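For reference, the arithmetic an NTP/SNTP client would use to compute the correction being asked for here is simple; a minimal sketch of RFC 4330's offset formula in plain Java, with made-up timestamps (t1 = client send, t2 = server receive, t3 = server send, t4 = client receive, all in seconds):

```java
public class SntpOffset {
    // NTP timestamps count from 1900-01-01; Unix time from 1970-01-01.
    // The difference is 70 years including 17 leap days: 2,208,988,800 s.
    static final long NTP_TO_UNIX_OFFSET = 2_208_988_800L;

    // Convert the seconds field of an NTP timestamp to Unix seconds.
    static long ntpToUnixSeconds(long ntpSeconds) {
        return ntpSeconds - NTP_TO_UNIX_OFFSET;
    }

    // Clock offset per RFC 4330: ((t2 - t1) + (t3 - t4)) / 2.
    // Positive result means the local clock is behind the server.
    static long clockOffset(long t1, long t2, long t3, long t4) {
        return ((t2 - t1) + (t3 - t4)) / 2;
    }

    public static void main(String[] args) {
        System.out.println(ntpToUnixSeconds(2_208_988_800L)); // 0 = Unix epoch
        System.out.println(clockOffset(100, 110, 111, 103));  // local clock ~9 s behind
    }
}
```

The complaint above is precisely that stock Android of that era computed no such correction unless the carrier pushed NITZ time.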
Apple has its fair share of problems, but it 1.) has more experience in software R&D (OS, development tools, developer support, documentation [seeing a new OpenGL ES extension in the changelog and being redirected, on click, to the 0x hexadecimal string definition in the Javadoc documentation instead of the Khronos web page detailing what the extension actually does is quite nasty], etc.) and 2.) seems more inclined to withhold something that is 90% complete, whereas Google seems happier to release unpolished applications and let them bake over time.
I’m sorry, but I very much prefer Android’s reference documentation to iOS’s: much easier to navigate. If you’re not used to Javadoc-style documentation I can understand the issue, but I do not share it. And Eclipse is Eclipse.
And having Dianne, Romain and Xavier on the android-developers Google Group is just marvelous. Does Apple let its highest-ranking developers help app developers?
As for API bugs, Android is pretty solid. There are bugs, but they are cleaned up with new releases.
And I share your concern about the clock drift, though the Galaxy Nexus seems to handle it much better.
Why won’t iOS get something like J2ME CHAPI? Really, why do all “Share” buttons result in such a horrible experience on iOS, while being wonderfully easy on Android (a list of applications that handle sharing)? And as Thom said: can the URL https://twitter.com/#!/YouTube/status/149281734347857920 result in your Twitter client opening and showing that status on iOS? At least in the latest incarnation.
Uhh, iOS has custom URL support for app launch and 3rd party app intercommunication strategies: http://bit.ly/rM4jhI
Please consider getting at least a little informed about a subject before ranting.
Why anyone pays any attention to these “cross-platform” critiques from people with so little knowledge is beyond me. They are nothing more than arrogant displays of “my arbitrary preferences make more sense than yours”. Seems like a waste of time and energy.
This isn’t the same as Application Components. The review spends quite some words on that.
Really, the mailto: and sms: custom URL shortcuts: what do they do? I can launch an e-mail from my own app without having to switch to the full e-mail application.
Still, while you have to work a bit harder to reproduce the functionality you want, it is doable. Activities and broadcast Intents do make it more explicit. It does not come for free on Android either; you can design a single-Activity application just fine.
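The URL-scheme mechanism the thread keeps circling around ultimately boils down to dispatching on the scheme part of a URI: the OS looks at `mailto:`, `sms:`, `twitter:` and routes to whichever app registered it. A rough sketch in plain Java; the scheme-to-app table is invented for illustration, not any platform's real registry:

```java
import java.net.URI;

public class UrlSchemeDispatch {
    // Hypothetical registry: URI scheme -> app that registered a handler.
    static String handlerFor(String url) {
        URI uri = URI.create(url);
        switch (uri.getScheme()) {
            case "mailto":  return "Mail";
            case "sms":     return "Messages";
            case "twitter": return "Twitter client"; // e.g. twitter://status?id=...
            default:        return "Browser";        // fallback for http/https etc.
        }
    }

    public static void main(String[] args) {
        System.out.println(handlerFor("mailto:someone@example.com"));
        System.out.println(handlerFor("twitter://status?id=149281734347857920"));
    }
}
```

The platform difference debated above is not this lookup but who wins when several apps claim the same scheme: Android asks the user, while iOS of that era let the last-installed registrant take it.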
As other commenters have already pointed out, content handling is most certainly a component model of software architecture, albeit one that relies a little more heavily on convention for proper decoupling.
The key difference lies in the process model between iOS and Android. In addition, Android allows the user to select (via the system preferences app) which of several competing registrations for the same content request to use, whereas on iOS the last-run app wins. Finally, Android allows for the “auto-install” of functionality on the device when the requested content handler is not present, which is not possible on iOS. For example, a shopping-list app that uses a UPC or QR bar-code scanning activity will cause the OS to download and install that activity when the app first requests it.
It really is a strawman argument, however, as the whole component model has never been much of a success, no matter whether it was Google or Microsoft pushing it. It just generally leads to too disjointed a user experience.
FYI: Please, whenever I post anything consider it not to be for you, or about you or anything related to you. I leave you be, you let me be. Kthnxbye…
Sorry, but when you post complete nonsense that is factually wrong as “evidence” for some ranting view, you are doing an injustice to anyone who reads it and accepts your propaganda as having any basis in reality.
You made an absurd assertion, ranting about how bad iOS was because it didn’t provide a feature for handling content the way J2ME CHAPI does. I provided a link showing exactly how it in fact does. I can’t help it if that troubles your worldview. The interwebz is supposed to be about human progress, not a forum for you to spread FUD in exchange for fanboyish affirmation.
It is not the style of the documentation. It is that it is at times outdated (the tools.android.com website is constantly out of date), at times contains non-functional samples, and at times is not intuitive at all in its explanations (a method that does not throw an IOException when the operation fails, while its Throws notes say it does throw an IOException if it cannot perform the operation?!?). Some key parts are very poorly explained or barely touched on (good luck properly understanding LinearLayout weights from the UI documentation alone), and quite often you are presented with method parameters whose structure and use are not explained anywhere near the method (and its documentation) in which you are supposed to use them; to be fair, Apple’s documentation sometimes suffers from this issue as well.
Javadoc-style documentation usually means programmers making notes and briefly documenting parameters, then an automatic tool generating some HTML. That is not enough if you really mean to properly maintain what is known as “developer support/relations”. I am not saying that the Android engineers do not put effort in, but unfortunately it does not feel like they have staffed a team dedicated solely to documentation and technical writing.
Good documentation, IMHO, goes beyond describing what the classes and methods should do (in general terms), what the functions are named, and how many arguments they take and of what type.
Something Microsoft and Apple do understand. Look at Facebook’s iOS SDK for example… they make it trivial to use, they document it really well, and provide you with clear and to the point samples to get you started as fast as possible.
Eclipse is not a bad IDE, and it does a lot for you; it just does not seem that good a fit for Android. It’s big, slow, and complex, and I do not know whether it is helping Google design the best possible IDE for Android development. UI- and usability-wise, neither Eclipse nor the ADT tools win any awards, IMHO. An Eclipse plugin, JNI, and a not-even-very-recent GCC release for the NDK and C/C++ code compatibility… it does not look like what a company that size, dealing with a project of this much importance and weight, would do. Not even a customized, ready-from-the-start Eclipse set-up for Android, like the one Aptana provides (in addition to their Eclipse plugin)?
Eclipse does work really well with auto-completion and code snippets: the new onClick… CTRL+SPACE action saves a lot of time and lets you do a good deal of what you would normally accomplish with lambda expressions (blocks in Objective-C, though obviously not as powerful).
How much have you used a recent Xcode release (4.x series)?
Let’s just say that Chris Lattner (head of the LLVM project and an Apple employee) regularly posts on their developer forums. Many Apple employees regularly post there, ask for bug reports (you mention something on the forum, and an Apple developer who works on that area might ask you to please file a bug report and post its number [the general bug-reports team is probably separate from the one working on, say, the Clang compiler]), give users some help, etc. Generally they are quite responsive to developer-reported bugs, answering developers and trying to gather more information and test cases. I have had some good help from Tor Norbye as far as the ADT plugin and the layout editor are concerned, so I am not calling into question the quality of the people involved. They just feel overworked and understaffed.
Also, along with the yearly iOS developer license you purchase, you get two technical support incidents in which you can ask Apple directly for technical help with your app.
http://applookup.com/2010/09/iphone-apps-with-special-url-shortcuts…
http://wiki.akosma.com/IPhone_URL_Schemes#Facebook
You have pragmatic ways of doing what you ask (opening an app from another app) on iOS too. Apple does not throw in hundreds of thousands of ways of doing something when 90% of them could be replaced by something just as powerful, but easier to use and to implement.
You might be against Apple as a company, you might not like the limits they place on you as a developer, you might not like their App Store model, etc., but you cannot judge their tools, their support, or the quality and polish of the user and developer experience they provide as anything but great (I did not say perfect; they are not, of course).
In order to use Xcode (any version), I need to buy a Mac.
On the other hand, to develop for Android I use my Linux computer, one colleague uses his Windows machine, and another uses an iMac. How’s that for freedom?
To deploy/test my application on Android, I just plug in the phone and debug. For the iPhone, I need to pay for a developer account and turn the iPhone into a development device (by installing certificates and other stuff on it). Freedom… is it important to you?
announcement of ultra-biased article…
…yes, multitouch is the same as using a stylus.
Tom, do you have anything smarter to do in life?
I frankly still don’t understand the obsession with multitouch.
After using multitouch for a while on a Mac, it feels very clunky and slow to me when I go back to a single touch trackpad. It’s probably less of an issue on phones as the single touch isn’t required for the mouse, but even simply for zooming in and out it’s very intuitive and makes zooming on a single-touch device feel cumbersome.
Pinch zoom is annoying to do on the phone with one hand. I would much rather have what I had on my WinMo phone: draw a circle clockwise to zoom in and counter-clockwise to zoom out. Maybe it’s just me 🙂
After using multitouch trackpads on both a MacBook Pro from 2009 and my Asus laptop for a while, I have decided to always disable most of the gestures, keeping only the vertical scroll.
The rest is very rarely needed in my experience, and tends to trigger itself randomly at the worst moment.
I agree that dual-finger vertical scroll is much nicer than anything before though.
“No matter where I am, no matter what application I’m using, the menu button always brings up access to settings, and the back button always takes me back one screen.”
I cannot really agree with this. Open the “Market” and you’re at the Market home screen; select some app and then, e.g., one of the suggested apps from there. If you press the back button you’ll get back to the first app. However, if you press the back button again you will not see the main screen of the Market again; instead the Market will be closed.
I’m using an HTC Desire HD (with MIUI Android, but it was like this with the original HTC firmware as well).
In my opinion Android really lacks consistency in look and feel. Even the Google apps do not have a consistent look and feel, which I do not understand at all (e.g. Google Docs vs. Google Mail). I think Google should really focus on that and also “enforce” an interface design standard on apps (e.g. by marking apps as gold/silver/bronze Android-compliant or something like that).
I once owned an iPhone 3G which, at least in my memory, was a lot more consistent than Android is now. Despite all that criticism, I prefer Android over iOS because of its more open nature (though I’m mourning the death of MeeGo).
I just remembered another thing that bothers me about Android vs. iOS. I don’t like having to mount my SD card as a USB mass-storage device, copy music to a certain location (which I always forget, because there are several directories named *music*, probably created by the different music-player apps I tried), and then having to unmount it afterwards. I’d prefer an MTP interface or something similar.
Regards,
Michael
Pressing the back button again takes me back to the main Market screen. I even tried going through subsections and then following several suggested apps, but the back button always takes me back to the previous screen, eventually going all the way back to the front page.
Same here.
For what it’s worth: the back button does not always do what you would expect it to do, but it is very consistent.
Some apps do screw it up, but those are not in the majority.
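The consistency being described is easy to model: Back is just a stack pop over the screens you navigated through. A tiny sketch in plain Java (screen names invented; real Android tracks this per task, with extra rules apps can override):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Back-stack sketch: each screen is pushed as you navigate, and Back pops
// one screen at a time until the stack is empty, at which point the app exits.
public class BackStack {
    private final Deque<String> stack = new ArrayDeque<>();

    void navigateTo(String screen) { stack.push(screen); }

    // Returns the screen now shown, or null when the app exits.
    String back() {
        if (!stack.isEmpty()) stack.pop();
        return stack.peek();
    }

    public static void main(String[] args) {
        BackStack market = new BackStack();
        market.navigateTo("Market home");
        market.navigateTo("App page");
        market.navigateTo("Suggested app");
        System.out.println(market.back()); // App page
        System.out.println(market.back()); // Market home
        System.out.println(market.back()); // null -> Market closes
    }
}
```

The disagreement upthread (Back skipping the Market home screen) would correspond to an app popping more than one entry at a time, which is exactly the kind of deviation being called a screw-up here.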
“My Galaxy SII is a true computer, instead of a mere smartphone.”
– A ‘real’ computer sports something akin to a GNU userland. While Android is certainly more *customizable* than iOS, it remains firmly in the realm where *smartphone* OS’s belong.
“Swiftkey X allows me to do something no other smartphone keyboard can: work with two autocorrect/suggest dictionaries at the same time.”
– Maemo5/N900 sports dual dictionaries out of the box.
You can actually install Ubuntu on an Android device if you want, so you can get access to a full GNU userland.
http://androlinux.com/android-ubuntu-development/how-to-install-ubu…
Software == hardware?
Or maybe easier said: If you install Ubuntu, Android still has no GNU userland. So, eh?
Well, you are installing Ubuntu in a chroot environment inside Android, which for me would be good enough. I wouldn’t use it for running things like X anyway, although it looks like you could if you wanted to: http://androlinux.com/wp-content/uploads/2011/02/android-linux-18.j…
Running Ubuntu in a chroot on Android has me wondering whether it’s possible to run MeeGo in a chroot. Maybe I should root my Galaxy S2 and give it a try.
LOLWUT?
http://en.wikipedia.org/wiki/Computer
Nothing about a GNU tool chain there or anything that is similar to one.
I think I should have put the ” in bold and underlined them.
“Swiftkey X allows me to do something no other smartphone keyboard can: work with two autocorrect/suggest dictionaries at the same time.”
Nokia E52, Symbian v9.3 (two versions before current S^3) offers this out of the box too.
I’m somewhat surprised to see the history of smartphone OSs reduced to iOS and Android as the heirs of PalmOS and Windows Mobile, especially in an article written in Europe.
I’ve been a happy Symbian S60 user for years and I’ll be a happy MeeGo Harmattan user when my N9 arrives.
As a side note, I’m not a true geek, but I understand the N9, not an Android phone, is what a true geek would want and would qualify as a real computer in the pocket.
Having never had an [ugly] Windows Mobile device, I cannot say Android is this generation’s version of it. But having used a Sharp Zaurus, it sounds to me more like that one. The Zaurus was way ahead of its time but never took hold. It also had many different ROMs available, none of which were satisfactory to me.
In the end the Zaurus was a good exercise in learning Linux but ultimately not as satisfying as any Palm in doing what they were designed to do: being a PDA. Having different ROMs to play with is nice in itself but tinkering can become a huge waste of time, and, in the case of the Zaurus, none were particularly better than the default.
As it is right now, I am happy with my WP7 device. It seems to me a generation beyond what iOS and Android currently have to offer.
I used to support a couple of hundred Palm devices. I know a lot of people who had problems syncing them, but we had very few issues with the people we supported.
Palms were great for keeping track of contacts and doing small programs. Big programs or remoting into computers wasn’t what they were designed for. The interface was very good for what it did.
The interface of Windows Mobile was as stupid and ugly as any device I’ve ever seen. Yes, it could do mostly everything, but it was like a whole team of people picked up ugly bats and whacked away at it, for a LONG time. Everything that might not have been ugly, they worked at until it was ugly and stupid. Dependability was also poor. And I think having to reboot a device more than a couple of times a year = poor design.
I’m a geek. I spent years working with 8 different OSs on “PCs” every week. I loved doing that. I loved programming for all of them. I even used to program for mainframes and minicomputers. I know how to operate and program devices. I was the go-to person for anything new up to the middle aughts (2000-2010). Then I got burned out from working too many 60+ hour weeks.
Android reminds me a lot of Windows Mobile. Sure, it can do a lot, but it has MANY problems that I’m just not willing to put up with. I don’t enjoy debugging/fixing devices anymore. Maybe it’s my artistic side, which has been put on hold for all these years, but I just get tired of half-assed devices.
Imagine Microsoft or Google designing a half-assed toilet. Pick the half they leave off. No matter which half is left out, it is going to be really ugly. That is what Windows Mobile was and Android is.
Don’t get me wrong. iPhones are not perfect. There is a lot I would change. The difference is that there are only a couple hundred things I would change instead of in the tens of thousands.
I can’t believe that companies with over 40,000 employees can put such beta crap out on the market. If it is beta, admit it.
Nice writeup, albeit quite thin at times and, well, just plain wrong at others.
Firstly, I wouldn’t call the present-day smartphone UI paradigm WIMP, since these UIs are mostly (and iOS to a large extent) void of any user-manipulable windows or menus. One is better off defining this post-WIMP paradigm as FICT: Fullscreen Icon Column Touch. One could argue that these are mere details and the one is just an alternate form of the other. One would be very wrong in making this statement.
Why would one be? Because the move from WIMP to a post-WIMP environment allows for a whole other UI paradigm altogether in terms of user interaction, directly leading to the abolishment of the traditional HI-derived interfaces and the rise of the skeuomorphic UI design language. A lot of people who have their heads and hearts in the past don’t seem to like this, citing lack of UI consistency and plain dumbness of the device as its hurdles. What they fail to see is that it’s the paradigm of the whole device itself that’s shifting: post-PC devices are no longer “UIs in a box” like their predecessors were and, by the definition of their interaction characteristics and computational capabilities, no longer require this traditional paradigm in order to function properly. In fact, as the history of tablet computers can amply testify, merely treating them as such has only made them fail in the marketplace, since they just end up doing a worse job than traditional personal computers. Thus, any post-PC device aspiring to be truly successful must throw these conventions out of the window in favor of a more direct way of communicating with the user, and to better facilitate the user exerting control over the device.
Over the decades, the “box” in the “UI in a box” has been reduced to the point where it has been relegated to a quasi-non-existent state: from room size, to fridge size, to shoebox size, to book size, to frame size, its evolution has been quite staggering. In terms of handling, the box itself has fallen into the league of traditional portable single-task devices like calculators, portable music players, etc. With the addition of touch at the UI level and better graphical capabilities, skeuomorphic designs have gained a clear edge: in a traditional WIMP paradigm they often proved an infuriating and frustrating design to work with, whereas with touch-based hardware to drive them they are a much more natural fit. Skeuomorphic designs really shine on post-PC devices, and they are certainly one of the reasons why certain post-PC devices have become so popular: they peel away a layer of abstract convention between the user and the device, making the interaction more natural and direct. There is very little UI convention to learn on a post-PC device, simply because there is so little UI in the first place. What is left is a simple grid, in which each item represents a virtual device. The perks of the traditional WIMP device are, frankly, just a casualty along the way of taking user interaction to the next level. On a post-PC device, WIMP is just a dead end. The carcasses of the ill-fated pre-iPad tablets are all ugly witnesses to this. On a larger scale, WIMP is silently on its way to becoming an episode in the history of computing, just like the command-line interfaces before it. Will WIMP disappear completely? If history is destined to repeat itself, that is highly unlikely, although WIMP will be relegated to an ever-smaller group of users rather than remaining the mainstream.
With both hardware and operating software seemingly reduced to their barest essentials, and increasingly becoming one and the same thing, what will remain for the user in the future will, in the scope of things, be merely the function.
The history of trying to build a post-WIMP paradigm has been long in the making. One of the earliest examples in Apple’s products is found not on a portable device, but on the Macintosh platform, as At Ease. It did not do away with the WIMP convention as such, but it certainly did away with some of its earliest and less user-friendly derived conventions, most notably the file-system and desktop metaphors. Instead, it introduced a fixed grid of single-clickable buttons, each button being a program or an application. While seasoned PC users would raise more than one eyebrow at having such a crude and dumbed-down tool to work with on a desktop computer, it certainly lowered the bar for a lot of users in an emerging desktop-computer world.
I think you’ll be hard pressed to find people who state that iOS is a direct descendant of the Newton. One would be much better off not drawing direct lines between iterations of mobile communication concepts, as their structure owes much more to biological evolution than to linear algebra. The Newton is one dead branch on the tree of mobile device evolution. PalmOS is another. In that tree, however, Android is sitting awfully close to, and on top of, iOS, and anyone who bothered to check the facts surrounding these two knows iOS inspired Android to such an extent that its development took quite a U-turn in terms of user interface. Just as there’s no denying that Palm took quite a few cues from the devices that came before it and improved vastly on existing conventions, the iPhone did the same with its predecessors and raised the bar on previous generations of smartphones significantly. Downplaying the importance of this is like saying dinosaurs weren’t a significant step forward in the evolution of life on earth simply because they look a lot like reptiles. The changes heralded in dinosaurs allowed them to become the dominant species, ending up dramatically more successful than their cold-blooded ancestors and altering the face of life on earth. Just like Android and its spiritual father iOS have augmented modern smartphone use cases significantly and consequently changed the face of the mobile computing landscape.
You might also want to look up the definition of crapware: crapware and bundled software are not the same. While crapware is a form of bundled software, crapware is third-party software which ships on a device, for which the device manufacturer was paid, but which is of low quality or of little value to its user. On the Windows side, MSN Messenger, Minesweeper, or the Terminal Client are not crapware, and neither are Photo Booth, iBooks or YouTube.
On customizability: after years of tweaking and tinkering with UIs, window managers, icons, hacking ICNS and other resources (anyone remember Kaleidoscope?), I must say I’ve come to a zen-like conclusion similar to the one anarcho-syndicalist Hakim Bey reached about technology as a whole: they offer great toys, but are terrible distractions. The purpose of the UI is to facilitate user interaction, not to initiate it. As time progresses, all UI paradigms and conventions will eventually fade anyway, and resizing windows, flicking through screens, or tapping icons will look as old hat as Olivetti typewriters or mechanical calculators.
I happen to have at hand a nice book about usability in software UIs which I find very well-written. In its argumentation, it exposes 12 pillars of software usability :
1/Architecture (Content is logically hierarchized in a way that makes it easy to find)
2/Visual organization (Every piece of UI is designed in a way that makes it easy to understand, noticeably by avoiding information overflow without hiding stuff in obscure corners. Information hierarchy plays a big role there)
3/Coherence (The UI behaves in a consistent way)
4/Conventions (The UI is consistent with other UIs that the users are familiar with)
5/Information (The UI informs the user about what’s going on, at the right moment, and gives feedback to user action)
6/Comprehension (Words and symbols have a clear meaning, in particular icons are not used to replace words except when their meaning is perfectly unambiguous to all users)
7/Assistance (The UI helps the user with his/her task and guides him/her, noticeably by using affordant elements in the right places)
8/Error management (The UI allows the user to make mistakes, actively tries to prevent them, and helps correcting them)
9/Speed (Tasks are performed as fast as possible, especially when they are common, with minimal redundancy)
10/Freedom (The user must, under any circumstance, stay in control)
11/Accessibility (The UI can be used by all target users, including those with bad sight, Parkinson’s disease, or whatever)
12/Satisfaction (In the end, users are happy and feel that it was a pleasant experience)
For your information, skeuomorphic UIs on cellphone-sized touchscreens fail at:
-Visual organization (Why use clearly labeled and visible controls when you can use obscure gestures instead ?)
-Coherence (Need I explain?)
-Conventions (Because in the end, your touchscreen remains a flat surface that does not behave like any other real-world object, except maybe sheets of paper. As a developer, attempting to mimic real-world objects on a touchscreen simply cuts you off from the well-established PC usage conventions and forces users to learn new UI conventions *once again*, except this time it’s one new UI convention per application)
-Information (Modern cellphones are already bad at this due to the limitations of touchscreen hardware, combined with a tendency to manufacture them in very small form factors. Attempting to mimic large objects on such a small screen only further reduces the achievable information density)
-Comprehension (Mostly a limitation of touchscreens rather than of skeuomorphic design, but since touchscreens offer no form of “hover” feedback and mobile phone screens are way too small, developers often resort to obscure icons in order to shoehorn their UIs into small form factors)
-Error management (When you try to mimic real-world objects, you have to ditch most of the WIMP error-feedback mechanisms, without being able to use real-world objects’ own, because those are strongly tied to their three-dimensional shape)
-Speed (Software UIs can offer physically impossible workflows that are much faster than anything real-world objects can do. If you want to mimic the physical world, you have to give up this asset, without shedding the intrinsically slow interaction of human beings with touchscreens)
-Accessibility (Give a touchscreen to your old grandpa who has Parkinson’s, and see how well he fares with these small interfaces without any haptic feedback. Not a problem with computer mice, which are relative pointers whose sensitivity can be reduced at will)
I believe that most of this still holds for tablets, although some problems related to the small screen sizes of cellphones are lifted.
I happened to have read a couple of them as well. Most of them were written for the traditional WIMP paradigm. While WIMP has served us well, they don’t really take into account the unique features of these new devices.
Your perception might differ, but the current surge of smartphones in the marketplace doesn’t really make them a product failure now, does it?
For smartphones, mostly screen real estate and handling. There’s simply no room for a conventional menu paradigm on a smartphone. But a skeuomorphic design does not imply that things are not labeled. You could design a skeuomorphic virtual amplifier where the knobs are labeled (Treble, Reverb, Volume, …), for example. Only, manipulating a knob with a WIMP design is awkward; with touch it becomes a breeze.
Coherence is meant to facilitate predictability. The need for predictability by convention implies the paradigm itself is too complex to be self-explanatory. The first commercial WIMP devices were conceived to be self-explanatory in the first place. The menu bar was invented so that people would not have to remember commands; it was an essential part of the design that made the Mac as self-teaching as possible. The Xerox Alto and Star, void of application menu bars, still required users to remember all the commands by heart, just like the more primitive programs on CP/M and DOS. The goal of the first commercial WIMP computers was that you would not need a manual to operate the computer (I said goal; whether they succeeded is another matter). The point of menus is that you can look up commands fast, at will, and execute them directly if need be. When processor power increased, so did the feature sets of applications, and new applications overshot the design limitations of the initial WIMP devices by a great deal, leading to those giant monolithic applications where most users don’t even know, or use, 95% of the entire application.
I think you’re failing to see the ingenuity of post-WIMP interaces here. I’ll give a simple example : The game of Puzzle Bobble. On a traditional WIMP devised system like a desktop computer, its played with the keyboard. Because it is, the imput methods are highly abstracted from the game play itself and gap between what the user sees and what needs to be done to control the slingshot is quite big. So there’s an initial barrier to overcome before these movements are stored in motor memory and the control becomes natural. Compared to keyboards, the mouse pointer already lowered this barrier a great deal, albeit not completely. One could design Puzzle Bobble to be played with a mouse pointer, which would lower the bar, but still have quite a few limitations in terms of presicion, and muscular strain when playing for extended periods of time. On a more general note, people who never used a mouse before initially struggle with it as well. On a post-wimp smartphone device, the barrier is much lower than the keyboard or even the mouse. In Puzzle Bobble, the player can manipulate the slingshot directly. He he can play the entire game with one finger, instead of using a multitude of buttons. Because he is able to manipulate the object so directly, things like controlling the velocity of the ball become possible. Things like controlling velocity or force have always been awkward with buttons.
Let's take another example: a PDF reader. You could design its UI with traditional UI elements (menus, resizable windows, scrollbars, …), or you could make it fullscreen and let the user flick pages with a finger. Which of the two is more intuitive and better adapted to a smartphone's screen real estate?
That's why you have things like tap to focus and pinch to zoom: to complement information density.
I'd argue that comprehension is the biggest drawback of a traditional WIMP design because:
– its interfaces as we know them are highly abstract; there is little correlation between what we see on screen and what humans know from the world outside the computer screen,
– the set of objects in traditional WIMP interfaces is quite limited. This was less of an issue when computers weren't all that powerful and thus couldn't do that much, but the system has since grown way beyond its initial boundaries, making featureful applications overly complex.
You could make more direct error messages instead of having to rely on the primitive WIMP feedback mechanisms like dialogs.
I don't agree with you here. Traditional WIMP UIs can be inherently slower as well, depending on the use case. Consider an application that allows you to control the speed and the pitch of audio in real time. Implement it on a WIMP-driven desktop or laptop first using the normal HI conventions, then implement it in a skeuomorphic way on a touch screen. Which will be faster to use? On a WIMP device you only have one pointer, so you're never able to manipulate both pitch and speed at the same time; you have to jump from one to the other with your pointer all the time. On a post-PC device with multitouch, this problem does not exist.
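To make the one-pointer limitation concrete, here is a minimal sketch of a multitouch input frame updating two parameters at once. All names (`apply_touch_frame`, the parameter dictionary, the delta units) are hypothetical illustrations, not any real toolkit's API:

```python
# Each active touch has an id and is assigned to one on-screen control.
# A multitouch frame can report moves for several touches at once, so pitch
# and speed can change in the same frame; a single pointer delivers only one.

def apply_touch_frame(params, assignments, frame):
    """params: control name -> value; assignments: touch id -> control name;
    frame: list of (touch_id, delta) pairs reported in one input frame."""
    for touch_id, delta in frame:
        control = assignments.get(touch_id)
        if control is not None:
            params[control] += delta
    return params

params = {"pitch": 0.0, "speed": 1.0}
assignments = {1: "pitch", 2: "speed"}  # finger 1 on pitch, finger 2 on speed
# One frame, two fingers moving simultaneously:
apply_touch_frame(params, assignments, [(1, 0.25), (2, -0.1)])
```

With a mouse, the equivalent would be two separate frames, each touching only one control, with a pointer trip between them.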
Another example: let's make a software synthesizer. Done in a WIMP fashion, it will most probably consist of an array of sliders, buttons, and labeled input fields. A skeuomorphic one will be composed of virtual knobs and a virtual keyboard. While the first might be more precise, the latter will be a lot more intuitive and far more inviting to tinkering and experimenting, triggering creativity much more. And it will be a lot more fun to use!
Traditional WIMP interfaces fail people who are blind, too. Your point being? And I bet my old grandpa (if he were still alive) would have a much easier time searching for whatever he's forgotten today with Siri, rather than typing things into a Google-like interface on your WIMP device.
I believe that most of this still holds for tablets, although some problems related to the
Well, it seems to me the aforementioned principles are very general and could apply to non-software UIs such as those of coffee machines or dishwashers.
Being commercially successful is not strongly related to usability or technical merit. For two examples of commercially successful yet technically terrible products, consider Microsoft Windows and QWERTY computer keyboards.
So, how did keypad-based cellphones running S40 and friends manage to use this very paradigm for years without confusing anyone?
That depends on whether the real-world object you mimic has labels or not.
A big problem I have with this design trend is that it seems to assume past designs were perfect and that shoehorning them onto a computer is automatically the best solution. But as it turns out, modern desktop computers have gradually dropped the 90s desktop metaphor for very good reasons…
Disagree. Virtual knobs are still quite awkward on a touchscreen, because like everything else on a touchscreen they are desperately flat and slippery. When you turn a virtual knob on a touchscreen, you need to constantly focus part of your attention on keeping your hand on the virtual knob, which is a non-issue with physical knobs, which mechanically keep your hand in place.
Well, that is a given. Only a few very simple devices, such as knives, can have a self-explanatory design. As soon as you get into a workflow that is even a tiny bit complex, you need to cut it into smaller steps, preferably steps that are easy to learn.
What you are talking about is feature bloat, which is not an intrinsic problem of WIMP. As an example, modern cars are bloated with features no one knows or cares about. The reason they remain usable in spite of this feature overflow is that visual information is organized in such a way that users do not have to care.
Information hierarchization is something WIMP can do, and something any “post-WIMP” paradigm would have to integrate for powerful applications to be produced. Zooming user interfaces are an interesting example of how this can be done on touchscreens, by the way.
Tell that to video game consoles, which have had pressure-sensitive buttons for ages. The reason desktop computers did not get them is that they were designed for work rather than for fun.
Now, I agree that skeuomorphic interfaces can be quite nice for games, especially when coupled with other technologies such as accelerometers. My problem is their apparent lack of generality: it is not obvious what “post-WIMP”'s answer to common computing problems, such as office work or programming, would be. Does it fail at being a general-purpose interface design like WIMP is?
This kind of paradigm works for simple tasks, but breaks down as soon as you want to do stuff that is even a tiny bit complex. How about printing that PDF, as an example? Or jumping between chapters and reading a summary when you deal with technical documentation that's hundreds of pages long? Or finding a specific paragraph in such a long PDF? Or selectively copying and pasting pictures or text?
It is not impossible to do on a touchscreen interface, and many cellphone PDF readers offer that kind of feature. They simply use menus for it, because menus offer clearly labeled features in a high-density display. And what is the issue with that?
And in one application a tap will zoom, in another it will activate an undiscoverable on-screen control, a third will require a double tap, whereas in a fourth said double tap will open a context menu…
Beyond a few very simple tasks, such as activating buttons, scrolling, and zooming, gestures are a new form of command line, with more error-prone detection as an extra “feature”. They are not a magical way to increase the control density of an application up to infinity without adding a bit of discoverable chrome to this end.
Oh, come on! This was a valid criticism when microcomputers were all new, but nowadays most of what we do involves a computer screen in some way. Pretty much everyone out there knows how to operate icons, menus, and various computer input peripherals. The problem is with the way applications which use these interface components are designed, not with the components themselves!
Basing a human-machine interface on a small number of basic controls is necessary for a number of reasons, including but not limited to ease of learning, reduced technical complexity, and API code reusability.
Adding millions of nonstandard widgets to increase an application's vocabulary is possible in a WIMP design; good programmers only avoid it because they know what a usability disaster it turns out to be.
Such as?
A common argument that has never been proven to hold in the real world. When I'm in front of an analog audio mixing console, I generally only manipulate one control at a time, except when turning it off, because otherwise I can't separate the effects of the two controls in the sensory feedback that reaches my ear. More generally, it has been scientifically shown many times that human beings are bad at doing multiple tasks at the same time.
Introducing ZynAddSubFX: http://zynaddsubfx.sourceforge.net/images/screenshot02.png
A very nice piece of open-source software, truly, although it takes some time to get used to. It has both a range of preset patches for quick fun and very extensive synthesis control capabilities for the most perfectionist among us. And it does its thing using only a small number of nonstandard GUI widgets, which have, for once, been well thought out.
Where you mention on-screen keyboards on touchscreens, ZynAddSubFX is more clever: it uses the keyboard that comes with every computer in a creative way instead. The QSDF key row is used for the white piano keys, whereas the AZERTY row is used for the black ones. Of course, you only get a limited number of notes this way, just like on a tablet-sized touchscreen, but any serious musician will use a more comfortable and powerful external MIDI keyboard anyway.
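The row-of-keys-as-piano idea is easy to model. The sketch below is purely illustrative: the exact keys ZynAddSubFX binds, and the octave it starts from, are my guesses, not taken from its source:

```python
# Map a physical key row to MIDI note numbers: white keys on the home row,
# black keys on the row above, starting (hypothetically) from middle C (MIDI 60).
WHITE = ["q", "s", "d", "f", "g", "h", "j", "k"]  # C D E F G A B C
BLACK = ["z", "e", "t", "y", "u"]                 # C# D# F# G# A#
WHITE_OFFSETS = [0, 2, 4, 5, 7, 9, 11, 12]        # semitone offsets of white keys
BLACK_OFFSETS = [1, 3, 6, 8, 10]                  # semitone offsets of black keys

KEY_TO_NOTE = {k: 60 + off for k, off in zip(WHITE, WHITE_OFFSETS)}
KEY_TO_NOTE.update({k: 60 + off for k, off in zip(BLACK, BLACK_OFFSETS)})
```

One row of keys thus covers a single octave, which is exactly the "limited amount of notes" limitation mentioned above.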
Why would one need a touchscreen for software synthesis that works like a real-world synthesizer?
No, it won’t be any more intuitive than a well-done regular synthesizer GUI. Actually, if it aims at mimicking a real-world synthesizer, it may turn out to be as overwhelmingly loaded with controls as a real-world synthesizer, which is arguably ZynAddSubFX’s biggest problem.
A well-designed WIMP program, on the contrary, could use information hierarchy to hide “advanced controls” away from direct user sight, in a fashion that makes them accessible to experienced users without harming newcomers' user experience. This gives software interfaces a softer learning curve than analog appliances, arguably voiding the core point of skeuomorphism advocates.
That touchscreens are just as much of a mess for blind people as mice are (even more so, because they cannot provide spoken hover feedback, as current touchscreens are not capable of hover detection), yet actually cater to a much smaller range of users.
Siri is a command-based voice interface designed for very specific use cases that are hard-coded at the OS level. I thought we were talking about general-purpose touchscreen GUIs so far?
Do you really want dishwashers and coffee machines with resizable windows, double-clickable icons, and a pointer device?
True, but in the case of an emerging, customer-driven smartphone market, I tend to disagree.
Certainly not by using a WIMP paradigm.
Oh, I'm not saying skeuomorphic designs are the answer to everything. I'm saying that for quite a few applications on a post-PC device, they make a lot more sense than a traditional WIMP paradigm would.
How many times do you keep your hand on a knob?
Like all physical devices, both WIMP and post-WIMP devices do, so no difference there.
Well, it kinda is, by design. It wasn't anticipated when they first came up with the paradigm, so it has become a problem for it. Solvable by conventions, yes, but conventions are more a band-aid than a real fix, aren't they?
Then what were all those PCs doing in our homes in the nineties?
Of course it's not obvious. Do you think coming up with a working WIMP paradigm was all that obvious to begin with? Just look at the multitude of WIMP-based GUI solutions that were out there in the eighties. Nowadays pretty much everyone is emulating the Mac. It's still early days for post-PC.
Then aren’t we the luckiest guys on earth that smartphones are a perfect fit for simple tasks?
These are all possible on present-day devices, so I fail to see your point here.
I have yet to run into this issue with my iOS device, maybe because in iOS it's an API feature and it's consistent across all applications.
Nothing is perfect. I disagree, though, that it's a much better idea to use a classical WIMP design instead. Customers seem to agree, and since they are voting with their wallets and make developers like you come to work every day, I think that's what matters in the end.
You'd be surprised. In the comfortable confines of the world you live in, that might be the case, but my experience tells me something completely different, and I'm not even really “out there” like some others are.
That might be the case on a desktop or laptop, but it is much less so on a post-PC device, for the reasons already mentioned.
One could use color, sound, vibrations, …
Oh, that's strange. I hear DJs and sound engineers do it all the time.
The application you mention uses a mixture of WIMP controls and skeuomorphic elements; it's hardly a synthesizer designed in the traditional WIMP paradigm. If it were, it wouldn't have the knobs or presets; it would have save files and sliders instead. Have you ever worked with real audio equipment? You'll notice that it also uses knobs and comes with presets.
My Commodore back in the day already did that. What an awkward way to play notes! Hardly usable at all. One of the worst ideas ever.
For the same reason as why we needed general purpose computers in the first place. Versatility of function. No need to carry around heavy boxes of devices which only do one thing. Instead we can cope with only a few devices which can each do a multitude of things.
For a lot of musicians, it will.
People with autism seem to disagree with you: touch-based tablets are transforming their lives and have allowed them to communicate with the world around them, something traditional computers never did. Also, using VoiceOver to control a GUI is a very broken concept; it's just a band-aid, it doesn't fix what's broken in the first place in these use cases.
Why would we limit the user interface of a device to the visual? Just because traditional WIMP interfaces were visual-only due to the technology constraints of the time doesn't mean we should keep that limitation in the devices we build in the future.
You are missing the point. I’m talking about the aforementioned principles of architecture, visual organization, coherence, etc.
I hate coffee machines with exotic coin slots and inconsistent button behavior just as much as I hate this kind of stuff in desktop software.
Well, in fact, if I take a typical keypad-based OS like Nokia S40, you get…
-> Windows. Each software has its own content area, which is not shared with other software.
-> Icons. Well, they are all over the place really, but let's consider the main menu. Depending on screen real estate and user preference, icons can either be directly labeled in a list form, or they can take a grid layout with the icon label appearing when an icon is hovered.
-> Menus. Considering how little screen real estate old phones had, it's not surprising that menus are used all over the place in order to get a reasonable feature set to “fit in”.
-> Pointers. To select something in a menu, you typically point it using the arrows of a 4-directional pad, then click it using the center of said pad.
I have never needed to help anyone using a s40 phone, except for very specific tasks. On the other hand, I regularly help people dealing with the intricacies of iOS.
And this I can agree with. In my opinion, skeuomorphic designs fail at being a universal UI paradigm, but when you want to mimic existing hardware's functionality instead of exploiting the full power of a computer, they fit the job well and are fun.
For audio mixing, I can spend quite a lot of time carefully tweaking something while listening to an audio loop. By the way, I believe faders and sliders are better for that kind of task when space and cost are not an issue, but that’s another story…
No, it isn't. Modern cars are feature-bloated, home heating controllers are feature-bloated, dedicated stopwatches used to be feature-bloated before cellphones started to eat up that functionality, and I could go on and on.
As soon as you put a processor in a device family, no matter what its UI paradigm is, engineers become able to implement as many features as they want. It then takes mental discipline to prevent the feature set from expanding too much, and usability expertise to devise a good information hierarchy when the feature set really has to be big. That kind of competence is precious on any digital hardware, no matter what kind of UI it uses.
I’m not sure I understand what you mean, sorry.
Well, so far you have defined post-WIMP UIs as some sort of postmodern user interface that breaks free from all conventions and does whatever is suitable for the task at hand. That does not sound like a good start for creating a paradigm.
See zooming user interfaces for an example of what a touchscreen-specific user interface paradigm could look like: http://en.wikipedia.org/wiki/Zooming_user_interface
Sure, but aren't we talking about tablets and other post-PC devices which aim at being more than a portable video game console with a touchscreen, or a cellphone with toys installed on it?
My point was that menu-based WIMP workflows are reintroduced for that kind of task, which shows that skeuomorphic designs are not a universal UI paradigm, as you acknowledged earlier in this post.
So you are telling me that if you use a pinching gesture or a double tap in any (and I really mean any) iOS application, it will have consistent behaviour?
Doubt it.
Sorry, I do not develop software for a living; it's more like a hobby. This way, I can avoid overhyped technology, underpaid hack jobs, and boring programming environments, and focus on what I like to do.
Such as?
So you think that WIMP interfaces are unable to use feedback based on color and sound? I think you should spend more time using them.
Vibration feedback is hardware-specific, and the desktop market has decided overall that it does not need it, but if every laptop included a vibrating mechanism, I guess there would be a standard API + support in the widget toolkit for that, as is the case for gamepads where rumbling is a common feature.
Edited 2011-12-24 13:52 UTC
I think there is a misunderstanding between us as to what WIMP means. For me, WIMP means Windows, Icons, Menus, and Pointer, and that's all it is about. The rest is left to each individual toolkit implementation, whose widget set typically includes a full equivalent of the de facto standard electronics UI (push and bistable buttons, faders, knobs), plus hierarchical and scrollable controls and displays that are not possible in regular electronics (such as list views and comboboxes).
WIMP does not imply use of a mouse, and does not limit the set of usable widgets. It just happens that clever developers try not to overwhelm their users with a ton of new widgets to get familiar with at once, and that using the standard widgets of your toolkit is the perfect way to do that.
WIMP does not imply that data must necessarily be accessed through explicit loading and saving either. As an example, Mac OS X, which, as you said, is the stereotypical WIMP GUI of today, recently introduced a file manipulation scheme in which you do not have to manually save files anymore, which I like quite a lot.
So, for the sake of respecting convention, it is a good thing to replicate those hardware UI mechanisms within a WIMP computer interface, right?
If you have a close look at virtual audio hardware, you'll notice that its knobs are generally not manipulated by a circular gesture, but by a linear one. This is something which would be physically impossible in the real world, but which allows for more precise value setting (your knob becomes like a fader) and is much more comfortable in the context of a pointer-based interface.
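The linear-gesture knob described here fits in a few lines. This is an illustrative model only; the class name, the sensitivity constant, and the pixel units are made up, not taken from any real audio toolkit:

```python
class VirtualKnob:
    """A knob whose value is set by a linear (vertical) drag, not a circular one."""

    def __init__(self, value=0.0, lo=0.0, hi=1.0, sensitivity=0.005):
        self.value = value
        self.lo, self.hi = lo, hi
        self.sensitivity = sensitivity  # value change per pixel dragged

    def drag(self, dy_pixels):
        # Dragging up (negative dy in screen coordinates) increases the value,
        # exactly like pushing a fader up. Clamp to the knob's range.
        new_value = self.value - dy_pixels * self.sensitivity
        self.value = min(self.hi, max(self.lo, new_value))
        return self.value
```

The comfort argument falls out of the model: value precision depends only on drag distance and the sensitivity constant, not on tracing an accurate arc around the knob's center.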
Well, this is pretty much what I think of touchscreen-based keyboards too, but as I said before, real musicians use a MIDI keyboard, and nontechnical people who just want to have fun enjoy it anyway.
*fond memories of his childhood playing silly tunes with a DOS software called “pianoman” or something like that*
That's not my question. Why do you need a touchscreen for audio synthesis? What is the problem with other input peripherals such as mice and styluses, as long as the UI is slightly tweaked to adapt itself to these devices, just like touchscreen UIs are?
And I read a newspaper article the other day about someone who didn’t have enough hand-eye coordination to manipulate a pen, and was able to succeed at primary school due to the use of a computer keyboard coupled with a command-line interface.
Autism is not about physical issues with the manipulation of user interfaces, to the best of my knowledge. Are you sure that it isn't the tablet software that made the difference? For all I hate touchscreens, one thing I have to concede is that their low precision and limited capabilities make developers realize that UI design is just as important as functionality.
What's broken in the first place is that most people can see and a few people cannot. We don't want to give up on graphical user interfaces, because they are very nice for people who can see, but we have to accept that for people who cannot, they will always be quite bad compared to command-line and voice-based interfaces.
To the best of my knowledge, this core problem would be very difficult to solve. Good input peripherals with hover feedback, such as mice or Wacom styluses, offer an interesting workaround. Touchscreens, on the other hand, offer nothing more than a flat surface.
I have discussed this before on this website; I wonder if that was not with you. If we want to build user interfaces that cater to the needs of all human beings, not only those with good sight and precise pointing skills, then we need to design user interfaces at a very fundamental level of human-computer interaction.
We must work with concepts as abstract as text I/O, information hierarchy, emphasis, and so on. This is, for once, an area where web standards are miles ahead of any form of local GUI, as long as they are used properly.
I have tried to work on such a UI scheme, but it ended up being too complicated for me and I gave up. You need whole research teams full of specialists, not one-man projects, to tackle that kind of monster problem. Until that is done, everything will be hacks like Siri, where you have to explicitly support voice commands and design a voice-specific interface for your software to be “compatible” with blind people.
We were talking about WIMP versus post-WIMP GUIs.
I advise you to look up what WIMP actually stands for. Smartphones with pull-down menus, resizable windows, and pointing devices such as a stylus fell out of fashion a long time ago, and this paradigm certainly wasn't used in Nokia's S40.
Except that the engineer who makes the car has the chance to change the control paradigm to make it simple and straightforward. When adhering to WIMP's principles, you are bound by a fixed paradigm, so convention becomes important.
Personal computers were initially conceived for consumers, not for work.
Well, it's still a hundred times better to try out new things and innovate rather than stagnating with a paradigm that needs fixing left, right, and center because it has been superseded by a world that's changing around it.
Tablets sit somewhere in the middle between computers and phones. They allow for more functionality than a phone at the expense of less portability, and less functionality than a computer with the benefit of more portability. The thing to understand is that the success of these new devices is not about replicating all the use cases of the older devices; it is about taking over some of their tasks, but in a more fluid way that is somehow closer to the user. For some people, these basic use cases will be enough, so they won't need the full-blown computer experience anymore.
They aren't. Check out AirPrint: it doesn't need a menu-driven application to be able to print. Copy and paste on post-PC devices doesn't need the traditional menu paradigm either.
If they use the standard methods present in UIKit, they will. No need to reinvent the wheel when the functionality is already there, which is kind of the point of using Apple's Cocoa APIs in the first place.
-sarcasm- Oh no! A hobbyist developer! The worst of the bunch! -/sarcasm-
I didn’t say they are unable to do so. However, when they do, they are not using the WIMP paradigm.
I think you don't really understand what WIMP stands for and interpret its concept far too broadly.
I didn't say the Mac of today is a stereotypical WIMP GUI. When referring to WIMP on the Mac, I meant the original Macintosh as launched in 1984. Since the introduction of Mac OS X, Apple has been quietly moving away from traditional WIMP design in small steps, of which Lion, with its full-screen applications and Launchpad, is the furthest iteration yet.
It depends on the use case. My argument is that on laptops and desktops, they tend to make things more awkward, while on post-PC devices they tend to work better than a traditional WIMP paradigm.
Again, it depends on the application. A lot of musicians seem to like GarageBand on the iPad, because it happens to let you do quite a bit of prototyping and rough drafts on the fly while on the road, when you don't have access to a MIDI keyboard.
Styluses get lost easily. They are also notorious for scratching the surface of a tablet device, leaving ugly marks. You also don't need them for text input, since writing is way slower than typing. Using a mouse isn't practical for a mobile device either, and kind of defeats the advantage of the device's mobility. Using a mouse is a bit like using a wired network on a laptop.
Of course it was more than the device alone; they used specially designed tablet software. But the tablet form factor and the touch interface were an essential part of the project's success. They couldn't have done it on a desktop or laptop.
That doesn’t mean you can’t come up with a non-WIMP paradigm for people who can’t see.
Because by doing so the conversation would no longer be about the merits of WIMP vs post WIMP.
So stores let you play with running phones in the Netherlands? Sweet!
This is debatable. On Windows, since at least Windows 98, you have Trident shoved in all places where it’s not needed, creating multiple major security holes, in order to justify users’ inability to remove IE.
If you compare the UIs of, say, the Spotify, eBay, and Tumblr apps on iOS and Android, you can see the consistency difference: the iOS apps are very consistent, whereas the Android apps look like hack jobs, with an anything-will-do sort of attitude. Anyway, I recommend that you try the MIUI ROM on your SGS II; it's very well designed, and all the default apps except the market (and the clock, if I remember correctly) are rewritten and redesigned from scratch and are very consistent. Even if you don't like MIUI's iOS-like interface, you can use a third-party launcher like Launcher Pro and still enjoy the awesome applications put together by the MIUI team. See http://www.miuiandroid.com for the English version.
I've thought about MIUI, but the problem I have is that it can't sync Facebook contacts with your regular contacts. Is that still true?
I've tried it on a Galaxy S2, and yeah, the Facebook contact sync seems to work fine.
Hello Thom, I don't comment often, but I'm genuinely interested: what could you do with Windows Mobile that you can't do with Android nowadays?
I have trouble accepting part of your article:
“Windows Mobile, on the other hand, could be very confusing, was inconsistent, and had a far steeper learning curve. At the same time, its flexibility still isn’t matched by Android (let alone by iOS). You can literally do everything with Windows Mobile. Back when Apple was still busy trying not to die, and Google was a search engine, I was using my Windows Mobile PDA with wifi to connect to SAMBA networks to stream Futurama episodes from my server, while browsing the web and sending out emails.”
I have been using Windows CE since version 2.0 on an HP 320LX (quite old gear), then on a Casio EM505 Pocket PC, an iPAQ 3630, a powerful Asus A620 in 400 MHz glory, and an HTC P3600 smartphone. I also briefly owned an HD2. So I've experienced every version of Windows Mobile from CE 2.0 to WM 6.5, and I liked it very much before Android was brought to life.
However, since I've owned an HTC Dream, I haven't really been looking back. During the first year some functionality was lacking, but nowadays I fail to see what Android lacks against Windows Mobile PDAs or smartphones (considering 3.0 through 6.5, not WP7).
Edited 2011-12-21 17:34 UTC