This is the fourth article in a series on common usability and graphical user interface related terms [part I | part II | part III]. On the internet, and especially in forum discussions like we have here on OSNews, someone will almost certainly bring up usability and GUI related terms in any given discussion – things like spatial memory, widgets, consistency, Fitts’ Law, and more. The aim of this series is to explain these terms, learn something about their origins, and finally rate their importance in the field of usability and (graphical) user interface design. In part IV today, we focus on a dead horse: Fitts’ Law.
The scientific articles linked to in this story might not be accessible to you. You need a subscription to the distributors of these articles in order to read the full-text versions. Students can usually access these articles either by using a computer at the university campus, or by setting up a proxy at home. Contact your university’s IT department if you have questions.
Fitts’ Law is an interesting beast. It is one of the few consistently proven (and therefore reliable) laws in ergonomics, and it is more than 50 years old, but at the same time, it has been cited so often on the internet that it has been beaten senseless, making it almost sound ridiculous. So, what is Fitts’ Law? How did Fitts come up with it? And is it truly applicable to computer mouse cursor movements?
Fitts’ law is actually a formula [like Wikipedia, we use the Shannon formulation] that describes the time (T) it will take someone to move a device (stylus, finger, etc.) from one target to another, taking several things into account: the constant start/stop time of the device used (a), the inherent speed of the device (b), the distance between the starting point and the target (D), and the width of the target along the axis of motion (W). The consequence of this formula is that speed and accuracy are dependent on the width of the target and/or the distance between starting point and target. [Thanks to Wikipedia for this clear explanation]
Image courtesy of Wikipedia.
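For the curious, the Shannon formulation is simple enough to play with directly. The sketch below (in Python, with made-up values for the device constants a and b – real values have to be measured per device) shows how the predicted movement time grows with distance and shrinks with target width:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts' law: T = a + b * log2(D/W + 1).

    a and b are device-dependent constants (start/stop time and inherent
    speed); the values here are illustrative placeholders, not measured.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Doubling the distance, or halving the target width, raises the
# index of difficulty and therefore the predicted time:
print(fitts_time(400, 50))  # same target, four times the distance
print(fitts_time(100, 50))  # nearby target
print(fitts_time(100, 25))  # nearby, but half the width
```

Note that only the ratio D/W enters the logarithm, which is why the formula treats “further away” and “smaller” as two sides of the same coin.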
What makes this seemingly harmless formula so important is that it is one of the reasons why we use a computer mouse today. Stuart Card, Allen Newell, and Tom Moran used Fitts’ law to test various human-computer interaction devices while working at the Palo Alto Research Center, and these studies showed that the mouse was the best way [.pdf, see page 34] for a human to interact with a graphical interface on a computer; hence, Xerox decided to commercialise the mouse in their Xerox Star computer, mentioned earlier in this series. When Apple came to PARC, they obviously saw this too, leading to the mouse being included in the Macintosh.
This was not the only research done using Fitts’ law. Many researchers have confirmed the law, and many have also tried to extend it, most notably resulting in the Accot-Zhai steering law, which expanded Fitts’ law from movement in a single dimension to movements in two dimensions (such as on a computer screen).
Fitts’ research [.pdf] that led to the law had, of course, little to do with computers (it being 1954 and all). Interestingly, however, one of the three sub-experiments that he conducted bore a striking resemblance to movements performed in present-day computing. Basically, he asked subjects to move from one target to another on a flat plate, moving a stylus from a vertical starting area to a vertical target area, where he manipulated the distance between the targets, as well as the targets’ width. The goal was to place the stylus on the target area, without touching any of the error areas adjacent to the targets. Even though this movement was fairly one dimensional, you can easily see how this relates to mouse movements, or, better yet, stylus movements on a PDA or a tablet.
Image taken from Fitts’ original research paper [.pdf].
The results are, for us, predictable: the longer the distance between the two targets, the longer it takes for subjects to hit them, and the more error prone their movements are. The same applies to the target width: the narrower they are, the longer it takes to hit them, and the more error prone the movements are.
Fitts also conducted two other sub-experiments that further strengthened his theory. He asked subjects to move discs from one pin to another, varying the width of the holes in the discs, as well the distance between the two pins. The third and last experiment involved moving eight pins from a sequence of vertical holes to another, again varying the distance between the sequences, as well as the width of the pins. The results were clear: the smaller the tolerance (the narrower the holes in the discs/the narrower the pins), and the larger the distance between the targets, the longer it took subjects to complete the task.
Now, what does this all mean for designers of graphical user interfaces? First of all, it means that the most desirable place for a certain function to be is right underneath the mouse cursor; in other words, the distance to the target is zero. That is why pop-up menus are more easily accessible than pull-down menus – ignoring the obvious issue of discoverability, that is. It is also argued that pie menus are better than vertical menus, seeing as the height of an item in a traditional menu is fairly low, making it harder to hit accurately, compared to a pie menu, where each item’s size expands from the base, making it easier to hit.
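A rough back-of-the-envelope comparison illustrates the pie menu argument. The pixel values below are illustrative assumptions, not measurements; the point is only that a wide, nearby wedge carries a much lower index of difficulty than a thin, distant menu row:

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation: ID = log2(D/W + 1), in bits
    return math.log2(distance / width + 1)

# Assumed geometry (illustrative only):
# - linear menu: 20 px tall items, with the 8th item ~150 px from the cursor
# - pie menu: every wedge centre ~40 px away, with an effective width of
#   ~60 px along the direction of travel
linear_id = index_of_difficulty(150, 20)
pie_id = index_of_difficulty(40, 60)
print(f"linear: {linear_id:.2f} bits, pie: {pie_id:.2f} bits")
```

With these assumed sizes, the pie wedge is roughly four times “easier” in Fitts’ terms, and unlike the linear menu, every wedge is equally easy.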
From this it is derived that the more important a user interface element is, the closer it ought to be to the mouse cursor. That is why I, personally, demand of my browser that its bookmarks bar is placed right atop the content area of a window; besides the content area, this is my most used interface element in a browser, and hence, I want it as close to my cursor as possible.
A second obvious conclusion is that targets in a graphical user interface (such as buttons, menus, icons, and so on) should be as large as possible, making it easier to hit them. Personally, I experience this little fact every day when trying to do window management in Windows XP; since Windows lacks decent window management keyboard shortcuts (for me, that is) compared to, say, OS X, I am more or less forced to use the buttons in the window titlebar. However, they are fairly small, and when you are very busy, it is very easy to miss them. Obviously, you cannot make items so large that it becomes ridiculous, but it is good practice to not place often-used functionality under small, hard-to-hit buttons.
The most debated consequence of Fitts’ law is the importance of edges and corners. Because the mouse cursor cannot travel beyond screen edges and corners, they are seen as infinitely large targets – which Fitts’ law really likes. The two prime examples of this are the global menubar in the Mac OS, and the start button in Windows; you can blindly move the mouse to the lower-left corner of a Windows desktop, and it will always hit the clickable area of the start button (since my Vista laptop died due to hardware failure, I cannot confirm this behaviour for the new start ‘orb’ in Windows Vista). Windows also uses Fitts’ law in the taskbar, allowing you to hit entries in the taskbar despite the fact that the buttons do not cover the bottom two pixels of the taskbar.
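The “infinitely large” wording simply reflects the fact that the OS clamps the cursor to the screen, so any overshoot along the axis of motion still lands on the edge or corner target. A toy sketch (screen size and function names are assumptions, not any real API) makes this concrete:

```python
def clamp_cursor(x, y, screen_w=1920, screen_h=1080):
    """The cursor cannot leave the screen, so any overshoot is clamped
    to the edge - which is what makes edge and corner targets behave as
    if they were infinitely deep along the axis of motion."""
    return (min(max(x, 0), screen_w - 1), min(max(y, 0), screen_h - 1))

# A wild flick towards the lower-left corner (negative x, huge y)
# still lands exactly on the corner pixel - i.e. on the Start button:
print(clamp_cursor(-500, 5000))  # -> (0, 1079)
```

This is also why the bottom rows of taskbar pixels must be clickable: if a clamped cursor lands there and nothing responds, the edge advantage is thrown away.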
Criticism
Many have proclaimed Fitts’ law to be holy, claiming that interface elements that respect Fitts’ law are always better than those that do not. Take the global menubar in Mac OS X, for example: even though there are plenty of reasons to put the menubar up there (I prefer it too), the discussion almost always ends up revolving around the merits of Fitts’ law. This view, that a global menubar is better because of Fitts’ law, is a tad bit, dare I say it, short-sighted.
Despite its proven robustness, Fitts’ law has a serious shortcoming: it only takes untrained movements into account, which, in today’s world of ubiquitous computing (at least here in the western world), is a bit of a stretch. Many people have been using computers for a long time, and have trained themselves to hit targets with their mice or touchpads, getting better at it by the hour. I distinctly recall my grandparents buying their first computer about 8 years ago, and the trouble they had using the mouse to accurately hit icons. When we visited them again, a week later, their trouble with hitting targets had become much less apparent.
In other words, Fitts’ law can only be applied to its fullest on untrained individuals, who have never used a mouse to navigate a graphical user interface before. Since this group of people is getting smaller by the day, the importance of Fitts’ law also diminishes. It is still important, but the thorough training people have gone through over the past 15 years with computer mice does affect its importance.
Conclusion
Fitts’ law is important for designers of graphical user interfaces because it is one of the few “constants” in ergonomics, and it provides a set of easy-to-remember, easy-to-apply rules and guidelines on how to design interface elements, and where to place them in your user interface. However, its importance must not be overstated; remember that it does not take training into account.
Consequently, you must examine your user base when it comes to the level of compliance to Fitts’ law you need. Find out if they are long-time users, with a lot of experience, or if they are people that have little experience with computers, and adjust your adherence to The Formula accordingly.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
First off, great article (as usual) Thom.
I can confirm, Vista’s start “orb” can be activated by the corner. One that you missed: you can close a maximized window via the upper right-hand corner as well. Which is kind of funny, because in Vista, that means the button’s hit area extends a good 15 pixels beyond the visual indication, and only when the window is maximized. Same thing with the start orb: visually, one would think it wouldn’t work, but as soon as you try, you get the right (if inconsistent) behavior.
As for Fitts’ law becoming less important with training, I disagree. A badly done interface element will always be harder to use than a well done interface element. Just because the guys in the world’s strongest man competition CAN throw logs like they are sticks does not make throwing logs easy, even though their training has brought them to the point where they can do it as easily as normal guys could throw a stick. Same deal with interfaces: just because you are an expert at using a poorly designed interface doesn’t make the design irrelevant, and if you compare to a guy who has the same amount of training on a well designed interface, at the end of the day the guy on the well designed one will have more energy after doing the same thing. Case in point: I have been using computer mice for about twenty years now, and I still find it easier to hit corners than 40x10px menus floating in the middle of the screen.
HOWEVER, I completely agree with you that Fitts’ Law is blown completely out of proportion by armchair designers. You would think by talking to some people that it is the one and only concept or heuristic in interface design, and the sole metric one uses to judge the worth of an interface.
You make excellent points. Beyond it not becoming less important with training, there are a lot of people out there with less-than-ideal coordination, for whatever reason, and Fitts’ Law is their friend: with certain limitations people may have, no amount of training will make it easy to hit something, because they just don’t have the coordination for it.
Also, even with training, IIRC Apple (or someone else) showed that because there’s a high probability that users will overshoot their intended target, they automatically slow down their mouse movements, to the point where the time taken to move the shorter distance (compared to the global menu on the edge of the screen) actually increases. Thus, it still matters for actual speed; the only thing that changes is the user’s perception of whether it’s faster to have a localized menubar or a global menubar.
Admittedly, once you have more than one application in use, it can be rather confusing as to which one holds control of the menu bar, and the menu changing quickly can quite possibly be disorienting: in that respect, having the menubar attached firmly to the window it controls is less confusing.
Pretty sure you read that on Bruce Tognazzini’s AskTog site; I know that’s where I did 😉
On top of that, according to Jef Raskin’s “The Humane Interface”, the brain uses a measurable amount of energy when using a user interface (something we can’t accurately or safely measure yet, but still). He says that the amount of brain power required to operate any given interface is an indication of how well it is designed; something that forces the mind to work in ways it doesn’t handle well will take more energy, and something designed with cognitive psychology in mind will take less.
“””
On top of that, according to Jef Raskin’s “The Humane Interface”, the brain uses a measurable amount of energy when using a user interface (something we can’t accurately or safely measure yet, but still).
“””
So we may someday rate interface ergonomics in clicks per gallon of glucose? (Yes, I dare say that we in the U.S. will still be using gallons, pints, and ounces at that time.)
Gnome gets X CPG? Windows Seven gets Y CPG?
YMMV, of course. 😉
Edited 2007-11-07 19:53
only US citizens would have their glucose usage measured in gallons (or liters for that matter)
It’s funny when people advocate a global menu bar based on Fitts’ law. Sure, with equal distances, a knowledgeable user, and given that the task is to access a menu, a menu bar on the edge of the screen is faster than one not on the edge. However, there are a lot of confounding factors.
1. A local menu bar is often much closer to the mouse. For example, an IM window on the far right of the screen has a local menu bar very close to the application content, rather than across the screen. Also, if you have two screens, you might have to move all the way across two screens to get to your menu. Hard to compare which is faster then.
2. Accessing a menu in an inactive app requires two actions with a global menu bar, activate and click.
3. The user must know they can click on the edge of the screen. I don’t think I’ve ever seen a non-technical user take advantage of screen corners. New users precisely aim at these buttons just like any other button.
4. The assumption is that the menu is an important UI element to access. This is not true for myself, since I barely ever use the menu in any application. Putting the menu at the top steals that space from other widgets that could potentially go there (for me this is the minimize and close buttons of a maximized application).
Usability is never as simple as a formula.
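Point 1 can be made concrete with the formula itself: granting the edge menu a generous effective width, a nearby local menu bar can still come out ahead. The pixel values here are illustrative assumptions only:

```python
import math

def index_of_difficulty(distance, width):
    return math.log2(distance / width + 1)  # Shannon formulation, in bits

# Illustrative numbers for an IM window at the far side of the screen:
local_id = index_of_difficulty(100, 20)    # local menu bar just above the cursor
global_id = index_of_difficulty(800, 100)  # distant edge menu, generous effective width
print(f"local: {local_id:.2f} bits, global: {global_id:.2f} bits")
```

So even before counting the extra activation click of point 2, distance alone can tip the comparison either way.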
“This view, that a global menubar is better because of Fitts’ law, is a tad bit, dare I say it, short-sighted.”
Correct. If you really want to invoke Fitts’ law in a menu bar discussion, you would do away with the menu bar and put it all in a popup menu a la NeXT, DejaMenu in OS X, or MagicMenu on the Amiga.
And RISC OS
Which begs the question: why doesn’t that happen more? Every GUI I use seems to blithely ignore Fitts’ law and its descendants, giving me thin strips floating in the middle of the screen to target, be they the resize bars along the edge of a window or frame, or the items in a menu – many’s the time I’ve been caught out by the wrong sub-menu popping up as I search for the menu item I really want. Then there are those fiddly icons, small toolbar buttons, and the save action within mere pixels of the save as… action.
And all this applies universally to Windows, Linux, Mac, and pretty much everything else. For an over-used term, Fitts’ law seems to be sadly under-used.
Probably everyone else’s laws getting in the way: Krug’s laws, Jakob’s laws; every usability student seems to invent their own too. 🙂
The 3D apps seem to get it right. Maya makes extensive use of circular context menus and even extended them to marking menus. 3ds Max and Blender use context menus a lot too.
True. Although KDE will let you move or resize a window with your mouse anywhere on that window by holding a key (Alt or the Windows key) and dragging the mouse with left or right click. It’s incredibly convenient. The lack of that drives me nuts in other environments/OSes.
Also, toolbar icons default to icons+text in GNOME and the upcoming KDE 4.0, so the buttons are no longer fiddly.
that alt+mouse is a feature of the X server, not kde iirc.
hmm, i kinda recall a function in windows that allows one to move the window with the arrow keys. am i getting my desktops mixed up?
also, if one installs powerpro on windows (it should even work in vista) one gets a lot of extra window control features, for free!
btw, i wonder what window control hotkeys Thom is missing in windows. honestly, not looking to start a flame war.
Edited 2007-11-08 00:33
Yes, you can do the arrow key thing, but that’s sometimes slower than idly swinging the cursor in the general direction of a window. Especially with spatial window managers (Nautilus’s spatial mode help file was what taught me the alt-trick.)
And GNOME has another shortcut (not alt-right drag like KDE) for resizing.
If they do they sure do a hell of a good job of making it hard to find. It’s not an option in any of the config dialogs. Sure there is ALT-F7 or something, but that’s not anywhere near as useful or fast.
I explained a few posts earlier.
Well, if you find Alt-F4 nigh on impossible to hit, try Alt-Space, C. All three keys are adjacent to one another, shouldn’t even break a sweat attempting that one.
ah, sorry. i thought i had read thru it all but ill give it a second look
Nope! The X server does not deal with initiating window motion. It only has mechanisms for actually moving the window to a new location on the screen. The Window Manager is entirely responsible for how moves are to be initiated, whether it is clicking on the titlebar, or holding alt, or even using the keyboard. You might be getting confused by the fact that many window managers allow you to use alt+mouse to move a window. But it is no more a feature of X than dragging the title bar is.
heh, i guess you’re right. it’s probably a holdover from the first window managers that, once you learn it’s there, you can’t do without
The ideal – IMHO – would be a window manager with the ability to use the various Adobe keyboard shortcuts for resizing windows (hold Ctrl to resize proportionally, hold Alt to resize in/out from the window’s center, and ctrl-alt resize to combine the two).
Which begs the question: why doesn’t that happen more?
Probably because a (right click) context menu is not intuitive to inexperienced users. If it’s not visible, it doesn’t exist.
A visible menu bar may not be the most ergonomic, but anyone (with good sight) can find it and read what the thing does.
In the year 2007, are there any inexperienced users? That a right-click will pop up a menu is about as simple to teach as that a left-click will follow a link.
Well I’ve never found context menus to be particularly intuitive – unless absolutely everything in an application has a context menu, and they always show all of the options for the particular object I am clicking on – otherwise it’s a freakin guessing game.
“Does this object have a context menu?” “Is this option available in the context menu, or do I have to go to the application menu?” Those are two questions that, if I have to ask them too often in an application, will result in my almost always going to the application menu.
I see you’re assuming that there aren’t, and won’t ever be, any more new users. However, unless we just happen to be living in the last generation of humanity, and all those not currently using computers never will, the answer is a resounding “yes, there are and will always be new computer users!” The last time I checked, we don’t have genetic memories, using computers isn’t a natural instinct, and we aren’t born with the required experience or knowledge. Besides, computers keep changing how they work over time, and it’d be lunacy to presume that 20 years from now (or perhaps even 10) the user interface will not have changed in some notable way, when you compare between large spans of time.
I’ll go out on a limb and propose a “Thompson’s Law” that applies to speech recognition interfaces: the easiest commands to issue with assurance they’ll be recognised correctly are the ones where the user utters some swear word at the top of their voice at the computer, so it can’t make a mistake as to what was uttered. I base this on my first-hand experience back when the Macintosh Quadra 840AV came out with its speech recognition interface (very new then): I found the most reliable way to open the startup volume (Macintosh HD) was to speak the command “Computer: burp!” into the microphone. That was by far the most reliable method to open that volume, as saying the name of it wasn’t nearly as reliable. The key here is that “Burp!” was very distinct and hard to confuse with some other utterance, and somehow the speech recognition engine of the time made the connection between that and “Open Macintosh HD”, I’m guessing because it satisfied some mathematical set of points it needed to recognize. Needless to say, my coworkers were highly amused, and I don’t think the speech recognition stuff was used on that computer after that.
Yes, there are. And in fact, unless your software targets a very distinct niche (say, you’re targeting the readers of OSNews, for example), the vast majority of your potential market are inexperienced. Even people who use computers every day of the week are comparatively inexperienced, because they only ever learned how to do specific tasks: write a letter in Word, for example, or follow links on web pages. To many people, the notion of “it works just like Word” or “it’s just like a web page” isn’t one that’s at all intuitive. (A lot of this is down to schools and lack of training, though: people are taught specific applications instead of general concepts).
And yes, you can teach that “right-click will pop up a menu”, except that it won’t always pop up a menu that’s useful to what you’re doing and there’s no visual indication of where you should right click in order to get one that is. That’s a whole lot harder than teaching somebody how to follow a link that says on the tin what it is.
The bottom line is this: you design for the inexperienced majority, but cater for the experienced minority.
No, it raises the question.
Maybe now people will stop thinking that Fitts’ law just means having a button in each corner of the interface.
How would a circular menu system work with submenus? Would they expand out around the opening menu? Or appear in the middle and push the opening menu out around them?
Edited 2007-11-07 14:25
> How would a circular menu system work with submenus? Would they
> expand out around the opening menu? or in the middle and push out
> the opening menu around it?
Just an idea: A submenu could open around the exact position of the mouse cursor at the time you clicked the parent menu item. The parent menu would be “greyed out” so it won’t confuse the user. In case the submenu opens partially off-screen, the center of the menu could be used as a “knob” to drag the menu around the screen.
Another option is to have the submenu take the place of the original menu.
Personally I really like the implementation of ring menus in the game Secret of Mana (Super Nintendo, 1993 – for non-Americans the original name is Seiken Densetsu 2). It works wonderfully with most commands you need to do in the game’s main screen. Not just actions, but also inventory and equipment. In the game, you don’t move a cursor to select items that appear around the cursor; instead, you rotate the ring with the selected item being the one at the top. If an item has ‘submenus’, the submenu appears in the place of the original one. There are visual and sound clues to let you know when you select an item, enter a submenu or go back to the parent. Everything but the selected character is shaded out so you know who the menu applies to.
Con: not all items are at the same distance from the cursor.
Pro: you can prioritize items (the default item is at distance 0); worst case is distance n/2, where n is the number of items (still better than n)
The main issue is that you really need to rethink how to organize items, and how many to put. You can’t just make a long list of items like in a regular popup menu. But then, it beats having 20 options when you are likely to use 3 or 4.
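The pro in that list is easy to sanity-check: because a ring can rotate in either direction, no item is ever more than half the ring away from the default. A trivial sketch:

```python
def worst_case_steps_list(n):
    # In a linear menu, the cursor may have to pass all n items
    # to reach the last one.
    return n

def worst_case_steps_ring(n):
    # A ring rotates either way, so no item is ever more than
    # n // 2 rotation steps from the default (top) position.
    return n // 2

for n in (4, 8, 20):
    print(n, worst_case_steps_list(n), worst_case_steps_ring(n))
```

The gap widens with n, which is exactly why long item lists still do not belong in a ring.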
One point that even the Wikipedia article has gotten wrong is the claim that elements on screen edges are of “infinite” size. The key point is that Fitts’ law takes the size *along the axis of movement* into account. If the axis of movement (i.e. the axis between the current mouse cursor position and the position of the button I’d like to press) is NOT perpendicular to the screen edge, then the size of the button is indeed finite, because continuing the movement would make the cursor slide along the screen edge until it misses the button again.
All this, of course, based only on the original (one-dimensional) law.
I think the premise in the Wikipedia article is that they are speaking of a vertical movement to the top; in this case, the vertical size of the item is indeed infinite. This is debatable too; as soon as your cursor hits the top, mouse cursors tend to move horizontally quite easily, even with the slightest of mouse movements.
When it comes to corners, the item is infinite both vertically and horizontally (just not diagonally).
http://thinkubator.ccsp.sfu.ca/Dynabook/dissertation
The best interface, the one you will be most productive with, is the one you like not the one you should like. That’s why there need to be many different interfaces and/or highly configurable interfaces because this improves the likelihood more people will find an interface they like. How people decide the one they like most will vary from person to person.
There is no “right” interface.
Is there a plugin or something to use the pie menu like the one you showed in your article?
Although I am not a usability expert by any stretch of the imagination, I do think that the problem for design lies somewhere between the marketing department and those who write the applications.
There is always a war waged between the two groups; marketing wanting more bling, zoom and flash to make their marketing documentation attractive to customers, and programmers who simply just want their application to jolly well work – with usability being an after thought. I hoped that with the rise of the likes of XAML/Xcode, there could finally be a divorce between those who write the application and those who make the front end.
I also think that the IT industry is too quick to dismiss ideas from 20 years ago by virtue of ‘they’re old, this is new’, taking the approach that anything new is good and anything old is evil – call it IT trying to follow the Futurist manifesto.
What people forget is this: the simplest thing is, at times, the best one. It might not have the marketing pulling power, it might not have the bling or the attractiveness, but if the end user can go about doing what they want, with minimum fuss and bother – isn’t that better than all the effects and sounds one could conjure up?
Take the dock in Leopard; did anyone ever wonder whether the little blue dot that signifies a running application is too small? The first time I looked, I failed to even notice it. Why not stick to the old way? The old way showed clearly to the end user whether something had been launched – what motivated the change? If these engineers were asked, could they actually justify it, or would it be, “I thought it was cool at the time”?
Yes, there are lots of cool things one would like to do, but one also has to be an adult and realise that not all cool things are actually useful. The same goes for interface design; before changing something, ask: does it need to be changed? What benefits will be yielded? If the change is aesthetic, will there be a decline in usability if it is implemented?
Another example: in Windows Vista’s control panel, what does the category design offer over and above the traditional layout? I could understand the logic IF there were so many options that the layout would otherwise be confusing, but in many cases there were overlaps; take wireless and networking, for example, along with file sharing. All in different locations, but all inter-related. No attempt was made to create a way for all these inter-related things to appear together, allowing the end user to set up everything that was needed.
Edited 2007-11-07 15:42
Actually it is the fourth. 🙂
I like the pie wedge menu idea. They’d be a bit hard to use once you got above a certain number of items, though.
Of course, with enough menu items finding the right one would be difficult any way you do it.
Vertical pop-up menus often pose yet another usability problem: items that are themselves menus must be traversed with the mouse to reach their sub-items. That is, a user sees the sub-menu item he/she wants to select, but must move the cursor 100 or even more pixels through a narrow horizontal tunnel (the menu item). That’s very error prone, especially in a Bookmarks menu, where menu items are rather long!
My proposed solution is: once a mouse cursor traverses some threshold of pixels in the sub-menu direction, move the whole sub-menu to the cursor! If a user didn’t mean to enter a sub-menu and moves a cursor away from it — the sub-menu jumps back where it belongs.
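For what it’s worth, the proposed behaviour is easy to prototype. The sketch below uses hypothetical names and a made-up 30-pixel threshold; it is not taken from any real toolkit:

```python
SNAP_THRESHOLD = 30  # px of travel towards the submenu counted as "intent"

class Submenu:
    def __init__(self, x, y):
        self.x, self.y = x, y   # current top-left position
        self.home = (x, y)      # where the submenu "belongs"

    def on_mouse_move(self, start_x, cursor_x, cursor_y):
        travel = cursor_x - start_x  # horizontal movement towards the submenu
        if travel >= SNAP_THRESHOLD:
            # Enough intent shown: bring the whole submenu to the cursor.
            self.x, self.y = cursor_x, cursor_y
        elif travel < 0:
            # Cursor moved away: snap back to where the submenu belongs.
            self.x, self.y = self.home

menu = Submenu(x=300, y=120)
menu.on_mouse_move(start_x=100, cursor_x=140, cursor_y=150)
print((menu.x, menu.y))  # -> (140, 150): the submenu has jumped to the cursor
```

The threshold is doing the real work here: too low and the menu jumps on accidental jitter, too high and the user has crossed most of the tunnel anyway.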
What do you people think?
Edited 2007-11-07 17:14
“Personally, I experience this little fact every day when trying to do window management in Windows XP; since Windows lacks decent window management keyboard shortcuts (for me, that is) compared to, say, OS X, I am more or less forced to use the buttons in the window titlebar”
Alt-Space followed by n (minimize), x (maximize), r (restore), s (size), m (move). Sure, it’s not a single key combo, but it’s probably faster than messing around with the mouse if your fingers are already on the keyboard.
-josh
Personally, I experience this little fact every day when trying to do window management in Windows XP; since Windows lacks decent window management keyboard shortcuts (for me, that is) compared to, say, OS X, I am more or less forced to use the buttons in the window titlebar.
Hmm… I’ll be the first to admit that I am not familiar with the keyboard shortcuts in OS X, but Windows’ keyboard shortcuts are one of the few things it has going for it. Per the direct quote above, the three buttons are accessible as follows:
Close = ALT+F4
Maximize = ALT+SPACE, M
Restore = ALT+SPACE, R
Minimize = ALT+SPACE, N
Now, if the author had mentioned configurability of keyboard shortcuts – I’m sure OSX, as would any X Windows systems, would certainly win out – as Windows is 100% not configurable for its keyboard shortcuts. (Yes, some applications allow it, but not Windows itself – at least for those using Windows Explorer, which is what most people use. [Apparently, some of the replacement window managers do provide such functionality, but they are not as entrenched.])
Alt+F4 is evil. Look at where your Alt key is, and look at where your F4 key is. The F4 key is east of the Alt key, and as far north as it gets, making the finger position completely unintuitive (nigh-on impossible).
heh, thumb and long finger, thats how i do it
OS X does not use the X Window System. It uses its own drawing layer, Quartz.
Screen corners are not only physically easy to reach for inexperienced and clumsy users. They are also the points on a square screen that are the easiest to notice and remember.
An example: I have a GNOME desktop here where the top panel is filled with more than 30 shortcut icons for my most used apps. 30 is quite a lot, and although I’ve learned to remember the looks and places of the icons pretty well, I don’t always immediately find the right icon to hit when I want to start one of those programs. Anyway, it is very easy to both reach and remember the shortcuts you have in the (only) four corners of the screen. Thus, the four corners may indeed be quite optimal places for the most important shortcuts.
Very nice article. On global menus versus menus over each app window, I’ve never seen anyone having trouble with menus over the app window. Apart, as Thom says, from the case where they have trouble with the mouse and so have trouble with finding anything very readily.
It also brings up the limitation of relying too heavily on Fitts. The difference between menus over windows and menus at the top of the screen is not just about travel. It’s also about how easy each case is to deal with: Case A, a top menu combined with window activation; Case B, window activation combined with a menu on the window.
Since you always have to click the window to activate it (well you can have focus on hover, which will drive the inexperienced mad!), your mouse is always nearer the window bar than the global menu bar.
All in all, I think ease of use for the inexperienced is overrated as a concept. The prime example is texting. Who on earth would have believed we could have millions of people sending text messages with their thumbs from a nine-key phone keypad designed for the days when “MURray Hill 1234” was how you remembered phone numbers? And doing it, no less, in a new dialect of non-standard spelling?
Can you imagine if you had been in a meeting with the HIG group at Apple before any of this started, and made the proposal that this would turn out to take the world by storm? That people not only could, but would enthusiastically, use this as a method of writing English, French, whatever?
And yet, it works.