This is the third article in a series on common usability and graphical user interface related terms [part I | part II]. On the internet, and especially in forum discussions like we all have here on OSNews, it is almost certain that in any given discussion, someone will most likely bring up usability and GUI related terms – things like spatial memory, widgets, consistency, Fitts’ Law, and more. The aim of this series is to explain these terms, learn something about their origins, and finally rate their importance in the field of usability and (graphical) user interface design. In part III today, we focus on the desk accessory, popularly known as the widget, applet, mini-app, gadget, or whatever the fashionable term is these days.
Thom, I think you’re filling in some odd blanks based on faulty memories: the example you gave of using the word processor, having to use a calculator program, and then using the word processor again stretches the truth. Certainly, if the user didn’t know how to record that information into a text file, they’d need to use some intermediate storage, if only their own memory. However, the worst they’d have to do is exit the word processor, start up the calculator application, find the answer, write it down, exit the calculator, and then restart the word processor; it would not require rebooting the computer. At the very worst, using a cassette tape drive (the earliest IBM PCs had them) you’d have to swap tapes, or floppies, as the case may be: the reboot is something you added that was not required, unless the word processor was something that didn’t exit in a clean DOS-like manner.
Also, being the grammar/spelling nut you are, I’m surprised you posted this with as many spelling/typing errors as there are. I don’t think they’d be caught with a simple spellcheck by itself, but they’re there. I’ll leave that as an exercise to the reader and writer.
Otherwise, a good article; do more like this, but don’t invent things that weren’t there.
…in the dark ages of stupid single-tasking-systems…
…there was the Framework by Ashton-Tate.
I’ve always been partial to “Replicants” – but disappointed that no one ever wrote a complementary program called “Decker” (which would be designed to kill replicants, of course).
Speaking of which…
Programmers? Yes… Users? Not so much. I think that part of the “problem” was that it’s generally more effective to use the workspace management tools in BeOS than the window management tools. At least, I’ve always found it easier to just switch to a blank workspace and open an app regularly, rather than shuffle windows out of the way to get at a desktop replicant.
Having just bought an iMac with Leopard, I’m thinking that using replicants in the same sort of usage scenario as Apple has with Dashboard would help increase their attraction to users. Creating a Dashboard-type of setup in BeOS/Haiku actually would be rather simple to do, too.
The problem I see here, JT, is that replicants are not coded in web-like languages [right?], making them the territory of people with programming experience, instead of all sorts of other people being able to code them too.
Edited 2007-11-05 11:11 UTC
Ah DOS TSRs. There’s at least one I still use today, to take screenshots of the game Privateer (running in DOSbox) as part of an ongoing effort to clone it. Amazing the hoops we jumped through.
I’m still not sure what I think of Dashboard and the like. Is the Dashboard calculator really more convenient than hitting the calculator key on my keyboard and having a regular (tiny) windowed app appear, one that I can switch to and from with the window manager’s normal methods?
[i]I’ve always been partial to “Replicants” – but disappointed that no one ever wrote a complementary program called “Decker” (which would be designed to kill replicants, of course).[/i]
It’d have to be called Deckard, but yes, that does seem like an obvious name for such a program. [OT]I’m trying to decide whether I can make it to Denver in time for a screening of the Final Cut.[/OT]
> I’m still not sure what I think of Dashboard and the like. Is the
> Dashboard calculator really more convenient than hitting the calculator
> key on my keyboard and having a regular (tiny) windowed app appear,
> one that I can switch to and from with the window manager’s normal
> methods?
Certainly not, and the same could be said (in one way or another) about all Dashboard widgets. But then, you can re-arrange them on Dashboard as you wish, whereas you won’t rip the keys out of your keyboard and stick others in their place. (There was a keyboard announced which claimed to change its keys on demand, with tiny OLED displays on each key, but so far it’s vaporware.)
“A long, long time ago, multitasking was something of a novelty to many computer users. ”
What was really amazing – from the perspective of a RISC OS/Amiga user – was the hard sell that multitasking needed with some people. “Why would I [i]want[/i] to run more than one program at once?”.
I think history is littered with comments and predictions that in hindsight seem ridiculous.
I remember thinking when I upgraded from a 4MHz Amstrad 80286 to a 16MHz 286SX I’d never need another computer again.
Even these days people make comments like why do we need CPUs with 80 cores (or whatever number they insert here) but in my view we’ll definitely find a way to use them and want even more thereafter.
Because we don’t need them, not now. In the future, for sure, but not now. This is quite different from the multitasking case, which had obvious immediate advantages.
We don’t NEED 80 cores, we WANT 80 cores
If all you want to do is edit a text document then your 16MHz 286SX is still as capable as it ever was. The thing is, we now expect our computers to do FAR, FAR more than that (watching DVDs, for example). This is what drives us to upgrade.
“What was really amazing – from the perspective of an RISC OS/Amiga user – was the hard sell that multitasking needed with some people. “Why would I want to run more than one program at once?”.”
Even today, when MICROS~1 is able to run things quasi-simultaneously, there are still users out there who don’t get familiar with the multitasking concept, especially when mapped onto GUI elements.
I’d like to illustrate this with a few typical sentences:
– I don’t use it at this moment, so I don’t want to see it.
– I’m done with it, now I must close the application.
– If I need to see another application, I will have to close this one I’m working at.
– This window annoys me, I don’t need it.
So, in order to browse the web for some information, the word processor needs to be closed. Although switching applications on screen, or even between screens themselves (virtual desktops), would be the more comfortable way here, it seems to be too complicated.
Hard to understand, I know… 🙂
Then, there are users who want their desktop clean if the computer doesn’t do anything, and there are the ones who need everything at once (many applications opened, desktop littered with icons, as many widgets as possible).
The problem here is the interface. Today’s computer systems are needlessly complex and actually work against many human principles. In his book The Humane Interface, Jef Raskin shows how the computers of today are needlessly complex and thus necessitate a complete overhaul of user interaction. The book also describes one vision of how a general-purpose computer system should operate; the system he proposes fixes all of the interaction examples that you have described.
Edited 2007-11-05 13:08
Are you a Raskin blow-hard or something?
Quick! Everyone pay homage to the author of all knowledge of all interfaces by intuitively sucking on this fleshy hose! After all, it looks like it was designed for such use.
If you could present an idea without making it sound like a product placement pitch for a book and its egomaniac author, it might lend you some credibility.
> What was really amazing – from the perspective of an RISC OS/Amiga
> user – was the hard sell that multitasking needed with some people.
> “Why would I want to run more than one program at once?”
I’d be similarly cautious if you asked me. Let’s see what applications I have currently running:
– A web browser (obviously). That’s what I’m typing this comment in, so it is needed. I have a background tab open which I’m currently not using. I will use it in the future, but not now.
– Other applications like Eclipse and Finder (file browser), which I don’t use at the moment.
Here we already have the first point: Saving and restoring the state of these applications would do the job as well. Multitasking is a solution, but not the only one. Going on:
– Email and ICQ client: These applications aren’t doing anything at the moment. They are waiting for incoming events. Even when an event arrives, they will quickly handle it and then go to sleep again.
Event handling is an obvious case for the “widgets as drivers” idea that the people at Apple had. Again, multitasking can do the job, but other mechanisms can as well.
– iTunes: Yes, here I am actually running a second program.
– Background tasks: … and some more.
Finally we have some use-cases for multitasking. But they are much less obvious than some people think.
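The “wait for an incoming event, handle it quickly, go back to sleep” behaviour described above for the email and ICQ clients can be sketched in a few lines of JavaScript. Everything here (the widget name, the event shapes) is hypothetical illustration, not a real widget API:

```javascript
// Minimal sketch of an event-driven desk accessory: the handler runs only
// when the host delivers an event, and the widget is idle in between.
// All names and event shapes here are hypothetical, for illustration only.
function makeMailWidget() {
  let unread = 0;
  return {
    onEvent(event) {
      if (event.type === "mail-arrived") {
        unread += event.count;
      } else if (event.type === "mail-read") {
        unread = 0;
      }
      return unread; // what a badge or icon would display
    },
  };
}

const mail = makeMailWidget();
console.log(mail.onEvent({ type: "mail-arrived", count: 3 })); // 3
console.log(mail.onEvent({ type: "mail-read" }));              // 0
```

The point of the sketch is that nothing runs between calls to `onEvent`: whether the host achieves that with preemptive multitasking, cooperative scheduling, or a plain callback makes no difference to the widget.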
with my ancient TRS-80 Color Computer 3 (IIRC) back in 1986. The device had 512K of memory, one floppy drive, no hard drive, and a 2MHz Motorola 6809E processor. The OS I used was named OS-9, from Microware.
OS-9 was coded in assembly and was amazingly efficient. It implemented a primitive *NIX-like shell, complete with multiple virtual terminals, which you could switch between with Ctrl-Alt-1 through Ctrl-Alt-9. You could run an application in each of these separate terminals, all concurrently.
Shortly before I quit using OS-9, a GEM-like graphical user interface came out which was a precursor of what I later experienced using twm on Linux. I routinely ran 6-7 major applications at the same time, including a word processor (DynaStar, a WordStar clone), a spreadsheet program (DynaCalc, a Lotus 1-2-3 clone), and DynaBase, a dBASE II clone, in addition to a terminal for programming and other programs.
When I think about all of these applications running so smoothly on a 2MHz CPU with 512K and only one floppy drive, I remain underwhelmed by most of the advances in OS design and technological progress in hardware.
I got my first IBM clone in 1987 and was thrown back into the stone age; only later did I find out about TSRs and GEM, and DOS and Windows 3.1 were horrible back then. One of the reasons I started using Linux in 1994 was that my 386SX laptop with 16MB of memory running Windows 3.1 was not capable of doing what my old CoCo did back in the mid-80s.
When I started using Linux I rekindled some of my fascination with computing; in 1998 I went Linux full-time and have never looked back. To this day WinXP cannot hold a candle to the multitasking I do under Linux each and every day.
Yet I still yearn for my old coco and OS-9….
You somehow think that Windows can’t run multiple programs as well as Linux can?
Have you even used Windows since 3.1?
Given the ongoing struggles with Linux’s scheduler, I would say that Windows excels here (and I use both daily).
While I have not had any problems with Linux and multitasking (usually running Win2k3 in a VM for Windows development while having Firefox, Thunderbird, Pidgin, XChat, and Amarok running on the host), claiming that Windows cannot multitask well is either FUD or ignorance.
When running Windows, I generally have the same apps (Media Player instead of Amarok), and have never, ever had a problem. I’ve been using Windows since 3.0, and the NT-based Windows line multitasks very well, and always has.
There is a lot to complain about with Windows, but this isn’t one of them.
Maybe there are issues with Windows that can be associated with poor multitasking, but relate to something else. I know that the Explorer shell freezes up on me a lot (file copy operations and such), which I don’t see in Nautilus (?) or the Finder. But it is definitely an issue with Explorer since other processes chug along just fine.
Thanks to Thom for recognizing that the early DAs offered “multitasking” capabilities on older operating systems. But I would argue that modern DAs have nothing to do with that heritage or even usability.
I would argue that modern DAs serve two purposes:
Eye-candy sells modern operating systems or add-ons (like DesktopX). For whatever reason, people want something that looks good rather than something that just does the job.
While I cannot really speak for Vista or DesktopX, I think it is safe to say that Dashboard provides a development framework that is more accessible to non-programmers because it uses web-development technologies. Sure you end up doing some programming at the end of the day, but HTML, CSS, and JavaScript are things that people actually want to learn.
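To make the accessibility point concrete, the entire working logic of a small Dashboard-style widget is often just a few lines of everyday JavaScript; only the wiring to HTML controls is widget-specific. A minimal sketch (the widget idea and the element ids in the comment are hypothetical):

```javascript
// Core logic of a hypothetical tip-calculator widget: plain JavaScript
// that anyone who has written a web page can follow.
function tip(amount, percent) {
  // amount * percent / 100 is the tip; rounding amount * percent to an
  // integer first rounds the result to the nearest cent.
  return Math.round(amount * percent) / 100;
}

// In an actual widget this function would be wired to HTML controls,
// e.g. (element ids are hypothetical):
//   document.getElementById("go").onclick = () => {
//     const amount = Number(document.getElementById("amount").value);
//     document.getElementById("result").textContent = tip(amount, 15);
//   };

console.log(tip(40, 15)); // 6
```

That is the whole accessibility argument in miniature: the hard parts (rendering, layout, event wiring) are the same HTML/CSS/JS skills people already have from the web.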
If it were a usability issue (i.e., people needing the functionality wrapped up in DAs), then I would argue that all of these modern DAs would be pointless. A vanilla C application would do the job just as well. All you have to do is tweak the UI and avoid adding a glut of features.
That sentence makes no sense. Care to elaborate?
No, a c application would not do the job just as well, because a c application is a lot harder to write. By using web languages you allow a whole lot more people to scratch user itches, opening up a lot more possibilities. They’re the ultimate in high-level programming.
I should have said that these DA environments (like Dashboard and DesktopX) do not add much in terms of usability. I’ve used DesktopX before, and everything that it adds to the screen could be accomplished through a regular application that doesn’t have DesktopX as a dependency. Much the same can be said for Dashboard, with the (arguably) minor difference that all Dashboard widgets are on a single layer.
Didn’t I say that distinction exists from the perspective of the developer? At any rate, it is a pointless distinction from the perspective of a user.
I’ll have to disagree with you here, Thom, because it’s rare to get the maximum flexibility out of a scripting language when you’re programming a sufficiently complex widget that’s native to an OS: that is, something more sophisticated than what will run from a web page. The key here is not that web languages make it possible for more average users to create widgets, so much as that the barrier to creating relatively simple widgets is a bit lower than requiring a fully systems-enabled language (in other words, a language with full access to the complete system API of the relevant OS). The reality is that whether a systems language (such as C/C++ or a Pascal variant with full API access, as just a couple of examples) or a scripting language (JavaScript, VBScript, whatever) is used has zero impact on the usability of the resulting widget. The greater limitations are what’s available from within that language to access the GUI features of the platform and, furthermore, how much the developer actually wishes to use them. For example, in OS X there’s a nice little widget that encapsulates a standard Terminal, which, by the standards of GUI usability, is about as far away from usable as you can get, short of DIP switches and blinking lights. I’ve not verified which language it was created in, but I strongly suspect no web scripting language was involved, while in other cases I suspect scripting languages were used for widgets I’ve got in use.
I’ve not had time yet to fully investigate the Mac OS X API and see which languages (other than the system languages) have complete access to all the features, but it behooves Apple to make as many bindings available as possible for the languages that people actually want to develop in. What’s very important to keep in context is that it isn’t the language that enables the typical user to develop a decent widget so much as having proper editing tools and documentation. The net result (pardon any possible puns in reference to a web-enabled language) is that all developers of widgets with any meaningful functionality have to be able to program and solve problems in whichever language(s) they use. What that means is that people who can’t solve problems regardless of development language still won’t be capable of developing widgets in even the easiest scripting language, no matter how much they desire to do so. Adding a URL into a form, calling it a widget, and pronouncing “Hey, I’ve just developed a Dashboard application!” just doesn’t count, sorry.
“I’ve not had time yet to fully investigate the Mac OS X API and see which languages have complete (other than the system languages) access to all the features, but it behooves Apple to make as many bindings available as possible for the languages that people actually want to develop in.”
From what I have seen, the Mac OS X Dashboard provides some additional HTML elements (canvas for example) as well as some Javascript bindings for parts of the system. In addition there are “Widget Plugins” which can be implemented in Cocoa to provide Javascript bindings to essentially anything. I believe this is how the iTunes Widget works for example.
So I suspect almost anything could be implemented as a Widget, though obviously implementing plugins in Obj-C is a bit beyond anyone but programmers.
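For the cases that don’t need a Cocoa plugin, Dashboard’s JavaScript environment exposes a global `widget` object to widget scripts; as far as I know, `widget.system(command, handler)` runs a shell command and hands the handler an object whose `outputString` property holds the command’s output. A hedged sketch (the command path is illustrative, and the guard lets the same script run outside Dashboard, where `widget` does not exist):

```javascript
// Hedged sketch of Dashboard's system-command binding. In Dashboard,
// widget.system(command, handler) runs the command asynchronously and
// passes the handler a result whose outputString holds the output.
// Outside Dashboard the global `widget` is undefined, so we fall back.
function uptimeText(done) {
  if (typeof widget !== "undefined" && widget.system) {
    widget.system("/usr/bin/uptime", (result) => done(result.outputString));
  } else {
    done("uptime unavailable outside Dashboard"); // fallback when testing
  }
}

uptimeText((text) => console.log(text));
```

The guard is also good practice in real widgets, since the same script may be loaded in a plain browser during development.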
Once my WebKit port to Haiku and its associated browser are in good shape, I will probably investigate making a Dashboard/Sidebar like thing for Haiku. I’m not sure how useful it would be, but I imagine it would be fun to implement
But at the rate things are going, I doubt I will get around to that until February/March next year.
A friend of mine, who is not registered on OSNews, said:
“I think you should consider mentioning {NeXT,OPEN}STEP and WindowMaker, with their support for dockapps, as a gadget framework.
See dockapps.org.”
An article about the birth of multitasking that doesn’t mention the amiga? How rude. 🙂