The past few weeks, as you have surely noticed, I have written a few articles on various usability terms [part I | part II | part III | part IV | part V]. I explain what they mean, their origins, and their implications for graphical user interface design. Even though the series is far from over, I would like to offer a bit more insight into why I am diving into these subjects.
I have been using computers for a while now. My first ever experience with a computer was, well, I do not really know, as it happened before I was consciously aware of the fact. My parents had a Binatone TV Master Mk. IV, a games console with four Pong-style games on it. The device was distinctly simple: the controller was a pad with a wheel on it, and turning the wheel would move your paddle up and down the screen. Combine the fact that my parents have not the slightest idea when they bought the device with the fact that the Mk. IV came to market seven years before I was born (on December 1st, 1984), and you can see why I have no idea how old I was when I first touched a computer.
My first experience with a ‘real’ computer is clearer. Somewhere in 1991, my parents bought a 286 computer with an SVGA (!) screen, MS-DOS, and Windows 3.0. It was our family’s first foray into the world of the personal computer (apart from my parents’ experiences at work, of course), and it soon settled into our daily lives as if it had always been there. My brothers and I quickly found our way around the computer, and so, at the ages of six and seven, my brain was programmed with DOS commands (we barely, if ever, used Windows 3.0). I still have all the manuals and diskettes from that computer (I sort of collect manuals), and every now and then, I flip through them just for the fun of it.
From that point onwards, computers were a part of my life, and ever since, I have used a computer basically daily. When I was younger, I used it mostly for games (Keen! Wolfenstein!), but as I got older, mostly for high school, and of course for university.
Anyway, the reason I am writing down all this sentimental fluff is that from day one of my computing career, I have been amazed by some of the inexplicable stupidities found in (graphical) user interfaces – on all operating systems, of all times. Whether you are using KDE or GNOME, Windows or Mac OS X, Amiga or BeOS, Windows Mobile or Symbian, they all tend to have some major problems, GUI-wise. All of these systems regularly drive their users mad.
Consequently, for some time now, I have been thinking about how to improve the experience people have when they use a computer. My initial ideas were rather bold, and would have required major adjustments from their potential users. I had visions (not the scary end-of-the-world type, but more the fluffy-bunny-and-ponies type) of an interface where each element would have its own physique, its own effect on the interface as a whole. I wanted to make what I call a completely ‘physical interface’: I wanted users manipulating elements in an interface to feel as if those elements were real, physical objects. The elements would interact with one another, giving the user the illusion that the objects he was manipulating were somehow real. I called this concept Grow.
Let me explain with an example. Imagine you have a desktop background picture of a large autumn tree; the leaves have turned red, and are about to fall off. Various leaves are scattered across the ground surrounding the tree. Moving a window or another object across your desktop in a physical interface would cause that window or object to affect the leaves on the tree and on the ground; they would ruffle, and some might even fall off the tree and slowly drift to the ground.
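In rough terms, a purely illustrative Python sketch of the kind of simulation this implies might look like the following – nothing like this exists, and every name in it is invented: leaves near a moving window get a push that falls off with distance, then settle under damping and gravity.

import math
import random

class Leaf:
    def __init__(self, x, y):
        self.x, self.y = x, y          # position in desktop coordinates
        self.vx, self.vy = 0.0, 0.0    # current velocity

def disturb(leaves, window_center, window_velocity, radius=150.0):
    """Push leaves near the moving window; the push weakens with distance."""
    wx, wy = window_center
    for leaf in leaves:
        dist = math.hypot(leaf.x - wx, leaf.y - wy)
        if dist < radius:
            strength = 1.0 - dist / radius              # stronger when closer
            leaf.vx += window_velocity[0] * strength * random.uniform(0.5, 1.0)
            leaf.vy += window_velocity[1] * strength * random.uniform(0.5, 1.0)

def step(leaves, dt=1 / 60, gravity=30.0, damping=0.95):
    """Advance one frame: disturbed leaves drift, slowly fall, and settle."""
    for leaf in leaves:
        leaf.vy += gravity * dt
        leaf.vx *= damping
        leaf.vy *= damping
        leaf.x += leaf.vx * dt
        leaf.y += leaf.vy * dt

# Dragging a window rightward past a cluster of leaves at 600 px/s:
leaves = [Leaf(400 + random.uniform(-50, 50), 300 + random.uniform(-50, 50))
          for _ in range(20)]
disturb(leaves, window_center=(420, 310), window_velocity=(600.0, 0.0))
for _ in range(60):        # simulate one second at 60 fps
    step(leaves)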
This would be a perfect example of pointless eye candy, of course (and annoying at that), but I hope you get the idea of what I mean by a physical interface. In today’s interfaces, elements all exist in their own little world; they do not interact with one another, and they are completely static and dead: they literally are lumps of pixels, and nothing more. I think that this lack of ‘life’ is one of the prime reasons why so many users make seemingly stupid mistakes. Humans love interaction and sensory input, but interfaces today give little of either.
However, the plan grew larger and more encompassing by the day. In the end, it would have required mice that give tactile feedback when mousing over elements, and I must admit that I lost track myself of how elements ought to affect one another: what should happen when you move a window over another window, without making it ridiculously annoying? Answering that would require massive usability testing and experiments, and obviously, I do not have the means for that (I wish I had never dropped my university study of psychology in favour of linguistics…).
Additionally, the releases of Windows Vista and, later on, Mac OS X Leopard really opened my eyes: people really do not like change. I obviously already knew that, but I had never realised what would happen if the two biggest operating system companies were to make major changes to their interfaces. We all know the results: people went mental over Vista’s Aero, and it took people less than a few days to make Leopard’s interface look like Tiger’s again. This made me realise that my Grow concept, which had gotten way too complicated anyway, was doomed never to be accepted by anyone. It would be too drastic, too different, and people would hate it instantly.
And so I dropped Grow 1.0 in favour of a more manageable concept: Grow 2.0 (I am good with names). And this is where all the recent articles come into play: I wanted to learn the history of the graphical user interface and its associated terms. As I was delving through the depths of the internet, learning about usability studies, scientific research, and the stories behind various graphical user interface ideas, I realised that I should share this stuff with the OSNews readers. Additionally, the articles provide valuable feedback on various ideas and concepts (comments, people, comments!).
So, what is Grow 2.0? Grow 2.0 will be a set of Human Interface Guidelines that can be superimposed over existing HIGs without interfering with them. The guidelines will consist of various elements – little changes – that can be made to whatever GUI you are using. I am not a programmer, nor do I aspire to be one, so I will leave the implementation to the people who can. What I want to do is make computer usage just that little bit easier, saner, and less infuriating: by applying small changes, by taking various studies into account, by listening to the people who actually study this stuff. Grow 2.0 will be my vision of how to improve today’s interfaces, instead of some massive shift away from what people are used to towards something that is supposedly superior, “if you only took the time to learn it”.
I am not saying a thing yet about what kind of changes I am talking about, as the Grow 2.0 Design Document is nothing more than a set of random bullet points at this point; much of the stuff I want to see in Grow 2.0 currently consists of a bunch of neurons firing like crazy inside my brain. For the explanation of the name ‘Grow’, you will have to wait for the design document too.
The Grow Design Document (we can drop the 2.0, seeing as 1.0 never made it out to anyone anyway) will be just that: a document, licensed under a Creative Commons license, that programmers will be free to use, to take ideas from, and to implement in their products. And if they do not want to, or do not care, or never even hear of the document? Not a problem – I am not doing this for fame, or to proclaim that my way is The One Way (a condition so many other usability people seem to suffer from). I am doing this because I want to, because I like doing it. I cannot program, so this will be my contribution to the software world.
Thus, the document will end with a sentence I have used before on OSNews: do with it as you please.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
You go into all this introduction about what you’re talking about, and then you explain almost nothing! For shame!
Have you thought of going into writing serials, cliffhangers, etc.?
It’s very hard to comment on the meat of the article when all that’s there is the appetizer. But yes, I could have told you years ago that change for the sake of change in a GUI isn’t well received, and I admit to being one of those who resist it: when Windows XP came out with the Luna(tic) interface, I immediately switched to the Windows Classic decor, because it worked perfectly fine, amongst other things. That, and I got the impression that the classic look was faster than the newer GUI. Whether that was or wasn’t true (I wasn’t using the latest video card), that was the perception – and really, perception seems to be all that matters to users about a GUI, until you actually do objective studies with users and a stopwatch.
“System Properties > Advanced > Performance > Settings > Adjust for best performance” is how I roll.
Classic saves 20 megs over Luna…
That said,
Whoohoo!
I can’t wait to see this out.
Simply stopping the theme server will work. So I’m left with some huge window borders, but at least I have left Teletubby-land.
It’s great to have a popular news site because you can abuse it to hype up something that not only doesn’t exist, but is not even a concept. It’s a concept of a concept.
Don’t get me wrong, the usability articles are great, but this article belongs in a Thom Holwerda blog post and not on the front page of OSNews.
It may have been excusable if you’d posted the design document, but you’ve not even got that – just an idea. You just wasted everybody’s time hyping up your own not-even-concrete proposal.
I) OSNews IS a blog.
II) WE get to decide what belongs on the front page.
Perhaps in the loosest sense, but it’s more like a news site. Just like Slashdot is not a blog.
Of course, but what exactly are you trying to accomplish with a posting about a concept containing almost no information? There is nothing in there to make an intelligent comment on, so don’t be surprised when the comments are few and irreverent.
Err, that is only natural. The comments are/will be made on the usability terms articles. I just wanted to explain the why behind the apparently sudden interest and eagerness on my end.
OSNews is not a blog. A blog is a personal web log. This place has editors. That makes it a news site. If OSNews called itself a blog on the front page, fewer people would follow it.
The blog is there in case you forgot:
http://www.osnews.com/staff/
You certainly have a strange definition of “blog”, considering “blog” means “personal web log” to the rest of the world. As in, only one person writes the articles, usually one per day, that people can comment on. The main criterion for a blog is that there is only one author for all the articles.
That right there is what prevents OSNews from being a blog. You can’t have editors, and decision makers, and call yourself a blog. That would make it a news site. Hence the name as well.
>WE get to decide what belongs on the front page.
Sure, as long as you have the majority of the users behind your ‘blog’. A blog is more than just news/articles together with some comments. Btw, time for a change of the title – OSNews doesn’t fit anymore. But that’s just an opinion, as usual in a blog.
I) OSNews IS a blog.
Blog? Really? Why is OSNews advertised as a news site, why does it not say “blog” anywhere on the front page except “Staff Blog”, and why is there such a section if this whole site is a blog anyway? Isn’t that like… err, duh? I’m not saying anything about this article in question; all I’m saying is that OSNews is most definitely not a blog.
The thing about introductions, being of an introductory nature, is that they tend to, well, introduce things.
While technically you have fulfilled the bare minimum requirements to qualify as an introduction, you seem to have confused a personal introduction with a technical one. As in, “Hi, my name is Leo”, versus “Hi, my name is Leo, and my body is composed of 65% oxygen, 18% carbon, 10% hydrogen, and 7% miscellaneous evil.”
Please specify whether you’re using a free (à la Attribution or Attribution-ShareAlike) or non-free (à la Attribution-ShareAlike-NonCommercial, or anything else with an NC or ND clause) Creative Commons license. It does matter.
The non-commercial clause will NOT be in this document’s CC license. I want to enable ANYONE to use the ideas I will put forth, commercial or free, proprietary or Free.
“My birthday’s coming up in a couple weeks, guys! Who’s got presents?”
But in all seriousness, I agree about making interfaces more literal, more physical. This is why I believe the touchscreen is the way forward: it enables direct manipulation instead of indirect, through a mouse. It is one less layer of abstraction from the physical. I had a real “A-HA!” moment the first time I used a Palm Pilot, and I still believe it is the interface of the future.
I agree entirely. I had the same “aha” moment with a Psion netBook. I’ve gotten rid of it now, though, and I miss the intuitive experience…
Would this article have made it onto the site if Random Joe had written it? Is there any merit in it? I mean, your name is not the same as, say, a Jef Raskin’s. I can see the usability articles making it; they stand on their own and don’t need explanation.
What makes OSNews qualify as a blog site? I know you link to blog sites, but isn’t it more of an OS news aggregation site with a forum component? There appear to be editorials and news. No mention of blogs anywhere.
Anyway, good luck with your ‘Grow’ work; I am sure it will be worth the effort.
AKA limiting oneself to what can be modeled as physical objects, and to what one can do with them.
Instead of formatting text into a buffer with a scrolling viewport that we can control conveniently with the keyboard and that is relatively simple to implement, we’ll simulate reading a book wholesale.
First, the book will be modeled in 3D so that the book-reading experience will be as rich and immersive as possible. The pages of the book can be bent and crumpled in the manner that real paper can; it can even be torn and burnt (the virtual blowtorch application will come later). To change pages, you have to carefully (you’ll see why later) “grab” the edge of each page with the mouse and pull the page over. Sometimes the pages are a bit sticky and you’ll need to separate them. Other times, an occasional gust of wind will blow the book clear off the desk. We’ll even model the binding of the book breaking down, so that after a few hundred page turns it begins to fall apart. I guess with DRM you’d be forced to buy a new copy when this happens.
I suppose after everyone starts using the Sony book reader to read books, a paper book won’t be familiar anymore. So, we’ll just simulate the book reader instead.
I do agree with it: this is merely a concept of a concept and absolutely pointless – except for revealing the rude and harsh answer of a certain blog editor; defending his own story is not really the kind of stuff I want to read. I used OSNews (as the name states) as a news site; learning that this is actually a private blog just makes me go away like this:
Funny – you registered TODAY. You’ve posted ONE comment. Not exactly much of a loss, is it?
have you ever logged out?
You can read OSNews without logging in you know, I did for ages.
C’mon, frustrated geeks…
No need to complain about Thom’s temporary “egocentrism”. He just wanted to show you a decent introduction to something he thinks is worth hyping.
Personally, it worked for me, since I really want to read more about his GROW ideas.
Human interfaces are a crucial topic to me, even if I’m only a “script kiddie” and not a real programmer.
Please stop trying to figure out whether OSNews is a blog or not…
You completely missed the point.
I agree completely!
Psst – is it a blog?
I like ideas which can be summed up as succinctly as possible. How short will yours go?
Are we talking about GUIs, or window management? Or maybe both?
The example with the tree seems like a reverse border snap, i.e. instead of sticking the moving window to the closest one, push everything back to make space. That would be relatively easy to do on current WMs, but I’m not sure how useful it would be.
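To illustrate what I mean (a rough, hypothetical Python sketch – the names are invented, and it assumes axis-aligned window rectangles): when a dragged window overlaps another, push the other one out along the axis of least overlap, making space instead of snapping together.

from dataclasses import dataclass

@dataclass
class Rect:
    x: int      # left edge
    y: int      # top edge
    w: int      # width
    h: int      # height

def push_out_of_the_way(dragged: Rect, other: Rect) -> None:
    """Shift `other` just far enough that it no longer overlaps `dragged`."""
    overlap_x = min(dragged.x + dragged.w, other.x + other.w) - max(dragged.x, other.x)
    overlap_y = min(dragged.y + dragged.h, other.y + other.h) - max(dragged.y, other.y)
    if overlap_x <= 0 or overlap_y <= 0:
        return                        # no intersection, nothing to do
    if overlap_x < overlap_y:         # push along the axis of least overlap
        other.x += overlap_x if other.x > dragged.x else -overlap_x
    else:
        other.y += overlap_y if other.y > dragged.y else -overlap_y

# Dragging window a into window b nudges b 50 px to the right:
a = Rect(100, 100, 400, 300)
b = Rect(450, 150, 400, 300)
push_out_of_the_way(a, b)
print(b)    # Rect(x=500, y=150, w=400, h=300)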
Or maybe you mean interactivity at the application level? As in, when you move your file manager next to a media player, the file manager becomes a file browser for the media player application?
In any case, I’m not a big fan of implicit actions, even if they look more natural. I mean, dropping a book on my desk might make some papers fly around, even knock them off the desk, but why would I want this behavior emulated on my computer?
At this point, I think that posting more information about Grow will not only satisfy our curiosity, but might also help you create something that will actually be used, instead of a long document that comes out of the blue and that few will ever bother with.
Maybe I’ll whip something preliminary up, but I’m not making any promises. You see, it might lead to people getting the wrong idea of what Grow is. That’s why I’ve remained so vague here.
…So you’re saying I’d like using my PC more if there was a big tree shedding its leaves and dying as a backdrop, instead of wallpaper?
I think I missed something in the “vagueness” you’ve got going here.
1) Animated desktop where the desktop is aware of you moving windows around.
— This goes back to around 1997, or about ten years ago. Someone had created a program for OS/2 that had no borders and looked, to anyone who didn’t know better, as if it were the OS/2 desktop. It literally copied everything off the real desktop, passed any commands on to OS/2, and then displayed what happened. It was buggy, and the programmer ran into walls on OS/2, mostly due to ignorance of how some system things worked.
It was pretty cool though. One of the “desktops” he had was a forest with a bear that would stay hidden until you stopped working on your computer for whatever sleep period you had assigned. Then it would pop out above, below, or between windows. If there wasn’t any room to do this, it would take its paw and move windows apart so it could peek through. Pretty cool. Of course, when you activated any input device, the bear would hide again.
Moving windows did cause a little bit of tree movement, but not much. He also experimented with having windows that were used fade into the background using OpenGL.
At the time of all of this, graphic cards and programming and the guy just weren’t up to what he really wanted to do. I accidentally bumped into him on a message board and tried out different things he did. Once he ran into the walls he couldn’t figure out how to get past he lost all of his steam and disappeared and didn’t respond to e-mails after that.
Again it was really buggy and slow.
2) Moving windows around and having you feel that you are moving them over each other.
I can’t remember which mouse company did this, but one came out with a mouse with which you could “feel” when you went over the edge of a window or over specific icons. Not enough people found it useful, so I think it just died too.
> (what I call) “physical interface”
We’ve been talking about such interface concepts for years, and coined the term “organic interfaces” to describe them.
The term encompasses not just physics-type simulations, but also graphical effects that take advantage of various hardwired human mental traits/capabilities/limitations, presentation modes that avoid violating rules enforced in nature, etc.
Organic, physical… I prefer the latter term because it covers the subject better (for me). But yeah, they’re both accurate enough terms.
That was indeed more or less what I was thinking about, but as I explained, I’m forced to drop those ideas because they would be way too different from what we have now – nobody would want to implement or use them. We are stuck with the traditional windowed desktop interface/metaphor for now, so I believe it makes more sense to make improvements within that paradigm where possible – we can always brainstorm about more radical changes.
Some of those ideas WILL be in Grow 2.0, but just not as encompassing and radical as I had in mind.
> nobody would want to implement or use it
If you do it all at once, I agree.
But if you introduce concepts in the software bit by bit, over enough time, people will eventually get on board. So instead of expecting everything to change in one rev, get involved and start making changes in the code that work towards that eventuality. =)
Even though his introduction to Grow was a little vague, at least he’s giving us something. Maybe it’s a confusing concept to grasp or maybe he just hasn’t totally defined the concept himself, but either way he’s excited to share with people.
The articles about UIs were great and somewhat eye opening, and I agree, there are many things that need to be improved. I’ve had lots of ideas of my own too, although not enough skill yet to implement them. I look forward to seeing his ideas and possibly where they fit in with my own.
Good work, Thom.
It’s a shame that after decades of development, computers still do not work as intuitively as the original concepts of people like Vannevar Bush, Ted Nelson, Alan Kay, Douglas Engelbart, and Jef Raskin prophesied. Computers and similar devices should help us maximize our intellect, ease our communication, and generally make us happy. In today’s reality, computers are mostly dumb, asking us unintelligible questions in meaningless dialogs (for example, whether the user wants to remove shared library xyz, because it may be needed by an application other than the one the user is uninstalling… how could the user possibly know? The computer itself should be aware of its own inner workings).
Revolutionizing the UI of today’s operating systems is very hard, not only because of the complexity, but mainly because of the general nature of its purpose: you have to improve the UI and at the same time try not to alienate your users.
This is why it’s easier to start with a less general platform, or to improve small parts of existing systems – for example, improving the UI of computers with a predefined context (the iPhone, the OLPC concept).
There’s a series of books by Isaac Asimov called the Foundation series. It describes a situation where technological priests understand technology and nobody else does. People think technology is miraculous.
There is an effort to hide the details of computers from users, but sometimes, I think users must learn how computers work.
I have this romantic notion that in the 60’s, fathers only let their sons drive cars once they knew how they worked.
I think I might be idealistic. I understand how computers work and enthusiastically wish to share, but maybe there’s nothing wrong.
I am convinced, however, that sometimes explaining how things really work is less confusing than trying to hide stuff from users.
Douglas Engelbart actually suggested having special guides to help normal people navigate through the heaps of information envisioned as being accessible on computer systems. Yet today we have Wikipedia. So the “priest” trend may actually be less prevalent today than some of the “founding fathers” expected.
As for people thinking technology is miraculous: well, that’s already the situation. You don’t need to delve into a sci-fi novel to imagine that.
I know that it has become trendy to complain that the current crop of UIs is over a decade old, and that something better should be created, but that’s just silly.
In April of 2005, I downloaded and installed Ubuntu 5.04 on a system to use as my 1 year old son’s first computer. I loaded up gCompris, and selected the ‘game’ where you would move the mouse over the blue or red squares to reveal the picture underneath. This moved on to the more complicated task of having to click on squares, and then it repeated with smaller squares. It took him about 2 days to become completely comfortable with the mouse. After that, I showed him the power button on the front of the computer and the Applications menu. He received all of 10 minutes of instruction on how to use the computer.
Within a week, he was perfectly competent to use his computer, and was using applications that I never showed him. Now, if you want to insist that my kid is some kind of bizarre genetic anomaly that makes him into some kind of ultra intelligent superhuman, I won’t argue. But even if that unlikely scenario were true, I would expect even a slow 4 year old to be able to match wits with the smartest of 1 year olds. I think it is safe to say that if a user interface can be mastered by a slow 4 year old, (or a mutant 1 year old) the UI has to be pretty darn intuitive.
Improvements in UI would be great, but I’m going to have a hard time taking any adult’s opinion on UI seriously if they are having trouble with a UI a 1 year old child can use with ease.
We have children’s books and adult books. A child mode might be good. That doesn’t mean other modes are bad.
I don’t know what books you read, but I’ve never come across an adult book that was easier to read than a children’s book.
The previous poster’s point was that, if his 1 year old can quickly become comfortable with a UI, any adult should be able to. If usability is the whole point, how could an “adult” mode be more usable when loads of adults are still having real issues with “child” mode?
Personally, my years of development, testing, support, and UI design have reinforced that there are two kinds of people in the world: Those who try and those who don’t.
The people who “don’t try” seem to live in constant fear of breaking things. Moving buttons around or making interfaces more “intuitive” doesn’t seem to help these people, as they only use elements they already know how to use. Essentially, they learn — not by trial and error, as children do — but by being explicitly told what to click on, and when. They don’t explore. They don’t optimize their workflow. When they encounter something new or unexpected, they freeze and request a resident “magician” guide them through.
Redefining the workspace in terms and concepts that these people will understand is noble, but ultimately useless, I think. At its heart, computing is a complex activity that appeals to some minds and is completely foreign (or in the least, uninteresting) to others. Simplifying and familiarizing computing for the users who “don’t try” may be a vain endeavor, just as you can’t make driving any easier without limiting a driver’s options (at which point, they’d be better off on a train). For these people, changing the metaphor means that they will have to learn their intricate processes all over again, step by step.
My boss told me an interesting thing not long ago: She, a pharmacist with no CS training, felt more at home on the command line of a VAX in 1982 than she does in Vista or OS X. It was simpler, more predictable, less distracting and disorganized.
For all our supposed “gains” in usability, a lot of it has just been flash and sparkle, with little discernable improvement in efficiency or proficiency. That said, I wish you luck.
Exactly, which is why I’m not going to do that. It was my original plan (Grow 1.0), but I quickly realised it wouldn’t work (as said in the article).
That’s why I’m focussing on small improvements to today’s interfaces now. Tweaks here, changes there, to make it just a little less infuriating and more predictable. If you read the usability terms articles, you can definitely see some elements of Grow 2.0 shining through already.
I know it’s lame and all, but… me too.
I’ve tried to articulate this several times myself, but you say it better than I have.
You’re kidding, right? Children’s books are the same as adult books, except simpler. Just as I am not going to trust reading advice from an adult who has difficulty with a children’s book, I am not going to trust an adult’s opinion on UI if they cannot handle a UI a 1 year old can handle easily.
Last year, I was using a rule of thumb. If an adult cannot trivially learn what my 2 year old can, they are not competent to speak on the subject. Do you think that is unfair?
Last year, I was using a rule of thumb. If an adult cannot trivially learn what my 2 year old can, they are not competent to speak on the subject. Do you think that is unfair?
I think you should learn some psychology or biology. So because I cannot be bilingual by listening to other people talking around me, does it mean that I am more stupid than a 2 year old?
No, it doesn’t. In the same way, adults have more difficulty learning to use a new application or GUI, because the biological learning phase has already ended for them. Also, they have other things to do aside from playing around (because that is how children learn) with unintuitive applications – like taking care of their own 2 year olds.
Of course, you are right in that wanting to learn is important. But it is also important to keep the learning curve short.
@Savior
I think you should learn some psychology or biology. Because you CAN learn to be bilingual by being put in a fully immersive foreign-language environment with anywhere from one to dozens of full-time language instructors who will happily spend hours on end holding up objects and pronouncing their names. Add to this the very low standard that is applied to a 2 year old to consider them bilingual or fluent in the language. So yes, if you cannot learn a second language when given the same resources, you are more stupid than a 2 year old. Really, the ‘kids learn languages easier’ myth is a perfect example of bad research. I have known literally hundreds of people who have become as fluent as a 2 year old in a second language with only a few hours a week of study in a non-immersive environment. The facts just don’t support your premise.
@losethos2
No, I explicitly stated that the current crop of GUIs are simple and intuitive. I also stated that if an adult has trouble with the current crop, I do not believe them to be competent enough on the subject to have meaningful input. As to me wanting to promote my OS of choice… That is clearly a reactionary response with no thought behind it. I neither indicated what OS I would endorse, nor limited my praise to a particular OS. Some OS UIs that I would include in what are fundamentally the same would be Windows, Gnome, KDE, MacOS, OSX, OS/2, AmigaOS, TOS, GeOS, BeOS, PocketPC, and… Drum roll please… LosethOS. That’s right. Reimplementing the same old interfaces using text instead of graphics is neither new nor different.
All of these OSes have different levels of refinement, and some might have different focuses, but they are all fundamentally the same UI. A competent user of Windows or Gnome is easily going to figure out how to run a program on TOS or Amiga with little trouble.
As for being, as you say, “retarded”… maybe you should look inward when you use that word to try to support the position that a UI that is intuitive for the illiterate is not intuitive for the literate. Hotkeys are great and all, but they are far from intuitive. They are something that gives greater functionality once you have moved past the intuitive phase and into the trained phase. A screen full of text MIGHT be more productive for someone who has learned where the text should be for particular applications, but just because someone can read well, and can sort text from a jumble on the screen, does not mean that pictures stop being intuitive.
2 year olds form an impression of reality. I had to adapt my view of reality in adulthood, and it’s tricky. I have respect for 2-year-olds. They say the Kingdom of God is for the childlike for a reason ;-)
Your implication is that we’ve arrived at the best UI and you’ve closed your mind to anything else. I reject the argument that what is good for a child is good for an adult. One simple fact is kids can’t read! What a retarded notion it is to proclaim what’s good for the illiterate must be good for the literate.
I have invented a new user interface, and it works well for me. It’s based on keyboard navigation. Some activities, like programming, are primarily based on the keyboard. In user interface school, they teach that it’s bad to switch between input devices.
My interface is optimal for programming — the regular tasks you need to do to write and debug programs. Other interfaces might be optimal for web browsing.
Don’t be a fool and suggest one size fits all, from infants to post-graduate students.
If you wish to open your mind, see my operating system. Burn a CD and test drive it without installing, or look at the videos.
http://www.losethos.com
Your real motivation is to proclaim your operating system as best by distorting facts with silly arguments.
I agree with the intro Thom has laid out; it is correct that humans prefer to interact in a more meaningful way.
A very small example of this would be the iPod Touch I use. It uses a touch interface similar to other devices; however, it differs in its execution. For example, when setting a time for the countdown timer, you are not presented with a keypad of numbers but with a dial (similar to a Rolodex); you are then able to spin this dial with a flick of your finger – you can flick it fast or move through it slowly.
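To give a rough idea of how that flick behaviour works (a hypothetical Python sketch, certainly not Apple’s actual implementation): the velocity of your finger at release carries the dial forward, friction bleeds the speed off each frame, and the dial settles on the nearest item.

def flick(position: float, release_velocity: float,
          friction: float = 0.95, dt: float = 1 / 60) -> int:
    """Spin a dial from `position` (in item units) until friction stops it;
    return the index of the item the dial settles on."""
    velocity = release_velocity
    while abs(velocity) > 0.01:      # keep coasting until nearly stopped
        position += velocity * dt
        velocity *= friction         # friction removes a bit of speed each frame
    return round(position)           # snap to the nearest whole item

print(flick(position=3.0, release_velocity=40.0))   # fast flick: lands far away
print(flick(position=3.0, release_velocity=2.0))    # slow nudge: moves one item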
It’s a small example, but it highlights the need for more feedback from devices. I like the idea of Grow; I feel that the classic UI we are all used to today could do with an overhaul. We have computers which are incredibly powerful, and we use them for things we could only dream about 20 years ago; it seems that the only part of the computer which has not evolved is the UI. We are still using a technique invented in the ’70s to interact with our computers.
I’m all for eye-candy visual feedback; I like the concepts in Minority Report (sliding data into transparent memory cards). These all work around concepts we understand in the real world.
I have disagreed (silently) with Thom on quite a few things, and I can see why Thom’s style might irk some – but I think we are all big girls and boys on this site and can take the rough with the smooth.
However, despite the trumpeting elsewhere on this site as to the concept of openness, and how we can all build on ideas positively, no matter how small the original kernel (no pun intended) of the idea might be, precisely because they are shared and shareable, I find the negative, pettifogging and ad hominem positions in response to this particular piece somewhat puzzling – yes, shameful – and very much contrary to the concept of ‘openness’.
Thom is correct I think in one major point – the use of psychology and especially cognitive psychology by inference to help in the understanding of what happens to the human mind and its attentive abilities when it has to process information visually, as opposed to the linguistic, linear bias we see in current ways of thinking and (appreciation of) apperception.
Also, could the visual objects that Thom is thinking about become ‘learning’ objects? How might data describing their movements and interactions, driven by the user, be recorded, audited, and mined for relationships on an ongoing basis, so that the way a desktop experience evolved would be keenly linked to the personality and preferences of the user, and be employed to pre-empt what that user needed – without becoming simply a future quantum-computing equivalent of, just for example, an XP informational balloon?
So here is a concept that UIs should be improved in some unstated way, summarized in the last couple of paragraphs. I agree… but who doesn’t? Almost no one thinks current UIs are as good as they can be.
So what was the point of this article?
While I’m here, a word or two on intuitive UIs, as I see them:
A UI is good if, when I reflexively attempt to do something I want to do, the result I see matches the result I expect – where ‘I’ is each user individually.
You cannot hope to design one UI that does what ‘I’ expect all the time, where ‘I’ is me and you and everyone.
And on a related note… there is a concept among UI wonks that making the screen objects familiar and having them act like physical objects somehow makes interaction easier, functionality more discoverable and users happier. This is, as far as I can tell, mostly nonsense and probably a bad idea.
What happens when the user has no frame of reference for what is happening on screen? They are confused, then they learn how the screen-objects relate and function, then they can use them.
What happens when the user uses an existing frame of reference because the screen lies to him about what is being represented? The user is functional until the object on screen violates expectations by not behaving as he expects it should, then he is confused and annoyed, then he unlearns his assumptions about how the screen objects interact and learns how they really work.
What is the gain of lying to the user like this? Learning has to occur at some point anyway, unless the object model is 100% accurate, which is simply not achievable without some kind of neural interface (and maybe not even then).
How many manila folders do you use in a typical week? Probably not many. How deeply do you nest folders within one another? Probably not deeply. The UI-wonk reasoning is therefore that computerized manila folders should not be nested deeply, even though it is extremely useful and natural to do so, in an effort to not violate expectations. This is crazy.
Computer UIs should primarily be built in the way most useful for their purpose, disregarding user expectation; secondarily, they should be built so the user can tune them to match whatever the user’s expectation is; and lastly, they should be internally consistent, and consistent with each other, in whatever ways do not contradict the first and second points.
1) Write down the problem.
2) Think very hard.
3) Write down the solution.
1) Write down a very clear description of at least parts of the problem you want to solve
2) Come up with 5 simple, unfinished but already usable guidelines
3) Call it Grow 0.01
Tongue in cheek of course 🙂
G.R.O.W. rings too close to a male enhancement cream.
I would come up with something less likely to attract sarcastic digs.
I hope he does something that pushes the envelope and increases one’s productivity.