I can remember seeing my first calculator in the 1970s. It was shown to us by a teacher in school. Up to then all we had seen were mechanical adding machines. The calculator amazed us: it was silent, instantaneous, and even had a square root key, a function I never saw on any adding machine. The teacher explained that soon every home would have a computer. I couldn't believe it; computers were huge and filled rooms. Even a home computer would take up a living room. He was right, though: by 1977 we had home computers that weren't much bigger than a keyboard.
The first "home computers" (Apple II, C64, TRS-80) usually had a command-line interface and came with the BASIC programming language. Programs came on a disk, for those who had one, or on a plug-in cartridge. Those first home computers were used for a variety of tasks: word processing, sound analysis, decoding weather satellite pictures; they were even turned into heart monitors. The BASIC programming language (an OS in a way) allowed users to get their computer to do whatever they wanted. It seemed that there was nothing you couldn't do with a computer, and users were constantly coming up with new ideas.
Then in 1983 Apple introduced the Lisa. The Lisa brought the mouse, the GUI, icons, an office suite, and the ability to run several programs at once. It also had true preemptive multitasking capability. Then in 1985 came the Amiga, which introduced the concept of multimedia.
That was 1985; what about now? Is there anything truly new in operating systems these days? Before you say "of course!" consider that most of the features of Windows XP and Mac OS X could be found on the Lisa or Amiga, way back in the mid-1980s.
Yes, there have been continual changes and updates, along with improvements in OS stability and graphics. Still, is there anything really new? Has there been a "quantum leap" like the home computers themselves or the GUI in the past 15 years? When you think of it, there really hasn't. We're still using the same desktop-based GUI that the Lisa had in 1983. Yes, graphics have improved, but computers like the Amiga and Atari ST had multimedia and sound processing in 1985. And basic types of 3D games existed by 1984.
Suppose you could get in a time machine and take one of today's computers back to a computer user of 1985. What would they think? They would likely be impressed with the game and graphics capability, but I think they would be disappointed to find we're still using the desktop-based GUI, along with the keyboard and mouse. See, back in 1985 there was much talk about artificial intelligence, voice recognition, and so on. The possibilities for future computers seemed endless. Given the progress from the command-line computers of 1980 to the graphical computers of 1985, I think most people would have expected much more progress in the past 17 years.
What happened?
Part of the problem was mismanagement by the companies that made the Lisa/Mac and the Amiga. Although these computers were advanced, they were never given the chance to become a market standard, mostly due to the mistakes of the companies that made them. They were also ahead of their time. There were no PowerPoint presentations yet, so businesses didn't appreciate the graphics capability of the Mac and Amiga.
Although the Macintosh found a place in schools and businesses, internal problems and changes at Apple kept it from progressing much after 1985. Apple had planned to have a next-generation OS (along the lines of OS X or BeOS) for the Mac by the early 90s, but numerous mistakes and problems, along with management changes, kept it from happening. So Mac users had to wait until 2001 to see the new OS.
Secondly, with all the incompatible operating systems in existence, businesses were waiting for a standard. The IBM PC became that standard. The problem was that the IBM PC was introduced as a business machine, not a graphics machine. It was mostly a text machine with limited graphics capability. (There was a GUI for the IBM PC called GEM, made by Digital Research. It came out shortly after the Mac did and basically provided a Mac-like GUI for the PC. However, it never took off.)
With the IBM PC and DOS becoming the standard by the late 1980s, much of the progress made by the Lisa/Mac and Amiga was lost. The IBM PC needed to catch up. It wasn't until OS/2 and Windows 95 that the PC finally had a desktop GUI, multitasking, decent multimedia capabilities, and so on. In other words, features that had existed on the Lisa, Mac and/or Amiga 10 years before.
Still no progress?
So here it is 2002, and we're still using operating systems based on 17-year-old concepts (actually, the GUI, mouse, Ethernet, etc. were developed even earlier at Xerox). It seems like the computer industry has lost vision. Computers have become "business machines" and the status quo prevails. Where is the imagination and vision of the 1980s? The possibilities seemed endless. Computers were being programmed to talk, and to communicate in plain English via text.
A few hopeful signs have appeared recently. There is work being done on a full-fledged pocket-sized computer. Add LCD glasses as your display and a compact or virtual keyboard, and we'll have computers that go with us anywhere. Also, Microsoft has been working on a 3D-style GUI that will update the desktop-based GUI for the modern internet/email/multimedia world.
So, what should we have by now?
You might ask, "Well, what did you think we should have by now?" A good question. How about:
An improved GUI. The desktop-metaphor GUI was developed when most people used their computers for office functions: writing letters, doing spreadsheets, etc. Much of the time users just used one program at a time. The Internet and email were virtually unknown. Today that has all changed. It is typical to have several programs running at once. It is typical to be on the Internet, checking email, and writing a letter all at the same time. Several programs may be running, each with several documents open. The desktop GUI still works, but it needs an update.
You would think that by now multiple desktops would be standard. These allow users to have a desktop for each task group: games, office work, graphics work, etc. It would also be nice to have desktops with zoom and pan capability. In fact, instead of being forced to do all the pointing and clicking, it would be nice if the GUI worked like a game. You could "travel" through folders and zoom in on documents to fill the screen.
How about a 3D analogy, where you "turn" to your right or left to access an internet panel or an email/fax panel? A 3D spatial layout could make it easier to keep track of many items, even those off screen. Window management could also be improved. It would be nice to be able to rotate and angle windows to fit more on the screen at once.
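To make the multiple-desktop idea a little more concrete, here is a minimal Python sketch of how a window manager might group windows by task; the class and the desktop names are made up for illustration, not any real OS API:

    class DesktopManager:
        """Toy model of multiple desktops: each named desktop keeps its own window list."""
        def __init__(self):
            self.desktops = {}   # desktop name -> list of window titles
            self.current = None

        def add_desktop(self, name):
            self.desktops.setdefault(name, [])
            if self.current is None:
                self.current = name

        def open_window(self, title):
            # New windows land on whichever desktop is currently active.
            self.desktops[self.current].append(title)

        def switch_to(self, name):
            # Switching swaps the whole working set at once, not one window at a time.
            self.current = name
            return self.desktops[name]

    mgr = DesktopManager()
    for group in ("office work", "graphics work", "games"):
        mgr.add_desktop(group)
    mgr.open_window("letter to the bank")
    print(mgr.switch_to("graphics work"))   # [] -- a clean workspace for the next task

Zoom and pan would then just be a rendering layer on top of the same grouping, so the user could pull back and see all the task groups at once.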
I would have also expected that the OS and applications would be more seamless by now. Instead of OLE links between word-processing and spreadsheet programs, I would have expected programs to "morph" into a word processor or spreadsheet as needed. [Gobe Software does have an office application that does this, and Apple did try to introduce the concept as OpenDoc a few years ago.]
As far as command lines go, I would have expected them to have gotten past the arcane and hard-to-remember commands. Computers should have enough AI capability to recognize plain English to a degree. I think by now I should be able to type "place the folder called 'bills' in the folder called 'finances'" without typing the whole directory structure for each folder. And the computer should know how to do it. Or how about typing
"change my internet access number to ………." and not needing to go to a special panel to change the number?
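Even without real AI, a shell helper could fake a bit of this by searching for the named folders instead of demanding full paths. Here is a rough Python sketch of the idea; the folder names are just the ones from my example, and the behaviour (including what to do when more than one folder matches) is my own guess rather than any existing tool:

    import os
    import shutil

    def find_dirs(name, root=os.path.expanduser("~")):
        """Return every directory under root whose name matches, case-insensitively."""
        hits = []
        for path, dirs, _files in os.walk(root):
            for d in dirs:
                if d.lower() == name.lower():
                    hits.append(os.path.join(path, d))
        return hits

    def place_folder(src_name, dest_name):
        """Rough take on: place the folder called 'bills' in the folder called 'finances'."""
        sources = find_dirs(src_name)
        dests = find_dirs(dest_name)
        if len(sources) != 1 or len(dests) != 1:
            # Ambiguity is the hard part: with two folders called 'bills',
            # the computer still has to stop and ask which one you meant.
            raise ValueError("need exactly one match each: %r / %r" % (sources, dests))
        shutil.move(sources[0], dests[0])

    place_folder("bills", "finances")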
There you have it. Several ideas that could have been implemented in Windows or the Mac OS years ago. A few are being considered only now. As I said, it seems like the problem is that the computer industry has lost its visionaries, and maybe the "wonder factor". We no longer see computers as tools that can do anything we want, but more as machines that do the standard jobs: word processing, graphics, email, etc.
Without the visionaries, there has been little progress.
About the Author:
Roger M. is an engineer who works in Southern California. He has owned computers since the time when 64K was considered a lot of memory. He is interested in computers, cars, and pizza. Roger can be contacted at [email protected]
Along with the Ginger scooter, completely useless stuff.
What happened was that people realized they had better visions to fulfill. That boring little computer led to the most unprecedented growth in productivity in the history of human civilization: that's a much bigger and better vision. The 36-inch TV you buy today costs less than the 20-inch TV you bought in 1985. Your 2002 Civic costs the same as your 1985 Accord did, and your new Civic is a bigger car with a bigger engine than your old Accord.
Do you really want the U.S. government to spend trillions of dollars of public money to pave wider sidewalks so you can ride the Ginger scooter to work?
All of your conceptual visions are already theoretically possible; they are just not deployed.
AIs are busy at work in data mining. Voice recognition runs automated call centers and helps the disabled. Adobe already sells Atmosphere, a 3D browser plug-in, for surfing different 3D virtual worlds. Guess what: nobody is interested, because humans don't interact with files in 3D virtual ways.
I think most of the ‘improvements’ that were mentioned in the article would be called … bloat. I mean, I fail to see how having multiple desktops (isn’t this possible in Linux anyway?), a zoom feature, and a 3D environment would actually help me get my work done fast. As it stands now, if I have multiple apps open, I can simply click an icon in the taskbar or ALT+TAB and I’m there – simple and effective. Hell, that might actually be quicker than telling the computer ‘switch to program xxx.’
At any rate, perhaps the slowdown in technological improvements may be due to the fact that things nowadays are so much more complex. I mean, take games for instance: in the old days, you could have one or two guys developing a commercial game in their basement. Nowadays, it usually takes a team of at least several dozen programmers to create anything but a crappy shareware game.
Creating a simple word processor is one thing – creating something with bells and whistles like speech recognition is another – not like somebody could sit down in their spare time and actually program something like that on their own from start to finish – not without using someone else’s code as a foundation anyway.
The same argument could be made for any field. Look at a car. Gee, they all still have four wheels, they all still run on gasoline, they all have steering wheels, accelerators, brakes…..where’s the progress?
We haven’t seen much change in computers because we have a system in place that works, and works REALLY well. It works exactly as it should (most of the time) and because of that it is absolutely boring. I would love to see computers move into something new, but when you start proposing ideas, you start to see how good the current system is.
A 3-D desktop is a nice idea. However, how do you make it easy to manipulate windows in 3D space? How do you keep track of what is where? You would need an onscreen radar to tell you not only which direction to move to find a window, but how far away it was and what direction it was facing. Seems a bit less efficient than pointing and clicking, doesn’t it?
As far as command lines go, plain English isn't very good, because it is not what you say, it's how you say it. A computer has to understand context to understand plain English, a task hard enough when you can hear the tone of voice and see body language. Even you and I can't always glean what a person means.
And don’t even get me started on voice commands, most computer functions can be done faster typing.
It’s not lack of innovation, it’s creating a framework that works too well. Show me another field that’s changed so radically as you expect computers to.
One other thing I forgot to mention …
I think another reason for the not so huge technological leaps is because if you actually have some kind of vision and are working to see it come to light, you’ll eventually wind up in court, being sued over anti-competitive or copyright issues – either that or else companies spend more time bickering about licensing fees than they do actually developing the technologies!
Roger; one note about your scenario of taking a modern computer back to 1985: it is amusing to realize that they would probably be disappointed in the lack of conceptual advances, but they would be absolutely flabbergasted by the amount of RAM, hard drive space, processor speed, and network bandwidth we have. They would be even more surprised to learn that with all that extra power, we users still have to drum our fingers waiting for things to boot and load.
In short, most progress in the past 15 years has been quantitative. And the GUIs have just used up all of that quantitative power to give us more eye candy but almost no conceptual advances. We still have clumsy hierarchical filesystems to navigate up and down. We still click on menus and icons. We still click buttons to "save" information.
The only (somewhat) revolutionary advance during this time has been the proliferation of that crazy decentralized network we call "the Internet", and the hypertext method of navigating. But hypertext was conceived back in the early 60s, and the Internet grew out of the ARPANET of the late 60s. So again, the progress has been largely quantitative.
Another example: the theory of the relational database was invented in the early 70s, and in 30 years, the industry still hasn’t gotten it right. To this date, no truly relational database exists. (Think I’m wrong? Just read Codd & Date)
Most of what we have seen since the 80s is a frenzy of activity, with very little true vision behind it. We are constantly re-inventing the wheel, rather than asking whether we even need a wheel. And, to hide the fact that we are not really advancing conceptually, vendors try every year to pack more “features” into their software, to the point where the average piece of software has hundreds of menu choices and configuration options that the user never has time to deal with. Again, we see quantity, but not quality. And since the software is so bloated and over-complicated, we employ armies of coders, armies of testers, and then armies of “consultants” to actually make the software work as advertised.
And is the average user any happier with computers now than 15/20 years ago? I think it’s the reverse. Operating systems are more complicated than ever, and have more pitfalls and gotchas than ever. Among all the “average users” I know, I don’t think I know anyone who is really happy with the computer overall, but they just pick their one or two favorite applications that they have bothered to spend the time to figure out, and hope that the application isn’t totally changed or screwed up in the next release.
The main problem, as I see it, is that for most of the 80s and 90s most computer companies and developers approached things from a perspective of divergence, rather than convergence. They didn’t want their system to work like the other guy’s system, nor did they really want standardization, because they all feared losing their perceived marketing edge. By the late 90s, we finally started seeing a push for standardization, with the internet, Java, Linux, FreeBSD, etc… all helping to tip the scales toward open standards. But, now what we have is a horrendous proliferation of standards boards. W3C, IETF, IEEE, ISO, SESC, ANSI, EDI, etc…
And, rather than fix our standards and try to simplify things, we continuously invent new standards, along with new headaches, ad nauseam. This is one reason it is hard to develop half-decent software. As we developers work, we feel the ground beneath us constantly moving, with all the business decision-makers involved constantly pushing for new, new, more, more, without ever having the chance to sit back and evaluate what is even worth doing, much less take the time to make the software work properly.
My plea: computer science majors, developers, engineers; can we take some time out to really learn and explore the theory behind what we are doing, rather than just charging out there and building things? We have passed over so much potential good stuff already that it is a shame. Let's go back and really examine the best ideas of the past 30 years, then compare them to what we actually have, and build something qualitatively better.
>>>And is the average user any happier with computers now than 15/20 years ago? I think it’s the reverse.
The thing is that the "average" computer user base has grown from a few million people to a few billion people.
>> think most of the ‘improvements’ that were mentioned in the article would be called … bloat. I mean, I fail to see how having multiple desktops (isn’t this possible in Linux anyway?), a zoom feature, and a 3D environment would actually help me get my work done fast. <<
The same could be said (and was said) about the GUI itself: "why do we need this, a command line is fine". Yet the GUI has proved invaluable. Oh, and the flying car is still in the works.
>>AIs are busy at work in data mining. Voice recognition runs automated call centers and helps the disabled. Adobe already sells Atmosphere, a 3D browser plug-in, for surfing different 3D virtual worlds. Guess what: nobody is interested, because humans don't interact with files in 3D virtual ways.<<
Good points, the technology has been put to use, but why not give this technology to PCs? Windows Help sure could use a little AI. I think humans could interact with files in 3D, the same way that games work.
>>The same argument could be made for any field. Look at a car. Gee, they all still have four wheels, they all still run on gasoline, they all have steering wheels, accelerators, brakes…..where’s the progress?<<
Actually, cars have advanced: computers make engines run better, and you get far more features than years ago. And cars could/should have gone much further, with turbine or electric cars, for instance.
>>We haven’t seen much change in computers because we have a system in place that works, and works REALLY well.<<
You may be right; maybe the current desktop/icon arrangement does the job, but I don't think so. Microsoft doesn't seem to either, hence their Freestyle project.
In no particular order…
- processor speed increases allow for better compression, letting us decompress audio (MP3) and video (DV) in real time. Digital everything.
– affordable printing/scanning/imaging in colour.
– affordable networking and all P2P/client/server thingies. I can view satellite photos in near real time (weather satellites). I can watch TV/listen to radio over the internet from anywhere in the world.
– incredible miniaturization – think laptops, digital camcorders, MP3 players, handhelds, mobile phones etc.
- colour LCD/TFT devices. Sharper, bigger CRT monitors.
– affordable multi CPU systems (btw, what happened to transputers?)
– monopolisation – yes, I know, curse M$ all you want, but I can now walk into any computer store and buy software/hardware which is guaranteed to work on my system. Not so in 1985.
It's Friday afternoon, home time; see y'all.
I think the reason 3D won't work for desktops is because 2D is superior. I say that because if you look at desks (minus the computer-related stuff), people put the things they use the most on top and shove everything else in a drawer. With 3D desktops you can have a lot more stuff, but it's layer upon layer of objects to remember, which would slow you down (the game example you gave I think proves this: in order to get to a door at the end of a hall and on the right, I have to walk all the way down to the door and then to the right; in 2D I just go right). That's also the reason I have very few icons on my desktop: the programs I run the most are close at hand and easy to find. By adding more (either in a 3D space or just more icons) you spend time looking for a program you use a lot amongst all the programs you haven't run in days.
I also think people always mistakenly expect there to be perpetual leaps and bounds in new fields when they emerge (look at TV: after color in the 50s, nothing big until cable in the 70s, then mini-dish satellite in the early 90s and HDTV starting in the late 90s; 50 years and only 4 advancements that made it to the mass market, and dozens of failed attempts at improvements with various interactive plans and whatnot). Computers are actually ahead of the curve (GUI early 80s, internet mid 90s); 2 in 20 beats 4 in 50 imo. Of course I could be wrong, and I just use my desktop space differently than everyone else. BTW, I think MS' new 3D windows system is relying on the video subsystem to show everything, relieving some of the stress on the CPU and system RAM.
Well, I think only hardware has really improved, and this is mostly due to manufacturing technology. If the process sizes for chips hadn't shrunk all the time, the P4 would have maybe 10 Hz or so…
Software, OTOH, has not improved. The i386 already allowed multitasking, and that was 1985. The first OSes that used this were Linux and BSD in 1992; Windows has only really done it since NT.
All these systems, however, take ages to boot into a usable state, while the only improvement over, say, an old Mac or the Lisa is more animation (WinXP), transparent stuff, etc.
Applications don't really interface nicely, and programmers have to _reinvent_ the wheel all the time. Why is this so? Well, we mostly have software written in C dialects, which are notorious for weak type checking and low levels of abstraction, while the only 'improvement' of Java and the like is a form of object orientation that doesn't even match the elegance of Smalltalk, a language from the '70s!
It's plain and simple: if you hack 0s and 1s all the time, and define no new abstractions, you end up with unmanageable bloat that doesn't cooperate with other programs.
For this exact reason DOS sucked, and for the same reason X Window has no real transparency, and we have to define layer over layer, like X, Gtk, Gnome, etc…
If you are interested in the other direction, which is reflective systems, take a look at http://www.tunes.org or learn other things than C++ and UML (which I sadly have to do right now…).
They were ahead of their time and so suffered the same fate as everything else that is ahead of its time…
However, the concepts live on. In a sense, massive distributed computing projects over the internet have some similarities, AMD's HyperTransport has similarities; there are plenty of examples. Admittedly none of these are currently built into CPUs (though I'm sure I recall seeing something about AMD's HyperTransport and integration with CPUs…)
I have had similar thoughts on this subject. The first business computers were teletypes that printed on paper instead of an electronic screen; later came the text monitor that could display characters without the hassle of paper and ink. Then came bit-mapped monitors that could handle graphics. This all happened by the mid 80s. Today we are still using mice, keyboards and bit-mapped monitors. It is a useful system and has served us well, I don't mock it. However, imagine a 3D display, where instead of having a flat monitor, you had a 3D space where you could truly have control of the length, width, and depth. This would make many tasks on the computer fundamentally easier. Modern 2D apps could be run in an emulator in the 3D environment (like a terminal window emulates a text-based display in a graphical environment today). The possibilities are endless. These are the kind of visionary ideas that the author is referring to. I hope to see them someday! Well, food for thought; call me crazy if you want.
Later, Skipp
Sorry, but I remember always having been very skeptical of all these wonderful new things that were promised.
– voice recognition: for this to work very well, we’d need VERY intelligent computers.
Besides, your neighbour at the office is going to kill you if you use it!
- AI, which brings many wonders..
Even at the end of the 80s it was already apparent that AI would stay a big A and a very small I for a long time.
- 3D desktop? Give me a break! I doubt very much that it will be useful unless we also have very nice 3D screens.
There are incremental niceties which would be very welcome: a flat panel display, a QUIET computer, a computer with no HDD, only permanent memory.
But a revolution in the near term?
Maybe, but I don't foresee it.
The next big conceptual change that's bubbling up has been tried a number of times before and might get it right this time: handwriting recognition and the "piece of paper" metaphor.
The keyboardless tablet computers are getting close. Coupled with good handwriting recognition and a data-mining search facility, these, I think, will be the next big change in how we interact with our computers.
Aren’t web services meant to be the next big thing? Currently the actual information content on the web is unstructured (well, it’s more or less written according to the grammar rules of a range of human languages, but for the purposes of computer data mining it’s pretty unstructured). If some of the information could be provided in a more structured way, it would become a whole lot more useful.
In reply to Rick Morris: hey, don't hassle hierarchical filesystems, I think they're cute.
And I agree with rdoyle720, plain English? Know weigh, their R context issues hair that don’t beer thinking about. DOS/Windows already has enough trouble with structured commands:
C:\> XCOPY /s MYDIR1\*.* MYDIR2
Does MYDIR2 refer to a [F]ile or a [D]irectory?
Imagine how much worse this would get. Besides, not everyone speaks the same “plain English”, and, of course, not everyone speaks English at all, plain or otherwise. When someone moves into a new field, they learn the lingo. Conventions are required for effective communication, just let them be sensible conventions.
—
James
… -all these cool ideas not being developed. I think that Roger has a pretty even keel about the history, but it seems like he feels trapped with not being able to implement ideas that he feels would benefit humankind. I think that the problem lies in the fact that engineers don’t do customer relations. They hang out with other engineers and power-users. Believe me, the average (there’s that word again) user has a tough time dealing with the concept of shortcuts. Don’t confuse them any more with a 3D desktop and a flawed AI (because “flawed” is what we’d get first.)
>> I think humans could interact with files in 3D, the same way that games work.
Games are not meant to be efficient. They are meant to amuse and soak up time.
The 3D desktop seems terribly inefficient. I don’t want to feel like I’m trapped in a virtual maze while trying to find the file or program that I want. I can read shortcut and folder labels just fine. I agree with Mr. Doyle. I can’t see how a 3D environment would be any more efficient. I would just be doing the same thing in a different way. You would still be slowed by the way you input the data.
>> It is typical to be on the Internet, checking email, and writing a letter all at the same time. Several programs may be running, each with several documents open.
What does this matter? The user can only actively use one at a time anyway. Right now, I don't find that I have the ability to view a web page and work on an unrelated document at the same time, even though I may have several of these open at once. Until the user can multitask in this way, the current method is just fine.
Perhaps I don’t understand your “pan and zoom” concept, but it is already possible to have a desktop larger than your screen. No one uses it. It is a hassle to hunt around the hidden areas of your desktop for your file or folder. It is neither fun nor productive. That is why bigger monitors were invented. Less scrolling/panning/hunting == more productivity.
Your AI command line neglects one thing: multiplicity. If there is more than one instance of a folder or "Internet connection", then the user would have to be presented with a choice anyway. Then we are back to being less productive. Even if you were able to dictate commands to the computer, the computer would then have to relate the choice back, leaving the user with the feeling that they should have just used 5 mouse clicks and saved themselves 30 seconds (which is a long time when waiting on an appliance.)
I think that Roger's ideas are a great exercise in "what could be" and I applaud the thinkers that brainstorm new ideas, but I don't think that these ideas are practical for the here and now. That isn't to say that these things shouldn't be worked on for the future. Let's get them ready for a generation of less technophobic people. I feel that the majority of computer users right now are used to the good ol' desktop concept from when all they had on their desk (if they had one) was an in/out box, a calendar/clock, and a calculator. I think that where we are now with our technology is "just right."
>>Microsoft doesn’t seem to either
By the way, Microsoft is not the definitive authority on user-friendliness or efficiency.
Oh, and Zenya, yes, having a de facto standard OS does make standardisation easier. The same principle could be applied to any other industry too. For example, if there were one car maker recognised as the de facto standard, then all mechanics could train for certification from that one company. Then everyone could be sure that they could go to any mechanic and get their car fixed. Many people seem to see this kind of arrangement as absolutely wonderful, but I personally don't like it.
—
James
You complain that today's existence is not the future 1985 envisioned, yet the technological advances you claim we should have by now are trivial and explained away by basic concepts of user interface design.
The beginning of your article talks about the computer revolution, and by the end of the article your argument is that the revolution of today has not happened because you can't type a long sentence into your computer and have it "do what you mean and not what you say".
Give me a break. The revolution of today is that "Roger M." has the opportunity to have his loosely thought-out opinions heard, and subsequently shot down, by thousands of people on the internet. You said it yourself: "in 1984 the internet and email were virtually unheard of".
– more power efficient processors, i want my linux desktop 24/7 in my pocket, like my mobile phone.
– better connectivity. the ability to use ANY channel of communication.
– more bandwidth, everywhere. i want to read news while on the tube, train and walking in the park!
- less code bloat & more security. i don't want to have to think about the danger of my always-connected, always-on mobile device…
PS: linux already has multiple desktops and they're quite useful….
will not work. it is too time consuming. when you think of a tool (like a computer) you want to make accessing information quick, simple and easy.
navigating through "corridors" to get to my documents is not as efficient as clicking on the folder.
the problem of a better UI than the desktop deals not with coolness, but with: is it quicker, easier, and simpler than the desktop?
and plain english commands are a problem. english is very complex grammar-wise, as is any spoken language; it has to be, because of the vast number of ideas we have to express.
computers do not have a lot of ideas that they have to express. a simple set of commands that can be combined into complex operations is a much better way of dealing with them. we might be able to get rid of file directories if we can successfully marry a database with the file system, then saying:
mv foo/myfile bar
would move the file "myfile" from the folder "foo" to the folder "bar" without having to remember the directory structure.
that is an improvement that I see coming, but I think that making the UI easier, more efficient, and simpler is going to be very hard.
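as a rough sketch of what marrying a database to the file system could look like from a script's point of view (this is just an illustration of keeping a name-to-path index, not any real filesystem API):

    import os
    import shutil

    def build_index(root):
        # walk the tree once and record where every file lives, keyed by bare name
        index = {}
        for path, _dirs, files in os.walk(root):
            for f in files:
                index.setdefault(f, []).append(os.path.join(path, f))
        return index

    def mv_by_name(index, filename, dest_dir):
        # move "myfile" into "bar" by looking the name up, not by spelling out a path
        matches = index.get(filename, [])
        if len(matches) != 1:
            raise ValueError("expected one file named %r, found %d" % (filename, len(matches)))
        shutil.move(matches[0], dest_dir)

    index = build_index(os.path.expanduser("~"))
    mv_by_name(index, "myfile", os.path.expanduser("~/bar"))

a real database-backed filesystem would keep that index up to date automatically instead of rebuilding it on every walk.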
it is not a question of whether humans can access files like a game of Quake; it is the question "is accessing files like a game of Quake efficient, simple, easy?"
my answer is no, it is not efficient, simple, and easy.
when was the last time you misplaced a file on your computer (and I mean one you created and put in a place like your home folder)?
when was the last time you misplaced an object in the real world?
a 3d interface does not add anything except flashiness, a coolness factor so counterproductive that people would not get their work done.
>As far as command lines, I would have expected they would >have gotten past the arcane and hard to remember commands.
About 8-10 years ago, when I was already using OS/2 but Windows 3.x was yet to make the splash it did, I read an editorial (by John Dvorak, I think) mocking the hype around GUIs. To paraphrase: "Why would I want an idiot working for me who couldn't understand 'copy letter.doc a:'". Still true.
>Computers should have enough AI capability to recognize
>plain english to a degree. I think by now I should be able
>to type “place folder called “bills” in the folder called
>”finances”,
Great, all the productivity gains of the PC revolution down the drain. How about “move bills finances”, assuming your directory structure is logically laid out.
GUIs can be a real productivity killer for simple file operations. At least in NT, deleting or copying dozens of files in a folder takes forever as the little animations and constant screen updates take place. I guess I do this more often than most users, working with source code and CAD files, so it's probably not a big deal.
>without typing the whole directory structure for
>each folder.
4DOS et al. provide this functionality. And other shareware provides virtual desktops. All in all, this article is not very visionary.
I think we don't see major progress in computer science (only incremental improvements) because it's based on the same technologies. The same input/output: mouse, keyboard and screen. The same type of electronic brain: the processor.
The day we find a new way to compute things and to interact with our "computer", it'll be a new world… and your amazing "Pentium 14/755 GigaHertz" with your wonderful 3D desktop windows (which will basically always do the same thing as before) will be obsolete.
I think the future is: speech recognition, artificial intelligence, image processing and 3D vision. All of them will give computers more human ways of interacting.
it takes time …
geoffroy
>However, imagine a 3d display, where instead of having a flat
>monitor, you had a 3d space where you could truly have
>control of the length, width, and depth. This would make many
>tasks on the computer fundamentally easier.
Care to elaborate which tasks this would make easier? I use solid modeling CAD programs and I can see where this MIGHT have SOME benefit to me. Of course, transparency is a problem. But how is this going to benefit most users in any way, let alone a profound one?
This all reminds me of the photorealistic talking head that was supposed to be part of Workplace OS (the microkernel-based OS/2 for the PowerPC that was eventually canned). It was supposed to sit on your screen and make you feel more comfortable, showing & telling you what to do in response to your spoken questions. I thought it was ridiculous the first time I heard it. It was even more ridiculous when it was announced that it would not be included, and the pundits said, “Well, now OS/2 has no future. What does OS/2 have to offer over Windows 95 if it doesn’t have a photorealistic, talking head?” Of course, OS/2 didn’t have a future, but that wasn’t the reason.
— QUOTE —
will not work. it is too time consuming. when you think of a tool (like a computer) you want to make accessing information quick, simple and easy.
navigating through "corridors" to get to my documents is not as efficient as clicking on the folder.
— END QUOTE —
Did anyone see the Michael Douglas movie, _Disclosure_, based on the Michael Crichton book? The big gimmick was that Michael’s company’s database was stored in a virtual filing room, just like is being discussed. In the climax, Douglas puts on some VR goggles and gloves to retrieve a critical file. As he moves thru the room and touches some button, there’s some CGI magic as something appears that looks… JUST LIKE A FILING CABINET!!!!!! By comparison, the “hero types really fast” scenes in many of today’s action and suspense movies are positively thrilling. Admittedly, it was one of the first uses of CGI that I know of.
And the secret, killer product that Douglas’ company was working on was a CD-ROM drive TWICE AS FAST AS ANYTHING ELSE AVAILABLE!
Rob C.
It is precisely my point that the incremental improvements I suggested haven't even been implemented in the past 17 years.
You're thinking of too complicated a 3D desktop. Think one room, turning side to side, maybe down also. The concept would be like sitting at a desk and turning right or left to your files or fax or documents, etc., with a quick move of the mouse to one side of the screen.
Ever have 10 windows open and try to figure out which is which on the taskbar in Windows? You can do a lot of pointing and clicking to find the right window. With a 3D or pan-and-scan desktop, you could move the pointer to the right of the screen and scan across open windows much quicker. It also makes open windows easier to keep track of. People can "see" spatial relationships better.
Yes, I would say the revolution hasn't happened, simply because not even the incremental concepts I mentioned have appeared yet. A few things are just now being implemented:
Search engines that can really find answers to plain-language queries like "how many red Corvettes are in California".
The Freestyle GUI, which allows better manipulation of multiple windows and desktop items in a 3D environment, and yet keeps it fairly simple for the user.
The Gobe all-in-one document concept, which allows the user to use a document like a sheet of paper: write, draw, and add tables almost seamlessly, instead of the clumsy OLE linking scheme.
All these could have been done by the early 90s.
would you be blown away by a new 2015 computer (maybe running “Apple OS XX” or “Windows WOW”) that looked pretty much like the one sitting on your desktop? i’m sure the apple would be nicer looking than the dell, but it’s still the same loud humming box with slots in the front and a ton of cables coming out of the back? and a desktop with 7 items (all with cords) on it just to get it to work?
and when you booted it up it took a minute and a half to come up to a ‘desktop’. and then you had to click and right click and click-hold 10 times to start creating your document or image? sure, the screen was 30″ across and the icons were jumping and smiling at you. and the hard drive (of course it has one) says that there was 1.6tb of space left. you had to click another 5 times to find a browser to launch. for fun, you typed (yep, you still type slow) “hbo.com” (after mousing into the URL bar) and up came the site in 1 second. a few more clicks and you were streaming HBO. well, you had to setup a bunch of plugins and dismiss a bunch of spyware, but there it was, the 15th season of the sopranos zoomed to full screen.
fun? yes. groundbreaking? no. these are things that are basically available now, especially if you have the pipe and dual G4 that i have ;-). you'd probably come back to 2002 feeling a little better about your high-end XP box.
i have to agree with the author, we've come nowhere fast in most areas except for the internet. the OS and the user experience have not improved to the point they should (and could) have. the marketplace squashed much of the true innovation like Apple's OpenDoc, NeXT's distributed computing, and Sun's Jini & Java. remove the internet from the equation and WindowsXP can't create much more than the mid-80's macs and amigas combined.
I enjoyed the read. It reminded me of the days gone by when I could write a program in less than 1K. I think computers evolve and move into new avenues each day; home computers are just a small part of the computer industry.
We are owned by a Japanese company who have lots of real hi-tech stuff. Nothing you'll see posted here at OSNews for a year or so. But with all that hi-tech computerised hardware, our Japanese FD still uses an abacus!!
Computer users haven’t hit the billion yet!! but we’re working on it.
Most of Jef Raskin’s The Humane Interface is about the concept of a zooming UI. It isn’t 3D, but it does allow panning around and zooming in on a workspace.
What’s the point of that? The point really isn’t the implementation, it’s the recognition that our current interface model is centered around applications and physical organization, and as our information load proliferates, it’s worth considering models that are centered around data and logical organization. Raskin’s “ZUI” is a showcase for this concept. In a ZUI (or any other UI built around similar paradigms), the idea of applications largely disappears, replaced by ideas of command sets and data handlers. (A rough analogy would be something like browser plugins, expanded to handle data editing and creation.) In fact, the idea of conventional files and directories–from a user’s standpoint–also goes away, as do concepts like explicitly saving and loading documents!
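To make just the zooming part concrete, here is a toy Python sketch of the coordinate math a ZUI needs; this is my own illustration, not Raskin's actual design. Everything lives on one large workspace, and the view is nothing more than a pan offset plus a zoom factor.

    class ZoomView:
        """One large 2D workspace; the screen shows a panned, zoomed window onto it."""
        def __init__(self):
            self.pan_x, self.pan_y = 0.0, 0.0   # workspace point at the screen's top-left
            self.zoom = 1.0                      # screen pixels per workspace unit

        def to_screen(self, wx, wy):
            # Map a workspace point to screen coordinates under the current view.
            return (wx - self.pan_x) * self.zoom, (wy - self.pan_y) * self.zoom

        def zoom_at(self, sx, sy, factor):
            # Zoom about a screen point so whatever is under the cursor stays put.
            wx = self.pan_x + sx / self.zoom
            wy = self.pan_y + sy / self.zoom
            self.zoom *= factor
            self.pan_x = wx - sx / self.zoom
            self.pan_y = wy - sy / self.zoom

    view = ZoomView()
    view.zoom_at(400, 300, 2.0)            # zoom in around the cursor at (400, 300)
    print(view.to_screen(400.0, 300.0))    # that workspace point is still at (400.0, 300.0)

Documents and projects are just regions on that plane, and "opening" one means zooming until it fills the screen, which is part of why conventional files, saving, and loading stop being separate concepts in Raskin's model.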
Unfortunately, I’ve found that describing it in cursory detail is a lost cause. It took me two reads of the book–and a brief email discussion with Raskin himself–before the lightbulbs started coming on. It’s difficult to see past fifteen years of doing things the Lisa way, and I’m a user interface geek. I’ve found many users, particularly those who literally grew up using the conventional desktop paradigm, are openly contemptuous of anyone suggesting the future of UI design might lie in a different direction than “same desktop metaphor but with user-customizable widgets.”
The next "breakthrough" in computing will be augmented reality. For those who do not know, augmented reality is basically a computer hooked up to a virtual retina display (a laser that "paints" images on your retina, not an LCD glued to a pair of glasses), with image recognition and maybe even GPS. It will let us get information about anything simply by looking at it. Need to know where the bus is in relation to the bus stop? No problem. Want to know the price of that widget sitting in the window? No problem. Want to be able to find a product in a warehouse, or how about the schematics to a TV, or the plans to an engine IN 3D and relational to the engine you're working on right now, etc.? It WILL revolutionize computing as we know it because it will become completely seamless, an extension of our minds and our eyes, so to speak. I personally cannot wait. There is a lot of work to be done still, but energies are being burnt elsewhere unfortunately. What we need is CHEAPER non-volatile memory, I'm talking at least 50+ gigs in something the size of a deck of cards; better virtual retina displays (currently they are limited to red only; sound familiar? just like the quad LCD on this OS390 from IBM I have here that they built in the 70's); smaller GPS systems that are accurate to within millimeters (probably will never happen because of security issues); and a fast network link to access a central database that houses information on the location/size/shape etc. of things. This is what we should be shooting for. Unfortunately it will take someone with more $$$ than I will ever have to finance such a venture, and with the latest tech bust, that is hard to come by these days. Unless of course someone wants to get a hold of me and finance me to start something like this.
— Quote —
Smaller GPS systems that are accurate to within millimeters (Probably will never happen because of security issues)
— End Quote —
More likely technical issues. Errors are no longer intentionally built in to the GPS signals (you just need a recent receiver to take advantage of it), yet the resolution is no better than a couple of feet. I’m going to show my ignorance, but I believe resolution is limited by wavelength or somesuch.
But what would you need GPS for? To locate the retina? No doubt an autofocus algorithm of some sort could do that. The optical technology will definitely be there soon. My company makes machines for the MOEMS market.
Rob
[email protected]
No, the GPS would be used for spatial analysis: how does the computer know what that building across the street is, what is through that door, where can I find the nearest Starbucks (hehe), etc.
We don’t need a time machine to take today’s computers back to the 80’s and compare it with computers of that time. Those old machines still exist today and it is possible to put the old and the new side by side and compare them.
I disagree with the author on the point that not much has changed in the user interface. Putting aside the fancy colorful eye candy, today’s user interfaces are easier to use and more efficient. Not just the desktop interface of an operating system, but the controls and elements that an operating system provides for applications. They are more intelligent today. Check out the fancy development IDEs like Visual C++ and CodeWarrior. Today we have pop-up tool tips and pop-up list boxes with a list of functions and methods and they’re a big help which I appreciate having in my dev tool. I haven’t tried programming on any of the older machines, but I have a small idea of what it’s like programming in a command-line environment like using Vim or emacs or whatever Unix cmd-line text editors.
The now defunct BeOS operating system had several radical improvements over competing operating systems built into it, but sadly BeOS died for lack of public interest (and Microsoft's anti-competitive behaviour, which prevented computer manufacturers from including it pre-installed on the computers they sold).
BeOS was built on a database-like filesystem, a feature you can hardly live without once you've tried it. With the database-like file system and built-in queries, it was incredibly simple to, for instance, find that address you filed away somewhere in your huge hierarchical set of nested directories. You didn't have to remember where it was; you just had to type in a simple query and BeOS would find it for you in a flash.
BeOS also had the ultimate "addressbook" app built right into the operating system: using the database-like filesystem and the "people" app, you could store names, phone numbers, addresses, etc. right in the filesystem, and have them immediately accessible to any BeOS application. Not like the mess in the Windows world, where every email tool and PIM has its own mutually incompatible format for storing contact information.
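I can't reproduce BeOS's actual query syntax from memory, but as a rough illustration of the idea, here is a toy Python model of attribute-indexed files: every file can carry named attributes, and "finding that address" is a query over the attributes rather than a hunt through directories (the attribute names and values below are made up):

    # Toy model of a database-like filesystem: files carry arbitrary named
    # attributes, and the system lets you query them no matter where the file lives.
    files = [
        {"path": "people/Jane Doe", "attrs": {"E-mail": "jane@example.org", "City": "Berlin"}},
        {"path": "people/John Roe", "attrs": {"E-mail": "john@example.org", "City": "Paris"}},
        {"path": "letters/taxes",   "attrs": {"Subject": "2001 taxes"}},
    ]

    def query(predicate):
        """Return the path of every file whose attributes satisfy the predicate."""
        return [f["path"] for f in files if predicate(f["attrs"])]

    # "Find the address I filed away somewhere" becomes one query, with no need
    # to remember which nested directory the file actually landed in.
    print(query(lambda attrs: attrs.get("City") == "Berlin"))

Because any application could read and add those attributes, the "people" files were usable by every program on the system, not just one address book.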
There were many other wonderfully good features to BeOS. Too bad it’s now gone the way of the Dodo bird (though there are a few open-source attempts to recreate and eventually advance a BeOS-like operating system).
I now use Linux, because I dislike it less than Windows. But I miss BeOS badly. While the major Linux distros get better every release, I doubt Linux will ever come close to the smooth, polished, integrated, just-so-right behaviour of BeOS.
-JV
[i]However, imagine a 3d display, where instead of having a flat
monitor, you had a 3d space where you could truly have
control of the length, width, and depth. This would make many
tasks on the computer fundamentally easier.
Care to elaborate which tasks this would make easier? I use solid modeling CAD programs and I can see where this MIGHT have SOME benefit to me. Of course, transparency is a problem. But how is this going to benefit most users in any way, let alone a profound one? /[i]
I was thinking about something similar to what you can see on http://www.3dwm.org/frameset.html .
Although 3D modeling would probably benefit most from this, it is not hard to imagine how this could have real benefit for most tasks now performed on a desktop, and some things that cannot even be accomplished yet. You could display flat pictures and 3D objects. Instead of having a "file manager", you could have the real thing, a virtual 3D file cabinet. You could surf the web in 3D; how about a spherical website of the world, with information about each part as you scroll around? You could see the layers in your Photoshop creation back to back, the way real layers are put together. You could flip through the pages in your Word document as you flip through a real document. You could make a 3D graph in Excel that was really 3D, and you could rotate it, turn it upside down, zoom in and out of it, etc. You could make a really cool PowerPoint presentation that is not linear, where you could go to any "slide" in a 360-degree space. You could play a 3D game where you see the environment with your eyes instead of just looking at a section of a 3D world; imagine 360-degree Quake where you are the player. You could take a tour of the new house you are designing, or the solar system, or Russia's Red Square, or your next vacation destination. You could really see such difficult mathematical shapes as conics and 3D objects. You could do real line lengths seamlessly and easily, no more calculus! Boy, there are a lot of possibilities, and these are just the ones I can think of off the top of my head.
And to top all of that off, you could still do everything that you do now the same way. You could have a 2d desktop in 3d space that would act exactly the same as ours do now. Kinda like a terminal acts like a command line environment on a WIMP desktop.
Skipp
Well, first, I do miss cartridge computing. There was just something about popping a new cartridge into my Atari that seemed cool at the time.
Anyways. Flying cars do exist. They are just something that will never be popular. Having wash from jet engines hitting pedestrians is not a good thing. Also, if you think a 2D traffic jam is bad, wait till the 3D version. Ever play 3D tic-tac-toe? People can't even handle looking both ways at intersections; now they have to look 4 directions. You still need roads with flying cars. No one wants you flying over their house and such. It's just a big mess. As far as saying cars have not advanced because they still work the same, well, this is true. The current layout just happens to be very good. Cars went through the user-interface evolution phase 100 years ago. Cars used to have tillers instead of steering wheels, one pedal for forward and one for reverse; some had 3 wheels (though some still do); there was an incredible amount of variation. Over time one design became the norm. Computers appear to have achieved this quickly. 2D GUI, keyboard, mouse seems to be the way it will be for a long time. Also, car technology advanced a lot, you just don't see the advances in the computer, or reliability and such. Much like a computer, a car on the outside is still a box. Hell, look at a PIII or similar and look at a chip from 1980: they're both little rectangles with wires. You wouldn't know one is better from the outside. Also, electric cars exist now and existed before the gas-powered car; they just haven't been popular. And the turbine has been done, but turbines suck when made small. Chrysler crushed their 300 turbine prototypes from the early 60's.
Sorry, I messed up the end italics tag.
Skipp
What about the biggest thing since the printing press: the World Wide Web? Remember, the WWW has only been with us for a decade. It turned out what we really needed was not computers, but communicators. The big news wasn't being able to compute and show on your screen what you were going to print later; that was just desktop publishing. The BIG news was being able to post what you could see on your screen for viewing on any other screen in the world, WITHOUT printing it and mailing it or even faxing it. And the secret to that was a universal document format, HTML, with no proprietary strings attached. With that plus an open graphics format or two, we have had a true revolution in our ability to communicate. Did the NeXT OS have anything to do with that? Or was it the genius of some folks at CERN? Either way, it's not the technologies that have made this great leap, it's the human agreement to communicate with rather than exploit each other.
“In a world without fences, we wouldn’t need Gates.” — Anonymous
Here is a way to think about it: what if the Lisa/Mac and the Amiga had become the standard in 1985 and moved forward from that point? If we didn't have to wait for the PC and Windows to finally get their own desktop GUI in 1995, where would we be?
If Apple hadn't been mismanaged, and Jobs hadn't left, the Mac would probably have had an OS X type operating system by the early 90s. So by now we could likely have the third-generation GUI, beyond OS X or Win XP.
The problem is we lost time because we were locked into the DOS mode for so long. All the progress of the Lisa/Mac/Amiga was essentially lost.
>>>Here is a way to think about it: what if the Lisa/Mac and the Amiga had become the standard in 1985 and moved forward from that point?
You would still be complaining about an even more powerful bullying monopolist (because Apple/Amiga controlled the hardware side as well).
>>>So by now we could likely have the third-generation GUI, beyond OS X or Win XP.
Apple enforces GUI “guidelines” even worse than Microsoft.
>>wait for the PC and Windows to finally get their own desktop GUI in 1995,<<
what about win3.1 and before? they had a GUI, very ugly but still a GUI; win1 was 1985. So I don't think there was any waiting.
actually, I looked up shots of both win1 and the first mac gui, and I must say apple's gui looked much better. Though the flip side of that is that from the first one to os9 it looked the same. In the 80's it was very nice looking; in 2002 it looks horrible. OSX is way overdue.
Until we have a 1 GHz processor that will run for 3 months off one of those little watch batteries I will not be happy. That is where Intel and AMD should be focusing their efforts. Just imagine: no more fans!! Incredibly energy-efficient devices are the key to real advances in electronics for a number of reasons. Tiny power demands and very little heat production are two of them.
As for AI in personal computers: HA. You can't just have a little intelligence in the system. You gotta have it all. Every time I try to wade through the thought processes of a developer that was trying to make my life easier by adding another and yet another new wizard, I just want to scream. People rant about how important intuitive behaviour in software is, but I have come to believe that something that is intuitive to you may not be intuitive to me. Beauty is in the eye of the beholder.
Anybody know who said 'the smarter the machine, the dumber the human'? Well, they could not have been more correct. Every new process we learn requires effort, even learning how to use a Mac (just an example of user-friendly).
Once a computer is intelligent enough to understand how to interpret all of the conflicting signals we as humans send out, then the computer will be as smart as DATA from Star Trek. In other words almost a whole new form of life.
Until that time I will depend on my little brain to do my spell checking, grammar checking and any number of other inane tasks that I would not trust to anything or anyone.
The fact of the matter is 3D is a great idea; I always loved it, but not the way M$ or anyone else is doing it. It all revolves around hardware: we're still sitting behind LCD or cathode displays (yes, some new types are under development), but I mean real advancements such as holographics and virtual reality. New fields must be explored before any real advancements can be made. I always loved the idea that you walk in, turn on a holographic computer,
move your hands and point at items, and sure enough the computer responds. Now of course this is impossible or improbable at this point; we either have Linux or Windows (in reality), and with those 2 we either have KDE crashing and killing the computer (or some other giant error that screws the system) or, in Windows, the BSOD that STILL EXISTS even in wonderful XP. We can't even manage to have perfection or close to perfection in modern tools. The fact is it's time for a company to step up, break through barriers and have some vision. As it stands now, new companies open and do the same thing the rest do (Alienware, Compaq, IBM, HP, Dell, etc etc etc).
My theory and 2 cents is that for software and computers to make a giant leap, it is not the software that must make it to our level; it's the technology and equipment that must catch up to the abilities of modern people. (Also, some cheaper prices would be nice; I mean, sheesh, still no Macs for under a grand, and a good (non-entry-level) comp still takes a load out of your wallet.)
The kind of computer we should have today, that I do not expect to see until my late 40’s (I’m 26):
-You press the power button and begin working. There is no such thing as “booting.”
-The machine has no fans because it runs with such low power consumption that it need not be cooled (like your VCR today).
-The OS is stored entirely in ROM (flashable for updates, or exchangeable via a simple card like HandSpring add-ons) and is designed to support multiple operating systems for greater choice, should users desire to try alternatives to the one shipped with the machine.
-Any storage devices have no moving parts (except for maybe a tiny flicking laser diode as it writes to holographic crystal blocks), and can be removed and inserted with little effort; there is no "mounting" and no ejection OS freakouts.
-All user storage is via non-volatile memory cards (like a memory stick, only with greater storage than today's models at less cost per GB) that users can insert/remove at any time and move from machine to machine as they used to do with disks (or holographic storage cubes).
-All external device connections are via a single format connector much like today’s Firewire.
-Any “internal” system add-ons are connected via slots accessible from the outside, easy to add/remove for anyone at any skill level (like a HandSpring Palm unit does today).
-Any internally mounted devices are connected via the same Firewire-like connector as external devices.
-All displays are full-colour, “active” LCD. CRT is a forgotten bad memory.
And that’s just the beginning…
The future will have arrived when I can get something like this http://www.panoramtech.com/products/pv290.html for considerably less than $20000 (and no seam between panels would be nice).
Making that kind of screen real estate commonplace would be IMHO a much bigger advance in productivity/usability than any of these 3D desktop or multiple virtual desktop ideas/implementations.
That and Jace's feature list (especially no boot, no fan noise, just "there", but with all the performance of a top-of-the-line system) would make me very happy.
The Mac Classic had the whole OS in ROM; you could boot from it by holding down a couple of keys. Pretty cool…
Actually, by now we could/should have had:
An instant-on machine. Today we have enough memory to hold the core of the OS; it could be kept there. It sort of works like that when you use the sleep function, but it is not quite there yet.
A more modular OS, designed for speed, only adding components when needed.
Most of what BeOS had: good multitasking and multithreading, a better file system, etc.
A choice of GUIs, or design your own GUI (yes, shareware exists, but that is shareware)
An OS with global spell check, a plain language help system that works.
And we shouldn't have these Windows "features" that keep coming back, or settings that revert no matter how many times you change them.
– Roger M Quote –
Here is a way to think about it: what if the Lisa/Mac and the Amiga had become the standard in 1985 and moved forward from that point? If we didn't have to wait for the PC and Windows to finally get their own desktop GUI in 1995, where would we be?
– End Quote –
If pigs could fly. This is no way to prove your point.
– Roger M Quote –
A choice of GUIs, or design your own GUI (yes, shareware exists, but that is shareware)
– End Quote –
So what?! It isn’t from MS or Apple so it doesn’t count? I guess that the fact you can choose your own GUI on Linux also doesn’t count, because it’s free, which must be worse.
Linux counts, but my whole issue is that these features haven't reached "mainstream" OSs like Windows or the Mac, like they should have years ago. Since Windows has had no significant competition to worry about, we haven't seen the progress we might have. Like I said, there was a desktop GUI for the PC before Windows 3.1 ever came out.
I think my point could be proved, in a way. It is logical to expect that if the Mac/Amiga had prevailed as the standard, and there hadn't been the management problems in both companies, their next-generation OS (which is just coming out now in both cases) would likely have come out years ago, as was planned. So we could have had the third-generation OS by now.
I just wanted to point out that the Apple Lisa did not have pre-emptive multitasking; it was co-operative, just like MacOS had right up 'til v7.5 (and even then it wasn't truly preemptive, more cooperative with a hack) and Windows had right up 'til Win95. Compare that to AmigaOS, which has been preemptive since v1.0.