No, I’m not going all “New Age” on you; this time I’m looking at how computers are going to gain a third dimension and how this will change the way we interact with them. The previous parts of this series were based on extrapolation or on previous history. This time I’m looking further forward, to when technologies currently in long-term development become available and open up a whole new realm of possibilities.
A third dimension is going to require large amounts of computing power. Luckily enough, that’s going to happen, so I’ll start by telling you how your computer is about to get faster – a lot faster.
Architectural advances in microprocessors seem to have slowed down in recent years; indeed, what is on the market today that is much more advanced than the mid-90s-designed Alpha 21264? Instead we are starting to see additional cores and special-purpose elements being added to the same chip. All the major CPU companies are now committed to putting multiple cores on a single die, and this, along with the additional cache memory made possible by smaller geometries, will bring us large rises in computing power.
But CPUs are not the only thing getting more powerful. Graphics Processing Units (GPUs) gain performance at a much higher rate than CPUs but have to date always been limited to producing graphics. Modern GPUs have programmable vertex and pixel shaders, and these are starting to be used for general-purpose computations [1] in research and even in some applications. With the next generation of shaders they will become not only more powerful but also more general purpose. Just as vector units (e.g. SSE, AltiVec) have boosted computing power, expect GPU shaders to be utilised in a similar way to provide potentially massive performance boosts – in the order of 1000%.
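To see why shaders suit general-purpose work: shader-style computation runs the same small program independently on every element of a big array, with no dependencies between elements. A minimal sketch of that data-parallel pattern, using NumPy on the CPU as a stand-in for a shader (the function and numbers are my own illustration, not anything from a real GPU toolkit):

```python
import numpy as np

# Every "pixel" runs the same small program, with no dependence on its
# neighbours -- the data-parallel pattern that maps onto GPU shaders.
def per_pixel(image, gain, gamma):
    # On a GPU, each element could be handled by a separate shader
    # instance in parallel; NumPy merely vectorises it on the CPU.
    return np.clip((image * gain) ** gamma, 0.0, 1.0)

image = np.random.rand(256, 256)              # a fake greyscale image
result = per_pixel(image, gain=1.2, gamma=0.8)
```

Anything with this shape (image filters, physics on grids, signal processing) is a candidate for a shader-based speed-up; anything with heavy data dependencies is not.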
In part 3 I predicted office and casual computer users will turn away from traditional desktop systems. Desktop PCs will still remain; their numbers may be smaller, but their computing power will be massively higher than anything available today. When combined with 3D visual displays, all this power will transform what our computers are capable of and how they will be used.
Visual Interfaces are just starting
Some people believe the command line is king and all the graphical stuff is superfluous. The command line is just one way of interfacing with a computer, and as such it has its own strengths and weaknesses: it offers a great deal of power, but at the cost of complexity and of having to remember strange commands and their syntax. The GUI has other strengths and weaknesses: features are more obvious, so remembering how things work is easy or not required at all, which is much better for the beginner. The power of a command line is difficult to present in a GUI, so a GUI’s simplicity can be a drawback. There is no one “perfect” interface; each method has its own place.
A third interface method is on the way. It exists in research labs and in select, specialist areas of industry: the 3D GUI is coming.
The human brain has a very large portion devoted to the processing of visual information. Some regard the human ability of abstract thought to be our greatest strength, but when you combine this ability with visualisation it becomes a great deal more potent. If you could visualise an equation, wouldn’t mathematics become easier? Wouldn’t the progress of science be faster if, when we manipulated equations, we could see the results visually in real time? Albert Einstein could do this in his head. If everyone had the tools to do the same, think of the progress that could be made.
I for one think exactly those sorts of tools are coming. I think the future will become more visual; I expect the type of interface shown in Minority Report will actually appear and will be useful for many tasks. I don’t know if we’ll do physics this way, but it would certainly have interesting results if we could.
The implementation of the interface in Minority Report was somewhat clumsy, requiring all sorts of weird hand movements, and a careless motion could send files flying off all over the place. I don’t expect to see an exact replication of this system, but something similar could emerge.
Alternatively, it could be something better: take a large screen and view it through LCD shutters mounted on the screen; have cameras monitor your eye and hand movements, with tactile feedback through gloves so you can feel what you can see. I don’t know what such a system will be used for, but I bet playing Doom 5 will be awesome. Does this sound like fantastic futuristic technology which will never appear? Think again: this system already exists [2].
More advanced techniques, such as holographic screens which give even better displays, are already in development. Of course, Sharp are already starting to sell 3D display screens today.
The 3D GUI
When I say the 3D GUI is coming I mean a GUI displayed in three dimensions, not a 2D representation of a 3D space such as games deliver. I mean a real 3D display where objects on screen have height, width and depth. One way or another our displays are going to gain another dimension, and 2D displays are going to seem somewhat quaint by comparison.
There are problems though. One major known problem with real 3D displays is that they can confuse the viewer’s eyes no end. The human visual system expects to point both eyes at a single point and focus them there. 3D displays create images in front of or behind the screen but don’t focus them there; to remain in focus your eyes must focus on the screen yet point at an image elsewhere. The human visual system is not designed to do this, and it is the major reason why Virtual Reality has never caught on. Using a large screen (say 60 inches) will mitigate this, as the screen will be further away, and depth of field [3] means images not too far off the screen will remain in focus.
The key to this will be subtlety: making sure images are not projected too far in front of or behind the screen. Your brain may be able to get used to this without too many problems. Another technique is to make the displays bright; your eyes’ irises will shrink, and the resulting improved depth of field should allow images to move further away from the screen.
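The iris effect can be seen with the standard thin-lens depth-of-field approximation. The sketch below uses rough eye-like numbers which are purely my own assumptions for illustration; the point is only that a smaller pupil (brighter display) widens the in-focus band around the screen:

```python
def depth_of_field(focus_m, focal_mm, aperture_mm, coc_mm):
    """Thin-lens depth-of-field approximation.
    Returns (near, far) limits of acceptable focus in metres."""
    f = focal_mm / 1000.0
    N = focal_mm / aperture_mm            # effective f-number
    c = coc_mm / 1000.0                   # circle of confusion
    H = f * f / (N * c) + f               # hyperfocal distance
    s = focus_m
    near = s * (H - f) / (H + s - 2 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near, far

# Assumed eye-like numbers: ~17 mm focal length, 0.006 mm blur circle,
# screen 2 m away.  6 mm pupil = dim room, 2 mm pupil = bright screen.
dim = depth_of_field(2.0, 17, aperture_mm=6, coc_mm=0.006)
bright = depth_of_field(2.0, 17, aperture_mm=2, coc_mm=0.006)
```

With these numbers the bright-screen case keeps a noticeably deeper range of distances in acceptable focus, which is exactly the room needed to float images off the screen.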
Compute encounters of the third kind
Once we all start getting these 3D screens we will also get applications for them, and I expect that, just as there was for 2D, there will be a lot of experimentation in new kinds of user interfaces and new ways to use them.
There are some pretty obvious candidates for 3D displays, 3D modelling of course stands out (pun intended) as an obvious candidate, as of course are 3D games – where monsters will really try and bite your head off. I will leave the reader to probe the possibilities of porn.
There has been plenty of research into 3D desktops [4] and Sun have recently shown their work [5] but I for one can’t really see the ability to spin 2D windows around being of any particular use. Sun put post-it notes on the back of the windows but that means having to remember to look. It could however be an interesting way of hiding complexity but you could do this just as well in 2D.
No, in order to create a 3D GUI we will have to do better than this, the 2D windows will have to become 3D, and that won’t work for many applications. In order to create a 3D GUI we will have to forget about the way existing desktops work and think of something completely new.
In the following I describe some ideas for interfaces which could work. This area is pretty much impossible to predict, but that’s never stopped me before. In the cases below I have tried to find areas where a 3D interface will benefit an application and improve it. This will be the key to a successful 3D GUI: there is no point trying to make a word processor 3D, as it doesn’t gain anything (though displaying 3D images may be useful). There are many application areas which I think will gain from 3D, and I have described some of these possibilities below.
File Manipulation in 3D
This seems to be the one area where a lot of 3D GUI research has concentrated. Unfortunately it seems to be a horrendously difficult problem, as many attempts end up with a screen full of thousands of files.
I can’t see how a command line could be represented in 3D, but rather than trying to change a command line into a 3D application, it could be augmented with a 3D display of the part of the directory tree, files, etc. that you are working on. The 3D display would be linked to the terminal display, so what is done in one is reflected in the other. File manipulation could be done either by command or by selecting and moving files by hand; you could change directory by dragging a folder from the 3D part onto a terminal window.
The 3D representation could change depending on the commands you are using: you could use “ls” to see which files are oldest in a text display, or you could display them from above as pillars, where the shortest pillars would be further away and thus the oldest files.
Some things would be easier or faster in 3D, others in 2D; by displaying both, we get the best of both worlds.
Sounds with depth
Audio doesn’t sound like an area which could benefit from an extra graphical dimension, but I think it will benefit more than most. Applications could have controls that are easier to move: the mouse is a pain for musicians, and many control surfaces exist to supplant it. If a virtual synthesiser is displayed, its control knobs could be turned by gripping and then turning them; with no more mapping from screen to an external control surface, this will be easier and cheaper. To really work properly this will require tactile feedback, but that can be done with gloves.
The basic creation of sound could benefit from 3D also: if a synthesiser’s envelope generators or filter responses are displayed in 3D, they can be manipulated by hand, and creating new sounds becomes a whole lot easier. Depth could additionally be used to represent volume, so the response can change as volume changes, and this too can be manipulated.
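Such a surface is easy to construct. A minimal sketch (the function names, curve parameters and the per-volume scaling are my own illustration, not any real synthesiser’s behaviour) of an ADSR envelope extended into a surface, with depth standing in for volume as suggested above:

```python
import numpy as np

def adsr(t, attack=0.1, decay=0.2, sustain=0.6, release=0.3, length=1.0):
    """Piecewise-linear ADSR amplitude at time t (seconds)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    rel_start = length - release
    a = t < attack
    d = (t >= attack) & (t < attack + decay)
    s = (t >= attack + decay) & (t < rel_start)
    r = (t >= rel_start) & (t <= length)
    out[a] = t[a] / attack                                  # rise to peak
    out[d] = 1.0 - (1.0 - sustain) * (t[d] - attack) / decay  # fall to sustain
    out[s] = sustain                                        # hold
    out[r] = sustain * (length - t[r]) / release            # fade out
    return out

# Surface: one envelope curve per volume level -- the "depth" axis.
times = np.linspace(0.0, 1.0, 101)
volumes = np.linspace(0.1, 1.0, 10)
surface = np.array([v * adsr(times) for v in volumes])      # shape (10, 101)
```

Each row of `surface` is the envelope at one volume; in the 3D GUI you would grab and reshape rows by hand rather than edit numbers.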
The BeOS had a very good demo of mixing in 3D [6] where instruments could be moved around to change their balance and volume. Extending this to a 3D display device would allow the musician to move instruments by hand instead of mouse. Mixing will never be the same again.
An interesting possibility is moving multiple instruments by hand simultaneously (think of moving multiple chess pieces with an outstretched hand). With a mouse you have to first select all the pieces and then move them; a hand will be a much better and more immediate tool for this.
3D Painting
Depth can be added to existing applications in a 3D GUI to enhance them. 2D paint programs could benefit from this: it wouldn’t turn the program into a 3D modeller, but the painter could use multiple 2D layers placed in front of one another to create depth. Traditional 2D painting uses a number of techniques to fake depth, and this is really just an extension of them.
3D could however create a better interface for the drawing tools:
In computerised painting there are a number of different palette types to choose from. Rather than selecting them from a menu and picking them one by one, these could all be placed on screen at once in a window, which would then be shrunk and moved to the background. When a new palette is required, just pointing at the window would bring it forward; palettes could then be dragged out or exchanged. When done, the window is just pushed back out of the way. To get rid of a palette when it’s not needed, simply grab it and throw it in the direction of the shrunken window.
If you are an OS X user this may sound familiar, it has similarities to the dock’s “warp” effect. This technique could work for different kinds of application, you could be doing 2D painting or even word processing, it’s a technique which will allow the 3D interface to be used to enhance a 2D interface without replacing it, again you get the best of both worlds.
A scenario:
3D can benefit many areas, and these can be used together. In the following scenario I put a number of different areas together, with a potentially very powerful end result.
A guy goes out with a 3D camera (actually two 2D cameras spaced apart). He proceeds to take a few pictures of buildings and other objects from various angles.
Then he gets home and plugs the camera into a desktop PC (he is one of the people who still has one), which is connected to a 3D screen. He then uses the screen to go through the photos, combining multiple photos into 3D objects; the computer makes guesses for any parts he hasn’t got images for. This is all an easy task, as it’ll be done by picking up the objects and manipulating them by hand; the computer will scan his eyes and hands so it knows where he’s looking.
He then takes these objects, inserts them into a 3D modelling package and creates landscapes. Into the landscapes he then adds objects representing people, and these aren’t just dumb 3D models: they have AI software controlling them so they can be made to walk around scenes. The AI software knows how people look and how they move, so when directed they will move properly.
This software will be better though: it will also know how people look close up and even how they talk. Our user can then give the actors instructions to move about and talk to one another. The computer will render this in 3D in real time; no previews will be necessary, as this computer will have the horsepower to generate ultra-high-quality “photon mapped” [7] images in real time. He can view the outcome and adjust things if he doesn’t like how it looks. As with any 3D package, he can move lights around and change the camera parameters.
In this manner our future computer user will be able to make an entire movie to almost Hollywood grade visual standards at home, whether the movie is any good is a different matter altogether however. This may not be the future of real movie production but it’ll help speed up the process a lot and will enable TV stations to produce low cost TV series.
Most of this has already been predicted and some has already been done – you think all the armies in The Lord of the Rings were real? This is an application where a 3D display, a 3D interface and massive computing power will combine to greatly enhance the process. It’s exactly the sort of thing computers will be capable of in the future and will be used for.
Add more interactivity, remove the actors’ lines and you’ll have a system for producing games. In the future, when you play the game of a movie, perhaps you’ll just be playing a different version of the movie.
Unemployed Mice
It seems that with a 3D interface we are not going to need mice. Perhaps there will be a 3D equivalent, or we’ll just use our hands. I think 3D interfaces will enable us to build “proximal” interfaces [8], where the tool becomes part of us; these promise to make computers easier to use than ever before.
What’s next?
The next stage to this is where all 3D interfaces will end up if you think about them long enough. I’ll not even bother describing it, I’ll give you a clue instead and let you figure it out: The Matrix.
In the film The Matrix there was a 3D world which everyone inhabited, designed to pretty much emulate the real world. The Matrix is the ultimate 3D interface, and I fully expect it to happen in some form, but something so realistic is obviously a long time away.
We could start by having a “Personal Matrix” on our own computers. This wouldn’t have to abide by the same conditions or rules as the Matrix in the film; there is no reason why you couldn’t set it in a zero-gravity environment, on Mars, or both. There are no limitations on how the world could be set up other than the power of the computer it’s running on and the imagination of the user.
I do think plugging cables into the back of your head is some way off yet. That said, I’d be reluctant to plug my brain into anything; what happens if a cracker gets in and 0wN3r5 your head? Who would you call: a sysadmin, a surgeon or an exorcist?
The Matreb
Another aspect of this system that could be very different from the original Matrix is a concept I’ve named the “Matreb” (short for Matrix-Web). If we can have our own worlds, why not connect them up? The possibilities here are virtually endless – even bigger than the web. You can build any world you like and invite anyone you want to join in. You could surf the web to a site about surfing and actually go surfing. You could have a game of Finding Nemo where you actually have to go swimming to find him. Ever fancied flying on the starship Enterprise?
The computer could become a lot more appropriate for socialisation than it currently is. At present, the lack of visuals and audio means a lot of the message you are sending can be misinterpreted; it is this detachment that leads to no end of misunderstandings and unwarranted flame wars. In a Matreb you could set up a world with a virtual pub and actually see and hear the people you are communicating with, enabling much more constructive conversation. It would also be a valuable business tool for meetings and discussions, as you could invite people from all over the world to meet in the same room with no need to leave their offices. You can imagine this will apply to much more than social or commercial gatherings; again I’ll leave it up to the reader to feel out the possibilities of futuristic porn.
Of course, we would also get the web’s downsides: spam would be worse, who knows what viruses will be like, and pop-up ads will be a lot more annoying – though all of these will pale beside the experience of clicking on a 3D Goatse link!
Building a Matreb
Building such a system is going to be highly complex, but I’ve no reason to think it’s impossible. The initial Matrix-like system requires a tactile body suit and a very powerful computer, but these will come. We’ll need to go back to Virtual Reality-type headsets to create the “being inside” illusion properly, but perhaps the screens can be focused away from the wearer to mitigate the focusing problem.
Building a Matrix web is a more difficult problem, because it requires the user to interact with a world hosted elsewhere, and this brings up the problem of transferring large amounts of data at very high speed. In order to enter a world you first need to have a full description of it sent to you, and that could be large and complex. After that, all the movements within that world have to be transferred at very low latency; the more realistic the world, the more data has to be sent.
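That two-phase pattern (one big description up front, then a stream of tiny updates) is how networked games already cope with the same problem. A toy sketch, with a message format that is entirely my own invention for illustration:

```python
import json

def snapshot_msg(entities):
    """Full world description, sent once when a user enters the world."""
    return json.dumps({"type": "snapshot", "entities": entities})

def delta_msg(entity_id, pos):
    """Small movement update, streamed continuously at low latency."""
    return json.dumps({"type": "delta", "id": entity_id, "pos": pos})

def apply_msg(world, message):
    """Update a client's copy of the world from one received message."""
    msg = json.loads(message)
    if msg["type"] == "snapshot":
        return msg["entities"]
    world[msg["id"]]["pos"] = msg["pos"]
    return world

# The snapshot carries everything (here a fake 1000-byte mesh);
# each subsequent delta is only a few dozen bytes.
world = apply_msg({}, snapshot_msg(
    {"avatar": {"pos": [0, 0, 0], "mesh": "x" * 1000}}))
world = apply_msg(world, delta_msg("avatar", [1.0, 2.0, 0.5]))
```

The point is that only the deltas sit on the latency-critical path; the big transfer happens once, before you step in.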
There are ways around these problems, but they are complex and will need a lot of research and work before we ever see them. The beginnings of this are already being researched, and we will see this system develop in the future.
Perhaps one day we won’t sit down at a computer to use it but rather plug into it and enter it to use its applications. When we do, the computing experience will have changed into something which would not be recognisable today.
But this will still be software running on CPUs, right?
Wrong. I think there are going to be fundamental changes to come in both software and hardware. Not only will the applications be very different but the software they use and hardware they run on will also become unrecognisable. The technologies are all in place, another journey is about to begin.
Stay tuned for part 5…
—————————-
References
[1] General purpose computing with graphics hardware: http://www.gpgpu.org/
[2] Not the link I wanted, but look at the end of this and you’ll see similar technology: 3D Displays
[3] Depth of Field
[4] Nooface has a 3D GUI section: http://www.nooface.org/ ; 3dcgi is about 3D graphics: http://www.3dcgi.com/
[5] Sun’s Looking Glass 3D GUI: http://wwws.sun.com/software/looking_glass/
[6] The BeOS’s 3DMix: http://www.skycycleonline.com/images/3dmix.html
[7] Good article on 3D graphics which mentions photon mapping and other techniques: The Future of 3D Graphics
[8] Andrew Basden’s Proximal User Interface: http://www.basden.u-net.com/R/proximal.html
Copyright (c) Nicholas Blachford February 2004
Disclaimer:
This series is about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.
MS Research did a 3D display; the link is here.
http://research.microsoft.com/adapt/TaskGallery/
There is another company that took and sold this idea but I don’t have their site.
My opinion does not reflect the “average Joe XP user”, but I think there is first room to improve the task-management interface in our existing 2D desktops to better handle multitasking. With XP’s “grouping” function it is too difficult to figure out which window is which. I don’t like having my operating system move stuff around on me, because then I have to go looking for it.
So far multiple desktops seems to be the best existing solution to this problem. I think MS has a powertoy to add multiple desktop support to windows but I have not tried it yet.
“Architectural advances in microprocessors seem to have slowed down in recent years….”
Huh? The tremendous die area has been utilized to implement numerous refinements to all sorts of things–larger and faster caches, better branch predicting, register renaming, longer pipelines, speculation, larger issue width, out of order execution….The Opteron and the Power5 both represent major advances beyond Alpha.
“Graphics Processing Units (GPUs) get more performance at a much higher rate than CPUs but have to date always been limited to producing graphics. These days modern GPUs have programmable vertex and pixel shaders and these are starting to be used for general purpose computations[1] in research and even in some applications. With the next generation shaders they will become not only more powerful but also more general purpose.”
Graphics processing is about the most embarrassingly parallel task you can imagine. GPUs are fast because there are few data dependencies and everything can be done in parallel. For data dependencies, you implement multiple passes. Sure, researchers have used them for some embarrassingly parallel problems that happen not to be graphics, but it is not like these GPUs will replace a CPU any time soon.
“The human brain has a very large portion devoted to the processing of visual information.”
Wrong. The visual cortex is small in comparison to the rest of the brain.
“When I say the 3D GUI is coming I mean a GUI displayed in three dimensions, not a 2D representation of a 3D space such as games deliver. I mean a real 3D display where objects on screen have Height, Width and Depth. Our displays are one way or another going to gain another dimension, 2D displays are going to seem somewhat quaint by comparison.”
What’s the point of making a hologram? You still only have two eyes. Put on some goggles!
Augmented reality will be the future. Why even have a monitor that sits there on your desk for you to look at, when you can have a lightweight pair of glasses with a wireless connection?
I’m not even going to bother reading this. If he is as far off as his previous parts then Pluto can be seen from Earth by the naked eye.
This guy keeps writing this HOPEFUL drivel…. quite sad actually
Neural interfaces allowing direct interaction with computers at the level of abstract thought will replace any need for an “interface”, and are likely to render a 3D environment obsolete before it can really become useful.
Anonymous (IP: —.CS.UCLA.EDU)
What’s the point of making a hologram? You still only have two eyes. Put on some goggles!
Augmented reality will be the future. Why even have a monitor that sits there on your desk for you to look at, when you can have a lightweight pair of glasses with a wireless connection?
We’re already in the process of creating 3D displays which are physically the same shape as a standard LCD. Goggles are obtrusive: they either need wires or a rechargeable power source, the latter of which means you’ll have to interrupt what you’re doing to swap batteries or recharge. It also means that you’re wholly consumed by what’s being presented; you can’t, say, mess around with your guitar while watching something on one or more monitors, etc.
I’d certainly prefer a 3D LCD-like display to goggles any day.
Headsets are the primary reason why VR failed.
“Neural interfaces allowing direct interaction with computers at the level of abstract thought will replace any need for an “interface”, and are likely to render a 3D environment obsolete before it can really become useful.”
Interesting idea. So what protocols does the brain use? What about language? We’ll most likely see 3D (no pun intended) first, although as pointed out above, we need to fix our 2D interfaces before embarking on 3D with its own set of problems. Crawl before walking and all that.
Augmented reality has a hell of a lot more to it than simply a monitor on your eye glasses’ lenses. The system tracks your head movement and orientation and embeds digital images into your view of reality. So for example, instead of an actual LCD monitor sitting on your desk, your glasses will constantly monitor your position and orientation and which direction you are looking and make the same image appear to be the same physical location as a physical monitor would have been.
You could therefore embed the display anywhere: floating in the center of the room, on a wall, on the floor, on the face of a book….if you wanted to get away from the monitor, you can walk away.
If the glasses (I envision something the size and weight of eyeglasses, with a small computer the size of a watch battery behind one ear and a watch battery behind the other ear) are light enough and use little enough power that they can be wireless, then you could have a virtual 50″ on your wall, 23″ LCD on your desk, etc.
But of course the whole point of augmented reality isn’t to make a rectangular display appear to float somewhere, but to make virtual objects appear to be physically located places where they actually don’t.
Imagine an art gallery with blank walls, and the glasses let you see virtual paintings on the walls. Imagine a road with no signs, and the goggles show you waypoints and the names of streets and buildings of interest.
Imagine a battlefield where every soldier can have the positions of their squadron overlaid on reality, with foreign objects highlighted, categorized, and tracked in real time on their glasses. Waypoints, objectives, statuses: a flood of information embedded in reality (and of course a way to reduce the flood to the currently relevant information).
I think we will eventually enter a time when physical reality is far less important than the digital overlays that we will create and share. We will need to build far fewer things, and everything will be more efficient, even daily life.
“The human brain has a very large portion devoted to the processing of visual information.”
Wrong. The visual cortex is small in comparison to the rest of the brain.
Actually, vision does indeed take up a lot of the brain’s processing power. The visual cortex is devoted to just one sense, vision; there is no other part of the brain dedicated to just one function.
I think 3d is totally overrated and just making things 3d isnt going to do a whole major amount different for interacting with things. All hail 2D forever!
I can’t see these `new advances` being much more than a passing fad or novelty. Wow, let’s use the funky touch gloves and move the knobs. Cool, see how they turn. Who cares.
I don’t think it matters if computers are 1D, 2D, 3D or infinityD; it’s not the dimensions of the display device that make a difference, it’s what’s being displayed, how it is designed and what it allows you to do. Okay, so a selling point is all this new `what you can do that you couldn’t do already` stuff, but it’s not much.
The closer we get to a true representation of the physical world the less interesting it is. While we linger in a limbo of `not quite there` things are much more interesting. If we had displays that looked real we’d be bored. Who cares, it’s reality. People like to live in fantasy. The audience will have to overcome their addiction to illusions and delusions in order to think that this super-real system would be appealing.
Do I get excited and freaked out about seeing a perfect real-looking 3d representation of an orange, apple or banana? No.. it’s just there, who cares. All this hype is computer junkie mania.
Paul
hi guys,
I believe “tangible interfaces” are more likely to take over
in the near future than virtual reality .. check the amazing
work of James Patten of the MIT Media Lab :
http://web.media.mit.edu/~jpatten/
Cheers,
Jean-Louis
and also these web pages :
http://www.aec.at/en/center/project.asp?iProjectID=12284
http://www.aec.at/en/archives/relatedPics_01.asp?iProjectID=12284
with a quick overview of the Audiopad
Jean-Louis
3D isn’t gonna help anybody. We seem to be thinking that as we increase power we should throw all of the increase (CPU speed) into the input/output systems, and we leave the IMPORTANT step, processing, back in the 90’s.
Instead of fixing our “broken” interfaces maybe we should fix our broken process code. Humans are quite adaptable, and things are fairly ergonomic as it is, so I don’t see much need for great changes in UI. Optical (not gyro, because I only want it to move when my hands are in front of it) mice controlled with my finger would be cool, and useful for some situations (those where you are only on the machine for short periods and are likely standing up or walking).
A better keyboard would be nice. I’d use dvorak but nobody else does, and I just don’t wanna have to switch back and forth. I would find dictation annoying, as my voice wears out faster than my hands; but a businessman would love it.
In the end the mind makes the display of the interface good or bad. 2D is still 3D because the mind allows itself to be deceived. But our physical input systems kinda suck. I have to move my hand from my keyboard to use my mouse, which is pretty annoying (and the reason why many prefer the CLI).
There will be new dimensions in computing when computing makes some serious, not small incremental, improvements.
Wrong. The visual cortex is small in comparison to the rest of the brain.
There are at least 27 distinct regions of the brain dedicated to vision, accounting for almost a third of the brain’s total mass. See the blurb for this Oxford Press book: http://www.oup.co.uk/isbn/0-19-852479-X or this University of Rochester press release: http://www.rochester.edu/news/show.php?id=1191 or this article from the International Biometrics Group: http://www.biometricgroup.com/in_the_news/03.07.03.html
actually, vision does indeed take up a lot of the brains processing power, the visual cortex is devoted to one sense, which is vision. there is no other part of the brain dedicated to just one function.
How about the hippocampus, which regulates the storage and recall of memory (much like a cache)? Or the medulla oblongata, the body’s autonomic control center? There are many parts of the brain which are specialized to perform a single task, although you could certainly quibble about their scope.
I can see the next generation of UIs in fact being 3D, or at least pseudo-3D. I’m certain that a lot of the hardware manufacturers would love for MS to push such a UI; it certainly would drive a lot of new units out of their warehouses.
As far as goggles vs. 3D displays vs. augmented reality go, in my opinion the majority of people would not accept any out-of-the-ordinary computer interface. VR is pretty much dead, and as far as cyberpunk augmented reality goes, we (techies) might be likely to adopt it, but it will be a long time before it becomes commonplace.
Just some thoughts.
Sorry, the first half of that reply appears to be rightfully directed at Mr. Anonymous (IP: —.CS.UCLA.EDU)
When I can ask my wrist watch/computer link: do I have any messages (and if I do, repeat them to me) what is the price of soAndSoStock, what is 9380.4 X .03125, …. Then we will have something. And, I think that is the next big thing!!!!!!!!
“Neural interfaces allowing direct interaction with computers at the level of abstract thought will replace any need for an “interface”, and are likely to render a brain an electrofied puddle of goo.”
Yes, neural interfaces are progressing, but the actual ability to decode the brain’s numerous electrical impulses, to re-represent them, and to re-transmit them through a brain that will always simultaneously have natural, non-artificial electrical activity, in any meaningful, non-destructive manner, is a long way off.
Attempts to re-represent visual information are advanced when the subject is a fruit fly. When it is a human, a neural interface is capable of producing some connect-the-dots phosphene activity that can “suggest” the most basic visual cues; however, anything sophisticated, visually, is difficult to re-represent as electrical impulses that the brain can understand.
“Yes, neural interfaces are progressing, but the actual ability to decode the numerous electrical impulses of the brain, to re-represent them, and to re-transmit them through the brain which will always simultaneously have natural, non-artificial electrical activity in any meaningful, non-destructive manner is a long way off.”
Not really, we’re testing prototypes of an artificial hippocampus (http://www.newscientist.com/news/news.jsp?id=ns99993488), the center of short term memory. It isn’t too much to envision such a device being networked, allowing someone to, say, query the internet in the same way they query their short term memory.
“When I can ask my wrist watch/computer link: do I have any messages (and if I do, repeat them to me) what is the price of soAndSoStock, what is 9380.4 X .03125, …. Then we will have something. And, I think that is the next big thing!!!!!!!!”
You’re right, portability is where it’s at. Smaller and faster, not bigger and more comfortable. The ultimate comfort is it never leaving your side. I think if cell phone providers would pull their heads out of their asses, some of these new phones would be wonderful for browsing the web, but I just can’t afford the minutes.
I am not sure that 3D will play that big of a role in future interfaces. A need must be created before it will become commonplace, and history has not been good to 3D technologies. For example, the technology to make good 3D movies has been around for 50 or 60 years, holograms for around 40 years. But where are these technologies in terms of popular culture?
IMHO the big advances in 3D may not be in terms of what we will see but what the computer will see. If our computers can watch and recognize our gestures within a 3D field, they could use those gestures as cues to what we want them to do. Imagine a computer noticing that I am looking at a particular window’s title bar and switching its focus to that window (some 35mm SLRs do this already). Then when I say, open, bring to the front, or let me see that, the computer brings that window to the front. A key component to this is that the computer would learn about how I work. Ultimately it could occasionally recognize gestures and complete a command before I have fully formed the command in my head. It would be like having a butler that knows what I want (or need) before I do.
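The gaze-to-focus idea described here can be sketched in a few lines. Everything below is a hypothetical illustration: the `GazeFocusSwitcher` class, the dwell threshold, and the window handles are all made up, not a real eye-tracking or window-manager API.

```python
# Hypothetical sketch of gaze-driven focus switching.
# No real eye-tracker API is assumed; window handles are opaque objects.

DWELL_THRESHOLD = 0.5  # seconds the gaze must rest on a window's title bar

class GazeFocusSwitcher:
    def __init__(self):
        self.candidate = None  # window the user is currently looking at
        self.dwell = 0.0       # how long the gaze has rested on it

    def update(self, gazed_window, dt):
        """Call once per frame with the window under the gaze point.

        Returns the window to bring to the front, or None."""
        if gazed_window is self.candidate:
            self.dwell += dt
        else:
            # Gaze moved to a new window: restart the dwell timer,
            # so a passing glance does not steal focus.
            self.candidate, self.dwell = gazed_window, 0.0
        if self.candidate is not None and self.dwell >= DWELL_THRESHOLD:
            return self.candidate
        return None
```

A real system would presumably combine this dwell signal with the spoken cue (“bring to the front”) before actually raising the window, so that merely glancing around never rearranges the desktop.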
Why does this guy say that a raytracer follows the path of an electron? Silly me, I thought they were called photons.
this is a really smart guy!
he’s enlightened! he should work in corporate strategy at ms, oracle, ibm, apple or something like that
congratulations for the series!
Daniel from Buenos Aires
“Not really, we’re testing prototypes of an artificial hippocampus”
yeah, IN RATS!!!
We’ve already produced computer-generated vision systems for flies, and have had baboons power video games…
That’s not saying much though really…
Nova just did a special on the future symbiosis of medicine and technology and covered a lot of cyborg stuff.
They have actually performed a human experiment to create “vision” in a blind woman. All she gets is some phosphene flares that denote boundaries… It works, and I’m not suggesting it’s never going to happen; however, this woman is very disappointed. She has a permanent implant wedged into her visual cortex and a ribbon of wires coming out the back of her head around to a camera mounted on a pair of glasses, and all she sees is sparkles. And even at that, it is suspect: it seems to work well for the most fundamental modes of visual cognition (borders, intersections of lines, etc.). However, human cognition was so complex that, beyond those basics, whatever electrical impulses were being generated by the vision system produced ZERO visual effect… no blurs, no boundaries, SIMPLY NOTHING.
this guy’s articles are *always* fun to read. they make me giggle. madly. come back to earth. it’s comfy here.
the only interface that IMO can remotely compete to the 2D screen is something that is directly connected to your brain.
Either by emulating regular signals or by using completely new ones. I don’t know how the brain would react to a “new sense”; perhaps there’s someone here who knows more about brains? I find the subject interesting though. If the brain could learn to interact with this new sense it would allow us to do a lot of crazy things. But as I understand it, we wouldn’t be able to share memories, only descriptions of our memories (somewhat like drawing a painting or telling a story). Just imagine getting your daily spam delivered directly to your brain.
aah, it’s an exciting thought though. Scary but exciting.
Anyway, mind-controlled interfaces already exist, even though they are slow and simple at the moment, but I think they will replace physical input devices in the long run because they can be a lot more efficient.
Voice-controlled interfaces have to be among the worst for general computing. They’re noisy and slow. The fingers are faster than the mouth any day.
Yeah, that’s certainly a lot more convenient than making a gesture or pressing a button. And imagine how wonderful it would be if everyone walked around talking to their devices.
Bah, voice input is just a lousy idea.
I have yet to see a compelling application for 3D interfaces. Do not confuse this with 3D data, which is useful; but flipping around my application window still seems pointless, and the Sun Looking Glass app certainly seemed contrived.
<quote>This guy keeps writing this HOPEFUL drivel…. quite sad actually</quote>
Not nearly as sad as jumping on the flame train simply because you have nothing constructive to add! There are far more constructive ways to get noticed.
It’s the way he envisages things to come, get it?
It is strangely ironic that your nick itself opposes looking to the future.
If you want to make a point that something is blatantly wrong then that’s fine, but to come out with moronic comments simply because you don’t agree with his point of view is ridiculous!
@Sabon
If you didn’t read the article then you have no business replying in this thread imo.
I’ve said it previously and I’ll say it again: these articles have been an interesting diversion from the run-of-the-mill OSNews articles.
Goggles or glasses I think are the best idea. I wear sunglasses all the time during the day outside. It would be nice if I had a means to display information at the edge of my field of vision, so that if I needed it I could glance to the side and view it without completely blocking out my sight. I imagine something like little billboards that I can minimize and maximize as needed. The best part of goggles/glasses is you can always take them off!
I do tend to agree that 3D displays are just for show; we really need to come up with better ways to manage the huge amount of information we generate. I feel Google, albeit better than most, is a clumsy tool for accessing information.
Again, like in his previous articles, the author depends on false assumptions and contradicts himself.
For one he says that the desktop PC will disappear from the average household (he actually worded it much stronger than that in his previous article, but he seems to have come down from that position somewhat), but at the same time he acknowledges that an immersive, world-wide virtual community will require massive computing power to be available to everyone. Reeks of a powerful desktop PC to me, or at least something very similar to such a device.
His emphasis on 3D environments is somewhat odd, since that particular avenue has already been explored and deemed inappropriate. I expect more from augmented reality, as said in earlier replies.
Slightly OT…
There is another system for artificial vision that requires a much less invasive method.
A 16×16 panel of solenoids is worn like a shirt, covering an area of the patient’s back. Differences in light intensity produce variations in pressure.
After a while, blind people can ‘see’ using this quite well. Just because sighted people use the optic nerve and visual cortex to process visual information, it does not mean that it’s the only way.
The same thing may apply here… The whole point of 3D is to increase the amount of data the computer can communicate to us. Perhaps a touch screen that generated different textures and sensations so you could feel the data?
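The light-to-pressure mapping behind that vest can be sketched as a simple downsampling step. This is purely illustrative: the 16×16 grid size comes from the description above, but the `MAX_PRESSURE` scale and function name are made up, and no actuator hardware is assumed.

```python
# Illustrative sketch of the sensory-substitution mapping described above:
# downsample a grayscale image to a 16x16 grid and map each cell's average
# brightness to a drive level for the corresponding solenoid.

GRID = 16
MAX_PRESSURE = 255  # hypothetical full-scale actuator drive level

def to_pressure_grid(image):
    """image: 2D list of brightness values 0.0-1.0, dimensions multiples of GRID."""
    h, w = len(image), len(image[0])
    bh, bw = h // GRID, w // GRID
    grid = []
    for gy in range(GRID):
        row = []
        for gx in range(GRID):
            # Average the brightness over this cell's block of pixels.
            total = sum(image[gy * bh + y][gx * bw + x]
                        for y in range(bh) for x in range(bw))
            avg = total / (bh * bw)
            # Brighter light -> stronger pressure on that solenoid.
            row.append(round(avg * MAX_PRESSURE))
        grid.append(row)
    return grid
```

The same mapping would work for the textured touch screen suggested above; only the output device changes.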
Well, the author clearly doesn’t recognize the fact that not everybody processes what they see, hear, feel and sense in other ways in the same way. What makes it “obvious” for a 3D user interface? It might be obvious to one user, because they think and interpret the world in that manner, but others do not. For example, in the US they’ve started migrating to iconic road signs (something I suspect is more common and has existed for a longer time in the rest of the world) and to be perfectly honest, I have a hard time figuring out what they really mean. Why? I’m far from stupid; I just don’t visualize their graphic representations to mean the same thing as I’d use to describe what they mean. The same thing applies to many graphic icons in computer programs. That’s at least one reason why tooltips were invented!
The same thing is true of the commandline: what works and makes sense for one person is not obvious to another. However, some people process most of what they do in a more verbal manner than visual. This does not make them freaks or uneducated, just different.
What it comes down to is that computers will work best when both types of interfaces are available, both verbal as well as visual in nature.
Of course, besides verbal and visual, there are other senses: let’s not neglect those! Why limit ourselves to 3D when other dimensions (sight, sound, touch, pressure, temperature) exist that can communicate information? cheezwog hinted at it with that pressure vest. There are many people who will be even further left out of the use of computers if 3D is the only real interface supported, due to some limitation beyond their control. Not everybody has the same power of control or sensation that everyone else does, and computers need to accommodate that reality.