Virtual Worlds User Interface for the Blind is a prototype user interface that enables blind users to participate in virtual world environments. It provides communication, navigation, and perception functions using GUI elements. As a way of enriching the virtual environment with descriptive semantic information, sighted users contribute annotations of virtual objects using a scripted gadget equipped by their avatar. These annotations are then made available to the blind users through the special user interface.
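As a rough illustration of the annotation idea, a record of this kind might simply pair a virtual object with descriptive text. This is a minimal sketch; the field names and structure below are assumptions for illustration, not details taken from the prototype:

    # Hypothetical sketch only: field names are invented, not from the prototype.
    from dataclasses import dataclass

    @dataclass
    class ObjectAnnotation:
        object_id: str    # the virtual object being described
        author: str       # avatar that contributed the description
        description: str  # text a non-visual client can read aloud

    # A sighted user's scripted gadget might submit a record like this,
    # which the blind user's interface could later look up and speak:
    note = ObjectAnnotation(
        object_id="fountain-12",
        author="SightedAvatar",
        description="A stone fountain at the centre of the plaza",
    )
    print(f"{note.object_id}: {note.description}")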
This is what they focus on? Good grief. With the growing problem of CAPTCHAs, touch screens, and every other access issue we face… IBM chooses to put resources into online gaming? Coming from a blind person, I just have to ask… what the hell? They could be dealing with so many more real problems with even the few resources they’re putting aside for this. If they’re trying to prove that ARIA works, well, we already knew that.
If you don’t mind me asking, what’s it like being a blind computer user? I’ve never really got any screen reader working well enough for me to use one by choice. I’ve always thought that I could deal with most disabilities without much fuss, but I rely on my sight the most, so I’ve always wondered how much blindness really affects computer users.
That’s actually a hard question to answer, and no, I definitely don’t mind you asking. In some respects it’s not all that different from being a sighted computer user, I suppose. The main difference is that we have the additional screen reading programs, some more complex than others, that can either say what’s on the screen, output it to a braille display (a $5,000+ device that very few people can afford), or both. There are some issues we face that are non-issues for sighted users, the most prominent of them being those CAPTCHAs with no audio alternative. They’re images, and a screen reader has no way of rendering a straight image, as it has no idea what the image contains. CAPTCHAs don’t carry any attributes, such as alt or title, that would give an auditory description; if they did, that would defeat the purpose of the CAPTCHA in the first place, unfortunately. There are efforts to solve this, but it’s one of those problems that doesn’t have a 100% clear-cut solution.
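To show rather than tell, here’s a toy sketch of the decision a screen reader faces when it hits an image; the markup and the speech strings are made up for illustration:

    # Toy illustration: a "screen reader" that can only speak text the page
    # author supplied, such as an image's alt attribute.
    from html.parser import HTMLParser

    class ImageSpeaker(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "img":
                alt = dict(attrs).get("alt")
                # With alt text there is something to say; without it, the
                # reader is looking at opaque pixels and can only shrug.
                print(f"speak: image, {alt}" if alt else "speak: unlabelled image")

    ImageSpeaker().feed('<img src="logo.png" alt="Company logo">')  # has a description
    ImageSpeaker().feed('<img src="captcha.jpg">')                  # a CAPTCHA does not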
Another issue, and the most frustrating of all for me, is that we have to have a screen reader for each OS we want to use. Not really a problem for non-geeks, but for someone like me, who would love to try out systems like SkyOS, MorphOS, eComStation, etc., it’s frustrating in the extreme. There are no screen readers for these and other systems. Mac OS X, Windows, and UNIX (command line and GNOME) are the only systems with screen readers currently, not counting older systems such as DOS or the Apple II. So, in the alternative OS world, we’re effectively cut out. No, I’m not whining, just stating the facts of the matter.
Aside from that, I’m not sure what else I can say. It’s one of those things that’s easier to show than to tell.
I love this line:
“It provides communication, navigation, and perception functions using GUI elements”
It provides those things using “graphical user interface” elements? Really, a graphical interface for the blind? Now, I know what they meant, but still, the wording chosen could have been better…
This is an interesting approach, and I suspect the limitations are down to the fact that this system exists well outside of the virtual world server and has to interact with it in the same way as any other client. This could be seen as A Good Thing(tm), but it does seem to mean that they can’t extract enough accessibility information to create a 3D interface for the blind (I’m talking about doing it with sound here, BTW ;-)).
Some time ago, a friend of mine and I developed an audio interface to Quake, including the beginnings of a level description system. It was quite popular in accessible-gaming circles in its heyday and remains in use today. We don’t allow blind people to cheat, but we are able to use the fact that this runs in the engine to our advantage, by querying things like distances between the player and various objects of interest. I would imagine that some of the types of information we’ve used would not be provided in the external client protocol of some virtual world servers. When we wired up some bots with the same perceptual functions as our human players, they started navigating the maps in a similar fashion to blind people, which was amazing to watch (as is seeing a blind person running towards you swinging an axe around…)
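For the curious, the kind of in-engine query I mean is nothing exotic. Here’s a simplified sketch, not our actual engine code, and the volume and pan formulas are just illustrative assumptions:

    # Simplified sketch (Python 3.8+): distance drives volume, bearing drives pan.
    import math

    def audio_cue(player, facing_deg, obj):
        # Running inside the engine means both positions are simply available.
        d = math.dist(player, obj)
        volume = 1.0 / (1.0 + d / 100.0)   # quieter as the object gets further away
        obj_deg = math.degrees(math.atan2(obj[1] - player[1], obj[0] - player[0]))
        rel = facing_deg - obj_deg         # bearing relative to where we're facing
        pan = math.sin(math.radians(rel))  # +1.0 = hard right, -1.0 = hard left
        return volume, pan

    # An object dead ahead pans to the centre; one off to the side pans that way.
    print(audio_cue(player=(0.0, 0.0), facing_deg=90.0, obj=(0.0, 50.0)))   # centre
    print(audio_cue(player=(0.0, 0.0), facing_deg=90.0, obj=(50.0, 0.0)))   # hard right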
Unfortunately, due to a huge pile of “Real Life” happening over the past couple of years, we have not been able to keep up maintenance and development as much as we would like. Anyone who’s interested may visit http://www.agrip.org.uk/, but beware: it’s only a small server, so it might be brought down for a while if a few of you take a look!
With regards to other comments on what the point of this is: research takes a long time to get to market. By the time this research, or something like it, actually does, I reckon virtual worlds will be a lot more important. It would be nice to have at least one new mainstream technology that is accessible from near the start, if not by design. I do appreciate that people have burning accessibility issues right now, but this research probably didn’t cost a huge amount, and the money may well not have come from the same places that fund research or development on those other issues.
There are other people working on access to games too, by the way, such as http://www.eelke.com/ (click on Research), the IGDA ( http://gameaccessibility.blogspot.com/ ), and http://gameaccessibility.com/ .