When Apple CarPlay and Android Auto first started rolling out, initial evidence suggested these technologies held promise to reduce distracted driving. These systems funneled the most important features from our phones onto the infotainment screen, curbing motorists’ desire to reach for their handhelds.
Yet it looks like these mirroring technologies may not be nearly as safe as initially hoped. A new study from the UK’s IAM RoadSmart, an independent road safety organization, paints a far bleaker picture. The stark findings showed that drivers using one of the smartphone mirroring systems in a car had slower reaction times than someone who’d used cannabis. In fact, these motorists’ reaction times were five times slower than those of someone driving with the legal limit of alcohol in their system.
This shouldn’t come as a surprise to anyone with more than two brain cells to rub together. These systems are built on touchscreen technology, and touchscreens without any tactility are simply not suited for use while operating a motor vehicle. Touchscreens are far more distracting than plain old tactile buttons in a fixed layout that you learn over time and can find by feel, and it blows my mind that no safety regulations restricting their use to parked situations have been enacted yet.
What really irks me is that we still can’t fully voice-navigate web pages and have the articles read back something like a podcast. I can do that with @Voice on Android and even save them as Oggs, but where are Siri, OK Google, and Alexa? Or even (Sound)Hound? Oh, right, they might not read out the ads.
One of the reasons I got my Ford pickup is that it has a VFD screen, so even though it has “Ford Sync”, it’s all buttons. No cell phone link. I can do Bluetooth media and phone (though I don’t use the latter). I know which buttons to press to get to most things without looking at them. It even has an “install app” option, but I have no idea what for, since I haven’t found any apps for it.
Strangely, the places that ban “mobile device use” don’t ban touchscreen interfaces in cars. The thing is, you MUST look at the screen to figure out what to touch, because the context is variable and there is little if any feedback. My buttons don’t quite emit a click, but I know when I’ve pressed them. And don’t get me started on “gestures”.
They could build a similar button matrix backed by small screens instead of one big panel, but they won’t. I already mentioned voice recognition.
Disclaimer: I have my previous Samsung tablet and keyboard case mounted in a very convenient location; it syncs my podcasts when I’m within Wi-Fi range of home, and I can just hit “play” with the button on my steering wheel or use the navigation and volume keys there. I also use a Bluetooth headset with my phone and use its buttons, not the screen.
This could probably be achieved with accessibility software. Specifically stuff for blind people.
No, at least not directly. It’s a common though inaccurate misconception that blind people use voice control; we actually use text-to-speech and screen reading software.

That being said, what you would do is take the accessibility APIs of a screen reader and pair them with a voice control system like the ones used by people with motor disabilities. iOS and macOS already sort of have this if you enable both VoiceOver and Voice Control, though it would need further refinement for use in vehicles, as you would only want a subset of both. Still, definitely doable.

iOS has the better accessibility APIs such a program would need; however, Android is more open, so you could develop the accessibility service itself more easily, even though you’d end up settling for a subset of the APIs you currently have on iOS. This is actually why more blind people use iOS than Android, but I digress; that’s a different topic.
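To make that pairing concrete, here’s a rough sketch of what the Android side could look like. The voice-recognition plumbing is assumed (onVoiceCommand is an invented placeholder for whatever speech recognizer you’d wire in); only the node-tree calls are real AccessibilityService APIs:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Sketch only: a service that lets a spoken label activate the matching
// on-screen control, much as a screen reader user would focus and click it.
class VoiceDriveService : AccessibilityService() {

    // Hypothetical entry point -- assume a speech recognizer calls this
    // with the text of a recognized command, e.g. "Directions".
    fun onVoiceCommand(label: String) {
        val root = rootInActiveWindow ?: return
        // Walk the same accessibility node tree a screen reader exposes,
        // looking for elements whose text matches the spoken label...
        val matches = root.findAccessibilityNodeInfosByText(label)
        // ...and activate the first clickable match, as a tap would.
        matches.firstOrNull { it.isClickable }
            ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // A real service would track focus and window changes here.
    }

    override fun onInterrupt() {
        // Stop any feedback in progress.
    }
}
```

The point is that the node tree a screen reader walks is exactly the surface a voice control layer needs; the two differ only in input and output.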
Did you just want to write?
Yes. Using a keyboard isn’t a problem, and I was thinking of a blind acquaintance and his screen reading software when I posted the comment.
I was thinking more along the lines of pairing Mycroft with a script and a screen reader.
It’s odd how the more open systems are less accessible to the disabled compared to proprietary systems, when you’d think it would be the other way around, since open systems can be more easily modified to accommodate alternative inputs. Latent ableism is one of the dark sides of tech.
The other thing to work around would be how robotic the text-to-speech applications are. I tried a book app with text-to-speech capability to listen to books while running, and it was bad. The voice was painfully stilted. Everything. Was. A. Short. Sen… tence.
Haha, I know exactly which TTS engine that book app must have used. You need something that interfaces with the system TTS. Hmmm, I haven’t used Android in a while, but I believe Moon+ Reader was one that supported this. Kindle will do it also, though with Amazon’s “restrictions” on which books you can use TTS to read.
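For anyone wondering what “interfaces with the system TTS” means in practice, here’s a minimal sketch of the Android side. The wrapper class is my own invention; the TextToSpeech calls are the real platform API:

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech

// Minimal wrapper around Android's system-wide TTS service. Whatever engine
// the user picked in Settings (Google TTS, eSpeak, a commercial voice) does
// the actual speaking -- apps that sound robotic have usually bundled their
// own engine instead of going through this API.
class Speaker(context: Context) : TextToSpeech.OnInitListener {
    private var ready = false
    private val tts = TextToSpeech(context.applicationContext, this)

    override fun onInit(status: Int) {
        // Called once the engine is bound; only speak after this succeeds.
        ready = (status == TextToSpeech.SUCCESS)
    }

    fun speak(text: String) {
        if (ready) {
            tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "utterance-1")
        }
    }

    fun shutdown() = tts.shutdown()
}
```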
Part of the reason I suspect the less open platforms are the least accessible is the annoying mentality of “implement it yourself.” The number of programmers is a small subset of the population; the number of disabled programmers, or programmers willing to listen, is smaller yet. Of those who could do it, most of us have other jobs and are willing to settle for a solution that lets us do those other jobs and is already in a working state. The last thing I want to do after being an all-day network engineer and sysadmin is go home and do low-level coding.

On Android it gets more complicated than that, as it’s the main system APIs that limit the screen reader or other accessibility service. That particular part of Android is, while not closed exactly, unable to be updated dynamically. For example, if I were to take the time to alter the accessibility APIs, and did so in a way that extended them without breaking any existing services, the only way to load those updated APIs onto a device would be to flash a whole new ROM image. They are not encapsulated in an app or otherwise able to be updated on the fly, even though the accessibility services themselves are apps and can be updated at any time. It’s quite annoying. So the only way this situation will get fixed is if Google does it, and they don’t really have enough motivation to bother.
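To make that split concrete, here’s roughly where the line falls. The bare-bones service below is a sketch of my own, but the structure is accurate: it ships in an ordinary, updatable APK, while every type it’s built against lives in the system image:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent

// This class lives in a normal app and can be updated from the store at any
// time. AccessibilityService and AccessibilityEvent, however, are framework
// classes baked into the ROM -- the layer you can't extend without flashing
// a whole new image.
class MinimalReaderService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        if (event.eventType == AccessibilityEvent.TYPE_VIEW_FOCUSED) {
            val spoken = event.text.joinToString(" ")
            // Hand `spoken` off to a TTS engine here.
        }
    }

    override fun onInterrupt() {
        // The system wants feedback (speech, vibration) stopped immediately.
    }
}
```

(The manifest side is similarly fixed: the service has to be declared with the BIND_ACCESSIBILITY_SERVICE permission before the system will talk to it at all.)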
Couple this situation with the fact that it’s mostly education regulations that drive the accessibility motive for commercial companies, and you can see why closed systems generally get the better accessibility tools: they usually get the contracts. The only place where this has gone a little the other way is ChromeOS and, not coincidentally, that platform’s accessibility is far and away beyond Android or general GNU/Linux desktops.
I could go on all day about this, however I won’t.
If there’s someone else in the car with you, they can operate the system safely. It’s a fine balance, and I doubt a ban could be enforced.
I hate not having physical buttons in cars too.
Sodki,
I agree. One of the stupider restrictions I’ve seen is a built-in GPS that disables the ability to change the destination while moving. I came across this once in a rental car while we were on vacation. There was always a passenger, and yet to change navigation we would have had to pull off the highway (or, even worse, park right there on the highway). The GPS navigation wasn’t bad apart from this, but it made the car nav totally unsuited for its purpose, and you can easily guess what happened… we didn’t use it and used our cell phones instead. Ironically, whipping out the phone because you can’t use the built-in nav is probably even more dangerous.
Yeah, same. I wouldn’t want a Tesla for this very reason. You have to use the screen for just about everything.
See opening the glove box in a Tesla…
http://www.youtube.com/watch?v=CLuejg3Fp0Q
@Sodki
Then the manufacturers should move that crap to the back. The driver owns and operates the vehicle most of the time… screw the passenger, unless you bought a bus. LOL
This is pretty much it. It’s the touchscreen interfaces that are problematic. Punting navigation and entertainment to devices that get regular updates is a big step up from the previous model.
Infotainment systems originally had this lockout, and it was maddening: the car can’t tell whether a passenger or the driver is operating the system, so it would lock everyone out until the car was stopped.
My experience differs. I recently purchased a car with CarPlay, and it’s made interacting with music, maps, and calls much *less* distracting and more intuitive. Car systems are generally terribly designed; CarPlay seems to make that much better. This essentially comes down to being able to use Siri to initiate calls, request music, and dictate messages without ever touching anything.
This sounds like a terrible study.
Alcohol’s effects are constant for the entire drive, while interacting with car controls is something you do by choice, presumably when there is no immediate threat.
You could just as well do a study about cup holders in cars: “Holding a glass of water while driving is worse than smoking pot.”
Treza,
I haven’t been able to find this study anywhere. The article doesn’t include a link to the original, and many other news sources only link back to this article. There are several pieces referring to a “UK’s IAM RoadSmart study”, but I can’t even find it on IAM RoadSmart’s own website.
You could definitely do a study about it, but until you actually do, it remains a hypothesis without data. One reason I imagine you might be wrong is that using a cup holder is more of a reflexive action that doesn’t break your concentration. Once you know where it is, you generally use muscle memory and don’t even need to take your eyes off the road. That’s the thing with tactile interfaces: the less you depend on your eyes, the better. Touchscreen interfaces (and even voice interfaces that require visual feedback) are inherently dangerous to drivers.
The question of whether dangerous activity A is more dangerous than dangerous activity B should be measurable, though. In this particular case the likelihood of an accident really would depend on things like how much alcohol was consumed and how often and how long the driver is distracted by the infotainment system during a trip, as well as external factors such as weather and how much traffic there is. In principle we could come up with a statistical model for all these variables. I predict you would find points where moderate BAC levels over the legal limit (obviously we need to ask “which legal limit?” too) are in fact safer than using the infotainment screens.
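As a toy illustration of what such a model might look like (every coefficient below is a made-up placeholder, not a fitted value from any study), you could frame it as a logistic regression over those variables:

```kotlin
import kotlin.math.exp

// Toy logistic crash-risk model. All coefficients are hypothetical
// placeholders chosen for illustration, NOT fitted values from real data.
fun crashProbability(
    bac: Double,                // blood alcohol concentration, g/dL
    distractedFraction: Double, // fraction of the trip spent on the screen
    trafficDensity: Double      // 0.0 (empty road) .. 1.0 (gridlock)
): Double {
    val logOdds = -6.0 +            // baseline: sober, attentive, clear road
        20.0 * bac +                // assumed steep rise with BAC
        4.0 * distractedFraction +  // assumed rise with screen time
        1.5 * trafficDensity        // assumed rise with traffic
    return 1.0 / (1.0 + exp(-logOdds))
}

fun main() {
    // Compare a driver at a 0.08 legal limit against a sober driver who
    // spends 10% of the trip poking at the infotainment screen.
    println(crashProbability(bac = 0.08, distractedFraction = 0.0, trafficDensity = 0.5))
    println(crashProbability(bac = 0.0, distractedFraction = 0.10, trafficDensity = 0.5))
}
```

A real study would have to fit those coefficients from crash data, and the crossover points I’m predicting would fall out of exactly that kind of comparison.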
In my Mazda, the touchscreen is disabled while driving, even with Android Auto.
You use the normal rotary dial thing instead. It works reasonably well, except that the apps are not optimized for it at all, making some of the UI elements tricky to hit.
I look at it this way: whenever you must take your eyes off the road to make adjustments, it’s a danger. I believe that, in effect, you’re driving while impaired. Touchscreens by design require that you focus most of your attention on the screen. If you look at modern Western marketing of vehicles, including pickup trucks and work vehicles, the focus is on women and what they can do with a touchscreen and other “amenities”, while men are pitched horsepower, towing capacity, torque, braking, suspension, powertrain, durability, etc.