During the Google I/O keynote last night, the company introduced a number of new products and talked some more about Android N. There's Google Home, an Amazon Echo competitor, which will be available sometime later this year. The company also announced two (!) more messaging applications, and at this point I'm not sure what the hell Google is thinking with its 3027 messaging applications. There was also a lot of talk about virtual reality, but I still just can't get excited about it at all.
More interesting were the portions about Android N and Android Wear 2.0. Android N has gone beta, and you can enroll eligible Nexus devices into the developer preview program to get the beta now (Developer Preview devices should get the beta update over the air).
Newly announced Android N features include seamless operating system updates (much like Chrome OS, but only useful for those devices actually getting updates), the Vulkan graphics API, Java 8 language features, and a lot more. Google is also working on running Android applications without installing them.
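"Java 8 language features" here means things like lambdas, method references, and the Streams API becoming usable in Android code. A minimal sketch of what that syntax looks like (the class and method names below are mine, purely for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Demo {
    // Uppercase a list of names using a Java 8 stream pipeline
    // with a method reference instead of an anonymous inner class.
    static List<String> shout(List<String> names) {
        return names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Before Java 8 support, this would have required a verbose
        // loop or an anonymous Function implementation.
        System.out.println(shout(Arrays.asList("Allo", "Duo", "Hangouts")));
    }
}
```

The same pipeline written pre-Java 8 would need an explicit loop and a temporary list, which is exactly the kind of boilerplate this support removes.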
Android Wear 2.0 was also announced, introducing a slightly improved application launcher, better input methods (handwriting recognition and a tiny keyboard), and support for a feature that allows watchfaces to display information from applications – very similar to what many third-party Wear watchfaces already allow.
Tying all of Google's announcements together was Google Assistant, an improved take on Google Now that integrates contextually aware conversational speech into Google's virtual assistant. Google Assistant is what ties Google Home, Android, Android TV, Wear, the web, and everything else together. We'll have to see if it's actually any good in real tests, of course, but it looks kind of interesting.
That being said, I've been firmly in the "these virtual assistants are useless" camp, and this new stuff does little to pull me out. It just doesn't feel as efficient and quick as using your device or PC with your hands, and on top of that, there's the huge problem of Silicon Valley – all technology companies, including Google, Apple, and Microsoft – having absolutely no clue that countless people lead bilingual lives.
To this day, all these virtual assistants and voice input technologies are entirely useless to people like me, who lead roughly 50/50 bilingual lives, because only one language can be set. Things like Wear and the Apple Watch require a goddamn full-on reset and wipe to switch voice input language, meaning that no matter what language I set, it'll be useless 50% of the time. If you're American and used to only speaking English, you might think this is a small problem… until you realise there are tens of millions of Spanish/English bilingual people in the US alone. It's high time Silicon Valley took a trip out into the real world, beyond the 2.3 kids/golden retriever/cat/minivan perfect suburban model families they always show in their videos.
Android N, for no one. Because no one will be able to run it except techies.
I think virtual assistants like Siri, Google Now, and Cortana are kind of a cool concept and can be useful in some respects.
As mainly a Linux user without a native AI assistant option on this platform, I wanted to see how feasible it would be to get the Cortana assistant to integrate with native Linux software via headless virtualization. I can get her to open apps by just saying their names. She can also work with custom voice commands to do things like integrate with the Linux Steam client.
You can see Cortana on Linux in action here:
https://www.youtube.com/watch?v=R8myM7g89sE
I have fixed a couple of bugs and improved Steam integration since the video was made.
Edited 2016-05-19 14:41 UTC
From TFA:
What a load of crap. Are they seriously trying to say that, with the way SMS and Hangouts already integrate, they couldn’t take that one step further and propagate this via a service to your Google account? Sorry, but I don’t buy it for a second. If MightyText could do it, so can Google. If they don’t want to, they should be honest about that.
Why can't they just build these new features into Hangouts (and allow me to disable them)?
I have little interest in Allo and Duo at the moment. Why yet another message/chat app? Google should have just added the features to Hangouts.
I too have little interest in Google Assistant, and as Google/Microsoft/Amazon/Apple try to know me better and better, I get more protective of my personal info.
Android N also supports Labradors [black and yellow] and Sedans [4-door only].
I still can't see why using a phone number as an ID is a good thing. Why can't I have several different accounts for several different groups of people ("circles", as Google+ puts it)? Why do I have to give my phone number to someone whom I don't want to call me over cellular? I just don't want everyone with my number to be able to call me whenever they feel like it, free of charge. With Jabber I could register my phone number as my username if I wanted to; why can't I have the same on these new services? Weren't they supposed to be an improvement, after all?
P.S.: I think that Google separates texting and calling because less tech-savvy people tend to think of these as separate activities, as with calls and SMS on "classic" phones. Or maybe just because Apple and Microsoft are pushing the same shit.
P.P.S.: Anyway, I don't care about mobile-only anything. And I will continue to decline all suggestions to install mobile-only chat apps, simply because I don't want to switch to my phone when I have my hands on the keyboard.
Because services like WhatsApp can then advertise their product as "free SMS", which is both enticing in countries where SMS isn't virtually free (i.e. probably most countries besides the US) and instantly familiar to non-techy users (even if the underlying technology has nothing to do with SMS).
As long as both parties have the app installed you can seamlessly move from SMS to the new service without bothering with any kind of login, IDs or new contacts list.
It's less flexible and probably inconvenient in some situations, but it caters to the lowest common denominator.
Pretty much the only thing I ever do with Google Now is set my alarm. It's faster when I'm tired and just want to lie down for a few minutes.
Such innovation, having a "backup image" for updates, like switches and routers have done for a thousand years…
“system boot image-2
reload”
I guess I’m not understanding your quandary here, Thom. Why do you need to speak to your phone in multiple languages?
Personally, I find virtual assistants to be very useful in certain situations. For example, if I want GPS turn-by-turn directions somewhere, I just press the voice button and say "navigate to X". A lot faster than even typing that into the search bar, but mainly because I suck at touch screen typing. Another scenario where it's very useful is if I want to go somewhere but I'm not sure if I'll make it before it closes, so I just ask, "What time does X close?", and I usually get an accurate answer. But maybe that doesn't work the same way in other countries…
Edited 2016-05-20 04:21 UTC
For me it's not so much about talking to my phone in different languages.
My phone's language is English (I really don't like the Dutch translation on all my devices). If I want to talk to it, I have to talk in English. So far so good. But when I need to send a message to my parents, that would be in Dutch. I need to tell Google Maps to go to a street in Ghent. Try pronouncing "Belfortstraat" in English so Maps knows which street you mean. FAIL. While I'm driving to Ghent, I want to call someone from my contacts with a Dutch or a French or a Spanish name… FAIL.
Pretty much this. In the span of a few minutes, I send messages in English and Dutch, open English and Dutch websites, etc. Voice input also means things like dictation!
It’s not much better with text input, at least on iOS – just one dictionary/spell checker at a time.
Doesn't Swype have the bilingual feature that the Android version has? Even if you're using the stock keyboard, at least the control to switch input languages is right there on the keyboard, so it's not nearly as bad as it is with the voice recognition.
If you could switch languages on the fly, would you still consider voice assistants useless?
If Thom’s use of multiple languages is like mine, you’d need a voice assistant that can accept words in language A embedded in a sentence in language B. With a good keyboard, typing a sentence like “remember to buy lørdagsgodt for the kids” doesn’t even require you to switch languages. Until digital assistants can learn to parse stuff like that, they’re not going to be useful to bilingual people.
Edited 2016-05-21 17:20 UTC
For certain situations I might consider it useful. Sitting alone in my car and giving it commands… other than that? Not so much, actually. Maybe I'm too old to consider talking to a computer/phone/device normal… It all still feels like a gimmick to me. Would you talk to your phone on the train, with dozens of people around you? Don't think so… The occasions where I'd have no audience while talking to my phone barely justify (if at all) all the effort put into speech recognition…
Over here (Paris, France) I’ve never actually seen anyone use those assistants on the street. Even Siri users are very few.
Maybe it's a cultural thing, or they just don't work great in French. I don't use them myself, as they feel very awkward to use and the voice input misinterprets what I'm asking for half the time.
Time to graduate from "chocolate factory" form factors.
Modularization of devices should follow something akin to ISO 216, but in a 3D format.
https://en.wikipedia.org/wiki/A4_paper
Snapping with ultra-high-strength magnets. Modularity at a small scale should be a RUGGED CONCEPT, by definition.
Comm links optical. Waiting for great ideas on heat management.
This is not a cheap concept. It is meant to allow options [combinatorics] not possible before, in harsh environments not possible before. Think search and rescue, science, military.