Google’s AR translation glasses are still vaporware - The Verge

At the end of its I/O presentation on Wednesday, Google pulled out a “one more thing”-style surprise. In a short video, Google showed off a pair of augmented reality glasses with a single purpose – displaying audible language translations right in front of your eyes. In the video, Google product manager Max Spear called the prototype’s capability “subtitles for the world,” and we saw family members communicating easily for the first time.

Now hold on just a second. Like many people, we’ve used Google Translate before, and we mostly think of it as a very impressive tool that happens to make a lot of embarrassing mistakes. While we might trust it to get us directions to the bus, that’s nowhere near the same as trusting it to correctly interpret and relay our parents’ childhood stories. And hasn’t Google said it was finally breaking down the language barrier before?

In 2017, Google marketed real-time translation as a feature of its original Pixel Buds. Our former colleague Sean O’Kane described the experience as “a commendable idea with disappointing execution” and reported that some of the people he tried it with said it sounded like he was a five-year-old. That’s not quite what Google showed off in its video.

Also, we don’t want to gloss over the fact that Google is promising this translation will happen inside a pair of AR glasses. Not to hit a sore spot, but augmented reality hasn’t exactly lived up to Google’s concept video from a decade ago. You know, the one that acted as a predecessor to the much-maligned and awkward-to-wear Google Glass?

To be fair, Google’s AR translation glasses seem much more focused than what Glass tried to do. From what Google showed, they’re meant to do one thing – display translated text – not act as an ambient computing experience that could replace a smartphone. But even so, making AR glasses isn’t easy. Even a moderate amount of ambient light can make text on see-through screens very difficult to read. It’s challenging enough to read subtitles on a TV with some sunlight coming through the window; now imagine that experience, but strapped to your face (and with the added pressure of holding up a conversation with someone you can’t understand on your own).

But hey, technology moves fast – maybe Google can overcome a hurdle that has stymied its competitors. That wouldn’t change the fact that Google Translate is not a magic bullet for cross-language conversation. If you’ve ever tried having an actual conversation through a translation app, you probably know you have to speak slowly. And methodically. And clearly. Unless you want to risk a garbled translation. One slip of the tongue, and you might just be done.

People don’t converse in a vacuum or like machines do. Just as we code-switch when talking to voice assistants like Alexa, Siri, or Google Assistant, we know we have to use much simpler sentences when dealing with machine translation. And even when we do speak correctly, the translation can still come out awkward and misconstrued. Some of our Verge colleagues fluent in Korean pointed out that Google’s own pre-roll countdown for I/O displayed a formal version of “Welcome” in Korean that nobody actually uses.

That fairly embarrassing flub pales in comparison to the fact that, according to tweets from Rami Ismail and Sam Ettinger, Google showed over half a dozen backwards, corrupted, or otherwise incorrect scripts on a slide during its Translate presentation. (Android Police notes that a Google employee acknowledged the mistake, and that it was corrected in the YouTube version of the keynote.) To be clear, it’s not that we expect perfection – but Google is trying to tell us it’s close to cracking real-time translation, and those kinds of mistakes make that seem incredibly unlikely.

Google is trying to solve an immensely complicated problem. Translating words is easy; understanding grammar is difficult but possible. But language and communication are far more complex than just those two things. As a relatively simple example, Antonio’s mother speaks three languages (Italian, Spanish, and English). She sometimes borrows words from language to language mid-sentence – including words from her regional Italian dialect (which is like a fourth language). That sort of thing is relatively easy for a person to parse, but could Google’s prototype glasses deal with it? Never mind the messier parts of conversation, like unclear references, incomplete thoughts, or innuendo.

It’s not that Google’s goal isn’t admirable. We absolutely want to live in a world where everyone gets to experience what the research participants in the video do, staring with wide-eyed wonderment as they see their loved ones’ words appear in front of them. Breaking down language barriers and understanding each other in ways we couldn’t before is something the world needs far more of; it’s just that there’s a long way to go before we reach that future. Machine translation is here and has been for a long time. But despite the number of languages it can handle, it doesn’t yet speak human.

