What if you want to find out more about the PS/2 Model 280? You head out to Google, type it in as a query, and realise the little “AI” summary that’s above the fold is clearly wrong. Then you run the same query again, multiple times, and notice that each time, the “AI” overview gives a different wrong answer, with made-up details it’s pulling out of its metaphorical ass. Eventually, after endless tries, Google does stumble upon the right answer: there never was a PS/2 Model 280, and every time the “AI” pretended that there was, it made up the whole thing.
Google’s “AI” is making up a different type of computer out of thin air every time you ask it about the PS/2 Model 280, including entirely bonkers claims that it had a 286 with memory expandable up to 128MB of RAM (the 286 can’t address more than 16MB). Only about 1 in 10 times does the query yield the correct answer that there is no Model 280 at all.
An expert will immediately notice discrepancies in the hallucinated answers, and will, for example, follow the List of IBM PS/2 Models article on Wikipedia, which will very quickly establish that there is no Model 280.
The (non-expert) users who would most benefit from an AI search summary will be the ones most likely to be misled by it.
How much would you value a research assistant who gives you a different answer every time you ask, and although sometimes the answer may be correct, the incorrect answers look, if anything, more “real” than the correct ones?
↫ Michal Necasek at the OS/2 Museum
This is only about a non-existent model of PS/2, which doesn’t matter much in the grand scheme of things. However, what if someone is trying to find information about how to use a dangerous power tool? What if someone asks the Google “AI” about how to perform a certain home improvement procedure involving electricity? What if you try to repair your car following the instructions provided by “AI”? What if your mother follows the instructions listed in the leaflet that came with her new medication, which was “translated” using “AI”, and contains dangerous errors?
My father is currently undertaking a long diagnostic process to figure out what kind of age-related condition he has, which happens to involve a ton of tests and interviews by specialists. Since my parents are Dutch and moved to Sweden a few years ago, language is an issue, and as such, they rely on interpreters and my Swedish wife’s presence to overcome that barrier. A few months ago, though, they received the Swedish readout of an interview with a specialist, and pasted it into Google Translate to translate it to Dutch, since my wife and I were not available to translate it properly.
Reading through the translation, it all seemed perfectly fine; exactly the kind of fact-based, point-by-point readout doctors and medical specialists make to be shared with the patient, other involved specialists, and for future reference. However, somewhere halfway through, the translation suddenly said, completely out of nowhere: “The patient was combative and non-cooperative” (translated into English).
My parents, who can’t read Swedish and couldn’t double-check this, were obviously taken aback and very upset, since this weird interjection had absolutely no basis in reality. This readout covered a basic question-and-answer interview about symptoms, and at no point during the conversation with the friendly and kind doctor was there any strife or even a hint of disagreement. Still, being in their 70s and going through a complex and stressful diagnostic process in a foreign healthcare system, it’s not surprising my parents got upset.
When they shared this with the rest of our family, I immediately thought there must’ve been some sort of translation error introduced by Google Translate, because not only does the sentence not match my parents or this doctor at all, it would also be incredibly unprofessional. Even if the sentence were an accurate description of the patient-doctor interaction, it would never be shared with the patient in such a manner.
So, trying to calm everyone down by suggesting it was most likely a Google Translate error, I asked my parents to send me the source text so my wife and I could pore over it to discover where Google Translate went wrong, and whether, perhaps, there was a spelling error in the source, or maybe some Swedish turn of phrase that could easily be misinterpreted even by a human translator. After poring over the documents for a while, we came to a startling conclusion that was so, so much worse.
Google Translate made up the sentence out of thin air.
This wasn’t Google Translate taking a sentence and mangling it into something that didn’t make any sense. This wasn’t a spelling error that tripped up the numbskull “AI”. This wasn’t a case of a weird Swedish expression that requires a human translator to properly interpret and localise into Dutch. None of the usual Google Translate limitations were at play here. It just made up a very confrontational sentence out of thin air, and dumped it in between two other sentences that were properly present in the source text.
Now, I can only guess at what happened here, but my guess is that the preceding sentence in the source readout was very similar to a ton of other sentences in medical texts ingested by Google’s “AI”, and in some of the training material, that sentence was followed by some variation of “patient was combative and non-cooperative”. Since “AI” here is really just glorified autocomplete, it did exactly what autocomplete does: it made shit up that wasn’t there, thereby almost causing a major disagreement between a licensed medical professional and a patient.
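To make that guess a bit more concrete, here’s a deliberately silly toy sketch of the failure mode I’m describing. To be clear: this is my own illustration, not how Google Translate actually works internally; the sentences, the “training data”, and the autocomplete helper are all invented for the example.

```python
from collections import Counter, defaultdict

# Toy "training data": pairs of sentences as they might appear in medical
# reports a model was trained on. Entirely made up for illustration.
training_reports = [
    ["Patient was asked about memory complaints.",
     "Patient was combative and non-cooperative."],
    ["Patient was asked about memory complaints.",
     "Patient was combative and non-cooperative."],
    ["Patient was asked about memory complaints.",
     "Patient answered all questions calmly."],
]

# Count which sentence most often follows which, like a crude autocomplete.
follow_counts = defaultdict(Counter)
for report in training_reports:
    for current, following in zip(report, report[1:]):
        follow_counts[current][following] += 1

def autocomplete(sentence):
    """Return the continuation seen most often in training, regardless of
    whether it has anything to do with the document being 'translated'."""
    candidates = follow_counts.get(sentence)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# The source document only contains the first sentence, but the "model"
# happily appends the continuation it saw most often during training.
source = ["Patient was asked about memory complaints."]
output = source + [autocomplete(source[-1])]
print(output)
# ['Patient was asked about memory complaints.',
#  'Patient was combative and non-cooperative.']
```

The point isn’t the specific architecture; it’s that a system optimised to continue the most familiar pattern has no concept of “this sentence is not in the source”, only “this sentence usually comes next”.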
Luckily for the medical professional and the patient in question, we caught it in time, and my family had a good laugh about it, but the next person this happens to might not be so lucky. Someone visiting a foreign country and getting medicine prescribed there after an incident might run instructions through Google Translate, only for Google to add a bunch of nonsense to the translation that causes the patient to misuse the medication – with potentially lethal consequences.
And you don’t even need to add “AI” translation into the mix, as the IBM PS/2 Model 280 queries show – Google’s “AI” is entirely capable of making shit up even without having to overcome a language barrier. People are going to trust what Google’s “AI” tells them above the fold, and it’s unquestionably going to lead to injury and most likely death.
And who will be held responsible?