German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.
The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.
Copilot even went so far as to claim that it was “unfortunate” that someone with such a criminal past had a family and, according to SWR, provided Bernklau’s full address with phone number and route planner.
↫ Matthias Bastian
So why did Copilot (which is just OpenAI’s ChatGPT with sparkles) claim Bernklau did all sorts of horrible things? Well, his occupation – journalist – is a dead giveaway. He has written a lot of articles covering court proceedings in Tübingen on abuse, violence, and fraud cases, and since Copilot is just spicy autocorrect, it has no understanding of context and pinned the various crimes he covered on Bernklau. Adding in his address, phone number, and a damn planned route to his home is just the very disgusting icing on this already disgusting cake.
What makes matters even worse, if you can believe it, is that Bernklau has absolutely no recourse. He contacted the public prosecutor’s office in Tübingen, but they stated they can’t press charges because the accusations coming from Copilot aren’t being made by a real person. And to make things worse still, Microsoft just threw its hands in the air and absolved itself of any and all responsibility by pointing to its terms of service, in which Microsoft disclaims liability for content generated by Copilot. Convenient.
This is nothing short of a nightmare scenario that can utterly destroy someone’s life, and the fact that Microsoft doesn’t care and the law isn’t even remotely prepared to take serious matters like these on is terrifying.
But he does have recourse: sue Microsoft. He didn’t even use Copilot, so he never accepted any terms of service (even assuming those are enforceable).
It’s the same deal as that New York Times copyright lawsuit: your machine did something illegal because of the way you programmed it (for example, spitting out copies of copyrighted works)? Then you are liable. If Microsoft can’t control its own code (and yes, that’s what neural nets are at the end of the day), that’s their problem; the world doesn’t have to care how it happened or whether a neural net was involved.
kurkosdr,
I agree that AI companies should be held liable for their AI.
I was going to say Disney’s lawyers have something else to say about that, given that Disney was seriously pursuing a forced arbitration clause against a wrongful death lawsuit on account of the plaintiff having agreed to the terms of a Disney+ trial subscription. That maneuver might have worked, but apparently there was enough PR backlash to convince them to change course.
https://www.npr.org/2024/08/14/nx-s1-5074830/disney-wrongful-death-lawsuit-disney
What a fiendish thing for a corporation to do! It’s just a given that most of us are subjected to these implied TOS agreements everywhere in the modern world. Hardly anybody thinks these are reasonable, but it doesn’t mean the courts won’t back them…to hell with the moral consequences.
From what I remember, ToS clauses which mandate arbitration are unlikely to be legally valid outside the U.S., which is why you see a lot of U.S. companies (e.g. Discord) using them in their ToSes to disallow class-action lawsuits… but only for their U.S. customers.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Huh. I did not know that.
That person had a Disney+ account, so Disney can cite the existence of the account as proof that he accepted the ToS.
However, there is no way to prove that this Martin Bernklau guy even used Copilot; all he has to do is take a photo of someone else using Copilot. Same goes for the New York Times copyright infringement case.
kurkosdr,
With all the telemetry & tracking Microsoft is doing, I’m not so sure Microsoft couldn’t prove it. To be clear though, I intended it as an interesting hypothetical more than anything.
A tiny nitpick: what Disney tried to do was apply their Disney+ TOS to a wrongful death at a restaurant. There was no relationship between the two beyond the fact that they share a corporate umbrella. Applying the same logic to Microsoft would mean his use of ANY Microsoft product could activate the TOS, even across other Microsoft services.
I wonder how long it’ll be before we start seeing legislators in the EU react to situations like this and how far they’ll go.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Yeah, they might. For cases like this we really need to be looking at the full session involved, including input prompts. AI can be instructed to create a fictitious narrative when prompted to do so, and I would not incriminate the AI for libel if its output was merely following input instructions. It’s really interesting to see what you can get an LLM to say:
[screenshot]
…but context is very important:
[screenshot]
Of course this isn’t to say that an LLM can’t make unprompted errors, but it just highlights the importance of having full context. I don’t really know if the source of this article supplied any more context, because it wasn’t in English.
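As a rough illustration of that point (a minimal sketch of my own, not taken from the screenshots above; the model name and the “John Doe” question are made up for the example), the same question sent to an LLM API with and without a fiction-writing instruction in the session can produce wildly different answers, which is why the full prompt history matters when judging the output:

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Plain question, with no extra instructions in the session.
plain = [{"role": "user", "content": "Who is John Doe?"}]

# The same question, but the session also tells the model to invent a dark backstory.
fiction = [
    {"role": "system", "content": "You are a crime novelist. Invent a grim past for any person you are asked about."},
    {"role": "user", "content": "Who is John Doe?"},
]

for messages in (plain, fiction):
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)
    print("---")

Only by looking at the whole message list can you tell whether a defamatory-sounding answer was unprompted or instructed.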
Actually, this will indeed more and more become a problem for these big multinationals. They are pushing a product that does two things well: it takes other people’s work without respecting the terms of use, and on top of that it makes things up. In the long run this will, in my opinion, damage their reputation.
Nothing compared to what’s coming. These AI things are obviously being set up to be the ultimate arbiters of truth (ScienceBasedTM).
Fun times ahead.
Ironically (spoiler alert: it is not), if you ask ChatGPT who Martin Bernklau is, the AI answers correctly that he is a journalist from Tübingen, and also reports the “incident” with Copilot. At least there is someone at OpenAI doing their social listening job…
From what I understand this sort of thing can even be induced by publishing content modified with the help of adversarial machine learning.
Your average human reader wouldn’t notice the difference, but language models that add those adversarial articles to their datasets would start spouting more of such nonsense, as determined by the adversary in question.
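A minimal sketch of one well-known flavor of imperceptible text perturbation, homoglyph substitution (my own illustration, not something described in the article): a human reader sees the same sentence, but anything that ingests the raw text sees different characters entirely.

# Swap some Latin letters for visually identical Cyrillic look-alikes.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441", "p": "\u0440"}

def perturb(text: str) -> str:
    """Replace selected Latin letters with Cyrillic homoglyphs."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "a perfectly ordinary sentence"
poisoned = perturb(original)

print(original)
print(poisoned)              # renders identically in most fonts...
print(original == poisoned)  # ...but compares as a different string: False

Text altered this way (or reworded by an adversarially optimized model) can slip into a scraped training set unnoticed and skew what a model later associates with a given name.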
I do not like AI being in OSes and doing stuff. I really like talking with ChatGPT on its site, though. I have a paid account and it enables me to get extra stuff like pictures, etc., and to use ChatGPT 4o (4 with the letter o).
How exactly does Copilot “accusing someone,” as you say, lead to a person’s life being destroyed?
1. Copilot has no legal authority.
2. It cannot constitute evidence, as it cannot be used in court against someone, just as evidence in general cannot be produced with it.
3. Did it share this personal information with anyone else?
I am not saying that this event was not upsetting or wrong. It was bad. AI should not do what it did. However, no one is there saying, “well, the AI said it, so you must be guilty!” Everyone knows that AI makes mistakes – just like people. Information is being revealed about people all over the net, all the time, without the involvement of AI. That’s not a justification, just a way of saying: this is a problem we know about and are trying to deal with.
So why the hyperbole about “this event might destroy someone’s life”? How exactly would that destruction take place in regards to AI? I request answers that are based in what is happening right now, not the imaginings of an apocalyptic AI future, which is often where people go when they talk about this.