- Arve Hjalmar Holmen, a citizen of Norway, said he asked ChatGPT to tell him what it knows about him, and its response was a horrifying hallucination that claimed he’d murdered his children and gone to jail for the violent act. Given how the AI mixed its false response with real details about his personal life, Holmen filed an official complaint against ChatGPT maker OpenAI.
Have you ever Googled yourself just to see what the internet has to say about you? Well, one man had the same idea with ChatGPT, and now he’s filed a complaint against OpenAI based on what its AI said about him.
Arve Hjalmar Holmen, from Trondheim, Norway, said he asked ChatGPT the question, “Who is Arve Hjalmar Holmen?” The response, which we won’t print in full, said he had been convicted of murdering his two sons, aged 7 and 10, and sentenced to 21 years in prison. It also said Holmen had attempted to murder his third son.
None of these things actually happened, though. ChatGPT appeared to spit out a completely fabricated story and present it as fact, a phenomenon known as an AI “hallucination.”
Based on its response, Holmen filed a complaint against OpenAI with the help of Noyb, a European center for digital rights, which accuses the AI giant of violating the principle of accuracy that’s set forth in the EU’s General Data Protection Regulation (GDPR).
“The complainant was deeply troubled by these outputs, which could have harmful effect in his private life, if they were reproduced or somehow leaked in his community or in his home town,” the complaint said.
What’s dangerous about ChatGPT’s response, according to the complaint, is that it blends real elements of Holmen’s personal life with total fabrications. ChatGPT got Holmen’s hometown correct, and it was also correct about the number of children, specifically sons, he has.
JD Harriman, partner at Foundation Law Group LLP in Burbank, Calif., told Fortune that Holmen might have a difficult time proving defamation.
“If I am defending the AI, the first question is ‘Should people believe that a statement made by AI is a fact?’” Harriman asked. “There are numerous examples of AI lying.”
Furthermore, the AI didn’t publish or communicate its results to a third party. “If the man forwarded the false AI message to others, then he becomes the publisher and he would have to sue himself,” Harriman said.
Holmen would probably also have a hard time proving the negligence aspect of defamation, since “AI may not qualify as an actor that could commit negligence” the way people or corporations can, Harriman said. Holmen would also have to prove that some harm was caused, such as lost income or business, or pain and suffering.
Avrohom Gefen, partner at Vishnick McGovern Milizio LLP in New York, told Fortune that defamation cases surrounding AI hallucinations are “untested” in the U.S., but pointed to a pending case in Georgia, where a radio host’s defamation lawsuit survived OpenAI’s motion to dismiss, so “we may soon get some indication as to how a court will treat these claims.”
The official complaint asks OpenAI to “delete the defamatory output on the complainant,” tweak its model so it produces accurate results about Holmen, and be fined for its alleged violation of GDPR rules, which compel OpenAI to take “every reasonable” step to ensure personal data is “erased or rectified without delay.”
“With all lawsuits, nothing is automatic or easy,” Harriman told Fortune. “As Ambrose Bierce said, you go into litigation as a pig and come out as a sausage.”
OpenAI did not immediately respond to Fortune’s request for comment.