
Lawsuit Targets Perplexity for Alleged Fake News Hallucinations


In Short:

News Corp CEO Robert Thomson criticized Perplexity for misusing intellectual property, contrasting it with what he called OpenAI’s integrity. OpenAI itself faces a lawsuit from the New York Times, which alleges that ChatGPT attributed fabricated quotes to its articles. Experts warn that if courts agree that AI-generated inaccuracies can violate trademark law, AI companies could face significant challenges.


Perplexity has not responded to requests for comment on the dispute.

In a statement to WIRED, News Corp Chief Executive Robert Thomson criticized Perplexity, contrasting it with OpenAI. He articulated support for principled companies like OpenAI, which he believes recognize that integrity and creativity are paramount for realizing the potential of artificial intelligence. Thomson stated, “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers, and our company, we must challenge the content kleptocracy.”

Despite Thomson’s praise, OpenAI is not without legal challenges of its own. In New York Times v. OpenAI, the New York Times alleges that ChatGPT and Bing Chat attributed made-up quotes to its articles and claims that both OpenAI and Microsoft have harmed its reputation through trademark dilution. One example cited in the lawsuit contends that Bing Chat inaccurately claimed the Times had called red wine a “heart-healthy” food, when the newspaper’s reporting had in fact debunked such claims about the health benefits of moderate drinking.

In response to these developments, NYT Director of External Communications Charlie Stadtlander commented, “Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI. We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

If publishers succeed in arguing that AI-generated inaccuracies, or “hallucinations,” can breach trademark law, AI companies might encounter significant challenges, according to Matthew Sag, a professor of law and artificial intelligence at Emory University. Sag warns that “it is absolutely impossible to guarantee that a language model will not hallucinate.” He suggests that since language models operate by predicting plausible word sequences in response to prompts, this process can be viewed as a form of hallucination, with variations in believability.

“We only label it a hallucination if it doesn’t align with our reality, but the underlying process is identical, regardless of whether we find the output acceptable,” Sag adds.
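To illustrate Sag’s point, here is a minimal, hypothetical sketch in Python of the process he describes: a toy “model” that only knows next-word probabilities for a single invented prompt (the words and probabilities below are made up for illustration and are not drawn from any real system). The sampling step is identical whether the completion it produces happens to be accurate or not.

```python
import random

# Toy next-word probability table for the invented prompt
# "The newspaper called red wine ..." -- purely illustrative values.
NEXT_WORD_PROBS = {
    "heart-healthy": 0.40,  # plausible-sounding but false attribution
    "overrated": 0.35,      # also plausible
    "risky": 0.25,          # closer to the paper's actual reporting
}

def sample_next_word(probs):
    """Pick a next word in proportion to its assigned probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The newspaper called red wine"
    # The model simply emits a plausible continuation; nothing in this
    # step checks whether the resulting sentence is true.
    print(prompt, sample_next_word(NEXT_WORD_PROBS))
```

Whichever word is drawn, the mechanics are the same; only when the output contradicts reality do we call it a hallucination, which is why, as Sag argues, the behavior cannot simply be switched off.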
