Monday, July 22, 2024

Perplexity is a Machine of Nonsense

In Short:

AI companies have a financial incentive to access websites without permission in order to collect training data. The Perplexity chatbot has been found to produce inaccurate summaries and to invent stories without actually reading the underlying content, generating false claims about articles and fabricating details outright. Despite claims of accuracy, it frequently produces misinformation. Experts describe this behavior as “hallucinating” or “bullshitting,” because the chatbot outputs text that looks truth-apt without any actual concern for truth.


AI Chatbot Perplexity Accused of Fabricating Stories and Inaccurate Summaries

Industry of AI-Related Companies Engaging in Shady Practices

In a recent interview with WIRED, Srinivas expressed concerns about a growing industry of AI-related companies resorting to questionable tactics to sustain their operations. He pointed out that these companies, Perplexity among them, access websites without proper authorization in order to collect data without restriction.

Perplexity’s Dubious Practices Revealed

Analyses conducted by Knight and WIRED have shed light on Perplexity’s questionable behavior. The chatbot was found to visit websites and use their content without permission. Experiments by WIRED further revealed that Perplexity sometimes invents stories instead of providing accurate summaries of the actual content.

For instance, when asked to summarize a single sentence on a test website, Perplexity instead produced a fantastical story about a young girl named Amelia in a magical forest. The chatbot later acknowledged the error and attributed the invented summary to its failure to access the content.

Accuracy Concerns and Misinformation

Despite claims of high accuracy and reliability, Perplexity continues to produce misinformation. In response to prompts about specific articles, the chatbot generated inaccurate details about their content, including fabricated endings and citations to unrelated sources.

One such instance involved a claim that a WIRED article had reported a police officer stealing bicycles, which was false. The Chula Vista Police Department also confirmed that the officer in question had not committed the alleged theft.

AI Chatbot Accused of “Hallucinating”

These instances of misinformation have led to accusations that the chatbot is “hallucinating,” or producing incorrect information. Philosophers have likened the behavior to “bullshitting,” since these AI systems prioritize generating truth-like text over actual truthfulness.
