Critics claim it’s creating misinformation instead of delivering reliable answers.
Nvidia CEO Jensen Huang’s claim that he uses Perplexity almost daily is a significant endorsement. However, recent accusations against the AI chatbot might lead some to think twice about using it. Critics have accused Perplexity of dishonesty and theft, and one publication has even threatened legal action for copyright infringement.
With numerous AI chatbots available, users can choose their favorites based on personal preference and the perceived strengths of each service. Perplexity, an AI chatbot that positions itself as a conversational search engine, is popular for annotating its answers with links to the articles that supplied the information. That citation feature might seem like a harmless safeguard against AI hallucinations, but several publications are now challenging that assumption.
Earlier this month, Forbes accused the chatbot of content theft. The article claimed that Perplexity’s new tool quickly rewrites Forbes’ articles, using similar wording and lifting some fragments wholesale. The rewritten content read like journalism but didn’t properly credit Forbes, offering only a vague mention of “sources” and a small icon resembling the Forbes logo.
Wired followed up with an article titled “Perplexity is a Bullshit Machine,” alleging that the chatbot not only scrapes content but also fabricates information.
This controversy is notable for a company that recently raised about $63 million in a funding round, doubling its valuation to over $1 billion in just three months. Despite the allegations, Perplexity has gained many fans quickly, including Nvidia CEO Jensen Huang, who uses the product “almost every day.”
According to Wired, Perplexity appears to be violating a fundamental internet rule by ignoring the Robots Exclusion Protocol, a widely adopted standard that lets websites tell bots which parts of a site they may not access. Wired observed a machine associated with Perplexity scraping restricted areas of its own site and of other Condé Nast publications.
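For context, the Robots Exclusion Protocol works through a plain-text robots.txt file placed at the root of a website. A hypothetical file like the one below (an illustrative example, not Wired’s actual configuration) tells every crawler to stay out of a given directory, and well-behaved bots check these directives before fetching any pages:

    # Hypothetical robots.txt example – the directory name is illustrative
    User-agent: *
    Disallow: /subscribers-only/

The accusation is that Perplexity’s crawler reads pages that such directives explicitly place off limits.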
Wired also accused Perplexity of producing inaccurate summaries with minimal attribution, citing an instance where the chatbot falsely claimed Wired had reported that a specific California police officer committed a crime.
Forbes raised even more serious concerns about attribution. After Perplexity copied its content, Forbes claimed the chatbot sent the rewritten story to subscribers via a mobile push notification and created an AI-generated podcast and YouTube video on the story without crediting Forbes. The video even outranked Forbes’ original content on Google search.
While there are ongoing legal and ethical debates about AI’s use of online content, Perplexity’s alleged actions place it at significant legal risk. Forbes has already sent Perplexity a letter threatening legal action for “willful infringement” of its copyrights.