Mushrooming Misinformation: Generative AI Poses Deadly Threat 

Washington, D.C. — Today, Public Citizen released a new report on the proliferation of AI-powered tools marketed to mushroom foragers and the potentially deadly consequences of misinformation provided by the emerging technology.

The report highlights AI-powered identification apps, whose use has led to foragers being hospitalized after eating toxic mushrooms, as well as generative AI foraging chatbots and image generators that can produce dangerous misinformation.

While some AI tools can be helpful resources for mushroom foragers when used in conjunction with other sources of information, they are no substitute for human knowledge and experience. Too often, businesses promoting these technologies overhype their abilities.

“As AI-powered systems like ChatGPT are increasingly available to consumers, the technology’s potential to spread misinformation — from misidentified toxic mushrooms to deepfaked videos of political candidates — is of urgent concern,” said Rick Claypool, research director at Public Citizen and author of the report. “When these technologies corrupt valuable sources of trustworthy information, the results can be catastrophic. Businesses should be forthright about the technology’s limitations – and liable when deceptive content results in users making harmful choices.” 

Top findings from the report include:

  • Individuals relying solely on AI technology for mushroom identification have been severely sickened and hospitalized after consuming wild mushrooms that AI systems misidentified as edible.
  • Generative AI tools built on OpenAI’s ChatGPT and DALL-E systems are being used to create mushroom identification chatbots that have been found to produce confusing and dangerous misinformation, which could result in severe poisonings and death.
  • Amazon’s online marketplace was inundated in 2023 with books reportedly generated by AI, leading the company to limit the number of books any individual can self-publish per day.
  • Businesses behind these technologies must disclose their use of AI and consistently remind users that AI makes mistakes. When AI systems sold as sources of truthful information instead produce deceptive content that results in users making harmful decisions, businesses must be held liable.

For more information on the report or to speak with Rick Claypool, please contact Emily Leach at eleach@citizen.org.