AI chatbots may be prone to hallucinating made-up information, but new research suggests they can also push back against unfounded ideas in human minds. Scientists at MIT Sloan and Cornell University have published a paper in Science claiming that conversing with a chatbot powered by a large language model (LLM) reduces belief in conspiracy theories by about 20%.

To see how an AI chatbot might affect conspiratorial thinking, the scientists arranged for 2,190 participants to discuss conspiracy theories with a chatbot running OpenAI's GPT-4 Turbo model. Participants were asked to describe a conspiracy theory they found credible, including the reasons and evidence they believed supported it. The chatbot, prompted to be persuasive, responded with counterarguments tailored to each participant's input. The study addressed the perennial AI hallucination issue by having a professional fact-checker evaluate 128 claims made by the chatbot during the study. The claims were 99.2% accurate, which the researchers attributed to the extensive online documentation of conspiracy theories represented in the model's training data.

The idea behind turning to AI to debunk conspiracy theories was that its deep information reservoirs and adaptable conversational style could reach people through a personalized approach. Based on follow-up assessments ten days and two months after the first conversation, it worked. Most participants showed reduced belief in the conspiracy theories they had espoused, "from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the Illuminati, to those pertaining to topical events such as COVID-19 and the 2020 US presidential election," the researchers found.

Factbot Fun

The results surprised the researchers, who had hypothesized that people are largely unreceptive to evidence-based arguments debunking conspiracy theories. Instead, the findings show that a well-designed AI chatbot can present counterarguments effectively, producing a measurable change in belief. The researchers concluded that AI tools could be a boon in combating misinformation, albeit one requiring caution, since the same technology could also be used to mislead people.

The study supports the value of projects with similar goals. For instance, fact-checking site Snopes recently released an AI tool called FactBot to help people figure out whether something they've heard is real or not. FactBot uses Snopes' archive and generative AI to answer questions, so users don't have to comb through articles with more traditional search methods. Meanwhile, The Washington Post created Climate Answers to clear up confusion on climate change issues, relying on its climate journalism to answer questions directly on the topic.

“Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence. From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit,” the researchers wrote. “Practically, by demonstrating the persuasive power of LLMs, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimizing opportunities for this technology to be used irresponsibly.”
