Chatbots are surprisingly effective at debunking conspiracy theories

2025-10-30
In research we published in the journal Science this year, we had over 2,000 conspiracy believers engage in a conversation with DebunkBot, a model we built on top of OpenAI’s GPT-4 Turbo (the most up-to-date GPT model at that time). Participants began by writing out, in their own words, a conspiracy theory they believed and the evidence that made the theory compelling to them. We then instructed the AI model to persuade each participant to stop believing in that conspiracy and adopt a less conspiratorial view of the world.

A three-round back-and-forth text chat with the AI model, lasting 8.4 minutes on average, led to a 20% decrease in participants’ confidence in the belief, and about one in four participants—all of whom believed the conspiracy theory beforehand—indicated that they no longer believed it after the conversation. This effect held true for both classic conspiracies (think the JFK assassination or the moon landing hoax) and more contemporary, politically charged ones (like those related to the 2020 election and covid-19).
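To make the protocol concrete, here is a minimal sketch of what such a three-round debunking chat could look like using OpenAI’s chat completions API. The prompt wording, model identifier, and the way participant turns are collected are illustrative assumptions, not the study’s actual code.

```python
# Hypothetical sketch of a DebunkBot-style conversation: the model is told the
# participant's self-reported conspiracy theory and supporting evidence, then
# argues against it over three rounds of back-and-forth text chat.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def debunk_conversation(belief: str, evidence: str, rounds: int = 3) -> list[dict]:
    """Run a short chat in which the model tries to lower the participant's
    confidence in a conspiracy theory they described in their own words."""
    messages = [
        {
            "role": "system",
            "content": (
                "The participant believes the conspiracy theory below and finds the "
                "cited evidence compelling. Using accurate facts and a respectful tone, "
                "persuade them to adopt a less conspiratorial view of the world.\n\n"
                f"Conspiracy theory: {belief}\n"
                f"Evidence the participant finds compelling: {evidence}"
            ),
        },
        {"role": "user", "content": f"I believe this because: {evidence}"},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # the study used GPT-4 Turbo; this exact identifier is an assumption
            messages=messages,
        )
        assistant_text = reply.choices[0].message.content
        print(f"\nDebunkBot: {assistant_text}\n")
        messages.append({"role": "assistant", "content": assistant_text})
        # In the study, each round ended with a real participant reply; here we
        # read one from stdin to keep the sketch interactive.
        messages.append({"role": "user", "content": input("Participant: ")})
    return messages
```

In the actual experiment, confidence in the conspiracy theory was measured before and after this exchange, which is how the roughly 20% drop was observed; the sketch above only covers the conversational loop itself.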