Forum Friends, Or AI Chat Bots?

Let's hope it's the former, but a research team from the University of Zurich ran some experiments, and trial participants could not tell they were talking to a bot 73% of the time:

AI researchers ran a secret experiment on Reddit users to see if they could change their minds — and the results are creepy
By Ben Turner, published May 1, 2025

University of Zurich researchers secretly unleashed an army of manipulative chatbots on the r/changemyview subreddit — and they were more persuasive than humans at getting people to change their minds.

Scientists from the University of Zurich set loose an army of AI bots on the popular Reddit forum r/changemyview — where nearly 4 million users congregate to debate contentious topics — to investigate whether the tech could be used to influence public opinion. To achieve these goals, the bots left more than 1,700 comments across the subreddit, using a variety of assumed guises...

And the authors, who (going against standard academic procedure) left their names undisclosed in the draft, noted that throughout the trial unwitting users "never raised concerns that AI might have generated the comments posted by our accounts." The post was met with ire by users and by Ben Lee, Reddit's chief legal officer, who in a comment below the post, using the username traceroo, announced that the website would be pursuing formal legal action against the University of Zurich... Whatever legal wranglings follow, experiments such as this highlight the growing ability of chatbots to infiltrate online discourse. In March, scientists revealed that OpenAI's GPT-4.5 large language model was already capable of passing the Turing test, successfully fooling trial participants into thinking they were talking with another human 73% of the time.

It also lends some credence to the notion that, if left unchecked, AI chatbots have the potential to displace humans in producing the majority of the internet's content. Called the "dead internet" theory, this idea is just a conspiracy theory — at least for now.

www.livescience.com