
Can AI Make Social Media Less Toxic? A Chatbot Study Shows Promise

Can AI help foster healthier social media interactions? Here's what researchers found after studying the interactions of 500 chatbots.

  • Researchers used AI chatbots to simulate social media users in experimental models.
  • The chatbots showed less toxic behavior when exposed to cross-party engagement.
  • More work is needed to ensure the chatbots accurately reflect human behavior.

Artificial intelligence (AI) may offer ways to reduce toxic, polarizing activity on social media.

In a recent study, researchers suggest that tweaking how social networks surface content could reduce partisan divisions and lead to more positive interactions.

The study was conducted by a team led by University of Amsterdam professor Petter Törnberg. They programmed 500 chatbots with unique political and demographic profiles based on survey data.

The bots were made to read real news and post about it within simulated Twitter/X environments.

In this setup, the bots showed greater enjoyment in finding common ground and less toxic behavior. As AI mimics people more realistically, studies like this one may offer insight into how to bring people together on social media while avoiding ethical pitfalls.

More On The Study: Simulating a Social Network

Törnberg’s team programmed chatbots using GPT-3.5, the model behind ChatGPT. Each bot was assigned a political affiliation, age, gender, income, religion, favorite sports teams, and more, giving the researchers a diverse simulated population.
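The study's actual persona schema isn't published in this article, but the assignment step can be sketched roughly as below. All field names and value pools are illustrative assumptions, not the survey data the researchers used:

```python
import random

# Illustrative attribute pools -- assumptions for the sketch,
# not the study's actual survey-derived categories.
PARTIES = ["Democrat", "Republican", "Independent"]
RELIGIONS = ["Catholic", "Protestant", "Jewish", "Muslim", "None"]
TEAMS = ["Yankees", "Cowboys", "Lakers", "Packers"]

def make_persona(bot_id: int) -> dict:
    """Assign one simulated user a demographic and political profile."""
    return {
        "id": bot_id,
        "party": random.choice(PARTIES),
        "age": random.randint(18, 80),
        "gender": random.choice(["male", "female", "nonbinary"]),
        "income": random.choice(["<30k", "30-60k", "60-100k", ">100k"]),
        "religion": random.choice(RELIGIONS),
        "favorite_team": random.choice(TEAMS),
    }

# A simulated population of 500 bots, matching the study's scale.
population = [make_persona(i) for i in range(500)]
```

Each profile would then be folded into the bot's prompt so the language model posts "in character" as that simulated user.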

On a simulated day in July 2020, the bots read actual news headlines from that period on topics like COVID-19 and Black Lives Matter protests. The bots then commented, liked, and interacted with posts about the news headlines.

The researchers created three different experimental Twitter models:

  1. An echo chamber model showed bots only posts from others with similar views.
  2. A discovery model ranked posts purely by engagement, regardless of beliefs.
  3. The third “bridging” model highlighted posts liked by opposing partisan groups to optimize for cross-party interaction.

Tracking Bot Behavior

The simulations ran for 6 hours, with researchers tracking how the bots behaved.

In the bridging model, bots expressed greater happiness at finding common ground on issues like LGBTQ+ rights in country music. There was also significantly more cross-party interaction and fewer toxic exchanges than in the polarized echo chamber model.

“If people are interacting on an issue that cuts across the partisan divide, where 50% of the people you agree with vote for a different party than you do, that reduces polarization,” explained Törnberg. “Your partisan identity is not being activated.”

The results suggest social media could be designed to drive engagement without fueling abuse between differing groups. However, more research is needed to validate whether advanced AI chatbots can faithfully simulate human behavior online.

Ethical Concerns

Ethical concerns remain around the private data used to train humanlike bots. They could potentially be programmed with people’s social media posts, browsing history, or confidential records, raising consent issues.

Guidelines are likely needed regarding the rights of people whose data trains bots for studies like this one.

As AI chatbots act more human, they may shed light on reducing toxicity on social media. However, researchers must ensure digital doubles also reflect the best in humanity, not the worst.

Featured Image: CkyBe/Shutterstock

Matt G. Southern, Senior News Writer at Search Engine Journal, has been with the publication since 2013.
