Artificial Intelligence (AI) helps reduce online harassment, study finds

Researchers at BYU and Duke harness AI to foster empathetic discourse and combat online harassment, driving a kinder digital landscape

[Oct. 6, 2023: Staff Writer, The Brighter Side of News]

Researchers at BYU and Duke harness AI to foster empathetic discourse and combat online harassment. (CREDIT: Creative Commons)

In today's digital era, a significant portion of our interactions occur online. From the chatty comments on our favorite YouTuber's latest video to spirited debates on social media about contemporary political issues, the internet offers myriad platforms for individuals to express their opinions.

However, it also offers anonymity that often culminates in uncivil and sometimes even dangerous interactions.

Scroll through comments on social media or digital news platforms, and you're more likely to find a negative sentiment than a positive one. This toxic culture isn't an assumption; it is backed by data.

A notable survey from the Pew Research Center shows that an alarming 41% of American adults have been on the receiving end of online harassment. More concerning, 20% of adults say they've been targeted online specifically because of their political beliefs.


However, there might be a light at the end of this digital tunnel. Recent research from the combined efforts of Brigham Young University (BYU) and Duke University has unearthed a potential solution that leverages cutting-edge technology: Artificial Intelligence (AI). According to their joint study, AI has the potential to enhance the quality of online conversations, thereby fostering an environment of civil dialogue.

An Experiment in Civil Discourse

The experiment that underscored these findings was both unique and timely. The research team, which included BYU undergraduate Vin Howe, developed an exclusive online platform. On this platform, participants with clashing viewpoints on American politics, particularly on the contentious issue of gun control, were brought together for a virtual chat.

AI-suggested rephrasings didn't alter the content of the comment but gave the user options to make a more polite statement. (CREDIT: Vin Howe)

Here's where AI comes into play. As these individuals conversed, AI intermittently prompted one of them with a suggestion: rephrase your message to make it sound more polite or friendly without altering its core meaning. While this might sound simple, the implications were profound. Participants were free to heed, adapt, or ignore the AI's advice. After the conversation, they filled out a survey to gauge its quality.

Over 1,500 participants took part in this experiment, yielding 2,742 instances in which the AI's rephrasing suggestions were accepted. The outcome? Those on the receiving end of these AI-assisted messages reported a considerable improvement in conversation quality. More impressively, they showed a greater willingness to genuinely listen to their opponents' views.

Treated conversation flow: Respondents write messages unimpeded until one partner receives a rephrasing prompt for the first message longer than four words, and every other conversational turn thereafter. (CREDIT: PNAS)
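The prompting schedule described in the caption above can be sketched in a few lines of code. This is an illustrative reconstruction only, assuming a simple word-count trigger; the function name, state representation, and sample messages are hypothetical and not drawn from the authors' actual implementation.

```python
from typing import Optional, Tuple

def should_prompt(message: str,
                  turns_since_first_prompt: Optional[int]) -> Tuple[bool, Optional[int]]:
    """Decide whether the treated partner receives a rephrasing prompt.

    Per the study's described flow: no prompts until the first message
    longer than four words, then a prompt on every other turn thereafter.
    """
    if turns_since_first_prompt is None:
        # Not yet triggered: wait for the first message longer than four words.
        if len(message.split()) > 4:
            return True, 0
        return False, None
    # After the first prompt, fire on every other conversational turn.
    turns_since_first_prompt += 1
    return turns_since_first_prompt % 2 == 0, turns_since_first_prompt

# Walk a short (hypothetical) conversation through the schedule.
state = None
messages = ["hi", "I think gun laws need reform now", "why?",
            "because background checks are inconsistent", "hmm",
            "states vary widely in enforcement"]
prompted = []
for msg in messages:
    fire, state = should_prompt(msg, state)
    prompted.append(fire)
# prompted -> [False, True, False, True, False, True]
```

Under this sketch the short opener goes unprompted, the first substantive message triggers the initial suggestion, and prompts then alternate turn by turn, matching the flow in the figure.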

David Wingate, a computer science professor at BYU and co-author of the study, shared his insights: “We found the more often the rephrasings were used, the more likely participants were to feel like the conversation wasn’t divisive and that they felt heard and understood.”

The Ethical Side of AI in Conversations

A crucial distinction emphasized by the researchers is that the AI's role was to assist, not influence. While AI's capacity to shape or alter content and viewpoints is a legitimate concern, this experiment steered well clear of that territory.

Text analysis of rephrased message tone: marginal difference, with 95% CIs, between rephrased message scores on five politeness package features and baseline scores from the original messages participants would have sent had they not chosen the rephrasing. (CREDIT: PNAS)

Wingate noted the differentiation between this AI application and "persuasive AI," which treads on ethical ambiguity. He stated, “But helping people have productive and courteous conversations is one positive outcome of AI.”

A Promising Future for Online Conversations

The implications of this AI experiment are monumental. Online toxicity isn't a new phenomenon; it has haunted the virtual world for decades. Conventional remedies, like training sessions led by moderators, are limited in reach and impact. But an AI-powered solution? That could be deployed broadly across digital platforms.

Analysis of semantic content of messages. Panel (A) presents a visualization of the topical distribution of messages sent on the platform. Each point is the semantic embedding of a message; points that are close to each other represent messages that are semantically similar. (CREDIT: PNAS)

Harnessing AI in this manner could transform online spaces. Platforms could become hubs where individuals, irrespective of their backgrounds and ideologies, converge to engage in constructive dialogue. This study stands as a testament to the fact that, when integrated judiciously, AI can be instrumental in shaping a more affable online environment.

Professor Wingate encapsulated the sentiment perfectly: “My hope is that we’ll continue to have more BYU students build pro-social applications like this and that BYU can become a leader in demonstrating ethical ways of using machine learning. In a world that is dominated by information, we need students who can go out and wrangle the world’s information in positive and socially productive ways.”

As we navigate an increasingly interconnected world, fostering respect and understanding becomes paramount. This groundbreaking research reiterates the potential of technology, specifically AI, in achieving this objective, setting a precedent for the future of online interactions.


Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.



Joseph Shavit
Space, Technology and Medical News Writer
Joseph Shavit is the head science news writer with a passion for communicating complex scientific discoveries to a broad audience. With a strong background in science, business, product management, media leadership, and entrepreneurship, Joseph has the unique ability to bridge the gap between business and technology, making intricate scientific concepts accessible and engaging to readers of all backgrounds.