Facebook will test artificial intelligence to stop fighting in groups

(CNN Business) – Online conversations can quickly spiral out of control, which is why Facebook hopes artificial intelligence can help keep things in order.

The social network will test the use of artificial intelligence to detect fights in its many groups, so that group administrators can help calm things down.

The announcement came in a blog post Wednesday, in which Facebook released a series of new software tools to help the more than 70 million people who run and moderate groups on its platform. Facebook, which has 2.85 billion monthly users, said at the end of last year that more than 1.8 billion people participate in groups each month and that there are tens of millions of active groups on the social network.

Among Facebook’s new tools, artificial intelligence will decide when to send what the company calls “conflict alerts” to those who maintain groups. Alerts will be sent to administrators if the AI determines that a conversation in their group is “controversial or disruptive,” the company said.


Subtlety and artificial intelligence

For years, tech platforms like Facebook and Twitter have increasingly relied on artificial intelligence to determine much of what is seen online, from the tools that detect and remove hate speech on Facebook to the tweets that appear on your Twitter timeline. This can be useful for curbing content that users don’t want to see. AI can also help human moderators clean up social networks that have grown too big for people to monitor on their own.

But artificial intelligence can fail when it comes to understanding subtlety and context in online posts. The ways in which AI-based moderation systems work can also seem mysterious and hurtful to users.

A Facebook spokesperson said the company’s artificial intelligence will use various signals from a conversation to determine when to send a conflict alert, including the response times to comments and the volume of comments on a post. The spokesperson added that some admins already have keyword alerts set up that can detect topics that may lead to arguments.
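Facebook has not published details of its model, but a rough heuristic combining the signals the spokesperson mentioned (reply speed, comment volume, keyword matches) might look something like the sketch below. Every threshold, weight, and name here is an illustrative assumption, not Facebook's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical keyword list of the kind an admin might configure.
ARGUMENT_KEYWORDS = {"idiot", "shut up", "dumb"}

@dataclass
class Comment:
    text: str
    seconds_since_previous: float  # reply latency relative to the prior comment

def conflict_score(comments: list[Comment]) -> float:
    """Combine three signals into a 0..1 score (all weights are made up).

    Signals:
      - fast replies: share of comments arriving under 30 seconds apart
      - volume: raw number of comments on the post, saturating at 50
      - keywords: share of comments containing a configured trigger word
    """
    if not comments:
        return 0.0
    fast = sum(c.seconds_since_previous < 30 for c in comments) / len(comments)
    volume = min(len(comments) / 50, 1.0)
    hits = sum(any(k in c.text.lower() for k in ARGUMENT_KEYWORDS) for c in comments)
    keywords = min(hits / len(comments), 1.0)
    return 0.5 * fast + 0.2 * volume + 0.3 * keywords

def should_alert(comments: list[Comment], threshold: float = 0.6) -> bool:
    """Send a 'conflict alert' to admins once the score crosses a threshold."""
    return conflict_score(comments) >= threshold
```

In practice a production system would use a trained classifier rather than fixed weights, but the shape of the decision (many signals folded into one alert threshold) would be similar.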


How does it work?

If an admin receives a conflict alert, they could take actions that Facebook says are aimed at slowing down conversations, presumably in the hope of calming users down. These moves include temporarily limiting how often some group members can post comments and how quickly comments can be made on individual posts.

Screenshots of a mock-up that Facebook used to show how this could work depict a derailed conversation in a group called “Other Peoples Puppies,” where one user responds to another’s post by writing, “Shut up, you’re so dumb. Stop talking about ORGANIC FOOD, you idiot!!!”
“IDIOTS!” another user responds in the example. “If this nonsense keeps happening, I’m leaving the group!”

The conversation appears on a screen with the words “Moderation Alerts” at the top, below which several words appear in black font within gray bubbles. On the right, the word “Conflict” appears in a blue bubble.

Another set of screenshots illustrates how an administrator could respond to a heated conversation, not about politics, vaccines, or culture wars, but about the merits of ranch dressing versus mayonnaise, by limiting a member to posting, say, five comments in group posts for the next day.
