Facebook has said its users are posting more in the social network’s private spaces, including groups and messaging, a shift that could make it tougher for the tech giant to police offensive content.
On Wednesday, the company said it’s taking new steps to stop misinformation, scams and other “problematic” content from going viral on the platform, with some of the changes applying to private Facebook groups, which let users post content only group members can see. The company has been criticized in the past for not doing enough to stop misinformation about vaccines and other topics from spreading in groups.
The new steps provide a glimpse into how the world’s largest social network, which has more than 2 billion users worldwide, is moderating content as its users shift more of their activity into private spaces.
“Ultimately, the balance between protecting people’s privacy and protecting public safety is something that societies have been grappling with for centuries probably, and we’re certainly grappling with it,” Guy Rosen, Facebook’s vice president of integrity, said during a press conference at the company’s Menlo Park, California, headquarters.
The company said that in the coming weeks it’ll start looking at how administrators and moderators of Facebook groups decide what content to keep up. That’ll help Facebook determine whether a group is violating the social network’s rules. The company is also releasing a Group Quality feature so users can see what content was removed and flagged, including fake news. Facebook groups that repeatedly share misinformation will show up lower on the social network’s News Feed.
Facebook has community standards that prohibit hate speech, nudity, violence and other offensive content. Misinformation and clickbait, though, don’t always violate Facebook’s rules, unless there’s a risk of offline violence or the content is trying to discourage or prevent people from voting.
Antivaccine content, for example, can fall in a “gray area” because it’s challenging to link content to something that happens offline, said Tessa Lyons, Facebook’s head of News Feed integrity.
Facebook said it’s been using technology, human reviewers and user reports to flag and remove content in groups that violates its rules, even if the groups aren’t public. That’s allowed Facebook to proactively detect offensive content even before someone reports it to the company, Rosen said.
The company said it’ll also soon let people remove their posts and comments from a group even if they’re no longer a member.
This week, Facebook is also adding a verified badge for high-profile people in its messaging app, so users can tell whether a scammer is impersonating someone. Earlier this year, as part of an effort to combat misinformation, the company released a tool that lets users know if a message has been forwarded.
The social network unveiled a variety of other steps it’s taking to combat fake news, following criticism that its efforts aren’t working well enough. Facebook said it’s working with journalists, fact-checking experts, researchers and other groups to find new ways to fight misinformation more quickly. The Associated Press, which reportedly stopped fact-checking for the company in February, is returning to fact-check videos and Spanish-language content in the US.
Facebook, though, acknowledged it still has more work to do as user behavior on the site changes.
Users are sharing photos and videos that vanish after 24 hours via a feature called Stories, which makes policing that content more challenging.
“The format’s ephemerality means we need to work even faster to remove violating content,” Rosen and Lyons said in a blog post. “The ability to add text, stickers and drawings to photos and videos can be abused to mask violating content.”
Originally published April 10, 10 a.m. PT.
Update, 12:17 p.m.: Adds remarks from Facebook’s press conference.