As Facebook scrambles to deal with its most problematic groups in the wake of January’s assault on the US Capitol, a new report finds that leaders inside the company knew as long ago as August that 70 percent of its top “civic” groups had too much hate speech, misinformation, violent rhetoric, or other toxic behavior to be recommended to other users.
Researchers inside the company warned executives months ago that top groups were plagued with misinformation and calls to violence, The Wall Street Journal reported Sunday. “We need to do something to stop these conversations from happening and growing as quickly as they do,” the researchers wrote in internal documents obtained by the WSJ.
Of the top 100 most-active US civic groups, 70 percent “are considered non-recommendable for issues such as hate, misinfo, bullying, and harassment,” the presentation said. “Our existing integrity systems aren’t addressing these issues.”
Facebook deliberately pivoted in 2019 to make groups the centerpiece of its service—literally, on its mobile apps. But issues with groups plagued the platform for years before that. Although many Facebook groups are indeed small, relatively congenial, and beneficial to their users, both civil rights advocates and Facebook’s own researchers have warned for years that the way groups are managed and recommended to users increases extremism and radicalization.
Admitting you have a problem
In the days after the insurrection at the Capitol, Facebook COO Sheryl Sandberg downplayed concerns that Facebook played any role in bringing the mob together.
“I think these events were largely organized on [other] platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency,” Sandberg told Reuters on January 11. “Certainly,” she added, “to this day, we are working to find any single mention that might be linked to this and making sure we get it down as quickly as possible.”
Media reports over the next two days, however, revealed that much of the planning for the events of January 6 did indeed happen on Facebook. The New York Times, The Wall Street Journal, and Reuters all ran stories within hours of each other finding that violent rhetoric had increased dramatically on Facebook into the first week of January and remained high ahead of President Joe Biden’s inauguration.
Nor was Facebook, in fact, taking action on "any single mention" of violence at the time, as Sandberg insisted. Even a cursory half-hour search turned up several groups and events filled with calls for "patriots" to come together and march on Washington during the week of the inauguration, including some with explicit calls for harm against figures such as Speaker of the House Nancy Pelosi (D-Calif.).
The August report was far from the first warning Facebook had about toxicity on its platform. In May 2020, The Wall Street Journal obtained internal Facebook documents showing that executives had known since 2016 that the platform's recommendation engine explicitly drove users toward some of the site's worst content.
“64% of all extremist group joins are due to our recommendation tools,” the 2016 presentation found, particularly the “Groups You Should Join” and “Discover” systems. “Our recommendation systems grow the problem,” the researchers concluded.
A Facebook representative told the WSJ at the time, “We’ve learned a lot since 2016 and are not the same company today.” Given the August 2020 report, however, it seems perhaps Facebook did not learn quite enough.
In October, amid heightened tensions preceding the 2020 US presidential election, Facebook said it would stop recommending "political" groups to users. At about the same time, however, the company doubled down on group recommendations with an update that made both the groups tab and an individual's newsfeed promote content from groups a user does not subscribe to, in order to increase engagement with both the content and the groups.
Unfortunately, recent research has shown that the recommendation ban is not, in fact, blocking political groups from being recommended to users.
In January, tech news site The Markup reported on group recommendations Facebook was continuing to make to users. Political groups were definitely on the list—and they were showing up in a decidedly lopsided fashion. Facebook users who voted for Donald Trump were the most likely to have political groups promoted to them: 23 of the top 100 groups promoted to that cohort were political in nature, as compared to 15 for users who voted for Joe Biden and zero for nonvoters.
Several of the groups The Markup found recommended to Trump voters contained posts targeting elected officials and alleging falsely that Trump won the election. "Kayleigh McEnany Fan Club," one of the groups the Facebook research team said back in August had a "toxic atmosphere" and served only to distribute "low-quality, highly divisive, likely misinformative news content," was recommended to more than 18 percent of the Trump voters in The Markup's panel.
“A record of broken promises”
"Facebook does not just allow these dangerous pages to exist on its platform, it recommends them to users," Sen. Edward Markey (D-MA) wrote in a January 26 letter (PDF) to Facebook CEO Mark Zuckerberg, referencing The Markup's report. "Facebook's recommendations allegedly reached users across the ideological spectrum, and it appears that Facebook continued to recommend political groups to its users in January 2021, even after your company reiterated its commitment not to do so."
Two days later, Zuckerberg said in a call with investors that the moratorium on recommending political or civic groups to users would be extended permanently. Markey applauded Facebook's action… cautiously.
“I am pleased to see that Facebook is heeding my calls and has pledged to permanently stop recommending political groups to its users, as a matter of policy,” Markey said in a written statement. “Frankly, though, Facebook has a record of broken promises, and I’ll be watching closely to see whether it keeps this commitment.”