I.

Recently the president did a bad post, and a lot of people got mad. Twitter hid the post behind a warning. Snap said it wouldn’t promote the president’s account any more. Facebook said the post was bad but would stay up, and that made a lot of people at Facebook mad — current employees, contractors, former employees, and even employees of the Chan Zuckerberg Initiative, who don’t work for Facebook but are indirectly funded by it.

The president’s post inspired a lot of debate about what counts as an incitement to violence, about whether you should be able to kick a head of state off your platform, and about whether platforms are getting played by fascists. Those are all good discussions to have, though they mostly sidestep a larger point, which is that an enormous amount of political speech now transpires on a platform used by 1.73 billion people a day, and whose rules about speech are ultimately decided and enforced by a single person: Facebook’s CEO, Mark Zuckerberg, who controls a majority of the company’s voting shares.

Later this year, when Facebook’s independent Oversight Board begins hearing cases, you might have some recourse if the company makes a speech-related decision that doesn’t go your way. But you don’t have any recourse now, and this is one reason why, when people get mad at Facebook, they tend to get really mad at Facebook. You know what they say about the relationship between justice and peace: you can’t have one without the other. And while Facebook’s content moderation is good at a lot of things, it feels like a stretch to call it just.

One thing you could do if you wanted to distribute power over speech more broadly is to entrust some of it to your users. A company that has done this with some success is Reddit, which sets a “floor” of speech rules (no spam!) but allows individual forums (called subreddits) to raise the “ceiling.” A religious subreddit might ban cursing, for example. Or a forum related to thoughts had in the shower might ban the publication of non-shower thoughts. Speech is much more easily moderated when people in the room have shared context around values and interests, and so “I want to read what thoughts people have in the shower” is a discussion more effectively policed than “let’s see what 1.73 billion people have to say about current affairs.”
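To make the floor-and-ceiling idea concrete, here is a minimal sketch, in Python, of how layered rules might compose. Everything in it (the rule names, the Community class, the allows method) is hypothetical and purely illustrative; it is not how Reddit actually implements moderation.

```python
# A hypothetical sketch of "floor and ceiling" moderation: the platform
# sets baseline rules that every community inherits, and a community can
# only add stricter rules on top. Illustrative only; this is not Reddit's
# actual implementation.

PLATFORM_FLOOR = {"no_spam", "no_nonconsensual_images", "no_incitement"}

class Community:
    def __init__(self, name, extra_rules=()):
        # Raise the ceiling by adding local rules; the platform-wide
        # floor can never be removed.
        self.name = name
        self.rules = PLATFORM_FLOOR | set(extra_rules)

    def allows(self, post_violations):
        """A post is allowed only if it violates none of the community's
        effective rules (the floor plus any local additions)."""
        return self.rules.isdisjoint(post_violations)

showerthoughts = Community("showerthoughts", {"no_non_shower_thoughts"})

print(showerthoughts.allows({"no_spam"}))                 # False: floor rule
print(showerthoughts.allows({"no_non_shower_thoughts"}))  # False: local rule
print(showerthoughts.allows(set()))                       # True
```

The useful property here is the asymmetry: a community can make its effective rule set stricter, never looser, which keeps the platform-wide floor intact no matter how idiosyncratic the local ceiling gets.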

II.

But community-based moderation creates injustices, too. Six years ago, for example, Reddit’s laissez-faire attitude toward moderation led a lot of people to post stolen nude photos on the site without the subjects’ consent. Reddit no longer permits that, but it has allowed a lot of overtly racist speech, including some that even the company acknowledges is meant as an incitement to violence. A prime offender in all this is a forum devoted to the president, which has been “quarantined” behind a warning screen for a year now.

As protests over the killing of George Floyd by police have spread around the world, lots of companies have made statements affirming a new commitment to the black community and its allies. Reddit issued such a statement last week. The company said it would honor the request of departing co-founder Alexis Ohanian to replace him on the board with a black person. And it acknowledged that historically it has been a haven for the spread of racist ideology. Here’s CEO Steve Huffman:

While we dealt with many communities themselves, we still did not provide the clarity — and it showed, both in our enforcement and in confusion about where we stand. In 2018, I confusingly said racism is not against the rules, but also isn’t welcome on Reddit. This gap between our content policy and our values has eroded our effectiveness in combating hate and racism on Reddit; I accept full responsibility for this. […]

Despite making significant progress over the years, we have to turn a mirror on ourselves and be willing to do the hard work of making sure we are living up to our values in our product and policies. This is a significant moment. We have a choice: return to the status quo or use this opportunity for change. We at Reddit are opting for the latter, and we will do our very best to be a part of the progress.

In response, Huffman was called out by former Reddit CEO Ellen Pao for not taking action against racist subreddits earlier. Several large subreddits joined in protest against the site, either temporarily banning new posts or going private to prevent new users from joining.

The furor has calmed since then, but most of the hard questions remain unanswered. How do you attract, train, and retain an army of volunteers to moderate racist speech across every forum on Reddit? How much of the responsibility should belong to individual moderators, and how much should belong to Reddit itself? How much of Reddit’s delegation of authority was really just a dereliction of duty?

“So much of what is happening now lies at your feet,” Pao told Huffman. “You don’t get to say BLM when Reddit nurtures and monetizes white supremacy and hate all day long.”

III.

Of course, decentralizing moderation the way Reddit does isn’t a new idea to Facebook. In fact, the company employs a similar approach in Facebook Groups, its own take on smaller forums. Zuckerberg has said that groups represent a top priority for the company, so it feels like a good time to check in on the state of moderation there.

As on Reddit, Facebook group moderators can raise the ceiling for moderation to shape the discussion and ban certain kinds of posts. In The Verge, Ashley Carman has an important story about how Facebook groups have handled the recent surge of content related to Black Lives Matter. More conservative moderators have taken a heavy-handed approach to removing such posts — no non-shower thoughts in the shower thoughts forum! But that has outraged users who see the ongoing protests and the injustices they are drawing attention to as matters of basic human rights, worthy of discussion in whatever context the speaker finds relevant.

Carman writes:

Boss-Moms is one of many Facebook groups grappling with inadequate moderation policies as members attempt to discuss Black Lives Matter. The groups, which range in focus from video games to music to local communities, are moderated by other group members. The moderators have no formal training from Facebook or outside sources and make their own decisions about what content is and is not allowed. Most groups have no reference point for how to give everyone a voice, and that’s led to fighting between members, people leaving, groups temporarily shutting down, and splinter groups breaking off.

Many groups don’t have people of color as moderators, adding to the moderation problem. Roop Mangat belongs to a pair of local community Facebook groups — one in which no people of color or women are moderators, and another that has a white woman moderator who serves alongside only white men. Mangat posted the same message to both groups, urging people to take racism in the community seriously. One of them deleted it. “They think anything ‘political’ is not appropriate,” she tells me over Twitter DM. “Yet they still allow posts that include gossip and false information to spread.” The group threatened to ban her if she posted again.

After Carman published her piece, Facebook released new recommendations for moderators on how to shape discussions in their forums. Instead of banning all “political” discussions, for example, moderators who want to avoid politics are encouraged to prohibit “discussions on legislation, political candidates, or specific campaigns.” The guidance also suggests that forums work to be more inclusive and recruit a more diverse set of moderators. The list of suggestions is thoughtful and worth reading in full, and yet after reading it — and watching an embedded video about the perils of volunteer moderator burnout — I can imagine some moderators simply saying the hell with it and finding a new hobby. (Aviv Ovadya did; he has a nice list of problems he faced as a groups admin here.)

A similar dynamic is playing out on the neighborhood social network Nextdoor, where a let-the-users-do-it approach to moderation has left many black users fearing for their safety. Makena Kelly talked to some of those folks at The Verge and uncovered how the company’s all-too-convenient embrace of community moderation has left it with significant blind spots.

This hands-off approach is what makes Nextdoor able to be as big as it is. By outsourcing moderation to untrained and unpaid volunteers, the company has been able to expand into over 200,000 neighborhoods across the country. But it has also empowered community members to strike down posts they personally don’t like. All across the country, Nextdoor posts advertising protests get struck down by community moderators while racist and inflammatory messages, some calling for direct violence against black people and protestors, are left to stand.

Leads don’t go through any formal training from Nextdoor before receiving the authority to strike posts, and the guidelines listed on the site are vague enough for leads to interpret them in different ways. There are no rules promoting diversity in moderation leadership either. In a private forum — known as the National Leads Forum, as first reported by BuzzFeed News — some community moderators were enraged by Nextdoor’s decision to support the Black Lives Matter movement. Around the same time the company issued its public statement last week, that same language was published on Nextdoor feeds, enraging some moderators active in the private forum.

“I would like to see Nextdoor post a ‘White lives matter’ [post],” one moderator from Orlando, Florida, wrote. “Sometimes, we need to remember ‘All lives matter!’”

IV.

To recap: one approach to content moderation is to entrust all of human speech to a single person, putting the user base at the mercy of their decisions without any practical recourse. Another is to entrust moderation to a group of unpaid volunteers, without much regard to that group’s diversity, or training, or values, and hope that they don’t embarrass you too much along the way.

Maybe you can help improve the former system by setting up an independent body of experts to offer binding opinions on controversial subjects. And maybe you can improve the latter by offering volunteers more training, more mental health resources, and actual money for their labor.

But as discussions about racism and injustice light up forums around the internet, it’s clear that the mechanisms companies have built to date are often failing to support that conversation. As a result, offline injustices are being replicated in online speech. As Silicon Valley reconsiders how companies can support black people, their allies, and other minority communities, moderation ought to be front and center.

The Ratio

⬆️ Trending up: Tinder said it would stop banning users for posting Black Lives Matter fundraisers on their profiles. The company’s policies typically prohibit users from using their accounts for promotional purposes. (Julia Reinstein / BuzzFeed)

⬆️ Trending up: Tech companies are writing big checks to groups fighting racial injustice. Will the actions endure beyond the current moment? (Sofie Kodner / Protocol)

⬆️ Trending up: Instacart tweaked its product to discourage “tip baiting.” People were hiring Instacart shoppers with the promise of huge tips and then canceling the tips after their groceries were delivered. (Nick Statt / The Verge)

⬇️ Trending down: Three Amazon workers are suing the company, alleging that it failed to properly track and prevent the spread of COVID-19 in its warehouses. (Annie Palmer / CNBC)

⬇️ Trending down: At least two contract workers for Facebook say they were fired by their direct employer, Wipro, after union organizing activity. (Caroline O’Donovan / BuzzFeed)

Governing

Facebook and Twitter removed a Trump campaign video tribute to George Floyd after a copyright complaint. The removal triggered a fresh round of outraged tweets from the president. Vlad Savov and Melissa Cheok report:

The @TeamTrump account had tweeted a video collage of images and clips depicting peaceful protests, moments of mourning and law enforcement officers hugging civilians in the wake of the killing of George Floyd, an African-American man, while in police custody. Accompanied by a gentle piano soundtrack and President Donald Trump’s speech about “healing, not hatred,” it urged Americans to unite.

The video, still available to view on the president’s YouTube channel, appears to have gathered most of its content from social media posts, and at least one copyright holder made a complaint to Twitter about the use of their photo, a company spokesperson told The Hill.

Actions taken against Trump by Twitter and Snap have led to calls for the companies to take a similar stance on other politicians around the world. Non-governmental organizations say American companies should pay as much attention to similar situations unfolding in India, Brazil, and Myanmar, among other places. (Pranav Dixit and Megha Rajagopalan / BuzzFeed)

Twitter fan pages are being repurposed to participate in Black Lives Matter protests. K-pop stans have never put their massive engagement to better use. (Kaitlyn Tiffany / The Atlantic)

Rumors spreading on Facebook have small-town America terrified that buses full of antifa super-soldiers are about to arrive in their towns and cause mayhem. (Russell Brandom / The Verge)

Facebook is making it harder to find groups related to “Boogaloo,” the idea, promoted by white supremacists, that the United States is on the verge of a second Civil War. The groups will no longer appear in recommendations. I’d love to know how many people have joined these groups to date after Facebook recommended them. (Joseph Menn / Reuters)

Facebook removed two networks of inauthentic accounts in May. The networks, based in Tunisia and Iraq, appear to have been used to spread propaganda and influence elections.

Big tech companies are responding to the death of George Floyd much more aggressively than they did for the death of Michael Brown and other black victims of police brutality. (Jay Peters / The Verge)

Trump spent a record $1.48 million on Google advertising, the most his campaign has spent in a single week to date. (Eric Newcomer and Mark Bergen / Bloomberg)

Meanwhile, Joe Biden spent $5 million on Facebook advertising in recent days. (Shane Goldmacher / New York Times)

The US antitrust probe of Google is examining the company’s search business. DuckDuckGo said it had talked with regulators about how Google uses its dominant search position to put rivals at a disadvantage. (Gerrit De Vynck / Bloomberg)

Amazon is using a key ad slot to promote its own private-label brands during the pandemic, a fact that might factor into antitrust regulation. (Renee Dudley / ProPublica)

Online trolls came to use “swatting” — the tactic of baiting a heavily armed emergency response to someone’s house — because of the high likelihood of police escalating the conflict. “Police brutality is an on-demand service in the United States,” writes TC Sottek. (The Verge)

ByteDance said it would restrict Chinese engineers’ access to TikTok and other overseas products. The move comes as ByteDance faces increasing scrutiny in the United States over concerns that the Chinese government could access Americans’ user data. (Chen Du / PingWest)

France set up a fund to ward off foreign buyers from purchasing homegrown technology companies. (Helene Fouquet / Bloomberg)

Industry

Instagram now says users should get the photographer’s permission before embedding pictures on the web. And if you don’t, you could be subject to a copyright lawsuit. This seems like a disaster in the making. Here’s Timothy B. Lee at Ars Technica:

Professional photographers are likely to cheer the decision, since it will strengthen their hand in negotiations with publishers. But it could also significantly change the culture of the Web. Until now, people have generally felt free to embed Instagram posts on their own sites without worrying about copyright concerns. That might be about to change.

Is Google Docs “the social media of the resistance”? “In just the last week, Google Docs has emerged as a way to share everything from lists of books on racism to templates for letters to family members and representatives to lists of funds and resources that are accepting donations. Shared Google Docs that anyone can view and anyone can edit, anonymously, have become a valuable tool for grassroots organizing during both the coronavirus pandemic and the police brutality protests sweeping the US.” (Tanya Basu / MIT Tech Review)

On the other hand: will government requests for Google Docs data put protesters at risk? (Chris Stokel-Walker / Protocol)

Amazon’s heavily automated human-resources system is struggling under the weight of sick-leave requests during the pandemic. Workers have been frustrated by their inability to get help from human beings. (Matt Day / Bloomberg)

Last week may have been Twitter’s best week ever. Third-party measurement companies say it achieved record downloads amid the spread of global protests over police brutality and news about the pandemic. (Sarah Perez / TechCrunch)

Zoom has good reason not to offer free end-to-end encryption to anyone with an email address. Live video services have been used to coordinate child abuse. (Dan Goodin / Ars Technica)

An estimated one in five children don’t have the technology they need for remote learning. And even those who do don’t seem to be learning as much, according to early data. (Tawnell D. Hobbs and Lee Hawkins / Wall Street Journal)

ByteDance shut down TopBuzz, a Western news aggregator app. As a result, we recommend that you get all your news from The Interface. (Yingzhi Yang and Brenda Goh / Reuters)

Telegram added new video editing tools and animated stickers. Something to spice up your cryptocurrency chats! (Taylor Lyles / The Verge)

Things to do

Search for black-owned businesses on Yelp. And then go support them!

Doomscroll! Or at least familiarize yourself with the term, which seems likely to be one of the neologisms of the year.


Talk to us

Send us tips, comments, questions, and Karen posts from Nextdoor: casey@theverge.com and zoe@theverge.com.