
Facebook’s independent oversight board could be overwhelmed by the challenge


Can 40 part-time members oversee moderation on a platform of 2.37 billion?


Illustration by Alex Castro / The Verge

Today let’s talk about Facebook’s independent Oversight Board, which just announced its co-chairs and initial membership. The board will allow Facebook and Instagram users to appeal when they believe that their posts have been removed in error, and upon request will issue advisory opinions to the company on emerging policy questions.

The scope of the work is expected to expand over time — within months of launching, for example, the board is expected to rule on which posts should come down as well as which ones stay up. (To lots of people, myself included, the question of which posts should come down feels like the more urgent problem.) Facebook handpicked the inaugural members, who will serve three-year terms, but over time the board will pick its own members. The company placed $130 million in an irrevocable trust to fund the board’s operations, and it has promised not to meddle.

After the announcement of the members this morning, I heard three main questions: How did we get here? Do the announced board members share any particular philosophy? And can any of this possibly work?

Let’s take them in order.

How did we get here?

I like to say that people basically all have the same policy when it comes to content moderation: take down the bad posts, and leave up the good ones. The trouble comes when people disagree about which posts are good and which posts are bad, and resolving those disputes in a manner that is principled, timely, and consistent has bedeviled every social network that has ever attempted the feat. The problems tend to get harder as you grow, and so Facebook — with 2.37 billion monthly users — arguably has the hardest moderation challenge of all.

Not that you should pity Facebook: the company has always taken growth much more seriously than the problems that come with it, and its investment in content moderation came only after a series of company-shaking scandals. But questions about how Facebook should handle tricky moderation calls are almost as old as the company.

There were the drag queens forced to use their real names; the moms suspended over breastfeeding photos; the historians censored after publishing famous but disturbing photos; the lynch mobs organizing on WhatsApp in India; the Myanmar government promoting genocide. In some cases the policy decision was clear but badly enforced; in other cases the policy lines remain blurry and uncomfortable.

The most controversial cases go to the company’s CEO, Mark Zuckerberg, who controls a majority of Facebook’s voting power through its dual-class stock structure. The benefit of this system, from Facebook’s perspective, is that it allows Zuckerberg to take principled stands without worrying that his board will get mad and fire him. (The decision not to fact-check speech in political ads, which has generated considerable furor, was a Zuckerberg call.)

The drawback is that the design of Facebook effectively puts the responsibility for policing the speech of 2.37 billion people into the hands of one person. And given that a great deal of global political discourse now takes place on Facebook’s servers, that’s a cause for concern. As I wrote here last year:

Facebook and its moderators currently police the boundaries of speech on an enormous portion of the internet. And for those who feel that the company made the wrong decision about a post, there has historically been very little recourse. You could fill out a little text box and pray, but you were unlikely to ever receive much more than an automated message in response. The system might work in the majority of cases, but it never felt particularly just — which is to say, open and accountable.

In 2018, Zuckerberg floated the idea of creating an independent oversight board for Facebook that would weigh in on these and other issues as they arise. Within a few months, the company began to lay the groundwork, consulting with experts from civil society and holding a series of mock content-moderation trials around the world. In September 2019, Facebook unveiled the board’s charter; in December, the company said it had set aside $130 million to fund the board’s operations.

It’s an unprecedented experiment in devolving some of the power accumulated by a tech giant back to its own user base. And if it proves to be effective, it could serve as a new model for self-regulation of big platforms at a time when government efforts at regulation seem to have fallen into cautious paralysis, at least in the United States.

Do the announced board members share any particular philosophy?

Say hello to the inaugural four co-chairs of the Oversight Board: Catalina Botero-Marino, Jamal Greene, Michael W. McConnell, and Helle Thorning-Schmidt. They are, respectively: a former special rapporteur on freedom of expression at the Organization of American States; a law professor at Columbia; a law professor at Stanford; and a former prime minister of Denmark.

They introduced themselves in an op-ed in The New York Times, describing their philosophy this way:

The board members come from different professional, cultural and religious backgrounds and have various political viewpoints. Some of us have been publicly critical of Facebook; some of us haven’t. But all of us have training and experience that can help the board in considering the most significant content decisions facing online communities. We are all independent of Facebook. And we are all committed to freedom of expression within the framework of international norms of human rights. We will make decisions based on those principles and on the effects on Facebook users and society, without regard to the economic, political or reputational interests of the company.

Over the past several months, I had the chance to talk with several Facebook executives about their selection process for board members. The principle that came up more than any other was “free expression.” Zuckerberg, you may recall, gave a speech on the subject last year advocating for an internet that preserves the maximum amount of open discourse. It’s no accident that the first batch of Oversight Board members has sworn fealty to free expression — or that their first task will be to weigh in on posts that Facebook removed in error, unjustly limiting the free expression of the company’s users.

If you’re the sort of person who is generally more mad about posts that Facebook left up than about posts it took down, you may be disappointed with the board’s early days. The company has told me that the board will begin considering whether posts should be taken down within a few months of launch. We’ll see.

There are 16 more announced members, with 20 more to come. The initial group boasts impressive CVs, along with solid diversity of gender, race, and geography. (The board, which will hear cases in panels, has committed to including at least one member from the region where each case originated.) There are no particularly vocal critics of Facebook on the board — better luck next time, Kara Swisher — but the board was never designed to be a referendum on Facebook itself.

Can any of this possibly work?

Well, people have takes. Several of the announced board members wrote posts, either on Twitter or on Medium, expressing their optimism. Alan Rusbridger, a former editor of The Guardian who will join the board, said he relished the chance to be a check on Facebook’s power:

Facebook is an entity that defies description. It is a friend of the otherwise voiceless — but also an enabler of darkness. It brings harmony to some, discord to many. It promotes order and amplifies anarchy. It employs many brilliant engineers but has — too slowly — recognized that the multiple challenges it faces involve the realms of philosophy, ethics, journalism, religion, geography, and human rights. And it makes a whole lot of money, and a whole lot of enemies, while doing this.

To address this, it needs independent, external oversight.

Kate Klonick, a law professor who has followed the board’s development closely, called the announced members “an impressive group with incredible credentials on human rights, freedom of expression, and adjudication. But perhaps most importantly for the future of the board, this initial group have skills in institution building and establishing procedure in the rule of law.”

So what are the concerns? One is that Facebook will ignore the board’s opinions, which will not be legally binding. My read: unlikely, since the entire point of the board is to create a new body to blame for unpopular decisions. In practice, most users may continue to blast Facebook whenever a moderation decision goes against them. (After all, the board will only hear a tiny fraction of cases.) But Facebook is counting on the board to reverse some of the company’s decisions, because doing so is the only thing that can give the board legitimacy and give Facebook some distance from the thorniest cases. So I’m optimistic Facebook will do as the board advises — but then again, ask Brian Acton or Kevin Systrom how long Facebook’s promises of independence lasted. (About five years, as it turned out.)

Another concern, raised by the disinformation researcher Nina Jankowicz, is that the board’s sights are trained on the wrong place. While members ponder speech, she argues, the bigger issue is reach — the opaque algorithmic decisions about which content to promote and which to bury. I think the board has to start somewhere, but I agree that Facebook ought to be just as accountable for the machinations of its machine-learning systems as it is for the decisions of its human moderators.

A third concern is that the Oversight Board will expand to do moderation for YouTube, Twitter, and other social networks. Daphne Keller, platform regulation director at the Stanford Cyber Policy Center, worries about “big and small platforms converging on a single rule set,” robbing us of the benefits that come with competition and a more diverse set of viewpoints around moderation. She tells Issie Lapowsky at Protocol: “If this becomes a mechanism to move more and more of the internet toward one single set of rules, that’s a real loss.”

I’m most sympathetic to concerns that the board’s lofty intentions will be overwhelmed by the sheer size of the task. As Sarah Frier notes, board members are committing to an average of just 15 hours a month on the project. David Kaye, the United Nations’ special rapporteur on free expression, frames the issue this way:

Difficult content problems often take place at local levels, in languages and code that may be impenetrable to those outside. Will the board ever have the bandwidth to address the massive impact Facebook will continue to have in communities worldwide? Will the board, in other words, be more like a Band-Aid on a massive wound than an appellate body to solve the crises of online speech?

Alex Stamos, Facebook’s former chief security officer, describes how that problem looks from the inside. “Law professors love to come up with really thoughtful, complicated mental tests to distinguish between lawful and unlawful, and they are used to making arguments to highly educated and experienced appellate judges,” he tweeted. “This kind of thoughtful argumentation is common inside of FB’s policy team until it breaks upon the rocks of reality, which is that any hard speech decision has to be made by machines overseen by humans who can apply 30-60 seconds of judgment to a ‘case’, not a judge with weeks.”

I think the board can do meaningful work even if it only tackles the highest-profile cases — just as the US Supreme Court has vast influence even though it hears only a relative handful of cases each year. But by dint of its global scale, Facebook’s task is in many ways larger and more complicated than the Supreme Court’s. Independent though it may be, the board has to rely on Facebook to design its workflows and apply its decisions. It’s far too early to tell whether it will come to be seen as effective, or even legitimate. But it seems clear that for as much work as has gone into building the board so far, what follows will make the picking of board members look like the easy part.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Twitter is stepping up its fight against 5G coronavirus conspiracy theories in the UK. Now, users who tweet about the 5G conspiracy theory will be prompted to read government-verified information about the technology. (The Telegraph)

🔼 Trending up: Google committed another $50 million to COVID-19 relief efforts. The company’s charitable arm, Google.org, had already committed $50 million at the start of the pandemic.

🔼 Trending up: Google and the Gates Foundation are teaming up on a new initiative to bring real-time digital payments to developing countries. Their aim is to develop a free, open-source digital payments platform for nations and central banks. (David Z. Morris / Fortune)

Virus tracker

Total cases in the US: More than 1,220,200 

Total deaths in the US: At least 72,300 

Reported cases in California: 59,171

Total test results (positive and negative) in California: 779,902

Reported cases in New York: 326,659

Total test results (positive and negative) in New York: 1,028,899

Reported cases in New Jersey: 131,890

Total test results (positive and negative) in New Jersey: 287,623

Reported cases in Massachusetts: 70,271

Total test results (positive and negative) in Massachusetts: 333,349

Data from The New York Times. Test data from The COVID Tracking Project.

Governing

Stories about coronavirus mutations aren’t necessarily what they seem. Right now, there’s no clear evidence that the virus has evolved into significantly different forms — and there probably won’t be for months. Ed Yong has more at The Atlantic:

Whenever a virus infects a host, it makes new copies of itself, and it starts by duplicating its genes. But this process is sloppy, and the duplicates end up with errors. These are called mutations—they’re the genetic equivalent of typos. In comic books and other science fiction, mutations are always dramatic and consequential. In the real world, they’re a normal and usually mundane part of virology. Viruses naturally and gradually accumulate mutations as they spread.

As an epidemic progresses, the virus family tree grows new branches and twigs—new lineages that are characterized by differing sets of mutations. But a new lineage doesn’t automatically count as a new strain. That term is usually reserved for a lineage that differs from its fellow viruses in significant ways. It might vary in how easily it spreads (transmissibility), its ability to cause disease (virulence), whether it is recognized by the immune system in the same way (antigenicity), or how vulnerable it is to medications (resistance). Some mutations affect these properties. Most do not, and are either silent or cosmetic. “Not every mutation creates a different strain,” says Grubaugh. (Think about dog breeds as equivalents of strains: A corgi is clearly different from a Great Dane, but a black-haired corgi is functionally the same as a brown-haired one, and wouldn’t count as a separate breed.)

Health officials need better ways of countering misinformation online. The posts that reach people on Facebook and YouTube aren’t those with the most reliable information; they’re the ones that get the most likes. The World Health Organization and the Centers for Disease Control haven’t adapted to the way information now circulates, this misinformation researcher argues. (Renée DiResta / The Atlantic)

How a lack of information on Google (a “data void”) can erode trust during a pandemic. Data voids are one of my favorite subjects and something that too few people understand; read this illustrative account of how a recent void on Google reshaped perception. (Francesca Tripodi / Wired)

Americans are split on whether the government should be allowed to use location data to track the spread of COVID-19. Almost half of US adults say the practice is at least somewhat unacceptable. As a reminder, Apple and Google’s collaborative API will record users’ proximity to one another but not their location. (Pew Research Center)

The Department of Veterans Affairs has hired contractors with no experience to find respirators and masks, even agreeing to a 350 percent markup over the manufacturer’s list price. While the agency waited for the protective equipment to arrive, 20 VA staff members died of COVID-19. (J. David McSwane / ProPublica)

How the internet kept running even as society closed down around it and usage spiked amid the pandemic. (Charles Fishman / The Atlantic)

As employees continue to work remotely, companies are using software to track their movements and productivity — even when it’s not related to work. (Adam Satariano / The New York Times)

Zoom announced that Lieutenant General H.R. McMaster is joining its board as an independent director. The company also hired Josh Kallmer as its head of global public policy.

Three women are suing the anonymous secret-sharing app Whisper for exposing 900 million user records. The exposed data didn’t include people’s names, but it had other identifying information like age, ethnicity, gender, hometown, and membership in groups, many of which are devoted to sexual confessions. (Robert Burnson / Bloomberg)

Industry

The Libra Association, the group behind the proposed digital currency invented by Facebook, named former US Treasury Department official Stuart Levey as its first CEO. Levey has the daunting task of working with global regulators to push the project forward. Here’s Kurt Wagner at Bloomberg:

Libra, which was announced in June 2019, was conceived and developed by Facebook, the world’s largest social network. It’s now governed by a 24-member independent coalition of companies and nonprofits, though the group has changed since the project was launched. Levey will assume the role sometime this summer, and will be stationed in Washington. The Libra Association, based in Geneva, said last month that it aims to have its coins ready in late 2020.

Airbnb is laying off a quarter of its staff. It’s one of the largest layoffs Silicon Valley has seen since the COVID-19 pandemic struck. Brian Chesky, the company’s co-founder and CEO, said the company’s revenue would be halved and that it would lay off about 1,900 of its 7,500 employees. (Theodore Schleifer / Recode)

Magic Leap is in talks to secure funding from a health care company, according to emails CEO Rony Abovitz sent to staff. The money could save the startup from making further cuts to its already diminished workforce. (Alex Heath / The Information)

“Adopt a high school senior” Facebook groups have proliferated as the school year comes to an end and students are unable to participate in graduation ceremonies. Group members “adopt” students in the comments, sending along gift baskets, gift cards, and presents to congratulate them. (Terry Nguyen / Vox)

The pandemic has accelerated major changes in the way porn is produced and distributed. OnlyFans has become particularly popular, with subscriptions up 50 percent in April. But the porn industry’s future continues to be uncertain. (Otillia Steadman / BuzzFeed)

How “Karen” became a coronavirus villain. During the pandemic, the name has been adopted as a shorthand to call out a vocal minority of middle-aged white women who are opposed to social distancing, out of either ignorance or ruthless self-interest. (Kaitlyn Tiffany / The Atlantic)

YouTube creators said that ad rates fell as much as 30 percent in April. But longer-term sponsorship deals have proven more resilient. (Nick Bastone / The Information)

TikTok is leveraging its massive audience to draw A-list celebrities to the platform as users stay stuck at home during the pandemic. The Hype House, where a group of famous TikTokers live, is also being shopped around for a possible reality show. It’s being pitched as a modern-day Mickey Mouse Club. (Natalie Jarvey / The Hollywood Reporter)

Spotify is now testing video podcasts in its app, starting with YouTube stars Zane Hijazi and Heath Hussar. The global test, which allows the creators to upload their recorded video footage to the app, will show up for 50 percent of the show’s Spotify podcast listeners. (Ashley Carman / The Verge)

Epic’s Fortnite has more than 350 million registered players, making it one of the most popular games ever made. (Nick Statt / The Verge)

Twitch updated its channel pages. Now streamers will have way more control over what their channel looks like when it’s offline, with a more customizable home page, channel trailers, and more. (Bijan Stephen / The Verge)

Things to do

Stuff to occupy you online during the quarantine.

Watch Daniel Radcliffe, Stephen Fry, and Eddie Redmayne read chapters of Harry Potter and the Philosopher’s Stone on video. The videos will be available for free on harrypotterathome.com, and the audio is accessible exclusively on Spotify.

Watch a new trailer for The Last of Us: Part II, one of the year’s most anticipated video games.

Watch a bear taking a bath in an outdoor tub. Extremely relaxing.


Talk to us

Send us tips, comments, questions, and your Oversight Board nominations: casey@theverge.com and zoe@theverge.com.