Are machine-learning algorithms biased, wrong, and racist? Let citizens decide.
At bottom, machine-learning algorithms are rule-based structures for making decisions, and they play an increasingly large role in our lives. They suggest what we should read and watch, whom we should date, and whether we are detained while awaiting trial. Their promise is huge: they can improve the detection of cancers. But they can also discriminate based on the color of our skin or the zip code we live in.
Despite their ubiquity, no real structure exists to regulate how algorithms are used. We rely on journalists and civil society organizations to discover, largely by chance, when things have gone wrong. In the meantime, the use of algorithms spreads to every corner of our lives and to many agencies of our government. In the post-Covid-19 world, the problem is bound to reach colossal proportions.
A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.
We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy, by contrast, is a matter of trade-offs: optimizing for some groups in society necessarily makes others worse off.
Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance.
Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions.
The ancient Athenians, the citizens of the world’s first large-scale experiment in democracy, built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve, drawn from those who had not served the year before and had not already served twice.
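For the curious, the lottery’s logic is simple enough to sketch in a few lines of code. The Python below is purely illustrative: the function name, record format, and synthetic population are assumptions made for the demonstration, not historical data.

```python
import random

NUM_TRIBES = 10
SEATS_PER_TRIBE = 50

def select_council(citizens):
    """Draw a council of 500 by lot: 50 citizens per tribe, excluding
    anyone who served last year or has already served two terms."""
    council = []
    for tribe in range(NUM_TRIBES):
        eligible = [
            c for c in citizens
            if c["tribe"] == tribe
            and not c["served_last_year"]  # rule 1: no consecutive terms
            and c["terms_served"] < 2      # rule 2: at most two terms ever
        ]
        # random.sample raises ValueError if a tribe's eligible pool is
        # smaller than its quota; the lottery presumes broad eligibility.
        council.extend(random.sample(eligible, SEATS_PER_TRIBE))
    return council

# A synthetic population, just to show the draw working end to end.
population = [
    {"name": f"citizen-{i}", "tribe": i % NUM_TRIBES,
     "terms_served": random.choice([0, 0, 0, 1, 2]),
     "served_last_year": random.random() < 0.02}
    for i in range(30_000)
]
print(len(select_council(population)))  # prints 500
```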
These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because terms were short and no one could serve more than two of them, over time a broad cross-section of the population (rich and poor, educated and not) took part in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with family, friends, and coworkers, some of whom had already served and some of whom soon would. Certainly, the Athenians fell far short on inclusion: the voices of women, foreigners, and slaves went unheard. But we don’t need to follow the Athenian example on this front.
A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although when decisions affect a specific constituency, a closer fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for government decision systems; by internal audit reports for industry systems; and, whenever available, by the work of investigative journalists and civil society activists. Ideally, the council would act as an authoritative body in its own right or as an advisory board to an existing regulatory agency. It could evaluate, as OpenAI recommends, a variety of issues, including the level of privacy protection, the extent to which (and the methods by which) the systems were tested for safety, security, and ethical concerns, and the sources of the data, labor, and other resources used.