Five Ways to Protect Democracy from Misinformation Online
Philip N. Howard
We need mandatory reporting on the ultimate beneficiaries of data. Citizens should be able to see, easily, which organizations are receiving and manipulating their personal data. Social media companies should report back to users on which advertisers, data-mining firms, and political consulting firms have made use of their data. Many people now own consumer products that are part of the internet of things, or live in smart cities where public infrastructure constantly collects data about them. If a lobbyist can purchase data about your eating or video-watching habits, collected by the smart refrigerator or television you bought for your home, then that equipment should be designed to show you who is using the data and what political inferences they are making about you.
Indeed, governments should mandate this reporting. When queried, technology firms should be required to tell users which advertisers, data miners, and political consultants have made use of information about them. Your Facebook app or your smart refrigerator should be required to reveal, on request, the list of third parties benefiting from the information the device collects. The trail of data should be fully and clearly mapped out for users, so that if a data-mining firm aggregates users' data and then sells it on to a political party, the users can still identify that ultimate beneficiary. Effectively, this would let you as a citizen see all the parts of the lie machine. Does your data flow through a Chinese government agency? A numbered corporation in Panama? A Polish consulting firm like Imitacja or a Brazilian consulting firm like Imitação? If so, your devices should reveal it.
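To make the reporting requirement concrete, here is a minimal sketch, in Python, of how a device or app might represent the trail of a user's data and surface its ultimate beneficiaries on request. Every company name and field here is a hypothetical illustration, not an existing system or API.

```python
# A minimal sketch of an "ultimate beneficiary" report a device or app
# could expose on request. All names here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class DataTransfer:
    """One hop in the trail of a user's data."""
    recipient: str          # e.g., an advertiser, data miner, or consultant
    purpose: str            # why the recipient received the data
    onward: list["DataTransfer"] = field(default_factory=list)

def ultimate_beneficiaries(transfer: DataTransfer) -> list[str]:
    """Walk the trail to its ends, so a user sees who finally holds the data."""
    if not transfer.onward:
        return [transfer.recipient]
    names = []
    for hop in transfer.onward:
        names.extend(ultimate_beneficiaries(hop))
    return names

# Example: a smart-refrigerator vendor sells eating-habit data to a data
# miner, which aggregates it and sells it on to a political party.
trail = DataTransfer(
    recipient="FridgeCo Analytics",
    purpose="product telemetry",
    onward=[DataTransfer(
        recipient="AggregateData Inc.",
        purpose="audience profiling",
        onward=[DataTransfer("Example Political Party", "voter targeting")],
    )],
)
print(ultimate_beneficiaries(trail))  # -> ['Example Political Party']
```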
A system built for such mandatory reporting could also allow citizens to donate their data, selecting additional civic groups to benefit from the flows of information they generate. Data is now the lifeblood of democracy, and social media firms effectively monopolize it; when a firm monopolizes control of publicly valuable information, democracy is threatened. Not everyone will choose to express themselves this way, but now, and increasingly in the future, donating data would be a meaningful form of civic expression.
Regulations should require social media platforms to facilitate data donation, empowering citizens to choose which civic groups, political parties, or research organizations to support with their data. In freeing data from private actors, governments could create an opportunity for civic expression by allowing citizens to share their data with whichever organizations and causes they want to support, not just the ones that can afford to buy it, as is the case today. Lie machines perpetuate political myths and distribute political misinformation, but regulations like these would help small social movements, civic and neighborhood groups, local community actors, and single-issue civil society groups respond to misinformation. Right now, lie machines work so well because social media firms take data about you and sell it to people who make political inferences about you. Citizens can significantly weaken those machines by constraining the fuel supply, the personal data, and by sharing this fuel with libraries, independent researchers, and civic groups.
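A data-donation setting might look something like the following sketch, which assumes a platform lets users direct particular categories of their data to organizations they choose. The organizations and categories are illustrative placeholders, not real endpoints.

```python
# A minimal sketch of a data-donation preference, assuming a platform
# lets users route categories of their data to chosen organizations.
from dataclasses import dataclass

@dataclass
class DonationPreference:
    user_id: str
    recipient: str        # e.g., a library, research group, or civic group
    categories: set[str]  # which kinds of data the user consents to share

def route_record(record_category: str, prefs: list[DonationPreference]) -> list[str]:
    """Return the organizations entitled to receive a record of this category."""
    return [p.recipient for p in prefs if record_category in p.categories]

prefs = [
    DonationPreference("user-1", "City Public Library", {"media habits"}),
    DonationPreference("user-1", "Election Integrity Lab", {"ad exposure", "media habits"}),
]
print(route_record("ad exposure", prefs))  # -> ['Election Integrity Lab']
```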
We need an information infrastructure that tithes: 10 percent of ads on social media platforms should be public service announcements. If we know that a group of people is being targeted with misinformation by actors with malign intent, that group could benefit from extra attention from campaigns promoting accurate information. For example, if one segment of the population is being targeted by a voter suppression campaign designed to discourage them from turning out on election day, public agencies could target the same population with clear instructions and encouragement to participate. Similarly, 10 percent of the data needs to flow, in a secure way, to public health researchers, civil society groups, computational journalists, political scientists, and public science agencies like the National Science Foundation and the European Research Council. Such a system would allow many kinds of advocacy groups and public agencies, beyond Facebook's private clients, to use existing data to analyze and solve public problems. One reason lie machines exist is that they are profitable: someone will always see a business opportunity in running misleading ads and manipulating public information. Directing some of the profits that technology firms make from independent journalism, and some of the data they take from us, back to journalism and civic groups would help restore public life.
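The tithe itself is simple arithmetic. The sketch below shows one way a scheduler could reserve a tenth of ad slots for public service announcements; the interleaving rule is an assumption for illustration, not a description of any platform's actual system.

```python
# A minimal sketch of the "tithe": reserve 10 percent of ad slots for
# public service announcements by inserting one PSA after every nine
# commercial ads, so one slot in ten carries a PSA.

def schedule_slots(commercial_ads: list[str], psas: list[str]) -> list[str]:
    """Interleave ads so every tenth slot carries a PSA."""
    schedule = []
    psa_index = 0
    for i, ad in enumerate(commercial_ads, start=1):
        schedule.append(ad)
        if i % 9 == 0 and psas:  # 9 commercial ads + 1 PSA = 10% of slots
            schedule.append(psas[psa_index % len(psas)])
            psa_index += 1
    return schedule

slots = schedule_slots([f"ad-{n}" for n in range(1, 19)], ["vote-info-psa"])
print(sum(s == "vote-info-psa" for s in slots) / len(slots))  # -> 0.1
```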
Most democracies have rules that prevent firms from profiting from the sale of certain kinds of public data. In many US states, for example, data-mining firms can't profit from the sale of voter registration data, which public agencies collect. This nonprofit rule needs to be extended to a broader range of data that we would all consider publicly valuable: the kinds of information that most democracies collect as part of a regular census, information relevant to critical public health and well-being, and socially valuable data, such as data about places of employment, that technology companies currently gather. Such classes of information could then be passed to public agencies, creating a broader set of data in the public domain. This would also defuse the financial incentives to build lie machines. A complex system of political intrigue, financial greed, and social media algorithms has weakened our democratic institutions, and technology innovations move fast; social media platforms are always changing. Making it harder to profit from data mining and manipulation will discourage people from building new lie machines.
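An expanded nonprofit rule would amount to a policy check over classes of data. The sketch below illustrates the idea with a hypothetical category list drawn from the examples above; it is not a legal definition.

```python
# A minimal sketch of the expanded "nonprofit rule": data in protected,
# publicly valuable categories may be shared but never sold for profit.
# The category list is a hypothetical illustration, not a legal standard.

PUBLIC_INTEREST_CATEGORIES = {
    "voter registration",   # already protected in many US states
    "census demographics",
    "public health",
    "places of employment",
}

def sale_permitted(category: str, for_profit: bool) -> bool:
    """Disallow for-profit sale of data in protected categories."""
    return not (for_profit and category in PUBLIC_INTEREST_CATEGORIES)

print(sale_permitted("voter registration", for_profit=True))   # -> False
print(sale_permitted("voter registration", for_profit=False))  # -> True
```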
We also need regular algorithmic audits. There is no other domain of business in which a company can design and build something, release it onto the market, and change the product only when people complain.
Public agencies should conduct regular audits of social media algorithms and other automated systems that citizens now rely on for information. Technology companies will claim these algorithms are proprietary and beyond independent examination; however, public agencies already audit everything from video gambling machines to financial trading algorithms in ways that don't violate intellectual property. Regular audits would allow public officials to inspect and dismantle lie machines. If a social media platform is serving up misinformation at a critical moment during an election, it needs to be called out and stopped.
Users should have access to clear explanations of the algorithms that determine which news and advertisements they see, and those explanations should be verified by regular public audits. Moreover, all ads, not just political ones, need to be archived for potential use by public investigators. Audits of today's technology would also put the designers of new technologies, such as artificial intelligence, on notice that they should anticipate scrutiny.
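An ad archive of the kind described here is, at bottom, an append-only record with enough metadata for later audits. The following sketch illustrates one possible shape for such a record; its field names are assumptions, not any platform's actual schema.

```python
# A minimal sketch of an ad archive: every ad, political or not, stored
# with enough metadata for public investigators to audit later. Field
# names are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ArchivedAd:
    ad_id: str
    sponsor: str               # who paid for the ad
    creative_text: str         # the content shown to users
    targeting_criteria: dict   # how audiences were selected
    first_shown: datetime

archive: list[ArchivedAd] = []

def archive_ad(ad: ArchivedAd) -> None:
    """Append-only: auditors need the record even after an ad is pulled."""
    archive.append(ad)

archive_ad(ArchivedAd(
    ad_id="a-001",
    sponsor="Example PAC",
    creative_text="Polls close early this year!",  # the kind of claim auditors check
    targeting_criteria={"region": "District 9", "age": "18-29"},
    first_shown=datetime.now(timezone.utc),
))
print(len(archive))  # -> 1
```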
From Lie Machines by Philip N. Howard. Published by Yale University Press in 2020. Reproduced with permission.
Philip N. Howard is director of the Oxford Internet Institute and the author of nine books, including Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up, which was praised in the Financial Times as “timely and important.” He is a frequent commentator on the impact of technology on political life, contributing to the New York Times, Financial Times, and other media outlets.