The Disinformation Dilemma

Illustration by Doug Panton

In the discussion of how Russian operatives manipulated public opinion during the 2016 presidential election, it’s easy to overlook that their malicious goals were advanced by tools originally designed to further the economic interests of leading Internet companies like Facebook and Google.

Dipayan Ghosh, a fellow at the Kennedy School’s Shorenstein Center on Media, Politics and Public Policy who previously worked at Facebook on privacy and public policy and consulted for the Clinton campaign, spent the months immediately following the election researching how Russian disinformation campaigns had used tools such as search engine optimization (SEO), behavioral data collection, and social media management software (SMMS) to spread and promote “fake news” widely online. He teamed up with Ben Scott, a senior adviser to the nonprofit Open Technology Institute at New America and a fellow Clinton campaign adviser, to raise awareness of these abuses by publishing a paper, “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet,” with New America in January.

Disinformation agents were fundamentally successful, Ghosh says, because they were able to tap into the lifeblood of the modern digital advertising landscape—behavioral data. Those data exist because websites compile every click, share, and search query into a user profile. One way to do this is with a “cookie,” a small piece of data stored in a user’s browser that identifies them across visits, letting sites track their activity and infer their preferences and interests. Advertisers use these inferred preferences to show users advertisements in line with those interests, like hiking boots instead of high heels. It seems a harmless, mutually beneficial marketplace, in which users are exposed to the kinds of content that they want to see and advertisers are able to generate revenue.
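
The mechanics are simple enough to sketch in a few lines of code. In the toy Python below, every identifier, category, and ad name is invented for illustration; this is not any company’s actual system. Tracked events accumulate into an interest profile keyed to a cookie ID, and the strongest inferred interest picks the ad:

```python
from collections import Counter

# Hypothetical event log keyed by a cookie ID; all names are invented.
events = [
    ("cookie_abc123", "click",  "outdoor_gear"),
    ("cookie_abc123", "search", "outdoor_gear"),
    ("cookie_abc123", "share",  "outdoor_gear"),
    ("cookie_abc123", "click",  "fashion"),
]

def build_profile(event_log, cookie_id):
    """Aggregate one user's tracked actions into per-interest counts."""
    return Counter(category for cid, _, category in event_log if cid == cookie_id)

def pick_ad(profile, inventory):
    """Serve the ad matching the user's strongest inferred interest."""
    top_interest, _ = profile.most_common(1)[0]
    return inventory.get(top_interest, "generic ad")

inventory = {"outdoor_gear": "hiking-boots ad", "fashion": "high-heels ad"}
profile = build_profile(events, "cookie_abc123")
print(pick_ad(profile, inventory))  # -> hiking-boots ad
```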

But Ghosh says that this practice of constant mass data collection also provides ample opportunities for disinformation agents to manipulate users’ experiences in the political landscape. Location data collected through apps and sites, for example, can be used by a disinformation campaign to determine where a voter lives, in order to tailor ads to races and hot-button issues for that specific region.
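
A minimal sketch of that regional tailoring, with every region name, issue, and phrase invented for illustration, might look like this: a user’s inferred home region simply selects which hot-button issue the message leads with.

```python
# Hypothetical lookup of the issue judged most divisive in each region.
REGIONAL_ISSUES = {
    "rust_belt": "manufacturing jobs",
    "sun_belt":  "immigration",
    "farm_belt": "trade tariffs",
}

def tailor_message(region, base_story):
    """Frame a templated story around the region's hot-button issue."""
    issue = REGIONAL_ISSUES.get(region, "the economy")
    return f"What {base_story} means for {issue} in your town"

# Location data from apps and sites maps the user to a region,
# and the region picks the hook.
print(tailor_message("rust_belt", "the new trade deal"))
```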

After using Internet data to determine what kinds of propagandized messages might speak to specific audiences, disinformation campaigns can also synchronize their efforts across platforms such as Twitter, Facebook, and Instagram through the use of SMMS. Such software helps brands select and schedule the kinds of content they wish to promote to certain audiences. Ghosh emphasizes that these tools are not inherently malicious: they help advertisers reach consumers with less effort and more consistent success by reinforcing messages across media. But a political disinformation agent could just as easily use the software to push a fake story on multiple platforms while simultaneously tailoring each iteration of the story by using data on what is most likely to persuade specific audience segments. In cases like these, SMMS makes disseminating destabilizing rumors and sensationalized stories faster and easier.
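
The cross-platform pattern can be sketched the same way. The platform names below come from the article; the audience segments, persuasion “angles,” and scheduling logic are invented stand-ins for what commercial SMMS tools do far more elaborately: one story becomes many synchronized variants, each tailored to its segment.

```python
from datetime import datetime, timedelta

# Hypothetical per-segment "angles," standing in for behavioral data on
# what is most likely to persuade each audience segment.
SEGMENT_ANGLES = {
    "young_urban": "celebrity angle",
    "retirees":    "pension angle",
}

def schedule_campaign(story, platforms, segments, start):
    """Queue one tailored variant per (platform, segment) pair,
    a minute apart so the story lands everywhere nearly at once."""
    queue = []
    for minute, platform in enumerate(platforms):
        for segment, angle in segments.items():
            queue.append({
                "platform": platform,
                "segment":  segment,
                "text":     f"{story} ({angle})",
                "post_at":  start + timedelta(minutes=minute),
            })
    return queue

posts = schedule_campaign("Sensational story X",
                          ["Twitter", "Facebook", "Instagram"],
                          SEGMENT_ANGLES, datetime(2018, 11, 1, 9, 0))
for p in posts:
    print(p["post_at"].time(), p["platform"], p["segment"], "->", p["text"])
```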

One of the easiest ways to detect manipulation of search results from providers such as Google is to watch for instances where content from less credible sources ranks above that from well-established outlets. Foreign agents in 2016 used so-called black-hat (as in old Westerns) SEO techniques to understand, replicate, and ultimately trick Google’s algorithm into promoting their propagandized content to the top of search results. Ghosh says fighting such attacks is fundamentally a problem of scale: even if Google wanted to “throw its entire security team at this problem,” it couldn’t, because “the number of black-hat SEO attacks per security person at Google is just not a ratio in Google’s favor.” For this reason, he encourages companies to adopt “bug-bounty” programs that financially reward people outside the organization who can figure out ways to push disinformation through the existing system—thus pinpointing loopholes and security issues that companies can fix. “It’s throwing money at the problem,” Ghosh says, “which is really something we have to get more comfortable with doing.”
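
The detection idea that opens this paragraph, watching for low-credibility sources outranking established ones, can also be sketched. The domains and zero-to-one credibility scores below are invented, and real ranking and trust systems are vastly more complex; the point is only the shape of the heuristic.

```python
# Hypothetical credibility scores; unknown domains get a neutral default.
CREDIBILITY = {
    "nytimes.com":           0.9,
    "reuters.com":           0.9,
    "totally-real-news.biz": 0.1,
}

def flag_suspicious(ranked_domains, default=0.5):
    """Flag any result that outranks a more credible domain below it."""
    flagged = []
    for i, domain in enumerate(ranked_domains):
        score = CREDIBILITY.get(domain, default)
        if any(CREDIBILITY.get(d, default) > score
               for d in ranked_domains[i + 1:]):
            flagged.append(domain)
    return flagged

print(flag_suspicious(["totally-real-news.biz", "nytimes.com", "reuters.com"]))
# -> ['totally-real-news.biz']
```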

He and Scott offer a number of technical solutions to help ensure that SMMS companies, Internet platforms, and advertisers head into the 2018 and 2020 elections with stronger safeguards against misuse of their digital toolkits. But in the effort to promote policy change and push Internet companies to implement better security processes, Ghosh believes primarily in the power of public opinion. “The best way we can raise awareness” about how “the threat of disinformation can linger on these platforms, and surface at the most critical times in our national history, is by talking about and writing about it,” he says. “I’m talking about the pitchforks coming out.”

