By: Yaël Eisenstat
Published in: The Washington Post
Date: November 4, 2019

I joined Facebook in June 2018 as its “head of Global Elections Integrity Ops” in the company’s business integrity organization, focused specifically on political advertising. I had spent much of my career working to strengthen and defend democracy — including freedom of speech — as an intelligence officer, diplomat and White House adviser. Now I had the opportunity to help a company that I viewed as playing a major role in one of the biggest threats to our democracy possibly correct its course.

In the year leading up to our 2016 election, I began seeing the polarization and breakdown of civil discourse, exacerbated by social media, as our biggest national security threat; I had written about that before Facebook called. I didn’t go into it with rose-colored glasses on, and I didn’t think I was going to change the company. But I wanted to help Facebook think through the very challenging questions of what role it plays in politics, in the United States and around the world, and the best way to ensure that it is not harming democracy.

A year and a half later, as the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, it can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?


The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned whether that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information.

Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.” My leadership in the business integrity organization rejected some of the proactive solutions my team was building to try to solve highly consequential problems, particularly if they involved solutions that were not “scalable.” Every election has a different set of challenges and local realities, so “scalability” is antithetical to truly addressing the issues affecting individual elections around the world.

Ultimately, I was not empowered to do the job I was hired to do, and I left within six months. So unfortunately, I don’t know if anybody up the chain ever considered our proposals to combat misinformation in political ads. Based on the company’s current policy allowing politicians to lie in ads and the dissent letter last week signed by 250 Facebook employees disagreeing with the policy, it seems clear that they did not.


As we now know, paid advertising was just a small fraction of the Russian activities ahead of our 2016 presidential election, and social media affects civil discourse and warps democracy in many other ways. But how the company decides to handle the current controversy is the biggest test for whether it will ever truly put society and democracy ahead of profit and ideology. The dissent letter made multiple references to all the hard work the “product teams have made in integrity over the last two years” and said they “don’t want to see that undermined by policy.” This is a very real tension at Facebook: I repeatedly saw passionate and thoughtful work in my own group not make it past the few voices who ultimately decided the company’s overall direction. 

During my interviews, I had been asked if I thought the company should ban political ads, and at the time, the answer was obvious to me. I said that although it seemed like the easiest solution, banning political advertising on the world’s largest social media platform would tilt the scales toward incumbents who already have disproportionate access to media, especially in countries with dictatorial regimes; Facebook would risk squashing the voices of smaller parties and candidates. This is the one point on which I agreed with Mark Zuckerberg’s defense of political advertising in his speech last month. 

But couching the issue now as simply a question of free speech is both disingenuous and an intentional distraction. Many of the fixes found in the company’s new ad transparency rules are laudable and necessary, but the core issue will not be solved before 2020 without addressing the fundamental, systemic problem that the business model causes.

Sheryl Sandberg, Facebook’s chief operating officer, said in a Bloomberg News interview Wednesday that the company is leading on transparency in political advertising and providing all the relevant information in the “ad library.” But true transparency would include information about the tools that differentiate advertising on Facebook from traditional print and television, and in fact make it more dangerous: Can I see if a political advertiser used the custom audience tool, and if so, if my email address was uploaded? Can I see what look-alike audience advertisers are seeking? Can I see a true, verified name of the advertiser in the disclaimer? Can I see if and how Facebook’s algorithms amplified the ad? If not, the claim that Facebook is simply providing a level playing field for free expression is a myth.

Free political speech is core to our democratic principles, and it’s true that social media companies should not be the arbiters of truth. But the only way Facebook or other companies that use our behavioral data to potentially manipulate us through targeted advertising can prevent abuse of their platform to harm our electoral process is to end their most egregious targeting and amplification practices and provide real transparency. Until they volunteer — or are forced by government — to do so, I now believe they should halt political advertising.

Banning political ads would create problems of its own, such as determining what counts as an “issue ad” and stifling the ability of advocates for causes such as climate change policy to advertise. But allowing candidates to spread disinformation using sophisticated targeting tools that exploit our data cannot be the only other possible option.

Facebook seems to think it is: Now the company says it will let politicians lie in ads in the upcoming British elections, too. It’s clear that the company won’t make the necessary fixes unless it is forced to, whether by advertisers who refuse to spend money on its platforms until Facebook cleans up the spread of misinformation and other harmful content; by employees who continue to demand accountability and responsibility from their leaders; or, most immediately, by government action. We need lawmakers and regulators to help protect our children, our cognitive capabilities, our public square and our democracy by creating guardrails and rules to deal directly with the incentives and business models of these platforms and the societal harms they are causing.

The “culture of fear,” nasty political campaigns and amplified extreme voices are not new in American society. But the scale at which these platforms have fueled and exacerbated them, exploiting our emotional biases to keep our eyeballs on their screens, vacuuming up our data and selling targeting tools to advertisers, has tilted the playing field toward the most salacious and fanatical voices.

Whether politicians should be allowed to run false ads, who should decide which claims are true and who should govern the Internet are important questions that society will continue to debate. But that is a different matter from whether companies should profit from providing potent information warfare tools for political advertisers to target us with disinformation. The answer there is clear: We can’t afford to let them anymore.