Throughout 2019, social media companies faced increased criticism regarding misleading political advertisements on their respective platforms. In response, Twitter founder and CEO Jack Dorsey announced on October 30 that the platform would ban all advertisements about political candidates and elections. On November 20, 2019, Google announced that political ads on the company’s platforms, including YouTube, could no longer be directed to specific audiences based on individuals’ public voter records or political affiliations. Meanwhile, Facebook continued to defend the company’s policy of not fact-checking political advertising.
Questions over whether platforms should remove false or misleading political advertisements were the focus of increased media attention in October 2019 after former Vice President and current Democratic presidential candidate Joe Biden requested that Facebook remove a Donald J. Trump for President campaign (Trump campaign) ad. The advertisement claimed that Biden had used the threat of withholding $1 billion in aid to Ukraine to suppress an investigation of Burisma, a Ukrainian company on whose board his son, Hunter, served. The claim was repeatedly labeled untrue by news organizations.
In response to requests to remove the Trump campaign advertisement from its platform, Facebook declined, citing a company policy that direct political speech made by politicians is not fact-checked. According to an October 10 Washington Post report, although Facebook declined to comment on how it formulated the political advertisement policy, the company confirmed it had formally put it in place in September 2018 before the congressional midterm election.
Facebook’s refusal to remove the ad prompted backlash from other Democratic 2020 contenders. On October 10, presidential candidate and U.S. Sen. Elizabeth Warren (D-Mass.) responded to Facebook’s decision by running her own campaign ad, satirically stating that Facebook CEO Mark Zuckerberg supported President Trump for reelection. “Breaking news: Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election,” the Warren ad stated in part. “But what Zuckerberg has done is given Donald Trump free rein to lie on his platform — and then to pay Facebook gobs of money to push out their lies to American voters.” When asked about the ad, a Facebook spokesperson told The Hill in an October 12 statement that the company “believes political speech should be protected.”
Zuckerberg also defended the decision in an October 17 interview with The Washington Post ahead of a speech scheduled at Georgetown University. “People worry, and I worry deeply, too, about an erosion of truth,” Zuckerberg said. “At the same time, I don’t think people want to live in a world where you can only say things that tech companies decide are 100 percent true. And I think that those tensions are something we have to live with.”
Additionally, on December 1, 2019, several media outlets, including Axios and Deadline, reported that in a “CBS This Morning” interview aired on December 2, Zuckerberg once again defended the decision, stating, “I don’t think that a private company should be censoring politicians or news.” He added, “[T]his is clearly a very complex issue . . . and a lot of people have a lot of different opinions. At the end of the day, I just think that in a democracy, people should be able to see for themselves what politicians are saying.” The full clip is available online at: https://www.axios.com/zuckerberg-on-cbs-defends-facebook-ads-policy-93e391b1-1491-4405-b09e-689c88766be5.html.
However, observers questioned Facebook’s motives and criticized the company’s rationale. In an October 17 statement, Joe Biden for President campaign spokesperson Bill Russo asserted that Zuckerberg’s “attempted use [of] the Constitution as a shield for his company’s bottom line and his choice to cloak Facebook’s policy in a feigned concern for free expression demonstrates how unprepared his company is for this unique moment in our history and how little it has learned over the past few years.”
In a November 4 New York Times op-ed, Columbia Law School professor and First Amendment expert Tim Wu emphasized that Facebook’s “insisting on accepting not only political advertising, but even deliberate and malicious lies if they are in the form of paid advertisements” is “increasingly hard to understand.” Wu added that “[e]ven as Facebook’s ‘integrity’ teams try to stamp out other forms of deception, paid promotions gain access to the full power of Facebook’s tools of microtargeting, its machine learning and its unrivaled collection of private information, all to maximize the influence of blatant falsehoods.”
On November 21, The Wall Street Journal reported that Facebook was weighing whether to update the company’s advertising policy. “As we’ve said, we are looking at different ways we might refine our approach to political ads,” a Facebook spokesman told the Journal. As the Bulletin went to press, Facebook had not announced any policy changes.
Conversely, other social media and technology companies, including Twitter and Google, decided to take steps to limit political advertising or, in Twitter’s case, ban such advertisements altogether. Dorsey announced the new policy in a series of tweets on October 30. He tweeted, “We’ve made the decision to stop all political advertising on Twitter globally.” He also emphasized that political ads have the effect of “forcing highly optimized and targeted political messages on people” and that “while internet advertising is incredibly powerful and very effective for commercial advertisers, that power brings significant risks to politics, where it can be used to influence votes to affect the lives of millions.”
Twitter released the first iteration of the new policy on its website on November 15. The policy stated that Twitter would ban political content, which included “content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” According to the policy, candidates, political parties, and elected or appointed government officials will not be allowed to run ads of any kind, and in the United States, the ban will apply to PACs, super PACs, and 501(c)(4)s as well. The policy did state that Twitter would allow ads with messages about issues such as civic engagement, the economy, the environment, and social equity as long as they do not advocate for or against a specific political, judicial, legislative, or regulatory outcome related to those matters. The full policy is available online at: https://business.twitter.com/en/help/ads-policies/prohibited-content-policies/political-content.html.
In a November 20 company blog post, Google announced its new policy, which provided that political advertisements could no longer be directed to specific audiences based on their public voter records or political affiliations categorized as “left-leaning,” “right-leaning,” or “independent.” However, the policy still allowed political advertisements to be targeted to audiences based on age, gender, location, and the content of the websites where the ad is placed. “This will align our approach to election ads with long-established practices in media such as TV, radio and print, and result in election ads being more widely seen and available for public discussion,” wrote Scott Spencer, vice president of product management for Google ads. Google’s full policy is available online at: https://blog.google/technology/ads/update-our-political-ads-policy.
Both Twitter’s and Google’s policies attracted a mixed reception. Although some praised both companies’ policy updates, others argued that Google’s policy would not result in much change, and that Twitter went too far. Michael Posner, a professor at New York University’s Stern School of Business, told The New York Times on November 20 that Google’s policy change was a good start, but did not go far enough in dealing with potential misinformation. “It feels too much like a lawyer looking for language to give the company a lot of latitude,” Posner said.
Democratic digital strategist Keegan Goudiss expressed concerns about the implications of the policies in a November 20 New York Times article. “We are quickly transitioning to a world where corporate communications is prioritized over anything deemed ‘political,’” Goudiss said. “That’s dangerous for democracy.”
In a November 4 op-ed for The Guardian, University of Utah assistant professor Shannon McGregor argued that although Twitter’s ban was a great PR move for the company, it is “unnecessarily severe and simplistic” and disadvantages political candidates challenging incumbents. McGregor explained that “digital ads are much cheaper than television ads, drawing in a wider scope of candidates, especially for down-ballot races” and that most digital ads are actually used “as an organizational tool” rather than for nefarious purposes. She added, “Challenger campaigns use low-cost digital ads to build lists of supporters, solicit donations and mobilize volunteers” and to “introduce themselves.”
McGregor also questioned how Twitter would make determinations around whether content is in fact “political” and therefore banned. “A move to ban political ads still puts Twitter in the position of arbitrating political speech, as it must decide what is — and what is not — political,” McGregor wrote.
University of North Carolina (UNC) Hussman School of Journalism and Media political communication professor Daniel Kreiss explained in an October 30 Wired magazine article that the advertisements themselves were not the problem, but the way they were targeted to the most susceptible individuals. “The challenge is that we need to have a system where we recognize the important role that political ads have long played in American political discourse, but to build more friction into the system,” Kreiss wrote. “We’re stuck between two extremes: We have Facebook saying everything goes, and Twitter saying nothing goes. There’s a sensible position in the middle, which is why not allow paid political ads but get away from hypertargeting?”
— Sarah Wiley
Silha Research Assistant