Big Tech, Fake News, and Political Advertising

by Vincent Grassi, Monmouth University Polling Institute Intern

Social media has had a huge impact on politics by shaping public discourse and revamping civic engagement. However, as we have seen with foreign interference in elections, social media is an outlet for everyone, including what many refer to as online “bad actors.” Here, we’ll look at how big tech companies (namely Facebook, Google, and Twitter) have been tasked with combating the spread of fake news and disinformation ahead of the 2020 election. And, more specifically, we’ll examine their role in safeguarding political speech while also acting to dismantle false or deceptive political advertisements shared on social media.

Fears concerning disinformation campaigns that target voters and our elections stem chiefly from the revelation of a Russian state-backed online influence operation that maliciously used social media to interfere in the 2016 election. According to a Monmouth University Poll from March of 2018, most Americans (87%) believed outside groups or agents were actively trying to plant fake news stories on social media sites like Facebook and YouTube, and 71% felt this was a serious problem. Nearly three-in-ten (29%) believed that social media outlets were mostly responsible for the dissemination of fake news, although a majority (60%) said they were partly responsible but other media sources were more to blame. In addition, over two-thirds of Americans (69%) felt that Facebook and YouTube weren’t doing enough to stop the spread of fake news.

In the lead-up to the 2020 election, Facebook, Twitter, and Google have taken responsibility for eliminating social media accounts operated by foreign actors that intend to mislead the citizens of other countries. For example, Facebook has focused on removing accounts that take part in what it deems “coordinated inauthentic behavior,” or networks of accounts that are intentionally trying to mislead others. According to an article on Facebook Newsroom, Nathaniel Gleicher, the company’s head of cybersecurity policy, said, “In the past year alone, we have announced and taken down over 50 networks worldwide for engaging in CIB, including ahead of major democratic elections.”

Deceptive practices on social media platforms are attributed not only to foreign agents, but also to American citizens. Gleicher said, “While significant public attention has been on foreign governments engaging in these types of violations, over the past two years, we have also seen non-state actors, domestic groups and commercial companies engaging in this behavior.”

Recently, the issue of false ads was brought to the forefront when a controversial Trump campaign advertisement about Joe Biden was published on Facebook, Twitter, and Google’s YouTube. The ad has been largely regarded as spreading false, unfounded claims about the former vice president’s past involvement in Ukraine. Biden’s campaign urged the social media giants to remove the advertisement from their platforms, but they declined.

Facebook’s CEO Mark Zuckerberg has been under fire for this decision. In a speech on free expression given at Georgetown University after his decision, Zuckerberg said, “I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads.” Also in the speech, the Facebook CEO revealed that the company does not fact-check political ads.

Since then, Facebook has been under heavy scrutiny and, according to an article posted to Facebook Newsroom on October 21, it is looking to make some changes to address the problem. The article reads, “Over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labeled so that people can better decide for themselves what to read, trust and share.” Facebook’s plan to protect the integrity of the U.S. 2020 elections includes fighting foreign interference campaigns, increasing transparency by showing how much presidential candidates have spent on ads, and reducing misinformation by improving its fact-checking labels and investing in media literacy programs.

In response to Zuckerberg’s defense of Facebook’s policy, Twitter CEO Jack Dorsey announced that Twitter will do the opposite and not allow any political advertising on its platform starting at the end of the month. Dorsey explained the decision by highlighting some factors that should be considered in the ongoing debate, tweeting, “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.”

While both Twitter and Facebook have addressed their platforms’ policies on political advertisements, Google has refrained from commenting on YouTube’s policy. According to Google’s transparency report, the company has received around $126 million in revenue from the 172,308 political advertisements it has run since May 31, 2018. Also, findings from Quartz show that the Trump campaign’s controversial advertisement appeared more often on YouTube than it did on Facebook.

Some of the Democratic candidates for president have signaled their frustration with the social media giants. Leading Democratic candidate Elizabeth Warren targeted Facebook by posting her own purposely false advertisement on the platform, stating that Zuckerberg had endorsed President Trump’s reelection, in order to see if it would be approved (it was). Another candidate, Kamala Harris, urged Twitter to suspend President Trump’s Twitter account, partly due to a series of tweets about the Ukraine scandal whistleblower and the impeachment inquiry in which, she believes, the president violated Twitter’s terms of service by using the platform to obstruct justice and incite violence.

While big tech companies are tasked with disrupting foreign disinformation campaigns, they have also had to focus on domestic issues such as the viral spread of misinformation. Their political advertising policies have raised a further concern, fueling the debate over how social media companies should approach political speech on their platforms. Is it better to leave political social media advertisements unchecked by fact-checkers, to ban them altogether, or is there some middle ground that can effectively safeguard political speech online?
