(New York, N.Y.) – Technology giants like Facebook, Twitter, and Google/YouTube have long maintained that they are merely neutral platforms and cannot be held responsible for what users choose to post and share on their sites. Section 230 of the Communications Decency Act (CDA) has provided blanket liability protection intended to encourage tech firms to proactively remove hateful, abusive, violent, and other unwanted content from their sites. Instead of availing themselves of the freedom to properly enforce their Terms of Service under the protection afforded by Section 230, tech companies have done the bare minimum to rein in dangerous content while using the liability shield to fend off lawsuits from victims of terrorism.
An analysis of tech companies’ behavior reveals that they are not simply neutral platforms. In fact, these companies play a much more active role with regard to content: they collect and sell it, promote it, and even produce their own.
Big tech likes to point out that its platforms are free of charge, but there is still a cost to the public. In exchange for using these ostensibly free platforms, users post and share content and hand over personal data to these firms, which then use that information for profit. For instance, Twitter packages streams of public posts and shares them with business partners around the world. Facebook insists that it does not sell user data, arguing instead that it simply sells access to its users, from whom others can harvest information, a distinction without a difference. The simple truth is that users’ content and personal information are the drivers of Facebook’s profitability.
Dr. Hany Farid, professor of electrical engineering and computer science at UC Berkeley and senior advisor to the Counter Extremism Project (CEP), explained in a May 14 webinar that tech companies algorithmically amplify certain content over other content. This practice of using algorithms to micro-target content to specific audiences casts doubt on the tech industry’s claim that its platforms are neutral and therefore protected under Section 230. Dr. Farid referenced a study he co-authored in March on YouTube’s promotion of conspiracy videos on its platform, which found that approximately 70 percent of watched content on YouTube is recommended by its algorithm. He explained, “So [those algorithm-based videos are what’s in the] ‘Watch Next’ or ‘Recommended For You’ down the right-hand column. So all of the action is in these recommendation engines. And so when YouTube says—hey guys watch this misinformation, watch this conspiracy, watch this hate video—they are the ones who are promoting this material. They're not just hosting it.”
The real-world effects of this “activist role” were most vividly illustrated in May 2019, when a whistleblower complaint to the Securities and Exchange Commission alleged that Facebook’s auto-generation algorithm had in fact created a “branded landing space” for extremist groups such as al-Qaeda, ISIS, and al-Shabab. The CEP report Spiders of the Caliphate likewise found that ISIS followers exploit Facebook’s algorithms, whether through the “suggested friends” feature or the auto-generation of videos and pages.
Moreover, tech firms are actively pursuing content-producing endeavors despite protests to the contrary. In 2016, CEO Mark Zuckerberg insisted that Facebook is “a tech company, not a media company.” But by the following year, Facebook was reportedly willing to spend up to $1 billion on original content and looking to revamp its Facebook Watch tab. By 2019, the so-called tech company was meeting with publishers and studio producers to develop new shows for Watch.
Tech’s actions make it clear that these companies have been, and will continue to be, actively involved in the collection, manipulation, and creation of content, behaving in effect as publishers.
As U.S. Attorney General William P. Barr pointed out, Section 230 was intended to help shield tech companies from liability if they opted to moderate content. The tech industry unfortunately wants to continue to have it both ways. Big tech wants to be able to monetize, control, manipulate, and create new content, yet somehow be treated under the law not as hugely powerful and demonstrably engaged corporations, but as a collection of blank slates. Indeed, the industry has spent record sums and lobbied extensively to maintain the coveted shield that is Section 230.
Yet, despite the industry’s massive lobbying, there is growing bipartisan support in Congress to amend the law. Given its proven unwillingness to act effectively in the interest of public safety, the tech industry no longer deserves the ability to hide behind such a broad shield.