December 23, 2024
Free speech on the internet as we know it could soon be dramatically transformed, with the Supreme Court set to hear a high-stakes pair of oral arguments over whether social media companies can be held liable for content posted by third parties.

At issue in one case is the scope of legal protections in a nearly 30-year-old federal law and whether they apply to the algorithms that digital service providers like Google and Twitter use to recommend what videos and websites to show their users. 

Those two companies were separately sued by the families of terror victims. Oral arguments will be held over two days this week.

The family of Nohemi Gonzalez claims Google, the owner of YouTube, willingly allowed the Islamic State group to post hundreds of videos that helped incite violence and recruit potential supporters. The 23-year-old U.S. citizen studying abroad was one of 130 innocents killed in Paris during a series of IS-affiliated attacks in November 2015. 

The platform’s software algorithms – the automated calculations that decide which videos to show each user – allegedly steered extremist material to the viewers most likely to be interested in it.

It will be the first time the justices review Section 230 of the Communications Decency Act, which has been in effect since 1996. The provision gives digital platforms a measure of immunity from some criminal and civil claims. Several lower courts have interpreted Section 230 as giving some of the largest companies in the world broad legal protection.

The Supreme Court in Washington, D.C. (AP Photo/J. Scott Applewhite, File)

There have been bipartisan calls in Congress to limit the provision’s scope, with lawmakers saying it gives too much power to these platforms, has not kept pace with the rapidly evolving digital landscape, and leaves tech firms vulnerable to government pressure to suppress certain speech.  

A separate appeal to be argued before the high court deals with liability under Section 2333 of the Anti-Terrorism Act – and whether hosting terrorist content online could constitute “aiding and abetting” under federal civil law, regardless of liability protections in Section 230. 

That appeal was brought by American relatives of Jordanian citizen Nawras Alassaf, who was among the 39 people killed during a 2017 mass shooting inside an Istanbul nightclub. Three social media companies, including Twitter, were sued for civil damages, with the family claiming the companies provided a messaging platform for the gunman, who allegedly was recruited and directed by ISIS to carry out the attack.

In a broader sense, the nine justices will confront a range of competing interests: the boundaries of a company’s responsibility to identify and remove volatile or dangerous content, and to prevent its algorithms from promoting it; the push for even greater unfiltered “public forum” debate; and the role of government in policing such content. 

Tech experts say the high court’s decision would affect not only web search results, but potentially social media feeds, apps, online marketplace listings and streaming-service content.

“The First Amendment prevents lawmakers from taking extreme steps that would potentially limit our freedom of speech,” said Ashley Johnson, senior policy analyst with the nonpartisan Information Technology and Innovation Foundation (ITIF). “Governments are not allowed to tell companies what content they can and can’t allow, and it’s not allowed to tell people what things they can and can’t say. But when it comes to trying to reform Section 230 – that debate over what the end goal should be is definitely impeding lawmakers from making forward momentum on the issue.”

Justice Clarence Thomas may hold the key to deciding these cases. He has repeatedly urged his colleagues to consider limiting Section 230’s reach in an appropriate case.

“Applying old doctrines to new digital platforms is rarely straightforward,” Thomas wrote in 2021. “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.” 

He and his benchmates now get that chance.

The law and its limits

It has become all too convenient: go to a web browser’s search engine and type in any topic of interest – from parachutes to health advice to political discourse. Algorithms then recommend links to third-party sites: instant information and insight.
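To make the mechanism concrete, here is a minimal sketch of engagement-based recommendation in toy form. It is purely illustrative: no platform’s actual ranking system is this simple or publicly documented, and the Video class, its fields, and the recommend() function below are all hypothetical.

```python
# Purely illustrative toy model of engagement-based recommendation.
# All names, fields and numbers are hypothetical; no real platform's
# ranking system is this simple or publicly documented.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topics: set        # e.g. {"news", "politics"}
    watch_rate: float  # fraction of past impressions that became views

def recommend(candidates, user_interests, k=3):
    """Rank candidate videos by topical overlap with a user's inferred
    interests, weighted by how often other users watched them."""
    def score(video):
        overlap = len(video.topics & user_interests)  # shared topics
        return overlap * video.watch_rate
    return sorted(candidates, key=score, reverse=True)[:k]

videos = [
    Video("Cooking basics", {"food"}, 0.40),
    Video("Campaign rally highlights", {"news", "politics"}, 0.25),
    Video("Provocative opinion clip", {"politics", "outrage"}, 0.60),
]

# A user who lingers on political content is shown the most-watched
# political clip first: engagement, not editorial judgment, sets the order.
print(recommend(videos, {"politics", "news"}))
```

The question before the justices is whether that kind of automated targeting is still just hosting “information provided by another information content provider,” or something closer to publishing.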

According to the Electronic Frontier Foundation, about 40 million people used the internet worldwide when Section 230 was enacted. By 2019, more than four billion people were online, with 3.5 billion of them using social media platforms. In 1996, there were fewer than 300,000 websites; by 2017, there were more than 1.7 billion. 

Section 230 of the CDA says in part, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The idea was that promoting more user speech online outweighed potential harms, while putting the responsibility on the speaker, not the service that hosts the speech. 

While some have dubbed it “The 26 Words That Created the Internet,” Section 230 more accurately enshrined the financial foundation that has allowed digital platforms – and free speech – to thrive.

The law also says that no interactive computer service provider “shall be held liable” for what Justice Thomas said in 2020 were “good-faith acts to restrict access to, or remove, certain types of objectionable content; or… giving consumers tools to filter the same types of content.”

Lower court precedents have held that this protection extends to forwarded email, video and photo file-sharing, message boards, even online dating services.

Many social media companies see their role as the modern equivalent of the town square, where a thoughtful exchange of ideas and services can thrive and be accessible to anyone across the globe – pushing cultural, social, and political change. 

Members of the Supreme Court before the retirement of Justice Stephen Breyer.

Members of the Supreme Court before the retirement of Justice Stephen Breyer. (Erin Schaff-Pool/Getty Images)

But those companies also say they retain some power to limit what kind of content their users post in the name of public safety. That elusive quest for balance has led to claims of selective censorship, particularly over some political and ideological views.

When announcing his intent to purchase Twitter, billionaire Elon Musk said he wanted it to be an “inclusive arena for free speech.” He promised to restore the accounts of former President Donald Trump and others who had been kicked off the platform for controversial posts linked to the Jan. 6, 2021, U.S. Capitol violence. Trump’s accounts have been restored in recent weeks, including on Twitter.

As president, Trump signed an executive order in May 2020 designed to remove some big tech protections if companies engaged in “selective censorship” harmful to national discourse. The order came shortly after Twitter attached fact-check warnings to some of his tweets.

“Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike,” the executive order stated. 

The arguments

A federal appeals court issued separate, conflicting rulings on whether terror victims’ families could sue under the Anti-Terrorism Act (ATA) and the Justice Against Sponsors of Terrorism Act (JASTA).

Lawyers for the Gonzalez family said in their legal brief with the Supreme Court that “YouTube officials were well aware that the company’s services were assisting ISIS… that despite extensive media coverage, complaints, legal warnings, congressional hearings, and other attention for providing online social media platform and communications services to ISIS, prior to the Paris attacks, YouTube continued to provide those resources and services to ISIS and its affiliates.”

Google told the high court in its brief that any ruling limiting the protections of Section 230 would “undercut a central building block of the modern internet.”

“The stakes could not be higher. A decision undermining Section 230 would make websites either remove potentially controversial material or shut their eyes to objectionable content to avoid knowledge of it,” said Google general counsel Halimah DeLaine Prado in a recent blog post. “You would be left with a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content.”

The current Supreme Court cases are not the only legal challenges to Section 230.

Republican-led Missouri and Louisiana have filed suit against Biden administration officials – including the president and Dr. Anthony Fauci – accusing the federal government of “coercing” big tech to censor and suppress online critics over COVID-19, election integrity, and other issues – in the name of combating “misinformation.” That case is still moving through the lower federal courts.

Citing internal company emails recently published under the so-called “Twitter Files,” Missouri Attorney General Andrew Bailey said last month, “These emails confirm what we’ve known all along: The [Biden administration] has been colluding with social media companies to stifle opposing voices. I will continue to push back against this blatant attack on the 1st Amendment with every tool at my disposal.”

The FBI has admitted holding regular meetings with tech content managers over problematic content, but officials strongly deny any “inappropriate relationship” with any social media company.

“The FBI does not instruct or direct any social media company to censor an account or remove information from their platform,” the agency said Feb. 8. “In carrying out its law enforcement mission, the FBI receives voluntarily provided information from these companies, when the company believes there is a serious risk of death or serious physical injury. In addition, the FBI also shares identified malign foreign influence information with these companies,” including activities by Russia, China and Iran.

The politics

Even if the Supreme Court sides with the big tech firms, only Congress can make lasting legislative changes to Section 230 and other content moderation provisions.

But there are partisan differences over what any legislative reforms should target. Republicans want to strip platforms of liability protection when they censor what they call conservative viewpoints. Democrats have focused on limiting immunity for misinformation and for extremist and hate-based speech.

And many lawmakers who call for changes are focusing not on the tech firms themselves, but on the federal government’s alleged role in pushing censorship.

President Joe Biden and his son, Hunter Biden. (AP Photo/Patrick Semansky)

The GOP-controlled House held hearings in February on the suppression of media stories regarding Hunter Biden’s laptop and on the alleged “weaponization” of federal law enforcement in other areas. That has prompted fresh congressional calls to scrub Section 230 and reform the public digital sphere.

President Biden, too, has called for changes to Section 230.

“I’m concerned about how some in the industry collect, share and exploit our most personal data, deepen extremism and polarization in our country, tilt our economy’s playing field, violate the civil rights of women and minorities, and even put our children at risk,” he wrote in a January op-ed in the Wall Street Journal. “We must fundamentally reform Section 230 of the Communications Decency Act… We also need far more transparency about the algorithms Big Tech is using.”

Those growing calls for a major revision of Section 230 might leave the Supreme Court reluctant to use the current cases for a sweeping ruling on free speech, preferring to let the public debate percolate for the time being.

“In the most extreme result, the Supreme Court could say that Section 230 protections don’t apply when online services use algorithms to recommend content to users. This seems like a pretty narrow approach that actually would end up impacting a large proportion of online services,” said ITIF’s Ashley Johnson.

“There’s just so much content, and algorithms are currently the most effective way to sort that content so that users are not under bombardment by material that is not relevant to them. And so that will end up impacting a lot of online services and their users in a significant way if the Supreme Court chooses to rule that way. But it wouldn’t necessarily prevent Congress from doing something in the future.”

The cases are Gonzalez v. Google LLC (21-1333) and Twitter, Inc. v. Taamneh (21-1496). Rulings are expected by late June.