December 25, 2024
Countries worldwide, including some top allies, are considering content regulations aimed at curbing misinformation and disinformation on Big Tech platforms, a strategy in tension with free speech priorities in the United States.

Legislation proposed in Australia on Sunday would give the country's media regulator the power to pressure and penalize tech companies over their handling of misinformation. The bill is part of a broader trend among nations seeking to counter misinformation, with some U.S. allies taking a firmer stance and prioritizing the control of misinformation over the protection of free speech.

“It’s a different approach,” Jennifer Huddleston, a technology policy research fellow at the Cato Institute, told the Washington Examiner. “Particularly around issues like hate speech where we’ve seen European countries be more comfortable with regulation around that idea than in the U.S.”

Australia

Australia’s proposed legislation would give the Australian Communications and Media Authority additional powers to force Big Tech companies to explain how they handle many types of content. It would require the companies to maintain records of their practices related to misinformation and disinformation. The bill defines misinformation as “unintentionally false, misleading, or deceptive content” and disinformation as content “intentionally disseminated” to cause serious harm.

These broad definitions do not establish a standard for how the Australian government would determine which content is false, which has sparked concerns about what such a policy would mean for free speech.

United Kingdom

The House of Commons is currently considering the Online Safety Bill, legislation proposed in 2022 to protect children and adults from harmful content online. The OSB includes an amendment that would require social media companies to proactively tackle state-sponsored disinformation campaigns from countries such as Russia. It would also allow the secretary of state to designate content considered harmful to adults, including disinformation and misinformation.

The OSB defines disinformation as the “deliberate creation and dissemination of false and manipulated information intended to deceive and mislead audiences,” either to cause harm or for political gain. Misinformation refers to the inadvertent spread of false information.

European Union

The European Commission, one of the European Union’s governing bodies, has advanced several efforts to rein in disinformation. The most prominent was the passage of the Digital Services Act, which requires companies to perform annual risk assessments and to take action to mitigate any risks they identify. The commission has also amended the EU’s Code of Practice on Disinformation, the document that sets out the principles for what counts as disinformation.

The Code of Practice on Disinformation now defines disinformation as “verifiably false or misleading information” spread for economic gain or to intentionally deceive, which may also be intended as a threat to the democratic process. The definition excludes “misleading advertising, reporting errors, satire and parody, or identified partisan news and commentary.”

Canada

The Canadian government has formed bodies to study misinformation and disinformation. Canadian Heritage Minister Pablo Rodriguez and Justice Minister and Attorney General of Canada David Lametti convened an expert advisory group in March 2022 to advise on legislative responses to harmful content. The experts agreed that some content, such as foreign interference, could be readily identified as dangerous or as disinformation. The group also concluded that holding Big Tech companies liable for the content on their platforms is “neither practical nor justifiable” and would breach several international trade agreements.

United States

Federal officials in the United States have sought to rein in disinformation since the 2016 elections. Congressional lawmakers have proposed multiple bills targeting certain types of content, including information related to elections and abortion availability. Federal agencies have also tried to counter disinformation, including through the Department of Homeland Security’s Disinformation Governance Board, created and disbanded in 2022. Those efforts have struggled to gain traction compared with those of other nations.

Republicans have slammed efforts like the Disinformation Governance Board, arguing they are used to promote censorship. The House Judiciary Committee, led by Chairman Jim Jordan (R-OH), has sent several letters to disinformation researchers accusing them of helping the federal government censor conservatives under the banner of combating misinformation.

The fact that the U.S. has not followed peer nations in trying to regulate misinformation may be due to “our devotion to the First Amendment, and our squeamishness about quelling conversation, even if part of the conversation comes from bad actors sharing false information,” Susan Campbell, a professor of media literacy at the University of New Haven, told the Washington Examiner.

The fact that many Big Tech platforms are American companies could also contribute to U.S. hesitation to regulate. “It’s easier for governments elsewhere to act without fear of undermining their competitiveness,” argued Jessica Brandt, policy director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution.
