December 22, 2024

Efforts to combat artificial intelligence's threat to democracy and humanity through regulations and guardrails face a number of legal complications, including the likelihood that new rules will clash with free speech protections and the logistical difficulty of controlling the use of algorithms.

The surge of interest in AI-powered chatbots like ChatGPT in 2023 led experts and critics to ring alarm bells about what AI could do to the country and the world, warning that it could spread misinformation, destabilize the job market, and even destroy humanity. The legislative and executive branches moved quickly to write new regulations meant to ensure the technology is used safely by companies and individuals without stifling innovation. But it will not be easy to write rules for such a complicated technology.

“A government’s role is to incentivize [AI’s] development and to mitigate its many risks,” Susan Aaronson, a professor of AI governance at George Washington University, told the Washington Examiner. However, “no one knows how to do it in a way that addresses the AI ecosystem and that is effective. It’s challenging to govern such a fast-moving technology.”

Dozens of bills to regulate AI have been introduced in Congress, and President Joe Biden issued an expansive executive order in October setting rules on AI. There is a clear desire in government to establish how AI models can be used safely.

But if lawmakers want to set meaningful regulations, they will need to realize that there are aspects of the technology that are not easily handled legislatively.

AI and free speech

One major obstacle to regulating AI is that, in the end, it is merely code and, thus, a form of speech that carries certain First Amendment protections. Regulating AI models is “much harder to do because now you’re talking about software and data regulation, and that intersects very closely with free speech and other issues,” Daniel Castro, vice president of the Information Technology and Innovation Foundation, told the Washington Examiner.

Software code has also been treated as a form of speech by higher courts. The Ninth Circuit Court of Appeals ruled in the 1990s that software source code was protected by the First Amendment and that the federal government could not restrict the publication of encryption code.

The speech issues come into play when discussing how to handle AI-generated fake images, video, or audio, also known as deepfakes. Deepfakes are seen as a threat because they can sow misinformation and power scams on a mass scale. But users still have the right to create images and other media through AI. “Any attempts to regulate the content produced by generative AI, including [AI models], run the risk of operating broadly to restrict protected expression,” Esha Bhandari, the deputy director of the ACLU’s Speech, Privacy and Technology Project, wrote in a blog post. The ability to lie is constitutionally protected outside of narrowly defined circumstances, so generative AI models cannot be broadly barred from producing such content.

Open-source software

Another aspect of AI that will make it difficult to regulate is the existence of powerful open-source models, large language models whose underlying code is available for anyone to view or modify. They are seen as integral tools for helping smaller companies compete with larger closed-source AI models like OpenAI’s GPT-4.

Open-source AI models are particularly hard to set guardrails around because their developers could live anywhere in the world. Once a model is released, anyone with access to the internet can change, update, or build upon it. That open access could also be used to spread misinformation or to develop biological weapons.

Congressional lawmakers are worried that malicious actors like Russia could abuse open-source models. Sens. Richard Blumenthal and Josh Hawley sent a letter to Meta CEO Mark Zuckerberg in June asking questions about the release of the company’s open-source large language model LLaMA, which they allege could be misused for “spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.”

Hugging Face CEO Clement Delangue, who operates one of the leading open-source AI hubs, defended open-source models before Congress that same month, saying they “are critical to incentivize and are extremely aligned with the American values and interests.” Open-source models remain a gray area for regulators. While lawmakers like Hawley and Blumenthal may wish to restrict such models because malicious actors could abuse them, those constraints would also limit the good the models do.

The complexities around open-source models are why the European Union decided to exempt most open-source models from the AI Act, which is expected to pass this spring. The EU’s comprehensive AI framework will place varying restrictions on AI models depending on their risk level. These include scrutiny of the foundation models used to perform various tasks, restrictions on high-risk technologies like facial recognition scanners, and requirements that AI developers file summaries of their training data for regulatory review.

Regulatory alternatives

AI is advancing faster than agencies can write regulations. Lawmakers may find more success by focusing on specific applications of the technology rather than on the models themselves.

There are already “technology neutral laws [that] apply to the way AI is used,” Duane Pozza, a former FTC staffer and partner at Wiley Rein, told the Washington Examiner. For example, the Federal Communications Commission announced last week that it was banning the use of AI-generated voices in robocalls. The decision didn’t create new rules, but it did define how “AI fits into existing regulations around robocalls,” Pozza said.

That’s why Castro encourages regulating an AI model’s functions rather than the model as a whole. For example, the Federal Trade Commission moved on Thursday to put rules in place to crack down on the use of generative AI in scam calls. The FTC already has the authority to deal with scams, but it is now targeting the specific use of AI to scam people rather than regulating the software behind it.


Some experts in the AI space urge lawmakers not to push legislation through without understanding the implications. “The more we rush to regulate, the more costly it’s gonna be,” Will Rinehart, a senior fellow at the American Enterprise Institute, told the Washington Examiner. Hawley attempted in December to expedite the passage of legislation stripping AI of Section 230 protections, a reform that would have significant speech implications for the internet and online platforms. Sen. Ted Cruz (R-TX) blocked the bill, stating that it needed to go through proper committee channels before it could be considered.

“When you want something that protects consumers and protects individuals, but also is not onerous to innovation and entrepreneurship, it takes some time to figure out what the true problems are,” Rinehart argued.
