
Silicon Valley has moved to regulate the use of artificial intelligence to create misinformation ahead of the 2024 election while the federal government struggles to establish guardrails.

This week, on the same day as the Iowa caucuses, ChatGPT developer OpenAI unveiled tools meant to help combat misinformation and provide accurate voting information to users. The 2024 cycle is the first major election in which generative artificial intelligence is widely available, and Big Tech companies and election officials alike are worried about its abuse, including the use of generative AI to create false images attacking political opponents or to discourage people from voting.

OpenAI focused its efforts on adopting protocols that attach digital “credentials” to all images made by DALL-E 3, its image generation model, allowing users to identify them as AI-generated. The company partnered with the National Association of Secretaries of State to provide up-to-date voter information. Finally, OpenAI updated its usage policies for ChatGPT and DALL-E 3 to bar anyone from using the software to impersonate government officials or institutions. The company previously banned political campaigns from using the software to target specific demographics.

OpenAI’s decision to set guardrails for how ChatGPT can be used to seek or promote political information is “an interesting move that says, ‘We confess that our technology is not ready for prime time,’” Jim Kaskade, the CEO of the AI-powered software company Conversica, told the Washington Examiner. He argued that it is easier for the chatbot developer to restrict what users can do with ChatGPT and DALL-E 3 than to risk the tools being used in harmful ways.

The Industry and AI

Officials across the United States and the world have expressed worry about the technology being used to attack campaigns. State and county officials listed AI-based misinformation as one of their top fears for the 2024 election, according to a survey by the cybersecurity company Arctic Wolf. AI-powered disinformation was also ranked among the highest threats facing the world in 2024 in a survey by the World Economic Forum.

So far, however, the technology has not played the outsized role some predicted. AI’s presence in the 2024 elections has been limited to a few ads posted by former President Donald Trump and Gov. Ron DeSantis (R-FL).

The CEO of Microsoft, OpenAI’s chief investor, projected confidence that the world is ready. “It’s not like this is the first election where disinformation, or misinformation, and election interference is going to be a real challenge that we all have to tackle,” Satya Nadella said Tuesday in Davos, Switzerland, according to Bloomberg.

OpenAI CEO Sam Altman was a bit more hesitant. “I don’t think this will be the same as before,” Altman said at the same event. “It’s always a mistake to try to fight the last war.”

Google and Meta preemptively announced guidelines last fall requiring political advertisements to disclose whether AI-generated images were used. Google also stated in December that it would “restrict the types of election-related queries for which [Google’s chatbot Bard and its Search Generative Experience] will return responses.”

What is unclear is whether these guidelines and restrictions will be sufficient to stop potential abuses or whether bad actors will find loopholes around them.

“OpenAI taking initial steps to remove potential AI abuse is encouraging,” Alon Yamin, the CEO of the AI text analysis startup Copyleaks, said in a statement to the Washington Examiner. “But as we’ve witnessed with social media over the years, these actions can be difficult to implement due to the vast size of a user base.”

Slow Federal Response to AI

The companies have turned to self-regulation as the federal government has been slow to pass any rules. Senate Majority Leader Chuck Schumer (D-NY) told reporters in the fall that the first AI-focused legislation Congress passed would address AI in elections. Sen. Amy Klobuchar (D-MN) and Rep. Yvette Clarke (D-NY) introduced the REAL Political Advertisements Act in May 2023, legislation that would require political ads to disclose AI-generated content, but the bill has not advanced in either chamber. Clarke’s and Klobuchar’s offices did not respond to requests for comment from the Washington Examiner.

The Federal Election Commission, which oversees political advertising and campaign donations, announced in August 2023 that it was considering new rules for regulating AI in campaign ads, but it has yet to release any details since seeking public comment in the fall. The agency is “diligently reviewing the thousands of public comments submitted” and intends to address the subject by early summer, FEC Chairman Sean Cooksey told the Washington Post. On that timeline, any rules would take effect only after many states have already held their primaries or caucuses.


While the press and tech companies will likely scrutinize AI-powered advertising closely, it is unlikely that any laws will pass in time to affect this cycle, Kaskade said.

States are beginning to introduce legislation to restrict the use of the technology, but those efforts are still in their early stages. Florida lawmakers introduced a bill this month that would outlaw the use of AI in campaign ads. Arizona, meanwhile, has started testing mock deepfakes of its own to see how well officials can detect false images.
