"A Threat To Democracy": Activists Sound Alarm Over 'Deceptive AI' In Politics

Artificial Intelligence (AI) has made its way into politics, with politicians increasingly using it to accelerate mundane but critical tasks such as analyzing voter rolls, assembling mailing lists, and even writing speeches, Bloomberg reports. A recent Republican ad, however, has left-wing lawmakers in a snit.

The ad by the Republican National Committee (RNC) depicted fictional scenarios of a Chinese attack on Taiwan and martial law in San Francisco, rendered in eerily realistic AI-generated imagery. In response, Rep. Yvette Clarke (D-NY) swiftly introduced legislation requiring disclosure of AI-generated content in political ads. That said, the bill is essentially a giant exercise in virtue signaling given the Republican-dominated chamber.

"This is going too far," Clarke said in an interview.

Beyond potentially deceptive ads, AI is boosting productivity while also threatening to eliminate certain political roles. Legal professionals and administrative workers are among those vulnerable to disruption, potentially reshaping the political workforce.

Still, according to Clarke, the most alarming aspect of AI's intrusion into politics is its potential to trick voters. The notorious "deepfakes" - fabricated audio and video that can be indistinguishable from reality - are now well within AI's grasp. This raises the specter of unscrupulous political actors using the technology to mislead voters.

A video ad from the Republican National Committee included AI-generated images of a fictional Chinese attack on Taiwan.
Source: Republican National Committee (RNC) YouTube Channel

"AI can save or destroy democracy. It’s like a knife fight, right? You can kill someone, or you can make the best dinner," said Juri Schnöller, the managing director of Cosmonauts & Kings, a German political communication firm, after a conservative political party recently distributed AI-generated images of angry immigrants without telling viewers that they were digital creations.

Attempts to combat the rise of AI-generated deepfakes are already underway. Microsoft and other tech giants are pledging to embed digital watermarks in AI-generated content to help distinguish it from authentic media. But as the technology improves, it becomes increasingly challenging to detect these AI-generated materials. According to critics, this threatens the integrity of political discourse.
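For readers curious what "embedding a digital watermark" can mean in practice, here is a deliberately simplified sketch: it hides a short text tag in the least-significant bits of an image's pixels and reads it back. This is a toy illustration only; the function names are illustrative, and the schemes Microsoft and other companies have actually pledged (such as cryptographically signed C2PA provenance metadata) are far more robust than anything shown here.

```python
# Toy illustration of the watermarking idea: hide a short tag in the
# least-significant bits (LSBs) of an image's pixels and read it back.
# NOT the scheme Microsoft or others use; real provenance systems rely on
# signed metadata and robust watermarks that survive editing and compression.
import numpy as np

TAG = "AI-GENERATED"

def embed_tag(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the LSBs of the first pixels of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    flat = pixels.flatten()                      # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite each LSB
    return flat.reshape(pixels.shape)

def read_tag(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Recover `length` ASCII characters from the LSBs of the first pixels."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
    marked = embed_tag(image)
    print(read_tag(marked))  # -> "AI-GENERATED"
```

The fragility of this kind of marking is exactly the problem critics point to: trivial edits, resizing, or recompression can erase it, which is why detection gets harder as the technology improves.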

And so, as concerns rise over the potential misuse of AI in politics, regulation is becoming a contentious issue. In the US, policymakers have struggled to keep up with emerging technologies, leading to a lack of comprehensive legislation to address AI's impact on the political landscape. Meanwhile, the European Union and China have already taken steps to regulate AI, particularly in its most concerning applications, such as biometric surveillance.

European officials are separately pressing companies including Alphabet Inc.’s Google and Meta Platforms Inc. to label content and images generated by artificial intelligence, in order to help combat disinformation from adversaries like Russia.

Chinese regulators are aggressively imposing new rules on technology companies to ensure Communist Party control over AI and related information available in the country. Every AI model must be submitted for government review before introduction into the market, and synthetically generated content must carry "conspicuous labels," according to a Carnegie Endowment for International Peace paper this week. -Bloomberg

According to a spokesperson for OpenAI, maker of ChatGPT, there has been an uptick in the use of the chatbot for political purposes. In March, the company published new guidelines prohibiting "political campaigning or lobbying" with ChatGPT, including generating campaign materials targeted at particular demographics or producing "high volumes" of material. OpenAI's trust and safety teams, meanwhile, are trying to identify other political uses of the chatbot that violate its policies.

An online ad posted by Florida Governor Ron DeSantis’s presidential campaign featured AI-generated images of former President Donald Trump hugging and kissing Anthony Fauci, with text overlaid.
Source: DeSantis War Room Twitter

One group, the American Association of Political Consultants, called the use of deceptive generative AI in political ads a "threat to democracy."

As the AI-politics nexus continues to evolve, navigating this precarious terrain will undoubtedly become a central focus of future elections. Striking a balance between harnessing AI's benefits and mitigating its potential threats will require collaboration among policymakers, tech companies, and political organizations.

"In politics, the truth is already in short supply," said GOP strategist Frank Luntz. "Thanks to AI, even those who care about the truth won’t know the truth."

Tyler Durden Mon, 07/17/2023 - 06:55
