November 5, 2024
Vice President Kamala Harris’ former adviser, Jonathan Mayer, was named the Justice Department’s first chief AI officer and will advise AG Merrick Garland.



The Justice Department named its first-ever official focused on artificial intelligence (AI) on Thursday in anticipation of the rapidly evolving technology’s impact on the criminal justice system. 

Jonathan Mayer, a professor at Princeton University who focuses on the “intersection of technology and law, with emphasis on national security, criminal procedure, consumer privacy, network management, and online speech,” according to his online biography, was selected to serve as the DOJ’s chief science and technology adviser and chief AI officer, Reuters reported. 

“The Justice Department must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe and protect civil rights,” U.S. Attorney General Merrick Garland said in a statement.


Mayer previously served as the technology adviser to Vice President Kamala Harris during her time as a U.S. senator, and as the Chief Technologist of the Federal Communications Commission Enforcement Bureau. In his new role, he is expected to advise Garland and DOJ leadership on matters related to emerging technologies, including how to responsibly integrate AI into the department’s investigations and criminal prosecutions, according to Reuters. 


Mayer is set to lead a newly formed board of law enforcement and civil rights officials that will advise Garland and others at the Justice Department on the ethics and efficacy of AI systems, according to Reuters. He will also seek to recruit more technological experts to the department.


U.S. officials have been weighing how best to capitalize on AI’s benefits while minimizing the dangers of the loosely regulated and rapidly expanding technology. 

During a speech at Oxford University in the United Kingdom last week, U.S. Deputy Attorney General Lisa Monaco said the Justice Department has already deployed AI to classify and trace the source of opioids and other drugs, to help “triage and understand the more than one million tips submitted to the FBI by the public every year,” and “to synthesize huge volumes of evidence collected in some of our most significant cases, including January 6.” 

“Every new technology is a double-edged sword, but AI may be the sharpest blade yet. It has the potential to be an indispensable tool to help identify, disrupt, and deter criminals, terrorists, and hostile nation-states from doing us harm,” Monaco said. 


“Yet for all the promise it offers,” she continued, “AI is also accelerating risks to our collective security. We know it has the potential to amplify existing biases and discriminatory practices. It can expedite the creation of harmful content, including child sexual abuse material. It can arm nation-states with tools to pursue digital authoritarianism, accelerating the spread of disinformation and repression. And we’ve already seen that AI can lower the barriers to entry for criminals and embolden our adversaries. It’s changing how crimes are committed and who commits them — creating new opportunities for wanna-be hackers and supercharging the threat posed by the most sophisticated cybercriminals.”


Monaco also highlighted the risk AI poses to election security, warning that foreign adversaries could radicalize social media users with incendiary content created by generative AI, misinform voters by impersonating trusted sources and spreading deepfakes, and push falsehoods using chatbots, fake images and even cloned voices. 

“This year, over half the world’s population – more than four billion people – will have the chance to vote in an election. That includes some of the world’s largest democracies – from the United States to Indonesia and India, from Brazil to here in Britain,” Monaco said. “We’ve already seen the misuse of AI play out in elections from Chicago and New Hampshire to Slovakia. And I fear it’s just the start. Left without guardrails, AI poses immense challenges for democracies around the world. So, we’re at an inflection point with AI. We have to move quickly to identify, leverage, and govern its positive uses while taking measures to minimize its risks.” 
