OpenAI Agrees To Run GPT Models Past US Government To 'Evaluate' For Safety

AI companies OpenAI and Anthropic have agreed to run their new AI models past the US government's AI Safety Institute to evaluate their 'capabilities and risks' and to 'collaborate on methods to mitigate potential issues,' Bloomberg reports.

The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (AP Photo/Richard Drew, File)

"Safety is essential to fueling breakthrough technological innovation," said Elizabeth Kelley, director of the US AI Safety Institute. "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI."

Under agreements announced Thursday by the Commerce Department's National Institute of Standards and Technology (NIST), in "close collaboration with the UK's AI Safety Institute," the government will work to provide feedback on potential safety improvements.

"We strongly support the US AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models," said OpenAI Chief Strategy Officer Jason Kwon. "We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on."

Anthropic also said it was important to build out the capacity to effectively test AI models. “Safe, trustworthy AI is crucial for the technology's positive impact,” said Jack Clark, Anthropic co-founder and head of policy. “This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI." -Bloomberg

The US AI Safety Institute was launched in 2023 as part of the Biden-Harris administration's Executive Order on AI. The group is tasked with developing the testing, evaluations, and guidelines for responsible AI innovation.

Meanwhile, as we reported on Wednesday, the Microsoft-backed OpenAI is preparing to raise at least a billion dollars in a new funding round, which would value the company at just north of $100 billion.

Microsoft has a 49% share of OpenAI's profit after plowing $13 billion into the chatbot company since 2019. 

News of the funding round, reported by the WSJ, comes about a month after OpenAI revealed it's testing SearchGPT: a combination of AI tech and real-time search data that allows users to search the internet with ChatGPT.

Tyler Durden Thu, 08/29/2024 - 10:50
