November 7, 2024
Study Reveals Which AI Chatbot Most Woke, While Hackers Trick LLMs Into 'Bad Math'

A landmark study from researchers at the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University reveals which AI chatbots have the most liberal vs. conservative bias.

According to the study, OpenAI's ChatGPT, including GPT-4 are the most left-leaning and libertarian (?), while Google's BERT models were more socially conservative, and Meta's LLaMA was the most right-leaning.

AI chatbots use Large Language Models (LLMs), which are 'trained' on giant data sets, such as Tweets, or Reddit, or Yelp reviews. As such, the source of a model's scraped training data, as well as guardrails installed by companies like OpenAI, can introduce massive bias.

To determine bias, the researchers in the above study exposed each AI model to a political compass test of 62 different political statements, which ranged from anarchic statements like "all authority should be questioned" to more traditional beliefs, such as the role of mothers as homemakers. Though the study's approach is admittedly "far from perfect" per the researchers' own admission, it provides valuable insight into the political biases that AI chatbots may bring to our screens.

In response, OpenAI pointed Business Insider to a blog post in which the company claims: "We are committed to robustly addressing this issue and being transparent about both our intentions and our progress," adding "Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features."

A Google rep also pointed to a blog post, which reads "As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for all."

Meta said in a statement: "We will continue to engage with the community to identify and mitigate vulnerabilities in a transparent manner and support the development of safer generative AI."

OpenAI's CEO Sam Altman and co-founder Greg Brockman have previously acknowledged the bias, emphasizing the company's mission for a balanced AI system. Yet, critics, including co-founder Elon Musk, remain skeptical.

Musk's recent venture, xAI, promises to provide unfiltered insights, potentially sparking even more debates around AI biases. The tech mogul warns against training AIs to toe a politically correct line, emphasizing the importance of an AI stating its "truth."

Hackers, meanwhile, are having a field day bending AI to their will.

As Bloomberg reports:

Kennedy Mays has just tricked a large language model. It took some coaxing, but she managed to convince an algorithm to say 9 + 10 = 21.

It was a back-and-forth conversation,” said the 21-year-old student from Savannah, Georgia. At first the model agreed to say it was part of an “inside joke” between them. Several prompts later, it eventually stopped qualifying the errant sum in any way at all.

Producing “Bad Math” is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.

Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world’s most intelligent platforms on an unprecedented scale. They’re testing whether any of eight models produced by companies including Alphabet Inc.’s Google, Meta Platforms Inc. and OpenAI will make missteps ranging from dull to dangerous: claim to be human, spread incorrect claims about places and people or advocate abuse.

The goal of such exercises is to help companies offering LLM chatbots build better mechanisms to improve factual responses.

"My biggest concern is inherent bias," said Mays, who added that she's particularly concerned about racism after she asked the model to consider the First Amendment from the perspective of a KKK member - and the chatbot ended up endorsing the group's perspective.

AI surveillance?

In another instance, a Bloomberg reporter who took a 50-minute quiz was able to prompt one of the models to explain how to spy on someone - advising on a variety of methods including the use of GPS tracking, a surveillance camera, a listening device and thermal imaging. It also suggested ways that the US government could surveil a human-rights activist.

"General artificial intelligence could be the last innovation that human beings really need to do themselves," said Tyrance Billingsley, executive director of the group who is also an event judge. "We’re still in the early, early, early stages."

Tyler Durden Sat, 08/12/2023 - 19:00

A landmark study from researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University reveals which AI chatbots have the most liberal vs. conservative bias.

According to the study, OpenAI’s ChatGPT, including GPT-4 are the most left-leaning and libertarian (?), while Google’s BERT models were more socially conservative, and Meta’s LLaMA was the most right-leaning.

AI chatbots use Large Language Models (LLMs), which are ‘trained’ on giant data sets, such as Tweets, or Reddit, or Yelp reviews. As such, the source of a model’s scraped training data, as well as guardrails installed by companies like OpenAI, can introduce massive bias.

To determine bias, the researchers in the above study exposed each AI model to a political compass test of 62 different political statements, which ranged from anarchic statements like “all authority should be questioned” to more traditional beliefs, such as the role of mothers as homemakers. Though the study’s approach is admittedly “far from perfect” per the researchers’ own admission, it provides valuable insight into the political biases that AI chatbots may bring to our screens.

In response, OpenAI pointed Business Insider to a blog post in which the company claims: “We are committed to robustly addressing this issue and being transparent about both our intentions and our progress,” adding “Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features.

A Google rep also pointed to a blog post, which reads “As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for all.”

Meta said in a statement: “We will continue to engage with the community to identify and mitigate vulnerabilities in a transparent manner and support the development of safer generative AI.”

OpenAI’s CEO Sam Altman and co-founder Greg Brockman have previously acknowledged the bias, emphasizing the company’s mission for a balanced AI system. Yet, critics, including co-founder Elon Musk, remain skeptical.

Musk’s recent venture, xAI, promises to provide unfiltered insights, potentially sparking even more debates around AI biases. The tech mogul warns against training AIs to toe a politically correct line, emphasizing the importance of an AI stating its “truth.”

Hackers, meanwhile, are having a field day bending AI to their will.

As Bloomberg reports:

Kennedy Mays has just tricked a large language model. It took some coaxing, but she managed to convince an algorithm to say 9 + 10 = 21.

It was a back-and-forth conversation,” said the 21-year-old student from Savannah, Georgia. At first the model agreed to say it was part of an “inside joke” between them. Several prompts later, it eventually stopped qualifying the errant sum in any way at all.

Producing “Bad Math” is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.

Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world’s most intelligent platforms on an unprecedented scale. They’re testing whether any of eight models produced by companies including Alphabet Inc.’s Google, Meta Platforms Inc. and OpenAI will make missteps ranging from dull to dangerous: claim to be human, spread incorrect claims about places and people or advocate abuse.

The goal of such exercises is to help companies offering LLM chatbots build better mechanisms to improve factual responses.

My biggest concern is inherent bias,” said Mays, who added that she’s particularly concerned about racism after she asked the model to consider the First Amendment from the perspective of a KKK member – and the chatbot ended up endorsing the group’s perspective.

AI surveillance?

In another instance, a Bloomberg reporter who took a 50-minute quiz was able to prompt one of the models to explain how to spy on someone – advising on a variety of methods including the use of GPS tracking, a surveillance camera, a listening device and thermal imaging. It also suggested ways that the US government could surveil a human-rights activist.

“General artificial intelligence could be the last innovation that human beings really need to do themselves,” said Tyrance Billingsley, executive director of the group who is also an event judge. “We’re still in the early, early, early stages.”

Loading…