Will AI Go Rogue?

Following this week’s release of GPT-4, OpenAI’s new multimodal model that accepts image as well as text inputs rather than ChatGPT’s text-only prompts, people on social media have been marveling at the new engine’s results on a variety of tasks, such as creating a working website from a simple sketch, outperforming humans on a range of standardized tests and writing code.

But, as Statista’s Felix Richter notes, people are only beginning to understand the capabilities (and limitations) of artificial intelligence models such as ChatGPT and now GPT-4, and there is also growing concern over what the rapid advancements in AI could ultimately lead to.

“GPT-4 is exciting and scary,” New York Times columnist Kevin Roose wrote, adding that there are two kinds of risks involved in AI systems: the good ones, i.e. those we anticipate, plan for and try to prevent, and the bad ones, i.e. those we cannot anticipate.

“The more time I spend with AI systems like GPT-4,” Roose writes, “the less I’m convinced that we know half of what’s coming.”

According to Ipsos Global Advisor’s 2023 Predictions, many people seem to share Roose’s reservations with regard to artificial intelligence.

Infographic: Will AI Go Rogue? | Statista

According to the survey, conducted among 24,471 adults in 34 countries, an average of 27 percent of respondents per country consider it likely that a rogue AI program will cause problems around the world this year, with countries such as India, Indonesia and China showing significantly higher degrees of AI angst.

Interestingly, the share of respondents expressing concern over AI going rogue is virtually unchanged from the previous year.

Considering the very public leaps the technology has taken over the past few months, it’ll be interesting to see how this changes going forward.

Tyler Durden Fri, 03/17/2023 - 23:50
