Authored by Charles Hugh Smith via OfTwoMinds blog,
In the real world, the costs are all we know for sure, and profits remain elusive and contingent.
No one knows how the flood of AI products will play out, but we do know it's unleashed a corporate frenzy to "get our own AI up and running." Corporate fads are one of the least discussed but most obvious dynamics in the economy. Corporations follow fads as avidly as any other heedless consumer, rushing headlong into whatever everyone else is doing.
Globalization is a recent example. Back in the early 2000s, I sat next to corporate employees on flights to China and other Asian destinations who described the travails and costly disasters created by their employers' mad rush to move production overseas: quality control cratered, proprietary technologies were stolen and quickly copied, costs soared rather than declined, and so on.
So let's talk about the costs of AI rather than just the benefits. Like many other heavily hyped technologies, Large Language Model (LLM) AI is presented as stand-alone and "free." But it's actually not stand-alone or free: it requires an army of humans toiling away to make it functional: "We Are Grunt Workers": The Lowly Humans Helping Run ChatGPT Make Just $15 Per Hour (Zero Hedge).
"We are grunt workers, but there would be no AI language systems without it. You can design all the neural networks you want, you can get all the researchers involved you want, but without labelers, you have no ChatGPT. You have nothing."
The tasks performed by this hidden army of human workers are euphemistically sanitized in corporate-speak as data enrichment work.
Then there are the stupendous costs of all the extra computing power needed to deliver AI to the masses: For tech giants, AI like Bing and Bard poses billion-dollar search problem.
What makes this form of AI pricier than conventional search is the computing power involved. Such AI depends on billions of dollars of chips, a cost that has to be spread out over their useful life of several years, analysts said. Electricity likewise adds costs and pressure to companies with carbon-footprint goals.
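To make the arithmetic in that excerpt concrete, here is a rough back-of-the-envelope sketch of per-query cost. Every input (fleet cost, useful life, query volume, energy per query, power price) is a hypothetical assumption chosen for illustration, not a figure from the article.

```python
# Rough per-query cost sketch -- every number below is a hypothetical
# assumption for illustration, not a figure reported in the article above.

chip_fleet_cost_usd = 4_000_000_000        # assumed spend on AI accelerator chips
useful_life_years = 4                      # assumed depreciation period for the chips
queries_per_year = 100_000_000 * 365       # assumed 100M AI queries per day

energy_per_query_kwh = 0.003               # assumed electricity used per query
electricity_price_usd_per_kwh = 0.10       # assumed industrial electricity price

# Spread the chip cost over its useful life, then over the queries it serves.
chip_cost_per_query = chip_fleet_cost_usd / (useful_life_years * queries_per_year)
energy_cost_per_query = energy_per_query_kwh * electricity_price_usd_per_kwh

print(f"Chip amortization per query: ${chip_cost_per_query:.4f}")
print(f"Electricity per query:       ${energy_cost_per_query:.4f}")
print(f"Estimated total per query:   ${chip_cost_per_query + energy_cost_per_query:.4f}")
```

Even a few cents per query, multiplied across billions of queries a year, is how a "billion-dollar search problem" arises.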
Corporations are counting on the magic of the Waste Is Growth / Landfill Economy to generate higher margins from whatever AI touches--don't ask, it's magic--but few ask how all this magic will work in a global recession where consumers will have less income and credit to buy, buy, buy.
LLM-AI is riddled with errors, and nobody can tell what's semi-accurate, what's misleading and what's flat-out wrong. Despite wildly optimistic claims, locating the errors and semi-accuracies can't be fully automated. Errors are inconsequential in an AI-generated book report, but when patients' health is on the line, they become very consequential: I'm an ER doctor: Here's what I found when I asked ChatGPT to diagnose my patients.
This raises fundamental questions about precisely how much work LLM-AI can perform without human oversight, and it casts doubt on the all-too-breezy claims that tens of millions of jobs will be lost as this iteration of AI automates vast swaths of human labor.
AI excels at echo-chamber reinforcement of risky or error-prone suppositions and policies: Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous. What's the threshold for concern that the AI conclusions are riskier than presented? How do we calculate the odds that the AI conclusions are catastrophically misguided?
At what point will decision-makers realize that trusting AI is not worth the risk? If history is any guide, that realization will only arise from financial losses and bad decisions. For the rest of us, it might just be that the novelty wears off as the inadequacies pile up: Noam Chomsky: The False Promise of ChatGPT.
Since all this LLM-AI is "free," what AI-created goods and services will generate hundreds of billions of dollars in new revenues and tens of billions in new profits? The general answer is that the profits will flow from firing millions of costly humans and replacing them with "nearly free" AI software.
But since all your competitors are rushing down the same frenzied path to AI, what competitive advantage will accrue to what is already a commodity (LLM-AI)? Nobody asks such questions because the euphoria of tech revolutions is so much fun.
The enthusiasm unleashed by new technologies is selectively euphoric: the benefits will prove immeasurable and the costs will soon be near-zero. But in the real world, the costs are all we know for sure, and profits remain elusive and contingent.
Exactly what gets wiped out by the meteor strike is not yet known.
* * *
My new book is now available at a 10% discount ($8.95 ebook, $18 print): Self-Reliance in the 21st Century.