December 22, 2024
There's Just One Problem: AI Isn't Intelligent, And That's A Systemic Risk

Authored by Charles Hugh Smith via OfTwoMinds blog,

Mimicry of intelligence isn't intelligence, and so while AI mimicry is a powerful tool, it isn't intelligent.

The mythology of Technology has a special altar for AI, artificial intelligence, which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology's natural advance to ever greater powers.

The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.

If we pull aside the mythology's curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn't intelligence, and so while AI mimicry is a powerful tool, it isn't intelligent.

The current iterations of Generative AI--large language models (LLMs) and machine learning--mimic our natural language ability by processing millions of examples of human writing and speech and extracting what algorithms select as the best answers to queries.

These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.
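
To see why this mining is not understanding, consider a deliberately crude sketch (my illustration, not how any production LLM actually works): a "model" built from nothing but word-co-occurrence counts picks the statistically most likely continuation, with no grasp of what any word means. The corpus, names and outputs below are invented for illustration.

```python
# Toy sketch only -- not a real LLM. A "next-word predictor" built purely from
# co-occurrence counts in a tiny corpus: it returns the statistically most
# frequent continuation, with no notion of what any word means.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training, or '?' if unseen."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "?"

print(predict_next("the"))    # 'cat'  -- merely the most frequent follower
print(predict_next("sat"))    # 'on'
print(predict_next("piano"))  # '?'    -- never seen, so nothing to mimic
```

Scaled up by many orders of magnitude and wrapped in far more sophisticated statistics, the same basic move is being made: distill what the training examples make likely, not what is true or meant.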

The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions as facts. They are incapable of discerning the difference between statistical variations and fatal errors, and layering on precautionary measures adds additional complexity that becomes another point of failure.

As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computational black-box is opaque and therefore of limited value: the program doesn't actually understand protein folding in the way humans understand it, and we don't understand how the program arrived at its solution.

Since AI doesn't actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn't included in the app's universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)
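
That real-life failure is easy to reproduce in miniature. Here is a hypothetical sketch (every name and check below is invented): a support bot that can only recognize the faults its designers enumerated in advance will cheerfully report "all clear" for any failure outside that list.

```python
# Hypothetical support bot: it can only "see" the problems its designers
# enumerated in advance. All names and checks are invented for illustration.
KNOWN_CHECKS = {
    "router_unreachable": lambda status: not status.get("router_online", True),
    "dns_failure":        lambda status: not status.get("dns_ok", True),
    "billing_hold":       lambda status: status.get("billing_hold", False),
}

def diagnose(status: dict) -> str:
    for problem, check in KNOWN_CHECKS.items():
        if check(status):
            return f"Detected issue: {problem}"
    # A corrupted system update is not in the bot's universe of problems,
    # so the connection can be dead while every scripted check passes.
    return "All systems are functioning perfectly."

# The update corrupted the firmware and the connection is actually down,
# but none of the enumerated checks captures that failure mode.
print(diagnose({"router_online": True, "dns_ok": True, "billing_hold": False}))
# -> "All systems are functioning perfectly."
```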

In essence, every layer of this mining / mimicry creates additional points of failure: the inability to distinguish fact from fiction or allowable error rates from fatal errors, the added complexity of precautionary measures, and the black-box opacity all generate risks of normal accidents cascading into systems failure.

There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild the systems. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but the real-world infrastructure that now depends on digital systems.

There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material--a devastating decline in learning for children and teens--AI offers up a cornucopia of snackable factoids, snippets of coding, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we're no longer equipped to pursue skeptical inquiry or create content or coding from scratch.

Indeed, the arduous process of acquiring this knowledge now seems needless: the AI bot can do it all, quickly, cheaply and accurately. This creates two problems: 1) when black-box AI programs fail, we no longer know enough to diagnose and fix the failure, or to do the work ourselves, and 2) we have lost the ability to understand that in many cases there is no answer or solution that is the last word: the "answer" demands interpretation of facts, events, processes and knowledge bases that are inherently ambiguous.

We no longer recognize that the AI answer to a query is not a fact per se; it's an interpretation of reality presented as a fact, and the AI solution is only one of many possible pathways, each of which has intrinsic tradeoffs that generate unforeseeable costs and consequences down the road.

To discern the difference between an interpretation and a supposed fact requires a sea of knowledge that is both wide and deep, and in losing the drive and capacity to learn difficult material, we've lost the capacity to even recognize what we've lost: those with little real knowledge lack the foundation needed to understand AI's answer in the proper context.

The net result is that we become less capable and less knowledgeable, blind to the risks created by our loss of competency, while the AI programs introduce systemic risks we cannot foresee or forestall. AI degrades the quality of every product and system, for mimicry does not generate definitive answers, solutions and insights; it only generates an illusion of definitive answers, solutions and insights, which we foolishly confuse with actual intelligence.

While the neofeudal corporate-state cheers the profits to be reaped by culling human labor on a mass scale, the mining / mimicry of human knowledge has limits. Relying on the AI programs to eliminate all fatal errors is itself a fatal error, and so humans must remain in the decision loop (the OODA loop of observe, orient, decide, act).
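
One way to honor that constraint can be sketched in hypothetical form (only the principle comes from the essay; every name below is invented): the program may recommend, but no irreversible action executes until a person observes, orients, decides and acts on it.

```python
# Hedged sketch of a human-in-the-loop gate; all names are hypothetical.
# The AI may propose an action, but nothing runs on its say-so alone.
def ai_recommendation(case: dict) -> str:
    # Stand-in for a model's output; in practice this can be confidently wrong.
    return "approve_procedure"

def decide_with_human(case: dict) -> str:
    proposal = ai_recommendation(case)
    print(f"AI proposes '{proposal}' for case {case['id']}")
    choice = input("Accept, modify, or reject? [a/m/r] ").strip().lower()
    if choice == "a":
        return proposal
    if choice == "m":
        return input("Enter the corrected action: ").strip()
    return "escalate_for_full_human_review"

# Usage (interactive):
# action = decide_with_human({"id": "case-1042"})
```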

Once AI programs engage in life-safety or healthcare processes, every entity connected to the AI program is exposed to open-ended (joint and several) liability should injurious or fatal errors occur.

If we boil off the mythology and hyperbole, we're left with another neofeudal structure: the wealthy will be served by humans, and the rest of us will be stuck with low-quality, error-prone AI service with no recourse.

The expectation of AI promoters is that Generative AI will reap trillions of dollars in profits from cost savings and new products / services. This story doesn't map onto the real world, in which every AI software tool is easily copied / distributed, making it impossible to protect the scarcity value that is the essential dynamic in maintaining the pricing power needed to reap outsized profits.

There is little value in software tools that everyone possesses unless a monopoly restricts distribution, and little value in the content auto-generated by these tools: the millions of AI-generated songs, films, press releases, essays, research papers, etc. will overwhelm any potential audience, reducing the value of all AI-generated content to zero.

The promoters claim the mass culling of jobs will magically be offset by entire new industries created by AI, echoing the transition from farm labor to factory jobs. But the AI dragon will eat its own tail, for it creates few jobs or profits that can be taxed to pay people for not working (Universal Basic Income).

Perhaps the most consequential limit to AI is that it will do nothing to reverse humanity's most pressing problems. It can't clean up the Great Pacific Trash Gyre, or limit the 450 million tons of mostly unrecycled plastic spewed every year, or reverse climate change, or clear low-Earth orbits of the thousands of high-velocity bits of dangerous detritus, or remake the highly profitable "waste is growth" Landfill Economy into a sustainable global system, or eliminate all the sources of what I term Anti-Progress. It will simply add new sources of systemic risk, waste and neofeudal exploitation.

*  *  *

Become a $3/month patron of my work via patreon.com.

Subscribe to my Substack for free
