
July 11, 2023

Technical topics, of any sort at all, are generally subject to serious distortion when they hit the level of public discussion. There are many reasons for this – ideology, click-lust, and the sheer inability of the average journo school grad to adequately wrap his head around whatever concept is under consideration.


There’s no end of examples: Just think of the garbage written about global warming or COVID.

The latest of these topics is Artificial Intelligence (AI). Commentary on AI has exploded across the media sphere since the release of ChatGPT, an AI app purportedly capable of learning to produce prose in any style on request. The consensus, to quote a style not yet mastered by ChatGPT, is almost uniformly “a tale told by an idiot, full of sound and fury, signifying nothing.”

The media uproar has been characterized by two approaches — the first (and most common) is a complete lack of understanding of the technology. The second is an impression of the topic derived from movies, largely HAL 9000 and Skynet (an older generation would add Colossus). These AI entities are uniformly insane, malevolent, or both (though not to the level of the one envisioned in Harlan Ellison’s “I Have No Mouth, and I Must Scream,” which is so overcome by existential loathing that it destroys all humanity except for five individuals, whom it then sets out to torture for all eternity). For some reason, nobody ever suggests the AI Samantha in the superb film Her, who is cheerful, helpful, and even loving. That says more about human nature than it does about Artificial Intelligence.


The idea of artificial intelligence was introduced by Alan Turing in his 1950 paper “Computing Machinery and Intelligence.” Turing had laid the theoretical groundwork for computing in the 1930s with his concept of a universal machine, and then played a role in building the earliest working models for the British codebreakers at Bletchley Park. In this paper he suggested something that came to be called the Turing Test, intended to answer the question of whether an AI should be treated as a self-aware entity – as another person. Turing’s argument is that if you converse with an AI – ask it questions and receive answers – and cannot decide whether you are interacting with a human person or a machine, you must consider it to be an intelligent, self-aware entity. (The Turing Test has been philosophically challenged since then, while at the same time being subject to cheating by some AI researchers, who have pulled tricks such as personifying the AI as a twelve-year-old or a foreigner who speaks English as a second language.)
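Turing's "imitation game" can be sketched in a few lines of code. This is a purely illustrative toy, not any real testing setup: the canned respondents and the naive judge are hypothetical stand-ins. The point it demonstrates is Turing's criterion itself: when the machine's answers are indistinguishable from a human's, the judge's accuracy collapses to chance.

```python
import random

random.seed(0)  # make the simulation repeatable

def human(question):
    # Stand-in for a human's reply (illustrative only).
    return "That's an interesting question; let me think it over."

def machine(question):
    # Stand-in for an AI's reply -- indistinguishable by design.
    return "That's an interesting question; let me think it over."

def judge(transcript):
    """A judge who finds the answers indistinguishable can only guess."""
    return random.choice(["human", "machine"])

def imitation_game(respondent, questions):
    transcript = [(q, respondent(q)) for q in questions]
    return judge(transcript)

questions = ["Can machines think?", "What is it like to be you?"]

# Run many rounds with a randomly chosen hidden respondent and
# count how often the judge identifies it correctly.
trials = 1000
correct = 0
for _ in range(trials):
    actual = random.choice(["human", "machine"])
    respondent = human if actual == "human" else machine
    if imitation_game(respondent, questions) == actual:
        correct += 1

accuracy = correct / trials  # hovers near 0.5 -- pure chance
```

When the judge can do no better than a coin flip, the machine has, by Turing's standard, passed.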

Turing’s speculations fell on fertile ground. While the original Bletchley Park “bombes” (a name inherited from the Polish “bomba” machines that preceded them) had been shut down after the war, more advanced computers such as UNIVAC were being designed and built during the early 1950s. They were greeted with wild speculation along with musing on what it all meant for the fate of humanity. The conclusions were nearly unanimous: “thinking machines” would soon outdo mere humans, who would then be either destroyed or shoved aside to go quietly extinct.

More than seventy years on, little has changed. The debate continues on the same shallow, uninformed level while we eagerly await AM or HAL to appear and start torturing or murdering us.

So what is the problem here? First and above all, when we speak of AI in the 21st century, we’re discussing two distinct and separate types as if they were one and the same thing. These are what I call “App AI,” which includes ChatGPT and the numerous AI art apps making the rounds, and “General Intelligence AI,” the movie-style HALs and Skynets capable of taking over everything and doing what they damn well please.

Up until now, all that we’ve seen are App AIs. These are programs, generally built on neural nets, each devoted to one particular task – text creation or artwork – with algorithms that modify the program’s responses as it “learns” more about the task. AI learning is accomplished through “supervised learning,” in which mere humans set the parameters and goals, oversee the process, and examine and judge the results. Until now this human involvement has proven strictly necessary — “unsupervised learning,” when it has been attempted, usually goes off the rails pretty quickly. An App AI’s single task comprises its entire universe; it can’t simply take what it has learned and apply it to other fields. As Erik J. Larson puts it in The Myth of Artificial Intelligence (which should be read by anybody with an interest in the topic), “…chess-playing systems don’t play the more complex game of Go. Go systems don’t even play chess.” So no such AI is ever going to quit sampling internet imagery and try to take over the Pentagon. (This also applies to the fellow who claimed, a couple of weeks back, that ChatGPT is already “running the financial system.”)
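The "supervised learning" loop described above can be made concrete with a minimal sketch: a single artificial neuron (a perceptron) trained on a handful of labeled points. Everything here is a toy assumption of mine, not code from any real AI system; the data, learning rate, and epoch count are invented for illustration. The essential shape is the one the paragraph describes: humans supply the labeled examples and the goal, and the program adjusts its parameters until its outputs match the labels.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for one artificial neuron from human-labeled data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred  # supervision: compare output to the human-given label
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Hypothetical labeled examples: humans have tagged points near the
# origin as class 0 and points near (1, 1) as class 1.
data = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((1.0, 1.0), 1), ((0.8, 0.9), 1)]
w, b = train_perceptron(data)
```

Note how narrow the result is: the trained weights separate exactly these two clusters and nothing else, which is Larson's point about chess programs that cannot play Go.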

There’s been a lot of speculation recently as to whether these systems will supplant humans working in particular fields. The answer is no — not yet, and probably not ever. A few weeks ago, Monica Showalter, esteemed by all AT readers, ran a Turing Test of sorts on ChatGPT. She entered the prompt “Write a piece on the future of the airline industry in the style of Thomas Lifson.” What she got was a bland, gassy, ill-written piece filled with clichés, non-sequiturs, and outright errors, none of which, I can state with authority, has ever been characteristic of Thomas’s writing.  It’ll be a long time before ChatGPT takes the reins here at AT.