
August 28, 2023

A few weeks ago, Joe Allen responded to my “Artificial Intelligence: The Facts” by suggesting that the real danger of AIs is that the best, most efficient models will be operated by the Deep State and its technofascist allies, rendering opposition difficult if not futile: “…AI won’t be a ‘digital defense’ against ‘the WEF’ and ‘tech giants’ when the latter have the most powerful systems.” This is a perceptive comment, towering well above the “Skynet is comin’ with his cyborgs” nonsense that this discussion usually attracts.


Vernor Vinge, an outsized influence on infotech culture, once speculated that a truly efficient totalitarian system wouldn’t consist of jackbooted secret police and continuous surveillance but in fact would scarcely be evident. Things would just happen to enemies of the state, accidents that nobody could predict or explain. For instance, an individual presenting a problem for the powers that be would order something for dinner that arrives chock full of botulinum toxin, injected at the lab that transforms insect proteins into edible food. As the toxin hits, his phone, internet, and monitoring equipment all go down. After the job is done, they all come back up, their logs edited to show nothing at all unusual. When the body is discovered, there’s no evidence of anything out of the ordinary. Death by misadventure. What ya gonna do?

And so it would go on, day after day, year after year, the bonds growing ever tighter, and nobody even noticing.

You’d need AI to handle things at this level of efficiency. In fact, you’d need quite a few, each optimized to operate a particular system: surveillance, analysis, and what might be called “assassination by Siri.”


It’s quite a plausible nightmare. But how feasible is it? For that, we need to return to a basic principle of computer science: GIGO, Garbage In, Garbage Out.

It’s not difficult to see how the garbage would get into an AI system in the first place. Despite what the public has been taught by media and film, AIs require human interaction to learn anything at all:

AI learning is accomplished through “supervised learning,” in which mere humans set the parameters and goals, oversee the process, and examine and judge the results. Until now this human interaction has proven strictly necessary — “unsupervised learning,” when it has been attempted, usually goes off the rails pretty quickly.
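To make that concrete, here is a minimal sketch of what “supervised learning” means in practice. It assumes the scikit-learn library, and the toy spam-filter features and labels are invented purely for illustration; the point is that humans choose the data, supply the judgments, and evaluate the result.

```python
# A minimal sketch of "supervised learning": humans supply labeled
# examples, the model fits itself to those judgments, and humans
# then inspect the output. Assumes scikit-learn; data is invented.
from sklearn.linear_model import LogisticRegression

# Human-chosen training data: each row is [word_count, link_count],
# each label is a human judgment (1 = spam, 0 = not spam).
X = [[120, 9], [80, 7], [300, 1], [250, 0], [90, 8], [400, 2]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # the "learning" step, steered entirely by the labels above

# Humans then examine and judge the results on new examples.
print(model.predict([[100, 6]]))   # likely [1]: short and link-heavy, spam-like
print(model.predict([[350, 1]]))   # likely [0]: long and link-poor, legitimate
```

Nothing in that loop happens without a person deciding what counts as a right answer, which is exactly where the garbage gets in.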

(Note here that we’re talking about “App AI” such as ChatGPT or the art AIs, which are limited to a single area of activity. General Intelligence AI, the HAL 9000s and Skynets of the movies, doesn’t exist and likely never will.)

Clearly, any AI is going to directly reflect the attitudes and ideas of its creator(s), and if that creator is a woke cancel-culture nitwit (or an entire clown car full of them), then you’re going to wind up with a pretty lame artificial intelligence: a system built on conspiracy theories, wish-fulfillment daydreams, and delusions, developed and programmed according to whatever intellectual fad happens to be dominant at the moment. The cybernetic equivalent of a B-list Hollywood actor.
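As a hypothetical illustration of GIGO at work (again assuming scikit-learn, with invented data), the sketch below trains the same kind of model on labels that encode the labelers’ prejudice rather than reality. The finished system dutifully reproduces that prejudice:

```python
# GIGO in miniature: a model fed labels that encode its trainers'
# bias, rather than the truth, reproduces that bias exactly.
# Hypothetical example; feature and label values are invented.
from sklearn.linear_model import LogisticRegression

# Suppose the labelers tag every message containing a disfavored
# keyword (feature = 1) as "misinformation," regardless of content.
X = [[1], [1], [1], [0], [0], [0]]
y = [1, 1, 1, 0, 0, 0]   # labels reflect the labelers' prejudice, not facts

model = LogisticRegression().fit(X, y)

# The trained system now "knows" only what its trainers believed.
print(model.predict([[1]]))   # [1]: flagged, regardless of truth
print(model.predict([[0]]))   # [0]: passed, regardless of truth
```

The machine isn’t wrong about anything; it is faithfully executing the garbage it was given.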

The elites fail because they have no principles, no knowledge, and no understanding. How can we be sure of this? Because elites have always acted this way. It comes with the territory.