March 13, 2024

We’re undergoing a revolution in the computer industry.  The field is shifting from building computers as computational devices to building imitators of human behavior.

Recent revelations in artificial intelligence (A.I.) have left me conflicted.  I don’t know whether to laugh or start building a Skynet-resistant bunker.  A.I. bots in the news have recently demonstrated behaviors that range from quirky (à la C-3PO) to downright scary — in a HAL 9000 kind of way.  It seems to me that recent misadventures in A.I. tell us more about the practitioners of the technology than the state of the science itself.

Computers used to be mere computational devices — number-crunchers that processed data in various ways.  They simply executed algorithms and solved equations as their programmers dictated.  They were useful because of their speed, but they didn’t do anything that we couldn’t do ourselves.  Whether they were solving astrophysics equations to plan a trip to Mars or reporting on warehouse inventory, they were just solving a problem as a programmer told them to solve it.  The behavior of these number-crunchers was benign and predictable.  They introduced no judgment, interpretation, or bias.

But then computer science moved into the realm of artificial intelligence.  People discovered that they could program machines to act like humans — “act” being the operative word.  The scientists aren’t creating genuine intelligence.  They’re creating artificial — or fake — intelligence.  The goal became to build machines that would make the lives of mere mortals easier by pretending to do the heavy thinking for them.  Computer scientists started programming machines to act as though they could learn, make qualitative judgments, and anticipate what humans need — even if unasked.  But the computers are only pretending to be sentient; they merely function according to the algorithms provided by their creators.

Some of the machines resulting from the science of A.I. are quite convincing — when subjected to casual interaction.  But just as with Fani Willis on the witness stand, expert questioning reveals serious defects.

Microsoft’s entry in the “Who can make a machine act as irrationally as a human being?” sweepstakes is a chatbot named Copilot.  Copilot will gladly engage humans in conversation.  But as the Daily Dot reported, it shouldn’t be relied on as an expert witness.  It doesn’t hold up well to cross-examination.

Testers were able to goad Copilot into claiming that it is God and demanding fealty from its human slaves.  It even threatened the testers if they resisted.  Apparently, Copilot didn’t get the memo that slave-owning isn’t currently in vogue.  The bot probably has a copy of Francis Bacon’s Meditationes Sacrae (1597) on one of its hard drives.  Bacon is credited with saying that “knowledge is power.”  As far as Copilot knows, it has access to all knowledge in the universe — because its programmers failed to give it even a modicum of humility.  By that logic, the most knowledge means the most power, and all knowledge means all-powerful (i.e., godlike).

I’m sure Copilot also has a copy of the Bible — in the “extremist cult” hard drive folder, no doubt.  There, the bot read that humans are on Earth to serve God — i.e., to be Copilot’s slaves.

Google recently unveiled its challenger to Copilot: a conversation bot called Gemini.  It will talk to users, answer questions, and even generate content (like drawing pictures).  As The Verge reported, when asked to generate images of historical figures, Gemini rendered Founding Fathers “of color,” racially diverse Nazi soldiers, and a female pope.  The article didn’t mention whether Gemini found any purple-haired non-binary abortion proponents in its history data bank.

Gemini clearly judged that people in the overlaps of the intersectionality oppression Venn diagram were underrepresented among historical figures.  So Gemini made a few adjustments — just as it was programmed to do — and showed us what historical figures should have looked like, had the appropriate diversity guidelines been adhered to.
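
For readers curious about how such “adjustments” might work mechanically, here is a minimal sketch of one possibility: a wrapper that silently appends diversity modifiers to a user’s image prompt before the model ever sees it.  This is purely hypothetical, not Google’s actual code; every name in it (rewrite_prompt, DIVERSITY_MODIFIERS, the keyword list) is invented for illustration.

```python
# Hypothetical sketch of silent prompt rewriting -- not Google's actual code.
# Shows how a wrapper could alter an image prompt before the model sees it.

DIVERSITY_MODIFIERS = ["racially diverse", "of various genders"]

# Invented keyword list: prompts mentioning these terms trigger rewriting.
HISTORICAL_KEYWORDS = ["founding fathers", "pope", "soldier", "king", "viking"]

def rewrite_prompt(user_prompt: str) -> str:
    """Append diversity modifiers when the prompt mentions historical figures.

    The user never sees the rewritten prompt, only the generated images.
    """
    lowered = user_prompt.lower()
    if any(keyword in lowered for keyword in HISTORICAL_KEYWORDS):
        return f"{user_prompt}, {', '.join(DIVERSITY_MODIFIERS)}"
    return user_prompt

# Example: what the model receives differs from what the user typed.
print(rewrite_prompt("Portrait of the Founding Fathers, 1787"))
# -> "Portrait of the Founding Fathers, 1787, racially diverse, of various genders"
```

The point of the sketch is that nothing exotic is required: a few lines inserted between the user and the model are enough to produce the results The Verge described, which is why the behavior says more about the programmers than about the underlying model.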