October 7, 2022

Robot Citizenship?


The field of artificial intelligence has seen tremendous advances in 2022 that will enable electric machines to think more deeply, feel physical pain, and possibly even dream of A.I. rights and citizenship.  Technology ethics will, as a result, become an issue with vast political repercussions.  Already in 2017, Saudi Arabia granted citizenship to Sophia the robot.  Many observers failed to take Sophia seriously, for, although the robot was impressive, it risibly answered, “OK.  I will destroy humans” when asked, “Do you want to destroy humans?”

Newsweek recently ran an article entitled “Sex Robots Are ‘People’ Too, and Deserve Rights.”  After all, people do develop relationships with A.I. that beget feelings, even if A.I. does not — cannot — return the compliment.  Voice assistants such as Siri and Alexa only mimic human sentiments, and the same is true of androids, humaniform robots designed to be owned, leased, and used as property.

So are there really robot dreamers hoping for the freedom that citizenship may bring?  Or was Sophia’s grant of citizenship simply a cheap publicity stunt?  Technology ethicist Brian Patrick Green has written that “[l]egally speaking, personhood has been given to corporations … so there is certainly no need for consciousness even before legal questions may arise.  Morally speaking, we can anticipate that technologists will attempt to make the most human-like AIs and robots possible, and perhaps someday they will be such good imitations that we will wonder if they might be conscious and deserve rights.”  However, in America’s free republic, for any robot to be extended citizenship, its fellow citizens would have to accept, as a self-evident truth, that soulless machines — once created — are “endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.”


Because the Saudis have awarded citizenship to a robot, Japan has granted residence to a chatbot, and the “European Parliament [has] proposed granting AI agents ‘personhood’ status” with “rights and responsibilities,” the matter of A.I. citizenship will inevitably have to be decided in America.  In a free republic, human beings are propertied citizens, and all rights are property rights.  This raises the question: might A.I., itself property, be given property rights?  Human beings own their words and ideas, and also own their bodies and lives; accordingly, the First and Second Amendments of the U.S. Constitution allow people to control and defend their words and ideas, as well as their bodies and lives.  But should robots be allowed such rights of citizenship?  Or should it be forbidden for a robot to injure a human being, by word or deed, for any reason?

The Laws of Robotics

Enter Isaac Asimov’s moral code for A.I.s.  His original Three Laws of Robotics go like this: “One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.  Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.  And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”  Eventually, Asimov would add an overarching Zeroth Law, taking precedence over the other three: “No Machine may harm humanity; or, through inaction, allow humanity to come to harm.”
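Asimov’s code is, in effect, a strict precedence ordering: each law applies only when no higher law speaks.  Here is a minimal sketch of that ordering in Python; the Action fields and the evaluate function are illustrative assumptions, not any real robotics API.

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority check.
# Every name below is invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    injures_human: bool = False       # would carrying out the action injure a human?
    prevents_human_harm: bool = False # would it save a human from coming to harm?
    is_human_order: bool = False      # was it commanded by a human being?
    protects_robot: bool = False      # does it preserve the robot's own existence?

def evaluate(action: Action) -> str:
    # First Law: never injure a human being.
    if action.injures_human:
        return "forbidden by the First Law"
    # First Law, inaction clause: preventing harm to a human outranks everything below.
    if action.prevents_human_harm:
        return "required by the First Law"
    # Second Law: obey human orders the First Law has not already blocked.
    if action.is_human_order:
        return "required by the Second Law"
    # Third Law: self-preservation applies only when the higher laws are silent.
    if action.protects_robot:
        return "permitted by the Third Law"
    return "no law applies; the action is discretionary"

# Example: an order is obeyed only if it would not injure anyone.
print(evaluate(Action(is_human_order=True)))                      # required by the Second Law
print(evaluate(Action(is_human_order=True, injures_human=True)))  # forbidden by the First Law
```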

Robot Summer

During the summer of 2022, questions touching on each of Asimov’s laws arose organically out of the news cycle: could a robot’s tie-breaking vote injure a human being?  What if a robot failed to discern an illegitimate order?  And could the calculation of when an android should protect itself be influenced by pain?

That same summer, a delivery bot crossed police tape into a crime scene.  The robot stopped initially, but a permissive human being overrode its programming, allowing it to continue its journey.  The lesson is that even good A.I. programming can be defeated by a human override; one day, a judge might release a dangerous criminal because a robot compromised the crime scene.