Pope Francis’s call for international regulations on artificial intelligence is an extension of his longtime work to prevent war and promote peace.
Francis’s call for global rules for AI, issued Thursday in his World Day of Peace message, drew significant commentary, and some criticism, from industry figures who suggested he stood in the way of technological progress. His motivation for calling for guardrails, though, is rooted more in a desire to prevent conflict than in criticism of the industry.
Francis is “coming at the issue of artificial intelligence, about which he has spoken in the past, from the perspective of peace,” the Rev. Phillip Larrey, former dean of the philosophy department at Rome’s Pontifical Lateran University, told the Washington Examiner. Francis has declared “no to war” for years and stepped up his peace efforts as conflict has erupted in Gaza, Ukraine, and other parts of the world.
In his remarks, he focused on “lethal autonomous weapon systems,” or weapons that act without human input, warning that they opened up new possibilities for ethical violations.
“Autonomous weapon systems can never be morally responsible subjects,” Francis wrote. “The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine, which, as ‘intelligent’ as it may be, remains a machine.”
The moral responsibility is always in the hands of the human, Larrey noted. A machine cannot make moral decisions in the same way that a person can.
Francis’s remarks align with the call from the United Nations and the Red Cross in October to restrict autonomous weapons, Larrey said.
The pope did not spell out how such weapons or AI in general would be regulated. “I would like to understand what he really means by regulation,” Eugene Gan, professor of multimedia at Franciscan University, told the Washington Examiner. “We need to be very careful with that term.” Gan has taught on AI and its implications for several years.
While things like driver’s licenses and nuclear power clearly need regulation, Gan said, it isn’t clear what that looks like on an international level. Gan said he is skeptical of the heavy-handed approach of the European Union’s AI Act but believes something needs to be done to ensure the technology is not used to harm human dignity and human relationships. Some prominent industry figures, such as OpenAI CEO Sam Altman, have also called for the creation of an international regulatory agency.
Francis is “trying to lend his voice toward that international action because he understands that this is a crucial time right now in terms of directing [AI] toward good purposes,” Brian Patrick Green, director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University, a Jesuit school, told the Washington Examiner.
The pope’s call for an international treaty echoed remarks by the Vatican at the U.N. General Assembly. In 2019, Francis also called on Silicon Valley to ensure that technological advances like AI did not become an “enemy of the common good” and lead to a “new form of barbarism.” The Vatican also hosted discussions about the role of ethics in AI development in October 2022, a month before OpenAI released the game-changing ChatGPT.