Authored by Petr Svab via The Epoch Times (emphasis ours),
Cutting-edge weapons powered by artificial intelligence are emerging as a global security hazard, especially in the hands of the Chinese Communist Party (CCP), according to several experts.
Eager to militarily surpass the United States, the CCP is unlikely to heed safeguards around lethal AI technologies, which are increasingly dangerous in their own right, the experts have argued. The nature of the technology is prone to feeding some of the worst tendencies of the regime and the human psyche in general, they warned.
“The implications are quite dramatic. And they may be the equal of the nuclear revolution,” said Bradley Thayer, a senior fellow at the Center for Security Policy, an expert on the strategic assessment of China, and a contributor to The Epoch Times.
Killer Robots
The development of AI-powered autonomous weapons is, unfortunately, progressing rapidly, according to Alexander De Ridder, an AI developer and co-founder of Ink, an AI marketing firm.
“They’re becoming quickly more efficient and quickly more effective,” he told The Epoch Times, adding that “they’re not at the point where they can replace humans.”
Autonomous drones, tanks, ships, and submarines have become a reality, along with more exotic modalities such as quadruped robot dogs, which have already been armed with machine guns in China.
Even AI-powered humanoid robots, the stuff of sci-fi horrors, are in production. Granted, they’re still rather clumsy in the real world, but they won’t be for long, De Ridder suggested.
“The capabilities for such robots are quickly advancing,” he said.
Once they reach marketable usefulness and reliability, China is likely to turn its manufacturing might to their mass production, according to De Ridder.
“The market will be flooded with humanoid robots, and then it’s up to the programming how they’re used.”
That would mean military use, too.
“It’s kind of inevitable,” he said.
Such AI-powered machines are very effective at processing images to discern objects—to detect a human with their optical sensors, for example, explained James Qiu, an AI expert, founder of GIT Research Institute, and former CTO at FileMaker.
That makes AI robots very good at targeting.
“It’s a very effective killing machine,” he said.
AI Generals
On a broader level, multiple nations are working on an AI capable of informing and coordinating battlefield decisions—an electronic general, according to Jason Ma, an AI expert and data research lead at a multinational Fortune 500 company. He didn’t want the company’s name mentioned to prevent any impression he was speaking on its behalf.
The People’s Liberation Army (PLA), the CCP’s military, recently conducted battle exercises in which an AI was put directly in command.
The U.S. military also has projects in this area, Ma noted.
“It’s a very active research and development topic.”
The need is obvious, he explained. Battlefield decisions are informed by a staggering amount of data, from historical context and past intelligence to near-real-time satellite data, all the way to millisecond-by-millisecond input from every camera, microphone, and other sensor on the battlefield.
It’s “very hard” for humans to process such disparate and voluminous data streams, he said.
“The more complex the warfare, the more important part it becomes how can you quickly integrate, summarize all this information to make the right decision, within seconds, or within even sub-second,” he said.
Destabilization
AI weapons are already redefining warfare, the experts agreed. But the consequences are much broader. The technology is making the world increasingly volatile, Thayer said.
On the most rudimentary level, AI-powered weapon targeting will likely make it much easier to shoot down intercontinental ballistic missiles, detect and destroy submarines, and shoot down long-range bombers. That could neutralize the U.S. nuclear triad capabilities, allowing adversaries to “escalate beyond the nuclear level” with impunity, he suggested.
“AI would affect each of those components, which we developed and understood during the Cold War as being absolutely essential for a stable nuclear deterrent relationship,” he said.
“During the Cold War, there was a broad understanding that conventional war between nuclear powers wasn’t feasible. … AI is undermining that, because it introduces the possibility of conventional conflict between two nuclear states.”
If people continue developing AI-powered weapon systems without restrictions, the volatility will only worsen, he predicted.
“AI is greatly affecting the battlefield, but it’s not yet determinative,” he said.
If AI capabilities reach “the effect of nuclear war without using nuclear weapons,” that would put the world on a powder keg, he said.
“If that’s possible, and it’s quite likely that it is possible, then that’s an extremely dangerous situation and incredibly destabilizing situation because it compels somebody who’s on the receiving end of an attack to go first, not to endure the attack, but to aggress.”
In warfare lexicon, the concept is called “damage limitation,” he said.
“You don’t want the guy to go first, because you’re going to get badly hurt. So you go first. And that’s going to be enormously destabilizing in international politics.”
The concern is not just about killer robots or drones but also various unconventional AI weapons. An AI, for example, could be developed to find vulnerabilities in critical infrastructure such as the electric grid or water supply systems.
Controlling the proliferation of such technologies appears particularly daunting. AI is just a piece of software. Even the largest models fit on a regular hard drive and can run on a small server farm. Simple but increasingly lethal AI weapons, such as killer drones, can be shipped in parts without raising alarm.
“Both vertical and horizontal proliferation incentives are enormous, and it’s easily done,” Thayer said.
De Ridder pointed out that the Chinese state wants to be seen as responsible on the world stage.
But that hasn’t stopped the CCP from supplying weapons or aiding weapon programs of other regimes and groups that aren’t so reputationally constrained, other experts have noted.
It wouldn’t be a surprise if the CCP were to supply autonomous weapons to terrorist groups that would then tie up the U.S. military in endless asymmetrical conflicts. The CCP could even keep its distance and merely supply the parts, letting proxies assemble the drones, much like Chinese suppliers provide fentanyl precursors to Mexican cartels and let them manufacture, ship, and sell the drugs.
The CCP, for example, has a long history of aiding Iranian weapon programs. Iran, in turn, supplies weapons to a panoply of terrorist groups in the region.
“There would be little disincentive for Iran to do this,” Thayer said.
Human in the Loop
It’s generally accepted, at least in the United States and among its allies, that the most crucial safeguard against AI weapons wreaking unforeseen havoc is keeping a human in control of important decisions, particularly the use of deadly force.
“Under no circumstances should any machines autonomously independently be allowed to take a human life ever,” De Ridder said.
The principle is commonly summarized in the phrase “human in the loop.”
“A human has a conscience and needs to wake up in the morning with remorse and the consequences of what they’ve done, so that they can learn from it and not repeat atrocities,” said De Ridder.
Some of the experts pointed out, however, that the principle is already being eroded by the way AI capabilities are transforming combat.
In the Ukraine war, for example, the Ukrainian military had to equip its drones with some measure of autonomy to guide themselves to their targets because their communication with human operators was being jammed by the Russian military.
Such drones run only simpler AI, Ma said, given the limited power of their onboard computers. But that may soon change, as both AI models and computers are getting faster and more efficient.
Apple is already working on an AI that could run on a phone, Ma said.
“It’s highly likely it will be in the future put into a small chip.”
Moreover, in a major conflict where hundreds or perhaps thousands of drones are deployed at once, they can share computational power to perform much more complex autonomous tasks.
“It’s all possible,” he said. “It’s gotten to the point where it’s not science fiction; it’s just [a matter of] if there is a group of people who want to devote the time to work on that. It’s tangible technology.”
Removing human control out of necessity isn’t a new concept, according to James Fanell, a former naval intelligence officer and an expert on China.
He gave the example of the Aegis Combat System deployed on U.S. guided-missile cruisers and destroyers. It automatically detects and tracks aerial targets and launches missiles to shoot them down. Normally, a human operator controls the missile launches, but the system can also be switched to an automatic mode, such as when there are too many targets for the human operator to track. The system then identifies and destroys targets on its own, Fanell said.
In mass drone warfare, where an AI coordinates thousands of drones in a systematic attack, the side that gives its AI autonomy to shoot will gain a major speed advantage over the side where humans must approve each shot.
“On the individual shooting level, people have to give up control because they can’t really make all the decisions so quickly,” Ma said.
De Ridder pointed out that a drone shooting another drone on its own would be morally acceptable. But that could unleash a lot of autonomous shooting on a battlefield where there may be humans too, opening the door to untold collateral casualties.
No Rules
Whatever AI safeguards may be practicable, the CCP is unlikely to abide by them anyway, most of the experts agreed.
“I don’t really see there will be any boundaries for China to be cautious about,” Ma said. “Whatever is possible, they will do it.”
“The idea that China would constrain themselves in the use of it, I don’t see that,” Fanell said.
“They’re going to try to take advantage of it and be able to exploit it faster than we can.”
The human-in-the-loop principle could simply be reinterpreted to apply to “a bigger, whole battle level” rather than “the individual shooting level,” Ma said.
But once one accepts that AI can start shooting on its own in some circumstances, the principle of human control becomes malleable, Fanell said.
“If you’re willing to accept that in a tactical sense, who’s to say you won’t take it all the way up to the highest level of warfare?” he said.
“It’s the natural evolution of a technology like this, and I’m not sure what we can do to stop it. It’s not like you’re going to have a code of ethics that says in warfare [let’s abide by] the Marquess of Queensberry Rules of boxing. It’s not going to happen.”
Even if humans are kept in control of macro decisions, such as whether to launch a particular mission, AI can easily dominate the decision-making process, multiple experts agreed.
The danger wouldn’t be a poorly performing AI but rather one that works so well that it instills trust in the human operators.
De Ridder was skeptical of prognostications about superintelligent AI that vastly exceeds humans. He acknowledged, though, that AI obviously exceeds humans in some regards, particularly speed. It can crunch mountains of data and spit out a conclusion almost immediately.
It’s virtually impossible to figure out how exactly an AI comes to its conclusions, according to Ma and Qiu.
De Ridder said that he and others are working on ways to restrict AI to a human-like workflow, so the individual steps of its reasoning are more transparent.
But given the incredible amount of data involved, it would be impossible for the AI to explain how each piece of information factored into its reasoning without overwhelming the operator, Ma acknowledged.
“If the human operator clearly knows this is a decision [produced] after the AI processed terabytes of data, he won’t really have the courage to overrule that in most cases. So I guess yes, it will be formality,” he said.
“Human in the loop is a comfortable kind of phrase, but in reality, humans will give up control quickly.”
Public Pressure
Even if humans are kept in the loop only nominally, it’s still important, De Ridder said.
“As long as we keep humans in the loop, we can keep humans accountable,” he said.
Indeed, all the experts agreed that public pressure is likely to constrain AI weapon development and use, at least in the United States.
Ma gave the example of Google terminating a defense contract after objections from its staff.
He couldn’t envision an analogous situation in China, though.
Qiu agreed.
“Anything inside China is a resource the CCP can leverage,” he said. “You cannot say, ‘Oh, this is a private company.’ There is no private company per se [in China].”
Even the CCP cannot dispense with public sentiment altogether, De Ridder said.
“The government can only survive if the population wants to collaborate.”
But there’s no indication that the Chinese populace sees AI military use as an urgent concern.
On the contrary, companies and universities in China appear to be eager to pick up military contracts, Ma said.
De Ridder called for “an international regulatory framework that can be enforced.”
It’s not clear how such regulations could be enforced against China, which has a long history of refusing any limits on its military development. The United States has long tried, in vain, to bring China to the table on nuclear disarmament. Recently, China refused a U.S. request to guarantee that it wouldn’t use AI for nuclear strike decisions.
If the United States regulates its own AI development, it could create a strategic vulnerability, multiple experts suggested.
“Those regulations will be very well studied by the CCP and used as an attack tool,” Qiu said.
Even if some kind of agreement is reached, the CCP has a poor track record of keeping promises, according to Thayer.
“Any agreement is a pie crust made to be broken.”
Solutions
De Ridder said he hopes nations will settle for using AI in less destructive ways.
“There’s a lot of ways that you can use AI to achieve your objectives that does not involve sending a swarm of killer drones to each other,” he said.
“When push comes to shove, nobody wants these conflicts to happen.”
The other experts believed, however, that the CCP wouldn’t mind starting such a conflict—as long as it would see a clear path to victory.
“The Chinese are not going to be constrained by our ruleset,” Fanell said. “They’re going to do whatever it takes to win.”
Reliance on the whispers of an AI military advisor, one that instills confidence by processing mountains of data and producing convincing battle plans, could be particularly dangerous as it may create a vision of victory where there previously wasn’t one, according to Thayer.
“You can see how that might be very attractive to a decision maker, especially one that is hyper aggressive, as is the CCP,” Thayer said. “It may make aggression more likely.”
“There’s only one way to stop it, which is to be able to defeat it,” Fanell said.
Chuck de Caro, a former consultant for the Pentagon’s Office of Net Assessment, recently called for the United States to develop electromagnetic weapons that could disable computer chips. It may even be possible to develop energy weapons that could disable a particular kind of chip, he wrote in a Blaze op-ed.
“Obviously, without functioning chips, AI doesn’t work.”
Another option might be to develop an AI superweapon that could serve as a deterrent.
“Is there an AI Manhattan Project that the U.S. is doing that can create the effect that Nagasaki and Hiroshima would have on the PRC and the Chinese Communist Party, which is to bring them to the realization that, ‘Okay, maybe we don’t want to go there. This is mutually assured destruction?’ I don’t know. But that’s what I would be [doing],” Fanell said.
That could leave the world in a Cold War-like standoff—hardly an ideal state, but one likely seen as preferable to ceding military advantage to the CCP.
“Every country knows it’s dangerous, but nobody can stop because they are afraid they will be left behind,” Ma said.
De Ridder said it might take a profound shock to halt the AI arms race.
“We might need like a world war, with immense human tragedy, to ban the use of autonomous AI killing machines,” he said.