November 2, 2024
Are New-World-Order Elites Plotting To Use AI To 'Deprogram' So-Called Conspiracy Theorists?

Authored by Jacob Burns via HeadlineUSA.com,

Might the New World Order use biased, pre-manipulated artificial intelligence programs to try to “deprogram” those with unpopular opinions by persuading them that their logic does not compute?

A recent study on that subject underwritten by the John Templeton Foundation might give so-called conspiracy theorists one more thing to be paranoid about, according to Popular Science.

Critics have already sounded the alarm that leftist radicals in Silicon Valley and elsewhere were manipulating the algorithms used to train AI so that it automatically defaulted to anti-conservative biases.

The next step may be consigning any verboten viewpoints to the realm of “conspiracy theory,” then having powerful computers challenge human users to a battle of logic that is inevitably stacked against them with cherry-picked data.

The study, titled “Durably reducing conspiracy beliefs through dialogues with AI,” attempted to counter the common view that some people will not change their minds, even when presented with facts and evidence.

Addressing the problem of “widespread belief in unsubstantiated conspiracy theories,” researchers postulated that conspiracy theories can, contrary to the scientific narrative, be countered by way of systematic fact-checking.

Among the theories tested were more traditional conspiracies, such as those involving the assassination of John F. Kennedy or the claim that alien landings were known to, and concealed by, the United States government.

But others included more immediately politicized claims, such as the lawfulness of COVID lockdowns or the validity of the 2020 presidential election, both of which the study describes as a “major source of public concern.”

The study was conducted by having conspiratorial participants engage in brief conversations with AI, with the aim of “curing” the participants of their ostensibly false opinions.

Researchers concluded that “the treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average,” suggesting that “treating” people with certain facts can indeed alter their opinions, particularly when those facts come from AI bots.

The “treatment” received also reportedly “persisted undiminished for at least 2 months,” meaning that such conditioning could lead to regular “treatment” for those deemed conspiracy theorists.

Ultimately, then, AI conditioning was determined to be a potentially useful tool in addressing the “psychological needs and motivations” of such people. Researchers speculated that the technology could be implemented online in the coming years, particularly in online forums or on social media.

David Rand, a professor at the Massachusetts Institute of Technology who co-authored the study, told reporters that he was optimistic about the future of AI conditioning.

“This is really exciting,” he said. “It seemed like it worked and it worked quite broadly.”
