In the epic 1979 film Apocalypse Now, an audio transmission from the Colonel Kurtz character casts a light on the chaos of the Vietnam mess in which he found himself:
“I watched a snail crawl along the edge of a straight razor. That’s my dream. That’s my nightmare. Crawling, slithering, along the edge of a straight… razor… and surviving.”
A thought-provoking nugget emerged when viewing Kurtz’s insanity: how does one survive when facing a true dilemma? When up against a serious threat, how does one endure if there are no easy answers for how to do so?
While in nowhere near as precarious a position as the fictional Kurtz, we in the U.S. are wrestling with constitutional quandaries that have emerged after 9/11, Orlando, San Bernardino, New York City, New Orleans, and Las Vegas. The legal conundrum that challenges us is this: in the era of artificial intelligence (AI) and its advanced anti-terrorism capabilities, the cutting-edge powers it brings to U.S. security agencies may bog down investigations and the deployment of assets by misinterpreting protected First Amendment speech as threats.
Now, at this point, almost everyone understands that shouting “fire” in a public movie theater, if there is no imminent threat to the assembled patrons, likely falls outside the First Amendment’s protection of free speech because of the harm it may create. Speech is also not protected if it incites a mob to violence, as some believe Ray Epps did on January 6, 2021, when he exhorted protestors: “we’re going into the Capitol!”
One problem that has arisen is that, in a social media-based world, one may not actually be present to be harmed or incited to participate in violent acts. If speech consumers are not in fact in a public space, facing an impending menace or being provoked to commit immediate violence, how should those communications be interpreted?
A further complication surrounds the question of whether AI can distinguish incitement of immediate, intended malice so that it flags only the most serious threats. Can its programming perceive or discern context when someone makes a protected political statement using innocuous but poorly chosen analogies in speech or writing? For example: “… this… would hand…. a dangerous new tool it could use to… target political opponents and punish disfavored groups.”
Prior U.S. Supreme Court decisions have specified that speech can be restricted if it is directed at inciting or producing imminent lawless action. But in virtual settings, where there may be no assembled masses present, can the author or speaker be considered to be fomenting an immediate, inciting threat? Or do these situations merely qualify as excited rants that earn protection as private speech?
The federal government is already testing these definitions. It is well known that some U.S. intel agencies are equipped with banks of highly sophisticated listening devices that scan telecommunications transmissions 24/7. They are programmed to alert authorities when specific words are spoken, or when they detect terms that, on their face, could be the catalyst for further investigation.
So, for example, one question might be how AI would treat texts that say: “Make sure you’re there by 3:45 to hide for the ‘Surprise Party’; it’s gonna be a blast. They’ll remember us for years.”
Or would AI redline, and characterize as a threat, a text exchange similar to the one between FBI officials: “[Trump’s] not ever going to become president, right?” “No. No he won’t. We’ll stop it.”
Scenarios under these conditions could generate unintended consequences. Decisions to investigate cases of this sort among the thousands of monitored daily postings or transmissions could cause critical delays and/or trigger misuse of assets instead of exploring situations that needed more urgent scrutiny.
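The false-positive problem described above can be illustrated with a minimal sketch of naive keyword flagging. The watch list, the message, and the `flag` function are all hypothetical, chosen only to show how a context-blind filter trips on innocuous language like the “Surprise Party” text:

```python
# Hypothetical, illustrative sketch of context-blind keyword flagging.
# The watch list and message are invented for this example only.
WATCH_TERMS = {"blast", "target", "stop"}

def flag(message: str) -> list[str]:
    """Return watch-list terms found in a message, with no context awareness."""
    words = {w.strip('.,!?;:"\'').lower() for w in message.split()}
    return sorted(words & WATCH_TERMS)

party_text = "Be there by 3:45 to hide for the surprise party; it's gonna be a blast."
print(flag(party_text))  # a purely social message still trips the filter
```

A system like this cannot tell a birthday party from a bomb plot; distinguishing the two requires the contextual judgment, human or machine, that the article questions AI’s ability to supply.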
Nonetheless, the use of AI in intelligence circles has generally been well received, and there is pressure to put more of it to work straight away. But a few departments in the justice system have earned the opprobrium of a significant percentage of the public over the last decade, and some are quite reluctant to automatically allow the upgrades. In the recent past, notorious actions by the FBI and Justice Department, including intrusive surveillance of church services and school board gatherings, have eroded public support for granting these increased powers.
While these examples of government misuse do require additional analysis before proceeding, the legions of nihilistic crazies who wish to murder and maim innocent U.S. citizens are an ongoing threat. The jackals who travel these malicious paths are clearly intent on the wanton slaughter of innocents, and there is no question they need to be identified and investigated so they are stopped in their tracks. AI could theoretically illuminate their violent intentions by surveying social media accounts, appraising their postings, identifying terms that illustrate their motivations, and helping authorities intercept them before they can create carnage.
The tricky question of where to draw the criminal-speech lines in the virtual realm needs to be approached very judiciously. Ultimately, the courts will have to recalibrate, delineating immutable free-speech rights from the laws that protect citizens against threats in virtual milieus.
But the issues surrounding the interpretation of constitutionally protected speech in the virtual world may drop the courts onto the perilous edge of a straight razor, with no easy answers for how to proceed and survive.
Marc E. Zimmerman is a former legislative assistant to a Member of the U.S. Congress.
Image: RawPixel.com