A new bill introduced in the House of Representatives on Monday is aimed at making sure American consumers know the difference between fantasy and reality online by cracking down on generative artificial intelligence technology.
Rep. Ritchie Torres, D-N.Y., is leading the effort on the AI Disclosure Act of 2023, which would force AI-generated content to include the disclaimer, “Disclaimer: this output has been generated by artificial intelligence.”
In a statement announcing the bill, Torres predicted that “regulatory framework for managing the existential risks of AI will be one of the central challenges confronting Congress in the years and decades to come.”
He noted risks both in policing AI too aggressively and in not regulating it enough.
“The simplest place to start is disclosure. All generative AI – whether the content it generates is text or images, video or audio – should be required to disclose itself as AI,” Torres said. “Disclosure is by no means a magic bullet but it’s a commonsense starting point to what will surely be a long road to regulation.”
His bill, if passed, would give the Federal Trade Commission oversight over the new rule.
And there appears to be an appetite on both sides of the aisle for promoting transparency in AI content.
Rep. Nancy Mace, R-S.C., one of the GOP’s leading voices on AI in the House, said Torres’ bill was not the “best solution” but agreed that Americans need to be informed about whether the content they are viewing is real or fake, particularly as the 2024 presidential cycle heats up.