November 22, 2024
The European Union’s new framework for regulating artificial intelligence could drive developers out.

EU lawmakers over the weekend reached an agreement on the AI Act, legislation to be voted on in early 2024 that would set several rules for AI. It would ban the bulk scraping of facial images, the use of emotion recognition programs, and the adoption of “social scoring” software. Developers of the large language models that power AI tools would also have to submit summaries of the data used in their programs for review.

The AI Act is seen as the strictest set of rules for the technology to date and the first attempt at a comprehensive regulatory framework for it. AI industry experts say the guardrails could force international and local companies out of Europe due to the high cost of compliance.

While “local technology development is hampered in Europe as a result of demanding regulations, it will continue to develop elsewhere relatively unchecked, and it will be challenging to rely on local regulations to stop non-compliant AI software generated around the world from finding its ways to European markets and users,” Ron Moscona, a partner at the international law firm Dorsey & Whitney, told the Washington Examiner in an email. Moscona has been tracking the AI Act closely in Europe.

AI industry figures have long said the EU risks pushing major companies out. OpenAI CEO Sam Altman said in May that he would consider ceasing European operations if the AI Act were too strict. He reversed his position 48 hours later, though, after receiving pushback from lawmakers.

More than 150 industry leaders signed a letter in June warning that the proposal would impose high compliance costs and “disproportionate liability risks.” Those costs could push several smaller AI startups out of the market. The Computer and Communications Industry Association, a lobbying group representing Big Tech, said in a December statement that the act could lead to an “exodus of European AI companies and talent seeking growth elsewhere.” No company, though, has yet outright said it will leave. Companies will have until 2025 before the regulations go into effect.

French President Emmanuel Macron, too, has said the bill threatens the industry. “We can decide to regulate much faster and much stronger than our major competitors,” Macron said Monday. “But we will regulate things that we will no longer produce or invent. This is never a good idea.”

French officials encouraged the EU’s regulators to water down the new rules, according to the Financial Times. That pressure may be partly due to the lobbying efforts of the European AI startups Mistral AI and Aleph Alpha.

The AI Act could be a repeat of the EU’s General Data Protection Regulation, AI Association President Josh Jackson told the Washington Examiner. The 2016 legislation added new data protections for EU citizens and required technology companies to adopt additional measures to protect users. Those demands weighed on European tech companies, leading several smaller game developers and social networks to close up shop because of the cost of compliance.

The EU’s legislators spent last week hammering out the details of several exemptions to be included in the AI Act, including allowances for the use of facial recognition software for military purposes and for identifying victims of terrorist attacks.

The European Parliament “wants to encourage innovation, but they wanna regulate in ways that protect the fundamental human rights of users,” Susan Aaronson, a professor of international affairs at George Washington University, told the Washington Examiner. Lawmakers faced pressure from human rights groups like Amnesty International and trade groups like the CCIA to balance the interests of both sides.

One of the biggest questions is how the regulations will affect ChatGPT. The chatbot would be allowed in the EU, but OpenAI would have to abide by basic transparency requirements, including disclosing how its data is governed, how it complies with EU copyright law, and how much energy is used to train its models.

Additional restrictions would be applied to “high-risk” systems, a standard that has not been spelled out. “Everybody is trying to figure out what this whole risk-based approach looks like,” Jackson told the Washington Examiner. “I don’t think it’s entirely clear what that means in terms of regulating AI.”

The full text of the act is expected to be released in the coming weeks and will offer additional details on how the rules will work.