EU AI Act Comes Into Effect - Here's What To Expect

Authored by Savannah Fortis via CoinTelegraph.com,

The European Union’s Artificial Intelligence Act officially takes effect on Aug. 1, following its publication in the Official Journal of the EU on July 12.

The landmark legislation marks a significant step toward regulating the rapidly evolving landscape of AI within the EU. As stakeholders across various industries prepare for the new rules, understanding the phased implementation and key aspects of the AI Act is crucial.

AI Act implemented

Under the AI Act’s implementation scheme, the legislation will be introduced gradually, similar to the EU’s phased rollout of its Markets in Crypto-Assets Regulation, giving organizations time to adjust and comply.

The EU is well-known for its complex bureaucracy. As a result, Aug. 1 marks the start of the countdown to the AI Act’s practical implementation, with key stages taking effect in phases from 2025 through 2027 and beyond.

The first will be the “Prohibitions of Certain AI Systems,” which will take effect in February 2025. This set of rules will prohibit AI applications that exploit individual vulnerabilities, engage in non-targeted scraping of facial images from the internet or CCTV footage, and create facial recognition databases without consent.

Following this, a new set of requirements for general-purpose AI models will take effect in August 2025. These models are designed to handle a wide range of tasks rather than a single, narrow purpose such as image identification.

Rules for most high-risk AI (HRAI) systems, along with specific transparency obligations, will come into effect by August 2026.

For example, if the HRAI system is part of a product subject to EU health and safety laws, such as toys, the rules will apply by August 2027. For HRAI systems used by public authorities, compliance is mandatory by August 2030, irrespective of any design changes.
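
For quick reference, the staged timeline described above can be summarized in a short script. This is a minimal sketch: the milestone labels and month-level dates are the ones cited in this article, not the full legal text.

```python
# Phased AI Act milestones as reported above (month-level approximation).
AI_ACT_MILESTONES = {
    "2024-08": "AI Act enters into force",
    "2025-02": "Prohibitions on certain AI systems apply",
    "2025-08": "Requirements for general-purpose AI models apply",
    "2026-08": "Rules for most high-risk systems and transparency obligations apply",
    "2027-08": "Rules for high-risk AI embedded in regulated products, such as toys, apply",
    "2030-08": "Compliance deadline for high-risk systems used by public authorities",
}

# Print the schedule in chronological order.
for month, milestone in sorted(AI_ACT_MILESTONES.items()):
    print(f"{month}  {milestone}")
```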

Companies and compliance

The enforcement of the AI Act will be robust and multifaceted. Each of the 27 member states will designate national regulatory authorities to oversee compliance.

These authorities will have the power to conduct audits, demand documentation and enforce corrective actions. The European Artificial Intelligence Board will coordinate and ensure consistent application across the EU.

Companies dealing with AI will have to meet compliance obligations in risk management, data governance, information transparency, human oversight and post-market monitoring.

To meet these obligations, industry insiders have recommended that companies begin conducting thorough audits of their AI systems, establishing comprehensive documentation practices, and investing in robust data governance frameworks.

Noncompliance with the AI Act can result in severe penalties: fines of up to 35 million euros or 7% of the company’s total worldwide annual turnover, whichever is higher.
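
To see how that cap works in practice, here is a minimal sketch in Python; the function name and the example turnover figure are hypothetical, while the 35 million euro floor and 7% share are the figures cited above.

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations:
    the greater of a fixed 35 million euros and 7% of worldwide turnover."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# Example: a firm with 1 billion euros in worldwide turnover faces a cap of
# 70 million euros (7% of turnover), since that exceeds the 35 million floor.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```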

The AI Act complements the General Data Protection Regulation (GDPR), which has applied since May 2018, by addressing AI-specific risks and ensuring that AI systems respect fundamental rights.

While GDPR focuses on data protection and privacy, the AI Act emphasizes safe and ethical AI deployment. Already, major tech companies such as Meta, the parent company of Facebook and Instagram, have delayed AI-integrated products in the EU due to “regulatory uncertainty” around GDPR and the AI Act.

Tyler Durden Thu, 08/01/2024 - 06:30
