European Union policymakers have just endorsed the world’s first comprehensive regulations for artificial intelligence (AI). These rules, part of the AI Act, cover everything from high-risk systems to general-purpose AI, and their obligations will be phased in over the coming years. Here’s what you need to know about these groundbreaking changes.
Key Points of the AI Act
High-Risk AI Systems
High-risk AI systems are those that could significantly impact health, safety, fundamental rights, the environment, democracy, elections, and the rule of law. These systems will need to meet strict requirements:
- Impact Assessments: Conduct fundamental rights impact assessments.
- Market Access: Comply with mandatory obligations before the systems can be placed on the EU market.
AI in Law Enforcement
Law enforcement will face strict limits on the use of AI, particularly remote biometric identification in publicly accessible spaces. This technology may only be used to:
- Identify victims of serious crimes like kidnapping and human trafficking.
- Prevent specific terrorist threats.
- Track suspects of terrorism, trafficking, murder, kidnapping, rape, armed robbery, and environmental crimes.
General-Purpose AI Systems (GPAI) and Foundation Models
These systems will have lighter but still significant transparency requirements:
- Technical Documentation: Prepare and maintain detailed technical documents.
- EU Copyright Compliance: Ensure all content used for training AI complies with EU copyright laws.
- Summary Reports: Publish detailed summaries about the training content.
For those posing systemic risks, additional steps include:
- Risk Assessments and Mitigation: Conduct thorough evaluations and mitigate identified risks.
- Adversarial Testing: Perform tests to ensure robustness.
- Incident Reporting: Report serious incidents to the European Commission.
- Cybersecurity: Maintain high standards of cybersecurity.
- Energy Efficiency: Report on energy usage.
Prohibited AI Practices
Certain AI practices are outright banned under the new regulations:
- Biometric Categorization: Systems that categorize people by sensitive characteristics such as political or religious beliefs, sexual orientation, or race.
- Facial Recognition Databases: Creating databases through untargeted scraping of facial images from the internet or CCTV footage.
- Emotion Recognition: Inferring emotions in workplaces and schools.
- Social Scoring: Based on social behavior or personal characteristics.
- Behavior Manipulation: AI that manipulates human behavior to override free will.
- Exploitation of Vulnerabilities: AI that exploits people’s vulnerabilities due to their age, disability, or social or economic situation.
Enforcement and Sanctions
Who Enforces the AI Act?
An AI Office within the European Commission will be responsible for enforcing these rules. Additionally, an AI Board composed of EU representatives will support the Commission and member countries in applying the legislation.
Penalties for Non-Compliance
Fines will vary depending on the severity of the violation and the size of the company, generally set as the higher of a fixed amount or a percentage of global annual turnover (a rough sketch of the calculation follows this list):
- Minor Infringements: Fines start at 7.5 million euros ($8 million) or 1.5% of global annual turnover.
- Major Infringements: Fines can reach 35 million euros or 7% of global annual turnover.
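As a back-of-the-envelope illustration (not legal guidance), here is a minimal sketch of how that “higher of” ceiling could be computed; the company turnover is hypothetical, and the simplification ignores the lighter regime the Act provides for small and medium-sized enterprises.

```python
def fine_ceiling(global_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Rough upper bound of a fine: the higher of a fixed amount and a share
    of global annual turnover (simplification: ignores the special treatment
    of SMEs and start-ups under the Act)."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_share)

# Hypothetical company with 2 billion euros in global annual turnover.
turnover = 2_000_000_000

print(fine_ceiling(turnover, 7_500_000, 0.015))   # minor infringements: 30,000,000.0
print(fine_ceiling(turnover, 35_000_000, 0.07))   # major infringements: 140,000,000.0
```

For a company of that size, the turnover-based percentage exceeds the fixed amount in both cases, so it sets the ceiling.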
These new regulations mark a significant shift in how AI is governed and will have major implications for the tech industry. Companies will need to adapt quickly to comply with these new standards, ensuring that their AI systems are safe, transparent, and respectful of fundamental rights. As the first of its kind, the AI Act sets a precedent that other regions might soon follow, signaling a new era in AI regulation.