European Union Approves the AI Act
On February 13, the European Union ratified the AI Act, establishing the world's first comprehensive legal framework for regulating artificial intelligence. Although still pending a final vote in the European Parliament, the text is nearly finalized and has already drawn a range of reactions. The regulation is expected to take full effect two years after its official publication.
Risk-Based Regulatory Framework
The AI Act introduces a risk-based categorization for AI systems: minimal, limited, high, and unacceptable risks, with corresponding transparency mandates. Systems posing "unacceptable risks" are those infringing on individual rights or conflicting with EU privacy laws, including the misuse of biometrics and the use of societal scoring mechanisms. High-risk categories cover critical infrastructure and biometric identification, demanding rigorous impact assessments and compliance documentation for market entry.
The Act's treatment of General Purpose AI Models (GPAI) or "foundational" models illustrates the complexity of regulating versatile AI technologies. A significant divergence in national approaches within the European Union can be traced to this point. On one side, France and Germany advocated for a streamlined regulatory approach, suggesting a single category for GPAIs and promoting self-regulation by the companies that develop and deploy these models. This stance was motivated by a desire to nurture the relatively small European AI market, aiming to reduce regulatory burdens that could potentially stifle innovation and growth.
By contrast, the European Parliament's position deviated from this market-centric perspective. Instead, it underscored the importance of protecting citizens' rights, implicitly rejecting both the notion of self-regulation and the single-category proposal. This divergence reflects a broader debate within the EU on balancing the promotion of technological and economic development against the imperative to safeguard individual rights and societal values in the age of AI.
What to expect in 2024 beyond the EU
This is the situation in Europe at the moment. But what lies ahead? At the beginning of this year, Tate Ryan-Mosley, Melissa Heikkilä and Zeyi Yang highlighted in MIT Technology Review what to expect in AI regulation. Here are some key points, looking beyond the European Union:
United States
President Biden's executive order on AI emphasized transparency and standards, creating a pathway for sector-specific regulations. Still, a comprehensive EU-style regulation remains an open question.
Two factors will shape the AI landscape in the United States:
- The US AI Safety Institute will play a key role in implementing AI policies, with legislative action on AI still uncertain.
- The 2024 presidential election will significantly influence the discourse on AI regulation, especially concerning misinformation and social media.
China
China's AI regulation has so far been fragmented, targeting specific applications, with an overarching artificial intelligence law planned. At the moment:
- Existing regulations require AI models to be registered with the government, indicating a move away from the "Wild West" environment.
- The comprehensive law under consideration could introduce a national AI office and a "negative list" for high-risk AI research areas.
Rest of the World
- Africa is preparing an AI strategy to protect consumers and support AI development, with countries like Rwanda, Nigeria, and South Africa leading national efforts.
- Global organizations are working on AI regulatory consistency, which could benefit AI companies by easing compliance burdens.
- The approach to AI regulation may further diverge between democratic and authoritarian countries, affecting global AI industry dynamics.
Image: "EU Flag Neural Network" by Creative Commons was cropped from an image generated by the DALL-E 2 AI platform with the text prompt "European Union flag neural network." Use: CC0.