As Washington struggles to formulate a cohesive strategy on artificial intelligence governance, the European Union is taking a leadership role. The EU has reached a provisional agreement on a law regulating AI, one that could guide future policy in the United States and other nations. The AI Act, scheduled for a European Parliament vote in April, would regulate AI use across a range of industries, including finance and transportation. It sets guidelines for law enforcement’s use of AI, as well as standards for building large language models that protect individual privacy and commercial confidentiality.
EU officials have broadly welcomed the AI Act. Europe’s technology sector, however, has raised concerns that the Act’s vague terminology could lead to unintended consequences.
Sam Altman, CEO of OpenAI, is a prominent advocate for regulatory oversight of AI technologies, given their rapid development. He has suggested creating a global regulatory body modeled on the International Atomic Energy Agency. At the World Governments Summit this month, Altman recommended a “regulatory sandbox” approach, allowing controlled experimentation with AI technologies to better understand their potential impacts and to refine regulatory frameworks accordingly.
Altman made the proposal at a conference in the United Arab Emirates, a country where OpenAI is exploring investment opportunities. His argument is that practical trials in a controlled setting can reveal how AI systems behave in the real world and, in turn, inform more effective policymaking.
The AI Act is notable for its risk-based approach, regulating not the underlying AI technologies themselves but the products and services they enable. Oversight scales with risk: minimal requirements for low-risk applications such as spam filtering, and stricter controls for AI used in critical sectors such as healthcare and finance. The Act would also effectively ban certain uses outright, such as real-time facial recognition in public spaces, except under narrow, extraordinary circumstances.
Once passed, as expected, the AI Act’s provisions would take effect over the following two years, marking a significant regulatory milestone for the EU.
By contrast, the United States has moved more tentatively. President Biden’s executive order on AI last October requires developers of the most powerful AI systems to share safety test results and other relevant information with the government, but it stops short of imposing binding industry-wide standards. Additional legislative efforts are underway in the U.S., though none have reached the maturity of the EU’s initiative.
The EU’s proactive stance on tech regulation has already driven major changes at large corporations such as Apple and Meta. While the AI Act may not become a global standard overnight, it is poised to serve as an influential blueprint for future legislation around the world.