After days of negotiations, including a 22-hour session beginning Wednesday, December 6, European lawmakers have finally agreed on a sweeping new artificial intelligence regulation. One of the first laws of its kind in the world, the AI Act attempts to regulate the rapidly evolving technology with a risk-based approach.

Though details of the law are not yet public (the final version has not been published, as lawmakers continue to negotiate technical details and vote), the AI Act is expected to have significant effects on AI developers, including the makers of large AI models such as the popular ChatGPT. European legislators settled on a “risk-based” approach, placing the most stringent limits on systems deemed highest-risk. However, debate among the member states over which models belong in the high-risk categories could drastically shift the kinds of companies the law regulates.

Enforcement of the law also remains unclear. Regulators across the 27 EU member states will be involved in enforcing it, and certain provisions will not take effect for up to 24 months, a considerable length of time during which the nature of AI can change dramatically. The first draft of the AI Act already faced this challenge: the law was rewritten as new technology emerged.

This new legislation is sure to have an impact across the world, both by regulating global companies and by serving as model legislation. This is a developing story, and we will update it with details and further analysis once the final regulation is public.