On December 6, 2022, the European Union’s (EU) Regulation on Artificial Intelligence (“AI Act”) moved one step closer to becoming law when the Council of the EU (the Council) adopted its amendments to the draft act (the “Council General Approach”). The European Parliament (Parliament) must now finalize its common position before interinstitutional negotiations can begin.
The Council General Approach concludes months of internal Council negotiations and broadly offers a more business-friendly approach to artificial intelligence (AI) regulation than the European Commission’s (EC’s) proposal. The definition of an AI system and the scope of the AI Act are slightly narrowed, and a supplementary layer is added to the classification of high-risk AI so that systems that would otherwise be high risk, but that are used only as accessories to relevant decision-making, are excluded. Obligations on providers of high-risk AI systems remain similar, but some requirements are made more technically feasible and less burdensome. The list of prohibited systems is expanded in some areas and narrowed in others, and penalties are tweaked in favor of small- to medium-sized enterprises (SMEs).1
AI and its regulation are a top priority for the EU. Following the publication of the Ethics Guidelines for Trustworthy AI in 2019, the EC initiated a three-pronged legal approach to regulating AI. Together with the AI Act, new and revised civil liability rules,2 and revised sectoral legislation, such as the General Product Safety Regulation,3 seek to offer a legislative framework to support trustworthy AI in the EU. The AI Act will also operate alongside other existing and proposed data-related regulations, including the General Data Protection Regulation,4 the Digital Services Act,5 the proposed Data Act,6 and the proposed Cyber Resilience Act.7
The EC published a proposal for the AI Act (the Proposal) in April 2021. The Proposal adopts a cross-sector and risk-based approach that applies to all providers and users of AI systems on the EU market, regardless of where they are established. Applications of AI that are perceived to be most harmful will be banned, while a defined list of “high-risk” AI systems will need to comply with strict requirements. The majority of the Proposal’s obligations fall on providers of high-risk AI systems. Transparency requirements will apply to AI systems with limited risks, while those of low or minimal risk will not be subject to any obligations. National regulators will be tasked with enforcement, which will be overseen by a newly established “EU AI Board.” Companies could face fines of up to the higher of €30 million or six percent of total worldwide annual turnover.
For a summary of the EC proposal, please refer to our visual Fact Sheet on Draft EU AI Act.
Key Changes Made by the Council
The Parliament must now finalize its amendments to the Proposal before the next phase of the legislative process, the interinstitutional negotiations (so-called “trilogues”), can begin. More than 3,000 amendments are currently being debated by members of the Parliament. They are expected to vote on the amendments in the first half of 2023, and trilogues could begin shortly after. It is possible that the law could enter into force by the end of 2023, prior to the next Parliament elections in 2024. Once the text passes into law, companies will likely have two to three years to comply.26
Meanwhile, advancements in AI technology are making headlines. In particular, OpenAI’s ChatGPT chatbot and DALL·E 2 art generator were recently released27 and have already attracted millions of curious users. As the potential and challenges associated with AI come to the fore of public discourse, it will be interesting to see how recent developments shape negotiations in the Parliament and the trilogues.
We will publish updates on the legislative progress of the AI Act as they occur.
For more information on the EU AI Act and other matters related to AI and machine learning, please contact Cédric Burton, Laura De Boel, Maneesha Mithal, or any other attorney from Wilson Sonsini’s privacy and cybersecurity practice or AI and machine learning practice.
Laura De Boel, Maneesha Mithal, Rossana Fol, and Hattie Watson contributed to the preparation of this client alert.