Could the EU AI Act Pass in 2023? An AI Act Primer
Posted: May 12, 2023
The AI Act is the EU’s attempt to regulate how companies develop, distribute, and use AI. Following negotiations in the European Parliament, it appears that the AI Act could pass later in 2023—just two years after the European Commission first proposed the law.
The recent AI explosion, driven by chatbots such as ChatGPT and image generators such as Midjourney, has spurred the EU’s institutions to push this far-reaching and highly significant law toward the finish line.
In this blog, we’ll provide an overview of the AI Act, focusing on who the law covers, what the law requires, and how the AI Act compares to the General Data Protection Regulation (GDPR).
Who’s covered?
The AI Act will impact thousands of companies developing, selling, and using AI systems across the EU.
Scope of the AI Act
The proposed law has a very broad scope, partly due to how the AI Act defines an “artificial intelligence system” (AI system):
“…software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
Note the law’s reference to “Annex I”, which sets out the different approaches to AI that are covered by the law, including:
- Machine learning.
- Logic and knowledge-based approaches.
- Statistical approaches, Bayesian estimation, search, and optimisation methods.
This is a broad definition, capturing many types of software. But bear in mind that the AI Act’s rules don’t apply to all AI use cases.
Types of entities under the AI Act
The AI Act defines different types of people, businesses, and organisations according to their relationship with AI systems.
Here are some of the most important types of entities covered by the AI Act:
- Operator: An AI system provider, user, importer, or distributor.
- Provider: Develops an AI system, or has an AI system developed on its behalf, in order to put the AI system on the market or into service under its own name or trademark.
- User: Uses an AI system for commercial or professional purposes.
- Importer: Established in the EU and makes a non-EU provider’s AI system available in the EU.
- Distributor: Anyone in the supply chain other than the provider or the importer that makes the AI system available in the EU “without affecting its properties”.
The AI Act covers both EU and non-EU companies: a non-EU company falls within scope where the output of its AI system is used in the EU.
Obligations Under the AI Act
The AI Act imposes many requirements on AI operators based on their role within the AI supply chain.
Below are some of the main obligations under the AI Act.
Prohibited practices
Some uses of AI are outright banned under the AI Act. The prohibited practices have been subject to a lot of debate among the EU’s institutions—but in the European Commission’s initial proposal, they included:
- Using AI to deploy subliminal techniques, operating beyond a person’s consciousness, that cause a person harm.
- Using AI to exploit people who are vulnerable due to their age or a disability.
- Social scoring, i.e., using AI to profile people in a way that disadvantages them across unrelated contexts.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (except in specific circumstances).
The final point above has been particularly contentious and has been amended several times across draft versions of the law.
High-risk AI systems
Operators of certain “high-risk” AI systems have extensive safety, risk assessment, and certification obligations under the AI Act.
Annexes II and III of the AI Act list the types of AI systems classed as “high risk”. High-risk AI systems are mostly used in areas such as law enforcement, education, immigration, the legal system, and other public services.
Among many other obligations, operators of high-risk AI systems must (depending on their role in the AI supply chain):
- Ensure that the data sets used for AI systems are accurate and unbiased.
- Keep records regarding their development or use of AI systems.
- Conduct a “conformity assessment” (providers only) before putting the AI system on the market.
Other AI systems
Some AI systems are neither prohibited nor high-risk but are still subject to certain obligations under the AI Act.
These other types of AI systems include:
- AI systems that interact with people, such as chatbots.
- Emotion recognition and biometric categorisation systems.
- AI systems for generating “deep fakes”.
Providers of these types of AI systems must be transparent so that people know when they are interacting with an AI system or viewing AI-generated content.
The AI Act and the GDPR
The AI Act interacts with the GDPR in several ways. For example:
- Both laws regulate the use of personal data.
  - Under the AI Act, AI providers must ensure that the data used to train their AI systems, which can include personal data, is accurate and unbiased.
  - Under the GDPR, controllers must ensure the accuracy of personal data and abide by the principle of fairness.
- Both laws regulate “automated decision-making”.
  - The AI Act requires human oversight of high-risk AI systems.
  - The GDPR prohibits certain types of “solely automated individual decision-making”.
- Both laws require a Data Protection Impact Assessment (DPIA) under certain conditions:
  - The AI Act requires users of high-risk AI systems to conduct a DPIA before using the system.
  - The GDPR requires controllers to conduct a DPIA before engaging in any processing activity that is “likely to result in a high risk” to people’s rights and freedoms.
The AI Act also refers to the GDPR’s concept of “special categories of personal data”, and the Commission’s initial proposal allows AI providers to use special category data in AI training to detect and mitigate bias.
However, it’s important to note that the GDPR already covers a lot of AI-related activity.
The GDPR’s principles—such as lawfulness, fairness and transparency; data minimisation; purpose limitation; and accuracy—are highly relevant to any organisation developing or using AI systems.
Indeed, following the Italian regulator’s recent action against ChatGPT owner OpenAI, the European Data Protection Board (EDPB) has formed an AI Task Force to coordinate AI-related enforcement action.
So regardless of when the AI Act finally takes effect, all companies using AI already have legal obligations around transparency, choice, and safety.