The UK’s Information Commissioner, John Edwards, has told UK businesses that the Information Commissioner’s Office (ICO) is preparing to take action against companies misusing AI.
As head of the UK’s data protection regulator, Edwards has the power to investigate and enforce the law against companies violating the UK General Data Protection Regulation (UK GDPR).
While the ICO has only issued one UK GDPR penalty primarily related to AI, the regulator has repeatedly laid out its expectations regarding the technology. This article explores Edwards’ speech, his AI enforcement track record, and what we know about the ICO’s position on AI.
AI and consumer trust
“AI and privacy go hand in hand… You cannot expect to utilize AI in your products or services without considering privacy, data protection and how you will safeguard people’s rights,” Edwards said at techUK’s Digital Ethics Summit 2023 on 6 December.
Edwards explained that ensuring people trust AI is a win-win for businesses and consumers.
“If people don’t trust AI, then they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole.”
“This needs addressing. 2024 cannot be the year that consumers lose trust in AI,” Edwards said.
Edwards issued a warning to “bad actors” who “aren’t respecting people’s information” and who are “using technology like AI to gain an unfair advantage over their competitors.”
“Our message to those organizations is clear,” Edwards said. “Non-compliance with data protection will not be profitable.”
“Persistent misuse of customers’ information, or misuse of AI… will always be viewed negatively by my office.”
“Where appropriate, we will seek to impose fines commensurate with the ill-gotten gains achieved through non-compliance,” Edwards warned.
Clearview AI decision
So far, the ICO has only made one UK GDPR enforcement decision primarily concerning AI, against New York-based facial recognition firm Clearview AI.
That case was initiated under Edwards’ predecessor, Elizabeth Denham, and resulted in a fine of over £7.5 million against the company in May 2022.
Clearview was found to have:
- Processed personal data, including “special category data”, without a legal basis.
- Violated the UK GDPR’s rules on:
  - Data retention
  - Data subject rights
Unfortunately for the ICO, its decision was overturned by the First-tier Tribunal (Information Rights) in October. While the tribunal agreed with many of the ICO’s data protection findings, it held that Clearview fell outside the ICO’s jurisdiction. The ICO has said it will appeal.
Snap preliminary enforcement notice
A better guide to the ICO’s position on AI comes from a preliminary enforcement notice delivered to Snap Inc., owner of social media app Snapchat.
The notice, which informs Snap that the ICO has identified potential UK GDPR violations and invites the company to respond, relates to a feature called “My AI”.
My AI is a generative AI chatbot integrated into Snapchat, built on OpenAI’s GPT models. The feature was made available to UK Snapchat+ subscribers in February 2023.
While the full preliminary enforcement notice is not publicly available, the ICO’s press release reveals that Snap might not have properly assessed the “privacy risks” associated with its chatbot.
ICO guidance on AI
In addition to these two cases, the ICO has issued guidance on AI in light of the UK GDPR.
Much of the guidance relates to fundamental issues of UK GDPR compliance as applied to organizations using AI to process personal data, such as:
- Identifying a legal basis
- Conducting a Data Protection Impact Assessment (DPIA)
- Ensuring transparency and accountability
The guidance also covers some more in-depth, technical aspects of developing and using AI systems, including:
- Statistical bias: Controllers should address various forms of statistical bias in AI models, such as representation or sampling bias, measurement bias, and aggregation bias.
- Societal bias: Controllers are expected to consult domain experts and people with lived experiences to identify and mitigate societal biases.
- Data labelling: Controllers should train staff about implicit bias in data labelling and involve community groups in the labelling process where appropriate.
- Data analysis and pre-processing: Controllers should track the development and use of training datasets and assess any biases.
- Model development: Controllers should balance the model’s inductive bias and variance and consider transparency obligations, especially in the context of ‘black-box’ models.
- Model evaluation: Controllers should conduct disaggregated evaluations of AI systems to ensure statistical accuracy and to address any biases in model performance across different demographic groups.
- Deployment and monitoring: Post-deployment, ongoing monitoring of AI systems is essential.
- Human review: Controllers should monitor both AI and human components of hybrid AI systems to ensure fairness and accuracy.
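To make the "disaggregated evaluation" point above concrete, here is a minimal, hypothetical sketch of how a controller might measure a model's accuracy per demographic group rather than relying on a single aggregate figure. The function name, group labels, and data below are illustrative, not drawn from the ICO's guidance:

```python
# Hypothetical sketch of a disaggregated evaluation: computing accuracy
# separately for each demographic group, so performance gaps between
# groups are visible rather than hidden inside one aggregate score.
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return (overall accuracy, per-group accuracy dict)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Illustrative data: the aggregate accuracy (62.5%) masks the fact
# that the model performs far worse for group B than for group A.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

overall, per_group = disaggregated_accuracy(y_true, y_pred, groups)
print(overall)    # 0.625
print(per_group)  # {'A': 1.0, 'B': 0.25}
```

A result like this would flag the kind of group-level performance disparity the guidance expects controllers to identify and address before and after deployment.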
AI regulation is already here
As the EU finalizes its AI Act, and the UK considers a new regulatory approach, it’s important to remember that the use of AI is already regulated under the UK GDPR.
Data protection law applies to almost every aspect of AI development and use:
- You must have a legal basis for processing personal data via AI, whether in training sets or when the AI is in use.
- You must be transparent in your use of AI.
- In many cases, you must conduct a DPIA before implementing an AI system.
- You must enable people to exercise their data protection rights with respect to AI systems.
- You must not use AI in a way that violates the UK GDPR’s rules on automated decision-making.
While the ICO has yet to bring more than one enforcement action directly related to AI, the regulator has made its views on data protection and AI relatively clear.
Meeting the UK GDPR’s requirements when designing, developing, or using AI systems will help you maintain people’s trust, reduce risk, and make the most of this important technology.