9 ways to balance AI innovation with privacy principles
Posted: May 28, 2024
One of the worst fears of any responsible privacy office is to be seen by the organization as the ‘Office of No.’
In fact, most privacy professionals believe in innovation and data-driven decision-making, and they work hard to support business goals through the responsible data decisions they advocate.
The challenge, of course, is to balance the business’s need to move fast with the care and consideration it takes to handle personal data – the fuel of many critical business decisions.
Today’s marketplace is a race to employ Artificial Intelligence (AI) to solve important business problems.
The combination of business pressure to move fast and the uncertainties inherent in any new entry into an evolving field can increase this internal friction between privacy professionals who counsel caution and business drivers who want quick AI wins.
And we’ve seen plenty of big corporations attempting to navigate the complexities of AI and privacy, with mixed results.
Many seem to sweep it under the rug with quiet updates to privacy policies and terms of service.
We saw Zoom fall victim to this in 2023. More recently, Slack came under fire for burying a note deep in an outdated privacy policy telling users they needed to email the company to opt out of their data being used to train its AI systems.
What we’re seeing in real time is how not to handle AI in your privacy policies, and how quickly these issues, once they’re spotted or go viral, can turn into a global PR crisis.
So how can a privacy professional help the business move fast, but safely, with AI? Here are a few ideas…
Remember: Data usage terms are a form of PR
AI is a hot topic for everyone right now, including your end users. With heightened awareness of how personal data is being used, you need to be very deliberate in how you describe and articulate exactly what you’re asking people to opt in to or out of.
The language you use in terms and conditions or privacy policies has to be understandable, clear, and transparent. As seen with Zoom and Slack, attempts to hide or gloss over key elements risk irreparable reputational damage. Too often the language gets bogged down in dense legal jargon, and legal accuracy alone isn’t enough. Treat it like PR: how would you write it if it were on the front page of your website?
Make consent an integral part of your AI strategy
As AI technology rapidly evolves, you’re best placed if you root your programs in foundational consent. If you know what data your AI feature collects, make that clear to the end user, along with exactly what you plan to do with that data afterward.
Consent unlocks data, meaning you can do more with it. So, start off on the right foot by requesting clear, opt-in consent, and steer clear of vague opt-out statements; these days, opt-out requests trigger alarm bells. Ideally, you’ll offer granular consent options that specify and obtain consent for each area of your data strategy, as in the sketch below.
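To make “granular” concrete, here is a minimal Python sketch of per-purpose, opt-in consent records. The names (`ConsentRecord`, `may_use_for`, the purpose labels) are illustrative assumptions, not a real consent platform’s API; the point is the design: one explicit decision per purpose, and no record means no consent.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One explicit, per-purpose consent decision for one user (hypothetical schema)."""
    user_id: str
    purpose: str          # e.g. "ai_model_training" -- exactly one purpose per record
    granted: bool         # True only on an affirmative opt-in action
    policy_version: str   # which notice text the user actually saw
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_use_for(records: list[ConsentRecord], user_id: str, purpose: str) -> bool:
    """Default to 'no': use data for a purpose only with a recorded opt-in."""
    decisions = [r for r in records if r.user_id == user_id and r.purpose == purpose]
    if not decisions:
        return False  # no record means no consent -- never an implicit opt-in
    # Honor the user's most recent decision, so withdrawals take effect.
    return max(decisions, key=lambda r: r.timestamp).granted
```

Recording the policy version alongside each decision also gives you an audit trail: if the wording of your notice changes materially, you can see exactly which version each user agreed to.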
Start with non-personal data sets and uses
There may be innovative uses of AI with the potential for enormous impact that do not involve human beings – or their data – at all. Beginning the AI journey with uses that involve no personal information and do not affect individual human beings may allow the organization to jump into AI innovation at full steam without running afoul of privacy laws or putting human rights at risk. It is still important to consider non-privacy risks associated with AI, such as Intellectual Property (IP) and copyright issues, but removing personal data from the equation (literally) can help reduce complexity.
Launch and learn
The most impactful uses of AI may require some personal information. Conducting a series of small pilots can allow the organization to build its AI privacy muscle before tackling large, complex projects. Starting “small” may mean granting access to a handful of privacy-trained users, using limited and pseudonymized data sets, and/or allowing only a few scope-limited practical applications with a high degree of human involvement. Whatever “small” means to an organization, rapid testing, learning, adjusting, and repeating in limited circumstances can help balance the need to move forward with privacy concerns.
Oversight
Research and education institutions have long employed the notion of a human subject review board – often a cross-functional committee that reviews and approves or denies proposed research projects involving human subjects.
An organization might benefit from a similar approach to AI by establishing a team committed to reviewing proposed AI projects that involve personal data or that could affect decisions about people.
Regardless of the governance structure implemented, having the right set of checks and balances in place to review individual AI use cases with privacy and human rights in mind can be helpful – especially until the organization’s institutional knowledge about AI and its uses reaches the point where policy creation can provide more general guidelines.
Conduct Privacy Risk Assessments
Some jurisdictions – such as the US, the EU, and Brazil – may require Data Protection Impact Assessments (DPIAs) or Privacy Impact Assessments (PIAs) for some AI applications if those applications cross trigger thresholds like automated decision-making, sensitive data use, vulnerable data subjects, profiling, or large-scale processing.
Even in jurisdictions without a concrete requirement to conduct DPIAs/PIAs, these processes can help an organization have and document productive conversations about privacy risks, mitigations, and controls. In addition to demonstrating good faith and rigor in privacy decisions, DPIAs/PIAs create legacy knowledge about AI decisions within the organization, with each assessment building on its predecessors and further educating each generation of AI projects.
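As an illustration only, the trigger thresholds above can be turned into a lightweight intake screen that flags proposed projects for a full DPIA/PIA. Everything here (`AIUseCase`, the field names) is a hypothetical sketch, not legal advice; real thresholds vary by jurisdiction and need counsel’s confirmation.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical intake form for a proposed AI project."""
    name: str
    automated_decisions: bool   # decisions with legal or similarly significant effects
    sensitive_data: bool        # e.g. health, biometric, or financial data
    vulnerable_subjects: bool   # e.g. children or employees
    profiling: bool
    large_scale: bool

def dpia_recommended(use_case: AIUseCase) -> bool:
    """Screening heuristic: any single common trigger escalates to DPIA/PIA review."""
    return any([
        use_case.automated_decisions,
        use_case.sensitive_data,
        use_case.vulnerable_subjects,
        use_case.profiling,
        use_case.large_scale,
    ])

# A chatbot that profiles customers at scale gets flagged for assessment.
chatbot = AIUseCase("support-chatbot", False, False, False, True, True)
assert dpia_recommended(chatbot)
```

Even a crude screen like this helps operationalize the point: the decision to assess happens at project intake, not after launch.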
Consider FIPs
As with any technological advancement, there is a learning curve to climb before an organization can master AI as it relates to its business. However, a company that regularly compares the Fair Information Practices (FIPs) to its business activities will find that the same FIPs apply to AI just as well as to any other data-related activity.
Having a robust conversation about accountability, transparency, consent, and so on – perhaps as part of the DPIA/PIA process – can help team members organize their privacy thinking, even amid the intricacies of innovative technology. Remembering that today’s general privacy laws already apply to AI activities – through existing requirements related to automated decision-making, legal basis, notice, and consent – can also help structure principle-based compliance in a new and emerging field.
Assemble an AI SWAT team
Establishing an AI Center of Excellence (COE) can help build accountability and cooperation across the organization. It may be helpful to designate cross-functional responsibilities and specialties within the COE, spanning data governance, data science, security, data architecture, privacy, legal, and ethics.
Each member of the COE can keep abreast of AI developments within their specialty, share information across the group, evaluate proposed AI projects together, and train the rest of the organization in responsible AI.
Some organizations also use this team to consider real-life AI pilots, guide implementation teams, and analyze what went right with the activities and what could have gone better (including how to do better next time). This team might eventually be the group to draft organizational policies and procedures to help guide future projects.
Consider Privacy Enhancing Technologies and Techniques
Ironically, technology can help reduce the privacy risk of using technology. Privacy Enhancing Technologies that clean, minimize, mask or pseudonymize, restrict, and delete personal data can increase privacy protections. Technology can also enhance privacy by tracking data provenance and permissions, often through metadata.
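As one small example of the masking/pseudonymization point, here is a hedged Python sketch using keyed hashing (HMAC-SHA256). The key name and record fields are illustrative; in practice the key would live in a secrets manager, and this is pseudonymization rather than anonymization, since whoever holds the key could re-identify the data.

```python
import hashlib
import hmac

# Assumption: in production this comes from a secrets manager, never source code.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, keyed pseudonym.

    The same input always yields the same pseudonym, so joins across
    data sets still work, but the mapping cannot be reversed without
    the key. Protect the key as carefully as the original data.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "ada@example.com", "plan": "pro"}
safe_record = {**record, "email": pseudonymize(record["email"])}
```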
One of the challenges of working with data sets involving personal data is that it can be possible to re-identify individuals or link data sets in ways that threaten privacy. Differential privacy enables analysis and sharing of results derived from personal data while protecting the privacy of every individual in the data set. Advanced analytical techniques like this can insert sound privacy into AI activities.
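For intuition, here is a minimal sketch of the classic Laplace mechanism behind many differentially private counts. The epsilon value and query are illustrative; real deployments use hardened, audited libraries rather than hand-rolled noise.

```python
import numpy as np

def dp_count(values: list[bool], epsilon: float = 0.5) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so noise drawn from Laplace(1/epsilon)
    gives epsilon-differential privacy for this single query.
    """
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: report roughly how many users opted in, with enough noise
# that no single individual's presence can be confidently inferred.
opted_in = [True, False, True, True, False]
print(dp_count(opted_in))
```

Smaller epsilon means more noise and stronger privacy; weighing that accuracy trade-off per use case is exactly the kind of decision an AI review team can own.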
Even organizations that have banned AI by policy are likely to have pockets of AI activity going on in practice. A privacy office that can help organize privacy-sensitive AI expertise, pilots, governance, and technology will be an office that maintains both its relevance and its practical thought leadership in business-supporting activities.