Unpacking TRiSM: Gartner’s Framework to Navigate GenAI
Posted: December 3, 2024
Generative Artificial Intelligence (GenAI) may represent the most significant new technology-driven challenge facing today’s organizations. Many companies feel enormous competitive pressure to jump on the runaway AI train. If there is such a thing as Fear of Missing Out (FOMO) in the corporate setting, generative AI stands at the top of the list. One Salesforce study reports that 84% of Chief Information Officers (CIOs) believe that AI will be as critical to their business as the Internet. As Juan Perez, CIO of Salesforce, puts it, “Generative AI is one of the most transformative technologies of this century.”
On the other hand, especially given the newness and volatility of AI technology, organizations also struggle to understand and manage its risks. The same Salesforce study reports that only 11% of CIOs have fully implemented AI, partly due to concerns about the technology. As a result, many technologists, business functions, and data protection professionals are looking for help in the form of a framework – a lens through which to view AI issues and solutions.
Fortunately, Gartner has made a run at a useful generative AI framework in its AI TRiSM, which stands for Trust, Risk, and Security Management – the key factors Gartner suggests that organizations using, or thinking about using, AI should consider.
- Trust – the expectation that an AI system’s processes and outputs are reliably accurate, complete, fair, ethical, and transparent.
- Risk – the identification of vulnerabilities, limitations, and potential threats to an AI system’s security, privacy, and functions.
- Security – the mitigation of risks related to unauthorized data access, modification, or use.
Gartner proposes that this TRiSM framework helps promote the governance, trustworthiness, fairness, reliability, robustness, efficacy, and data protection of AI models.
Specifically, TRiSM aims to deliver the following for AI:
- Model Governance: Encourages organizations to develop and deploy AI models in compliance with regulatory requirements and organizational policies. This includes maintaining transparency about how organizations build, train, and use those models.
- Trustworthiness: Focuses on building AI systems that stakeholders can trust. This involves ensuring that AI models are fair, unbiased, and result in accurate decisions. In part, organizations use transparency techniques, such as model interpretability and explainability, to achieve this.
- Fairness: Helps ensure that AI models do not perpetuate or amplify biases. This involves using diverse datasets for training and implementing fairness metrics to evaluate model performance (see the sketch after this list).
- Reliability and Robustness: Demonstrates that AI models perform consistently under various conditions and are resilient to adversarial attacks. This includes stress testing models and implementing robust security measures.
- Efficacy: Measures the effectiveness of AI models in achieving their intended outcomes. This involves continuous monitoring and evaluation of model performance.
- Data Protection: Focuses on assurance that the data used by AI models is secure and that sound privacy practices apply to personal data. Data protection techniques include but are not limited to data encryption, access controls, and compliance with data protection regulations related to notice, consent/legal basis, and individual rights responses.
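To make the fairness point concrete, here is a minimal sketch of one common fairness metric – demographic parity difference – computed in Python. The column names, sample data, and 0.10 tolerance are hypothetical illustrations, not part of Gartner’s framework.

```python
# Minimal sketch of one common fairness metric: demographic parity
# difference. Column names ("group", "approved") and the 0.10
# tolerance are hypothetical, not prescribed by TRiSM.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Largest gap in positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Example: loan decisions labeled by demographic group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_difference(decisions, "group", "approved")
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance; set per organizational policy
    print("Gap exceeds tolerance; review training data and model.")
```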
The framework sits on four enabling pillars – Explainability/Model Monitoring, ModelOps, AI Application Security, and Privacy.
Explainability/model monitoring & ModelOps
A core concept in experimental science is the idea that researchers must expose all processes, assumptions, and facts of a given experiment. This transparency allows the scientific community to pressure-test the validity of the study, understand its weaknesses and strengths, and conduct future studies that improve upon the original one. To promote trust and repeatability, AI models must follow the same path. Users and developers of generative AI who follow the TRiSM framework should be able to clearly explain how the model operates, its strengths and weaknesses (including bias and other ethical issues), the datasets that trained the model, and the expectations for process and outcomes.
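As an illustration of the explainability techniques such transparency calls for, the sketch below computes permutation feature importance with scikit-learn – one simple way to articulate which inputs a model leans on. The dataset and model are generic stand-ins, not specific to any GenAI deployment.

```python
# Hedged sketch: one simple explainability technique, permutation
# feature importance. The dataset and classifier are generic
# stand-ins used only to make the idea concrete.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report, in plain terms, the features the model relies on most.
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```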
The TRiSM framework also cautions that using AI should not be a one-and-done exercise. Rather, the organization will need to routinely manage and monitor model operations (also called ModelOps) to help demonstrate compliance with legal and ethical expectations. This attention to ModelOps from development to testing, and on through deployment and ongoing use, helps the organization adjust when needed and maintain confidence in results.
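What such routine monitoring might look like in practice: the sketch below checks for input drift using a population stability index (PSI), a common ModelOps health check. The synthetic data and the 0.2 alert threshold are hypothetical; real deployments would set thresholds per policy.

```python
# Illustrative ModelOps-style drift check using the population
# stability index (PSI). The synthetic data and 0.2 alert level
# (a common rule of thumb) are hypothetical.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline (training) sample and live traffic."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)  # baseline distribution
live_scores = rng.normal(0.4, 1.2, 10_000)      # shifted live traffic

psi = population_stability_index(training_scores, live_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # rule-of-thumb alert level; tune per policy
    print("Significant drift detected; retrain or review the model.")
```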
AI application security & privacy
Concepts to warm the heart of any data protection professional, AI application security and privacy are the final two pillars of Gartner’s AI TRiSM framework. Though these factors apply to any system, Gartner argues that security and privacy, applied to AI, require new, specialized methods to achieve safety and compliance. For example, in addition to commonly applied controls like access restrictions and data encryption, responsible AI deployers should pay particular attention to adversarial attacks, which require specialized controls to prevent incidents. Certainly, some of the privacy challenges – such as clarity on the right to use datasets for model training and deployment, and how to handle individual rights as applied to generative AI outputs – require specialized thinking to solve.
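As a minimal sketch of one of the commonly applied controls mentioned above – encrypting data at rest – the snippet below uses symmetric (Fernet) encryption from the widely used Python cryptography library. The sample record is hypothetical, and key management is deliberately out of scope.

```python
# Minimal sketch of encrypting a training record at rest with
# symmetric (Fernet) encryption from the "cryptography" library.
# The record is a hypothetical stand-in; key storage and rotation
# are out of scope here.
from cryptography.fernet import Fernet

# In practice the key would come from a managed key vault,
# not be generated inline.
key = Fernet.generate_key()
fernet = Fernet(key)

training_record = b"name=Jane Doe,score=0.82"  # stand-in personal data
ciphertext = fernet.encrypt(training_record)

# Only holders of the key (an access control in itself) can decrypt.
assert fernet.decrypt(ciphertext) == training_record
```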
Unpacking TRiSM for the privacy pro
It is likely that, at some point, today’s privacy professionals will face the task of guiding their organization through the AI maze. Though standard privacy principles – transparency, security, and so on – apply equally well to AI, some applications of those principles are unique to, or especially critical in, the AI context. Additionally, without a scaffold on which to build consistent guidance, it is easy to get lost in the uncertainty and complexity of innovative AI tools.
TRiSM can provide a set of conceptual flashcards that a privacy advisor can use to methodically ask the right questions and help put the right answers in place. Applying the scientific research analogy, the AI TRiSM framework encourages rigor in articulating how the model was trained, how it operates, and what its strengths and weaknesses are. This transparency not only builds trust that the current operation and outputs are free of bias and defect and comply with internal and external rules, but also helps others replicate and pressure-test the model. And just as researchers carefully monitor and control an experiment throughout its lifecycle, TRiSM helps the AI team systematically monitor and report on the right things, resulting in sound AI use that builds trust and compliance – which, of course, is what privacy is all about.