It was developed by research and consulting firm Gartner®, and “provides proactive solutions to identify and mitigate these risks, ensuring reliability, trustworthiness and security.”2
While numerous separate frameworks focus specifically on AI trust, AI risk or AI security, researchers note that they are challenging to integrate and synchronize. This lack of coordination can result in fragmented AI management. It can also leave knowledge gaps about the risks and security consequences of AI implementation and AI practices.
The AI TRiSM framework, however, provides a unified approach. It brings together the important parts of different frameworks for more comprehensive management of AI technologies.
Supporters of AI TRiSM consider the framework important for mitigating risks and cyberthreats related to the advancement and increasing use of generative AI (gen AI), such as large language models (LLMs). Generative AI use can increase organizations’ attack surfaces, enable more sophisticated cyberattacks by hackers and raise novel ethical considerations. The benefits of AI TRiSM applications in areas like healthcare and finance include risk mitigation, enhanced measures for model monitoring, and safeguards against adversarial attacks and unauthorized access.3
According to Gartner, “AI trust, risk and security management (AI TRiSM) ensures:
AI governance refers to the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical. Effective AI governance includes risk management—with mechanisms to address potential biases, data privacy violations and other concerns—while building trust and supporting innovation.
It involves the continuous monitoring and evaluation of AI systems to ensure they comply with established ethical norms and legal regulations. AI governance includes data governance, a data management discipline designed to maintain safe, high-quality data that is easily accessible for data discovery and business intelligence initiatives.
Different organizations and frameworks emphasize various guiding principles and goals for determining the trustworthiness of AI systems. Frequently cited principles of trustworthy AI include accountability, explainability and interpretability.
When AI model users and other stakeholders have trouble understanding how a model functions, it can hinder their trust in the AI system. The right processes and methodologies can help users to understand and trust the decision-making processes and outputs of machine learning models.
Fairness often involves mitigating or eliminating bias in AI models and data during the AI development lifecycle. AI models absorb the biases of society that can be embedded in their training data. Biased data collection that reflects societal inequity can result in harm to historically marginalized groups in credit scoring, hiring and other areas.
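One common fairness test of the kind described above is a demographic parity check: comparing the rate of favorable model outcomes across groups. The sketch below is a hypothetical illustration, not part of any specific AI TRiSM product; the group labels, data and threshold are assumptions.

```python
# Hypothetical fairness test: measure the gap in positive-prediction rates
# between demographic groups. A large gap can signal bias absorbed from
# training data (e.g., in credit scoring or hiring).

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [p / t for t, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Example: a credit model approves 3 of 4 applicants in group "A"
# but only 1 of 4 in group "B" -- a 0.5 gap a fairness test would flag.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A governance program would typically set an acceptable threshold for this gap and rerun the check whenever the model or its training data changes.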
Identifying and addressing bias in AI requires the ability to direct, manage and monitor an organization’s AI activities. This can be achieved through AI governance—specifically, the creation of policies and practices to guide responsible AI development and use of AI technologies.
Reliability generally refers to the ability of something to function as anticipated or required for a given period under certain conditions. With respect to AI systems, meeting performance expectations includes providing correct outputs for a period that may extend as long as the lifetime of the system.5
Data protection is the practice of safeguarding sensitive data from loss and corruption. Data protection is intended to preserve data availability, ensuring users can access data for business operations, even if data is damaged or lost in a data breach or malware attack.
Data protection encompasses data security (the protection of digital information from unauthorized access, corruption or theft) and data privacy (the principle that a person should have control over their personal data). Data protection is key to compliance with major regulatory regimes, such as the European General Data Protection Regulation (GDPR).
Both traditional technologies and newer, AI-specific solutions support AI TRiSM. The former include tools for providing identity and access management (IAM) and solutions for data security posture management.
AI-specific technologies for AI TRiSM vary by provider. Some focus on certain functions, such as security or compliance. Others offer more comprehensive products with an array of capabilities, including:
AI governance
AI data protection and classification
AI runtime inspection and enforcement (see additional information in the following section)
According to additional Gartner guidance released in February 2025, AI TRiSM solutions should also include:7
An inventory of AI entities used in the organization, such as models, agents and applications in various configurations, ranging from embedded AI in off-the-shelf, third-party applications to bring-your-own AI, to retrieval-augmented generation systems, to first-party models
Data used to train or fine-tune AI models, provide context for user queries in a RAG system, or feed agentic AI systems
Continuous assurance and evaluation of the performance, reliability, security and safety expectations and metrics used as baselines. Assurances and evaluations are applied both predeployment and postdeployment (out of band).
Runtime inspection and enforcement applied to models, applications and agent interactions to support transactional alignment with organizational governance policies. Applicable connections, processes, communications, inputs and outputs are inspected for violations of policies and expected behavior. Anomalies are highlighted and either blocked, autoremediated or forwarded to humans or incident response systems for investigation, triage, response and remediation.
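The runtime inspection pattern described above can be sketched as a simple policy gate: each model input or output is checked against policies, and violations are blocked, auto-remediated or escalated. This is a minimal illustration under assumed policies; the policy names, patterns and actions are hypothetical, not a vendor API.

```python
# Minimal sketch of runtime inspection and enforcement: model inputs and
# outputs are checked against policies; violations are blocked, redacted
# (auto-remediated) or escalated to humans / incident response.
import re

POLICIES = [
    # (name, pattern, action) -- action is "block", "redact" or "escalate"
    ("ssn_in_text", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "redact"),
    ("prompt_injection", re.compile(r"ignore previous instructions", re.I), "block"),
]

def inspect(text):
    """Return (decision, text) after applying each matching policy."""
    for name, pattern, action in POLICIES:
        if pattern.search(text):
            if action == "block":
                return ("blocked", None)
            if action == "redact":
                text = pattern.sub("[REDACTED]", text)  # auto-remediate in place
            else:
                return ("escalated", text)  # forward to incident response
    return ("allowed", text)

print(inspect("Customer SSN is 123-45-6789."))
# ('allowed', 'Customer SSN is [REDACTED].')
```

In a production deployment this gate would sit in the request path of every model, application and agent interaction, with anomalies logged for triage rather than silently dropped.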
There are multiple use cases for implementing AI TRiSM in the deployment and management of enterprise AI. These include:
Medical professionals increasingly use AI-powered tools for a range of purposes, from medical device automation to analyzing images. An AI TRiSM program can help protect the healthcare data used in these systems from data breaches. Measures such as access controls, for example, can mitigate potential risks of unauthorized access.
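The access controls mentioned above are often implemented as role-based checks that keep AI pipelines away from raw patient data. The roles and permissions below are hypothetical examples for illustration, not a clinical or regulatory standard.

```python
# Illustrative role-based access control (RBAC) for healthcare data:
# each role maps to an allowed set of permissions, and the AI pipeline
# is only granted access to de-identified records.
ROLE_PERMISSIONS = {
    "physician":   {"read_record", "write_record"},
    "billing":     {"read_billing"},
    "ml_pipeline": {"read_deidentified"},  # the AI system never sees raw PHI
}

def can_access(role, permission):
    """Return True only if the role explicitly holds the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("physician", "read_record"))    # True
print(can_access("ml_pipeline", "read_record"))  # False
```

Deny-by-default behavior (unknown roles get an empty permission set) is the key design choice here: unauthorized access is prevented unless a permission was deliberately granted.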
When datasets that contain demographic biases are used to train AI algorithms, the outcomes can be biased. This has been a known problem in the financial industry, affecting loan approvals, interest rate charges and more. In Denmark, the Danish Business Association applied AI TRiSM practices by performing fairness tests to validate predictions for AI models that oversee financial transactions, increasing customer trust.8
In addition to ensuring greater fairness in financial transactions, AI TRiSM measures can help protect financial institutions’ fraud detection systems from adversarial attacks.9 These AI solutions also help banks comply with legal requirements on consumer protections and safeguarding sensitive information.
Govern generative AI models from anywhere and deploy on the cloud or on premises with IBM watsonx.governance.
See how AI governance can help increase your employees’ confidence in AI, accelerate adoption and innovation, and improve customer trust.
Prepare for the EU AI Act and establish a responsible AI governance approach with the help of IBM Consulting.
1 Gartner Glossary, AI TRiSM, https://www.gartner.com/en/information-technology/glossary/ai-trism. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
2, 4 Gartner Article, Tackling Trust, Risk and Security in AI Models, Avivah Litan, December 24, 2024, https://www.gartner.com/en/articles/ai-trust-and-ai-risk.
3, 8 “Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, applications, challenges and future research directions.” Expert Systems with Applications. 15 April 2024.
5 “AI Risks and Trustworthiness.” National Institute of Standards and Technology. Accessed 23 February 2025.
6 The Gartner Framework to Manage AI Governance, Trust, Risk and Security. [Webinar] Gartner. Accessed 28 January 2025.
7 Gartner Article, Market Guide for AI Trust, Risk and Security Management, Avivah Litan, Max Goss, Sumit Agarwal, Jeremy D'Hoinne, Andrew Bales, Bart Willemsen. February 18, 2025. https://www.ibm.com/account/reg/signup?formid=urx-53702
9 ”The Role of Artificial Intelligence in Modern Finance: Current Applications and Future Prospects.” Applied and Computational Engineering. December 2024.