What is AI TRiSM?


Authors

Alice Gomstyn, Staff Writer, IBM Think

Alexandra Jonker, Staff Editor, IBM Think


AI TRiSM, or artificial intelligence (AI) trust, risk and security management, “ensures AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and data protection. This includes solutions and techniques for model interpretability and explainability, AI data protection, model operations and adversarial attack resistance.”1

The framework was developed by research and consulting firm Gartner® and “provides proactive solutions to identify and mitigate these risks, ensuring reliability, trustworthiness and security.”2


Why is AI TRiSM important?

While numerous separate frameworks focus specifically on AI trust, AI risk or AI security, they are challenging to integrate and synchronize, according to researchers. This lack of coordination can result in fragmented AI management. It can also lead to knowledge gaps in the risks and security consequences stemming from AI implementation and AI practices.

The AI TRiSM framework, however, provides a unified approach. It brings together the important parts of different frameworks for more comprehensive management of AI technologies.

Supporters of AI TRiSM consider the framework important for mitigating risks and cyberthreats related to the advancement and increasing use of generative AI (gen AI), such as large language models (LLMs). Generative AI use can increase organizations’ attack surfaces, enable more sophisticated cyberattacks by hackers and raise novel ethical considerations. The benefits of AI TRiSM applications in areas like healthcare and finance include risk mitigation, enhanced measures for model monitoring, and safeguards against adversarial attacks and unauthorized access.3


The principles and practices of AI TRiSM

According to Gartner, “AI trust, risk and security management (AI TRiSM) ensures:

  • Governance
  • Trustworthiness
  • Fairness
  • Reliability
  • Data protection in AI deployments”4

AI governance

AI governance refers to the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical. Effective AI governance includes risk management—with mechanisms to address potential biases, data privacy violations and other concerns—while building trust and supporting innovation.

It involves the continuous monitoring and evaluation of AI systems to ensure they comply with established ethical norms and legal regulations. AI governance includes data governance, a data management discipline designed to maintain safe, high-quality data that is easily accessible for data discovery and business intelligence initiatives.

Trustworthiness

Different organizations and frameworks emphasize various guiding principles and goals for determining the trustworthiness of AI systems. Frequently cited principles of trustworthy AI include accountability, explainability and interpretability.

When AI model users and other stakeholders have trouble understanding how a model functions, it can hinder their trust in the AI system. The right processes and methodologies can help users to understand and trust the decision-making processes and outputs of machine learning models.

Fairness

Fairness often involves mitigating or eliminating bias in AI models and data during the AI development lifecycle. AI models absorb the biases of society that can be embedded in their training data. Biased data collection that reflects societal inequity can result in harm to historically marginalized groups in credit scoring, hiring and other areas.

Identifying and addressing bias in AI requires the ability to direct, manage and monitor an organization’s AI activities. This can be achieved through AI governance—specifically, the creation of policies and practices to guide responsible AI development and use of AI technologies.

Reliability

Reliability generally refers to the ability of something to function as anticipated or required for a given period under certain conditions. With respect to AI systems, meeting performance expectations includes providing correct outputs for a period that may extend as long as the lifetime of the system.5

Data protection

Data protection is the practice of safeguarding sensitive data from loss and corruption. Data protection is intended to preserve data availability, ensuring users can access data for business operations, even if data is damaged or lost in a data breach or malware attack.

Data protection encompasses data security (the protection of digital information from unauthorized access, corruption or theft) and data privacy (the principle that a person should have control over their personal data). Data protection is key to compliance with major regulatory regimes, such as the European General Data Protection Regulation (GDPR).

What technologies support AI TRiSM?

Both traditional technologies and newer, AI-specific solutions support AI TRiSM. The former include tools for identity and access management (IAM) and solutions for data security posture management.

AI-specific technologies for AI TRiSM vary by provider. Some focus on certain functions, such as security or compliance. Others offer more comprehensive products with an array of capabilities, including:

  • AI governance

  • AI data protection and classification

  • AI runtime inspection and enforcement (see additional information in the following section)

Mandatory features for AI TRiSM solutions

According to additional Gartner guidance released in February 2025, AI TRiSM solutions should also include:7

AI catalog

An inventory of AI entities used in the organization, such as models, agents and applications in various configurations, ranging from embedded AI in off-the-shelf, third-party applications to bring-your-own AI, to retrieval-augmented generation (RAG) systems, to first-party models.
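As a rough illustration of what such an inventory could look like in practice, the following is a minimal sketch. The entity kinds, sourcing categories and field names are assumptions for the example, not a Gartner-defined schema.

```python
from dataclasses import dataclass

# Illustrative AI catalog entry; fields are assumptions, not a standard schema.
@dataclass
class AIEntity:
    name: str
    kind: str      # e.g. "model", "agent" or "application"
    sourcing: str  # e.g. "embedded", "byo-ai", "rag" or "first-party"
    owner: str = "unassigned"

class AICatalog:
    """A minimal in-memory inventory of the AI entities an organization uses."""
    def __init__(self):
        self._entries = []

    def register(self, entity: AIEntity):
        self._entries.append(entity)

    def by_sourcing(self, sourcing: str):
        # Filter the inventory by how the AI entity is sourced.
        return [e for e in self._entries if e.sourcing == sourcing]

catalog = AICatalog()
catalog.register(AIEntity("support-chatbot", "application", "rag", owner="cx-team"))
catalog.register(AIEntity("fraud-scorer", "model", "first-party", owner="risk"))
print([e.name for e in catalog.by_sourcing("rag")])  # ['support-chatbot']
```

In a real deployment the catalog would be backed by persistent storage and populated by automated discovery, but the core idea is the same: every model, agent and application is registered with an owner and a sourcing category so it can be governed.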

AI data mapping

A mapping of the data used to train or fine-tune AI models, provide context for user queries in a RAG system, or feed agentic AI systems.

Continuous assurances and evaluation

Continuous assurances and evaluation of performance, reliability, security or safety expectations, with metrics that are used as baselines. Assurances and evaluations are applied both predeployment and postdeployment (out of band).
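One way to picture evaluating against baselines is a drift check that compares current metrics to recorded baseline values. The specific metrics and tolerances below are hypothetical, chosen only to make the sketch concrete.

```python
# Hypothetical baselines and allowed drift per metric (illustrative values).
BASELINES = {"accuracy": 0.90, "latency_p95_ms": 250.0}
TOLERANCE = {"accuracy": -0.02, "latency_p95_ms": 50.0}

def evaluate(current: dict) -> list:
    """Return the metrics that drifted beyond tolerance from their baseline."""
    violations = []
    for metric, baseline in BASELINES.items():
        drift = current[metric] - baseline
        # Accuracy may not drop below tolerance; latency may not rise above it.
        if metric == "accuracy" and drift < TOLERANCE[metric]:
            violations.append(metric)
        elif metric == "latency_p95_ms" and drift > TOLERANCE[metric]:
            violations.append(metric)
    return violations

print(evaluate({"accuracy": 0.86, "latency_p95_ms": 240.0}))  # ['accuracy']
```

Run continuously, a check like this turns the baseline metrics into an assurance signal: a nonempty violation list would trigger investigation before (or after) a model version is deployed.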

Runtime inspection and enforcement

Applied to models, applications and agent interactions to support transactional alignment with organizational governance policies. Applicable connections, processes, communications, inputs and outputs are inspected for violations of policies and expected behavior. Anomalies are highlighted and either blocked, autoremediated or forwarded to humans or incident response systems for investigation, triage, response and applicable remediation.
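A toy version of such an inspection step might scan model inputs and outputs against a policy list and map each violation to an enforcement action. The policies, patterns and actions here are assumptions for the sketch, not part of any specific product.

```python
import re

# Illustrative policies: (name, pattern, action). Patterns are assumptions.
POLICIES = [
    ("pii_ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block"),
    ("profanity", re.compile(r"\bdamn\b", re.IGNORECASE), "remediate"),
]

def inspect(text: str):
    """Inspect a model input or output and decide an enforcement action."""
    for name, pattern, action in POLICIES:
        if pattern.search(text):
            if action == "block":
                return ("blocked", name)
            if action == "remediate":
                # Autoremediate by redacting the offending span.
                return ("remediated", name, pattern.sub("[redacted]", text))
    return ("allowed", None)

print(inspect("My SSN is 123-45-6789"))  # ('blocked', 'pii_ssn')
print(inspect("All clear"))              # ('allowed', None)
```

A production system would also cover the "forward to humans or incident response" path described above, typically by emitting an event for triage rather than returning a tuple.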

Real-world use cases for AI TRiSM

There are multiple use cases for implementing AI TRiSM in the deployment and management of enterprise AI. These include:

Safeguarding sensitive healthcare data

Medical professionals increasingly use AI-powered tools for a range of purposes, from medical device automation to analyzing images. An AI TRiSM program can help protect the healthcare data used in these systems from data breaches. Measures such as access controls, for example, can mitigate potential risks of unauthorized access.
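An access-control measure of the kind mentioned above can be as simple as role-based permission checks in front of patient data. The roles and permissions below are illustrative assumptions, not a healthcare standard.

```python
# Illustrative role-based access control for patient records.
# Roles and permission names are assumptions for this sketch.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "billing": {"read_billing"},
    "ml_pipeline": {"read_deidentified"},  # AI systems see de-identified data only
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role's permission set includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("physician", "read_record"))  # True
print(authorize("billing", "read_record"))    # False
```

Note the `ml_pipeline` role: restricting an AI system to de-identified data is one way an AI TRiSM program can limit the blast radius of a breach.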

Improving the customer experience of financial transactions

When datasets that contain demographic biases are used to train AI algorithms, the outcomes can be biased. This has been a known problem in the financial industry, affecting loan approvals, interest rate charges and more. In Denmark, the Danish Business Association applied AI TRiSM practices by performing fairness tests to validate predictions for AI models that oversee financial transactions, increasing customer trust.8
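A common form of fairness test is a demographic parity check, which compares outcome rates across groups. The sketch below uses made-up approval decisions and a hypothetical threshold; it is one simple fairness metric among many, not the specific tests used in the cited case.

```python
# Hypothetical fairness test: demographic parity on approval decisions.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group: dict) -> float:
    """Difference between the highest and lowest group approval rates."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Made-up decisions (1 = approved, 0 = denied) for two demographic groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 0.375 approval rate
}
gap = demographic_parity_gap(decisions)
THRESHOLD = 0.1  # illustrative tolerance
print(round(gap, 3))                                        # 0.375
print("fails fairness test" if gap > THRESHOLD else "passes")
```

A gap well above the tolerance, as here, would flag the model for bias investigation before its predictions are trusted in production.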

Preventing banking fraud and supporting regulatory compliance

In addition to ensuring greater fairness in financial transactions, AI TRiSM measures can help protect financial institutions’ fraud detection systems from adversarial attacks.9 These AI solutions also help banks comply with legal requirements on consumer protections and safeguarding sensitive information.


Footnotes

1 Gartner Glossary, AI TRiSM, https://www.gartner.com/en/information-technology/glossary/ai-trism. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

2, 4 Gartner Article, Tackling Trust, Risk and Security in AI Models, Avivah Litan, December 24, 2024, https://www.gartner.com/en/articles/ai-trust-and-ai-risk.

3, 8 “Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, applications, challenges and future research directions.” Expert Systems with Applications. 15 April 2024.

5 “AI Risks and Trustworthiness.” National Institute of Standards and Technology. Accessed 23 February 2025.

6 The Gartner Framework to Manage AI Governance, Trust, Risk and Security. [Webinar] Gartner. Accessed 28 January 2025.

7 Gartner Article, Market Guide for AI Trust, Risk and Security Management, Avivah Litan, Max Goss, Sumit Agarwal, Jeremy D'Hoinne, Andrew Bales, Bart Willemsen. February 18, 2025. https://www.ibm.com/account/reg/signup?formid=urx-53702

9 “The Role of Artificial Intelligence in Modern Finance: Current Applications and Future Prospects.” Applied and Computational Engineering. December 2024.