AI is embedded in everyday life, business, government, medicine and more. At IBM®, we are helping people and organizations adopt AI responsibly. Only by embedding ethical principles into AI applications and processes can we build systems based on trust.
Accelerate responsible, transparent and explainable AI workflows for both generative AI and machine learning models.
Watch the episode: Trust, transparency and governance in AI
In a new case study featuring IBM, Gartner discusses how to establish a governance framework to streamline the process of detecting and managing technology ethics concerns in AI projects.
IBM announced the general availability of the first models in the watsonx Granite Model Series — a collection of generative AI models to advance the infusion of generative AI into business applications and workflows.
TheStreet spoke to Christina Montgomery, IBM's Chief Privacy and Trust Officer, about how the company is focused on ensuring safe, responsible AI.
The commitments — centered on key principles of safety, security, and trust — support the development of responsible AI.
IBM CEO Arvind Krishna shares three core tenets of smart AI regulation.
In a new series, IBM and the Data & Trust Alliance offer insights on how businesses can earn trust in the era of generative AI.
Awareness of risks and potential mitigations is a crucial first step toward building and using foundation models responsibly.
IBM AI Ethics Global Leader Francesca Rossi speaks to the US National AI Advisory Committee about the future of AI.
The Principles for Trust and Transparency are the guiding values that distinguish IBM’s approach to AI ethics.
At IBM, we believe AI should make all of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few.
IBM clients’ data is their data, and their insights are their insights. We believe that government data policies should be fair and equitable and prioritize openness.
Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.
The Principles are supported by the Pillars of Trust, our foundational properties for AI ethics.
Good design does not sacrifice transparency in creating a seamless experience.
Properly calibrated, AI can assist humans in making fairer choices.
As systems are employed to make crucial decisions, AI must be secure and robust.
Transparency reinforces trust, and the best way to promote transparency is through disclosure.
AI systems must prioritize and safeguard consumers’ privacy and data rights.
Train, validate, tune, and deploy foundation and machine learning models with ease.
Scale AI workloads, for all your data, anywhere.
Accelerate responsible, transparent and explainable data and AI workflows.
Human values are at the heart of responsible AI.
IBM and the Data & Trust Alliance offer insights about the need for governance, particularly in the era of generative AI.
A risk- and context-based approach to AI regulation can mitigate potential risks, including those posed by foundation models.
IBM's AI Ethics Board was established as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI throughout IBM.
Co-chaired by Francesca Rossi and Christina Montgomery, the Board’s mission is to support a centralized governance, review and decision-making process for IBM ethics policies, practices, communications, research, products and services. By infusing our long-standing principles and ethical thinking, the Board is one mechanism by which IBM holds our company and all IBMers accountable to our values.
Read the 2022 IBM Impact Report
Learn more about Francesca
Learn more about Christina
A Policymaker's Guide to Foundation Models
IBM's perspective on the opportunities that foundation models present, as well as their risks and potential mitigations.
Foundation models: Opportunities, risks, and mitigations
Awareness of risks and potential mitigations is a crucial first step toward building and using foundation models responsibly.
Precision regulation for data-driven business models
White paper outlining seven recommendations for policymakers on the risks of data-driven business models.
Precision regulation for AI
Companies should adopt a risk-based AI governance policy framework and targeted policies to develop and operate trustworthy AI.
Responsible advancement of neurotechnology
White paper on the privacy risks of brain-computer interfaces.
Data responsibility
Companies that collect, store, manage or process data have an obligation to handle it responsibly, ensuring data ownership, privacy, security and trust.
Facial recognition
IBM no longer produces facial recognition or analysis software. We believe in a governance framework informed by precision regulation.
Mitigating bias in AI
Five priorities to strengthen the adoption of testing, assessment and mitigation strategies to minimize bias in AI systems.
Learning to trust AI systems
A pioneering paper on accountability, compliance and ethics in the age of smart machines.
Standards for protecting at-risk groups in AI bias auditing
IBM's point of view on protecting at-risk groups in AI bias auditing.
Automation With a Human Touch: How AI Can Revolutionize Our Government.
Addresses ethical concerns raised by the use of technologies to solve society’s problems.
Defines the ethics guidelines for trustworthy AI.
IBM partners with the Vatican to endorse ethical guidelines around AI.
Brings together diverse global voices to define best practices for beneficial AI.
A guide for embedding ethics in AI design and development.
Preparing for Tomorrow by Future-Proofing in the Present.
Putting trust into practice through the responsible use of data and AI.
Explores how AI ethics can progress from abstract theories to concrete practices.