AI is embedded in everyday life, business, government, medicine and more. At IBM®, we are helping people and organizations adopt AI responsibly. Only by embedding ethical principles into AI applications and processes can we build systems based on trust.
Watsonx.ai brings together traditional machine learning and new generative AI capabilities powered by foundation models.
The commitments — centered on key principles of safety, security, and trust — support the development of responsible AI.
IBM CEO Arvind Krishna shares three core tenets of smart AI regulation.
In a new series, IBM and the Data & Trust Alliance offer insights on how businesses can earn trust in the era of generative AI.
Awareness about risks and potential mitigations is a crucial first step toward building and using foundation models responsibly.
IBM AI Ethics Global Leader Francesca Rossi speaks to the US National AI Advisory Committee about the future of AI.
Chief Privacy & Trust Officer Christina Montgomery discusses the need for responsible AI.
Insights contributed by IBM help enable new AI governance professionals around the world.
Chief Privacy & Trust Officer Christina Montgomery testifies before the US Senate Judiciary Committee on oversight of AI.
A risk- and context-based approach to AI regulation is the most effective strategy for minimizing the risks of AI, including those posed by foundation models.
When ethically designed and responsibly brought to market, generative AI offers unprecedented opportunities to benefit business and society.
The Principles for Trust and Transparency are the guiding values that distinguish IBM’s approach to AI ethics.
At IBM, we believe AI should make all of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few.
IBM clients’ data is their data, and their insights are their insights. We believe that government data policies should be fair and equitable and prioritize openness.
Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.
The Principles are supported by the Pillars of Trust, our foundational properties for AI ethics.
Good design does not sacrifice transparency in creating a seamless experience.
Properly calibrated, AI can assist humans in making fairer choices.
As systems are employed to make crucial decisions, AI must be secure and robust.
Transparency reinforces trust, and the best way to promote transparency is through disclosure.
AI systems must prioritize and safeguard consumers’ privacy and data rights.
Available now: Train, validate, tune and deploy foundation and machine learning models with ease.
Available now: Scale AI workloads for all your data, anywhere.
Accelerate responsible, transparent and explainable data and AI workflows. General availability of watsonx.governance is expected in November.
IBM and the Data & Trust Alliance offer insights about the need for governance, particularly in the era of generative AI.
A risk- and context-based approach to AI regulation can mitigate potential risks, including those posed by foundation models.
AI governance is a strategy for value creation.
IBM's AI Ethics Board was established as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI throughout IBM.
Co-chaired by Francesca Rossi and Christina Montgomery, the Board’s mission is to support a centralized governance, review and decision-making process for IBM ethics policies, practices, communications, research, products and services. By infusing IBM’s long-standing principles and ethical thinking into its work, the Board is one mechanism by which IBM holds the company and all IBMers accountable to our values.
Read the 2022 IBM Impact Report
Learn more about Francesca
Learn more about Christina
A Policymaker's Guide to Foundation Models
IBM's perspective on the opportunities presented by foundation models, as well as their risks and potential mitigations.
Foundation models: Opportunities, risks, and mitigations
Precision regulation for data-driven business models
A white paper offering policymakers seven recommendations for addressing the risks of data-driven business models.
Precision regulation for AI
Companies should use a risk-based AI governance policy framework and targeted policies to develop and operate trustworthy AI.
Responsible advancement of neurotechnology
A white paper on the privacy risks of brain-computer interfaces.
Data responsibility
Companies that collect, store, manage or process data have an obligation to handle it responsibly, ensuring ownership, privacy, security and trust.
Facial recognition
IBM no longer produces facial recognition or analysis software. We believe in a governance framework informed by precision regulation.
Mitigating bias in AI
Five priorities to strengthen the adoption of testing, assessment and mitigation strategies to minimize bias in AI systems.
Learning to trust AI systems
A pioneering paper on accountability, compliance and ethics in the age of smart machines.
Standards for protecting at-risk groups in AI bias auditing
IBM's point of view on protecting at-risk groups in AI bias auditing.
Automation With a Human Touch: How AI Can Revolutionize Our Government
Addresses ethical concerns raised by using technology to solve society’s problems.
Defines the ethics guidelines for trustworthy AI.
IBM partners with the Vatican to endorse ethical guidelines around AI.
Brings together diverse global voices to define best practices for beneficial AI.
A guide for embedding ethics in AI design and development.
Preparing for Tomorrow by Future-Proofing in the Present.
Putting trust into practice through the responsible use of data and AI.
Explores how AI ethics can progress from abstract theories to concrete practices.