Businesses are facing an increasingly complex, ever-changing global regulatory landscape when it comes to AI. The IBM approach to AI ethics balances innovation with responsibility, helping you adopt trusted AI at scale.
Fostering a more ethical future by leveraging technology
Case study: Building trust in AI
Co-created by IBM, the Data & Trust Alliance's new Data Provenance Standards offer a first-of-their-kind metadata taxonomy to support transparency about data provenance.
This recognition validates IBM’s differentiated approach to delivering enterprise-grade foundation models, helping clients accelerate the adoption of gen AI into their business workflows while mitigating foundation model-related risks.
The EU AI Act has ushered in a new era for AI governance. What do you need to know and do to achieve compliance?
Three IBM leaders offer their insights on the significant opportunities and challenges facing new CAIOs in their first 90 days.
Learn about strategies and tools that can help mitigate the unique risks posed by foundation models.
Learn how the responsible development and deployment of AI technology can be better for people and the planet.
IBM leaders Christina Montgomery and Joshua New outline three key priorities for policymakers to mitigate the harms of deepfakes.
Good design does not sacrifice transparency in creating a seamless experience.
Properly calibrated, AI can assist humans in making choices more fairly.
As systems are employed to make crucial decisions, AI must be secure and robust.
Transparency reinforces trust, and the best way to promote transparency is through disclosure.
AI systems must prioritize and safeguard consumers’ privacy and data rights.
Human values are at the heart of responsible AI.
IBM and the Data & Trust Alliance offer insights about the need for governance, particularly in the era of generative AI.
A risk- and context-based approach to AI regulation can mitigate potential risks, including those posed by foundation models.
The IBM AI Ethics Board is at the center of IBM’s commitment to trust.
Co-chaired by Francesca Rossi and Christina Montgomery, the Board sponsors workstreams that deliver thought leadership, policy advocacy, and education and training on AI ethics, driving responsible innovation and the advancement of AI and emerging technologies. It also assesses use cases that raise potential ethical concerns.
The Board is a critical mechanism by which IBM holds itself and all IBMers accountable to the company’s values and its commitment to the ethical development and deployment of technology.
Learn more about ethical impact in the 2023 IBM Impact Report
Take a look inside IBM's AI ethics governance framework
Learn more about Francesca
Learn more about Christina
IBM advocates for policies that balance innovation with responsibility and trust to help build a better future for all.
IBM's five best practices for balancing human oversight, agency and accountability over decisions across the AI lifecycle.
IBM’s recommendations for policymakers to mitigate the harms of deepfakes.
IBM’s recommendations for policymakers to preserve an open innovation ecosystem for AI.
These standards can inform AI auditors and developers about which protected characteristics should be considered in bias audits, and how to translate them into the data points required to conduct these assessments.
IBM recommends policymakers consider two distinct categories of data-driven business models and tailor regulatory obligations proportionate to the risk they pose to consumers.
Policymakers should understand the privacy risks that neurotechnologies pose as well as how they work and what data is necessary for them to function.
Five priorities to strengthen the adoption of testing, assessment and mitigation strategies to minimize bias in AI systems.
Companies should utilize a risk-based AI governance policy framework and targeted policies to develop and operate trustworthy AI.
IBM’s Global Leader for Responsible AI Initiatives, Dr. Heather Domin, discusses how regulation, collaboration and skills demand are shaping the AI governance landscape.
Experts from IBM and the University of Notre Dame outline recommendations for getting the best ROI from AI ethics investments.
With input from IBM, Partnership on AI's new report explores safeguards for open foundation models.
Co-authored by IBM, the Data & Trust Alliance's new policy roadmap provides recommendations for balancing AI innovation with AI safety.
At The Futurist Summit, IBM Chief Privacy and Trust Officer Christina Montgomery and Partnership on AI CEO Rebecca Finlay discuss the critical relationship between open innovation and AI safety.
With support from the Notre Dame-IBM Tech Ethics Lab, ten research projects will be undertaken in 2024.
With support from the Notre Dame-IBM Technology Ethics Lab, the Pulitzer Center launches the AI Spotlight Series, a global training initiative.
IBM and Meta launch the AI Alliance in collaboration with over 50 founding members and collaborators globally.
In collaboration with IBM, the World Economic Forum offers three briefing papers to help guide responsible transformation with AI.