AI Ethics
IBM’s multidisciplinary, multidimensional approach to AI ethics
Source: IBM, “From Roadblock to Scale: The Global Sprint Towards AI” study, 2020
3 in 4 businesses are exploring or implementing AI.
78% of senior business decision-makers say it is very or critically important that they can trust that their AI’s output is fair, safe, and reliable.
Uniting insights from business, policy, research, and thought leadership
For more than 100 years, IBM has continually strived for responsible innovation that brings benefits to everyone, not just a few.
The purpose of AI is to augment human intelligence.
At IBM, we believe AI should make all of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few.
Data and insights belong to their creator.
IBM clients’ data is their data, and their insights are their insights.
New technology, including AI systems, must be transparent and explainable.
Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithms’ recommendations.
Good design does not sacrifice transparency in creating a seamless experience.
Properly calibrated, AI can assist humans in making fairer choices.
As systems are employed to make crucial decisions, AI must be secure and robust.
Transparency reinforces trust, and the best way to promote transparency is through disclosure.
AI systems must prioritize and safeguard consumers’ privacy and data rights.
Selected positions and recommendations from the IBM Policy Lab
IBM no longer offers general-purpose IBM facial recognition or analysis software. We believe a precision regulation approach can inform a reasonably balanced governance framework for facial recognition systems. Policymakers should employ precision regulation that applies restrictions and oversight to particular use cases and end users where there is greater risk of societal harm.
Organizations that collect, store, manage, or process data have an obligation to handle it responsibly, ensuring ownership, privacy, security, and trust.
Read more about IBM policies and practices on emerging technology
A new forum providing policymakers with a vision and actionable recommendations to harness the benefits of innovation while ensuring trust in a world being reshaped by data.
Information about our commitment to protect our clients and business with security and privacy practices.
Governance, accountability, and Good Tech across the organization
The IBM AI Ethics Board was established as a central, cross-disciplinary body to support a culture of ethical, responsible, and trustworthy AI throughout IBM.
Our mission is to support a centralized governance, review, and decision-making process for IBM ethics policies, practices, communications, research, products and services. By infusing our long-standing principles and ethical thinking, the Board is one mechanism by which IBM holds our company and all IBMers accountable to our values.
IBM Fellow and AI Ethics Global Leader
Vice President & Chief Privacy Officer
IBM Research is building and enabling AI solutions focused on trust.
The Center for Open Source Data and AI Technologies (CODAIT) helps maintain projects created by IBM Research that can increase fairness, explainability, robustness, and transparency in machine learning systems.
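As a concrete, hypothetical illustration of what these toolkits do (the page itself names no specific project), the short Python sketch below uses AI Fairness 360, one of the open-source toolkits originating from IBM Research, to measure group bias in a toy, made-up loan dataset and reduce it with the Reweighing pre-processing algorithm; the column names and values are invented for the example.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Tiny invented loan-approval table: 'group' is the protected attribute,
# 'approved' is the binary outcome we want to audit.
df = pd.DataFrame({
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [55, 60, 48, 70, 52, 58, 49, 63],
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged, unprivileged = [{"group": 1}], [{"group": 0}]

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing assigns instance weights that balance favorable-outcome
# rates across groups before a model is trained.
reweighted = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
metric_rw = BinaryLabelDatasetMetric(
    reweighted, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after reweighing:", metric_rw.disparate_impact())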
A shared collection of ethics, guidelines, and resources to design human-centric AI.
Partnering IBM Research scientists and engineers with academic fellows, subject matter experts from a diverse range of non-governmental organizations (NGOs), public sector agencies, and social enterprises to tackle emerging societal challenges using science and technology.
IBM brings the power of its technology, resources, and people to help with initiatives around the world, from education to health.
As AI adoption rapidly increases, it’s critical that AI ethics progress from abstract theories to concrete practices.
Resources and open-source tools for building trust-based AI
Automate AI model risk management in new ways
Manage regulatory compliance by tracing and explaining AI decisions across workflows, and intelligently detect and correct bias to improve outcomes. IBM Watson OpenScale operates with model development environments from other vendors and with open-source tools. It provides a set of monitoring and management tools that help you build trust and implement control and governance structures around your AI investments.
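As a generic illustration of the kind of check such monitoring performs (this is not the Watson OpenScale SDK; the threshold, column names, and records are assumptions made for the sketch), the Python snippet below computes disparate impact over a window of recently scored records and raises a flag when it falls below the common four-fifths threshold.

import pandas as pd

# Hypothetical window of recently scored records from a deployed model:
# 'group' is the monitored attribute, 'prediction' is the model's decision.
recent = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Favorable-outcome rate per group, and the ratio between the lowest and
# highest rate (disparate impact); 0.8 mirrors the common four-fifths rule.
rates = recent.groupby("group")["prediction"].mean()
disparate_impact = rates.min() / rates.max()

ALERT_THRESHOLD = 0.8
if disparate_impact < ALERT_THRESHOLD:
    print(f"Bias alert: disparate impact {disparate_impact:.2f} < {ALERT_THRESHOLD}")
else:
    print(f"Within threshold: disparate impact {disparate_impact:.2f}")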
An open solution that helps remove barriers to enterprise-scale AI
Working with organizations, businesses and governments on ethical AI
IBM works with governments, academia, non-profits and industry partners to further the implementation of ethical AI.