Foundations of trustworthy AI: governed data and AI, AI ethics and an open diverse ecosystem
Companies around the world are realizing that building trust in AI is key to widespread adoption of the technology. They know that trust in AI, and the ability to explain decisions made by AI algorithms, is crucial both to their brand and to the success of their business. According to the Global AI Adoption Index 2021, 86% of global IT professionals agree that consumers are more likely to choose the services of a company that offers transparency and an ethical framework for how its data and AI models are built, managed, and used.
Ultimately, customers have greater confidence in outcomes delivered by trustworthy AI solutions rooted in the foundations of governed data and AI tools and processes, AI ethics, and an open, diverse ecosystem.
Governed data and AI
Governed data and AI refers to the technology, tools, and processes that monitor and maintain the trustworthiness of data and AI solutions. Companies must be able to direct and monitor their AI to ensure it is working as intended and in compliance with regulations. IBM’s governed data and AI technology is built on the application of our fundamental principles for ethical AI: transparency, explainability, fairness, robustness, and privacy. These five focus areas are how we define trustworthy AI.
Transparency reinforces trust, and the best way to promote transparency is through disclosure. Transparency is also required for an AI solution to be ethical: it allows the AI technology to be easily inspected and means that the algorithms used in AI solutions are not hidden or closed to scrutiny.
While transparency offers a view into the AI technology and algorithms in use, people also need simple, straightforward explanations of how AI is used. They are entitled to understand how AI arrived at a conclusion, especially when that conclusion affects decisions about their employability, their creditworthiness, or their potential. Those explanations must be easy to understand.
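As an illustration of the kind of explanation this calls for, the sketch below scores a hypothetical applicant with a simple linear model and reports each feature's contribution to the decision. The feature names and weights are invented for the example, not drawn from any real scoring system.

```python
# Minimal sketch: explaining a linear scoring model's decision by
# listing each feature's contribution (weight * value), so a person
# can see which factors drove the outcome. All names and numbers
# here are hypothetical.

def explain(weights, features):
    """Return per-feature contributions, largest magnitude first."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(),
                  key=lambda kv: abs(kv[1]), reverse=True)

weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 3.0, "debt_ratio": 2.5, "years_employed": 4.0}

for name, contribution in explain(weights, applicant):
    print(f"{name}: {contribution:+.2f}")
```

For this applicant, the listing shows that the debt ratio pulled the score down more than income and tenure pushed it up, which is exactly the kind of plain statement a person affected by the decision can act on.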
Fairness in an AI solution means the reduction of human bias and the equitable treatment of individuals and groups. An AI solution designed to be fair must remain that way: monitoring and safeguards are critical to prevent bias from creeping into the solution.
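One common way to monitor for this kind of bias is the disparate impact ratio: the selection rate of one group divided by that of another, with values below 0.8 (the "four-fifths rule" often used in practice) flagged for review. A minimal sketch, using hypothetical approval data:

```python
# Minimal sketch: monitoring a model's decisions for group bias via
# the disparate impact ratio. The 0.8 threshold follows the common
# four-fifths rule; the decision lists below are hypothetical.

def selection_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of selection rates; values below 0.8 suggest possible bias."""
    return selection_rate(group_a) / selection_rate(group_b)

group_a = [1, 0, 0, 1, 0, 0, 0, 0]  # 25% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact: {ratio:.2f}")  # 0.33, well below the 0.8 threshold
```

Running a check like this continuously against a deployed model's decisions, rather than once at design time, is what keeps a solution that was designed to be fair actually fair.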
As AI becomes an ever greater part of the human experience, it also becomes more vulnerable to attack. To be considered trustworthy, an AI solution must be robust enough to handle exceptional conditions effectively, to minimize security risk, and to withstand attacks while maintaining its integrity.
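One simple robustness probe is to check that a model's decision does not flip under small perturbations of its inputs. The sketch below does this exhaustively for a hypothetical threshold classifier; real solutions would rely on far more sophisticated adversarial testing, but the idea is the same.

```python
# Minimal sketch: probing local robustness by checking that a
# classifier's decision is unchanged when each input feature is
# nudged by +/- epsilon. The threshold model is a hypothetical
# stand-in for a real classifier.
import itertools

def classify(features, threshold=1.0):
    """Toy classifier: positive when the feature sum reaches the threshold."""
    return 1 if sum(features) >= threshold else 0

def is_locally_stable(features, epsilon=0.05):
    """True if no +/- epsilon shift of any feature flips the decision."""
    base = classify(features)
    for deltas in itertools.product((-epsilon, 0.0, epsilon),
                                    repeat=len(features)):
        perturbed = [f + d for f, d in zip(features, deltas)]
        if classify(perturbed) != base:
            return False
    return True

print(is_locally_stable([0.7, 0.6]))   # well above the threshold: True
print(is_locally_stable([0.5, 0.52]))  # near the decision boundary: False
```

Inputs sitting right at a decision boundary, like the second example, are exactly where an attacker can flip outcomes with tiny manipulations, which is why this kind of stability matters for integrity under attack.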
To be trustworthy, AI must ensure privacy at every turn, not only of raw data but of the insights gained from that data. Data belongs to its human creators, and AI must safeguard that privacy with the highest integrity.
AI ethics
Governed data and AI tools and practices must be firmly rooted in a foundation of ethical principles. AI augments humans, their intelligence, and their behaviors, and it should be accessible to all, not just a select few. Data and the insights AI gains from it belong to the humans who created them, and ethical AI upholds that right. AI ethics also means that how companies use that data in AI is transparent and explainable.
Open and diverse ecosystem
Trustworthy AI solutions require more than a foundation of ethics and governance. A culture of diversity, inclusion, and shared responsibility, reinforced in an open ecosystem, is imperative for building and managing AI while delivering real value for both business and society. This may require a cultural shift so that the teams building AI solutions are made up of people from different backgrounds and closely resemble the gender, racial, and cultural diversity of the societies those solutions serve.
By offering tools, expertise, education and guidance, we’re helping businesses build trust in their AI and the outcomes it drives.