June 11, 2021 | Written by: Anders Quitzau
Categorized: AI | Sustainability
Just as electricity transformed the last century for the better, AI will transform this era. AI can help society scale new heights – making us healthier, more prosperous and more sustainable. It’s an exciting time. But as we celebrate and anticipate AI’s enormous potential for economic and social good, there are – as with any new wave of technology – questions and concerns. People worry about how AI makes decisions. What information does it use? Is it objective and fair, or is it biased? And how can we find out?
These questions and concerns need to be resolved, because no matter how big or exciting its potential, AI cannot succeed if society decides not to trust it. In IBM’s Global AI Adoption Index 2021, 86% of businesses surveyed believed that consumers are more likely to choose AI services from a company that uses an ethical framework and offers transparency on its data and AI models. We believe that establishing core principles is the starting point for building AI that is fairer, more responsible and more inclusive.
AI ethics in the Nordic countries
In the Nordic countries, governments are also seeing the need – and the opportunity – to build citizens’ trust in AI through national policies, strategies and recommendations. Denmark has committed to its National Strategy for Artificial Intelligence; Finland to Leading the way into the age of artificial intelligence; Norway to the Nasjonal strategi for kunstig intelligens; and Sweden to its National Approach to Artificial Intelligence. In stark contrast to the other Nordic countries, Iceland does not yet appear to have an AI strategy; although it has established the Icelandic Institute for Intelligent Machines, it has so far not published any positions on AI ethics.
At IBM we support targeted policies that increase companies’ responsibility to develop and operate trustworthy AI, defined through appropriate risk-based AI governance frameworks.
AI ethics in IBM
At IBM, we use our company’s three guiding principles for trust and transparency to shape how we develop and deploy AI. Firstly, AI systems must be transparent and explainable. When humans develop AI systems and gather the data used to train them, they can, consciously or unconsciously, inject their own biases into their work, resulting in unfair recommendations. These biases must be mitigated by having the right procedures and processes in place. Secondly, AI’s purpose is to augment human intelligence. AI is not man versus machine; it is man plus machine. AI should make all of us better at our jobs, and the benefits of the AI era should touch many, not just an elite few. And thirdly, data and insights from AI belong to their creator. IBM clients’ data and insights belong to them, not us.
Fair and transparent AI
Ninety-one percent of businesses using AI today say their ability to explain how it arrived at a decision is critical. Such transparency can help reduce the bias in AI data and systems that is a cause for concern. Bias in AI can have serious consequences when it influences recommendations in sensitive areas such as job recruitment and court decisions. IBM worked with a bank that wanted to use AI in its loan decision process. The loan data the bank provided showed that men, all other factors being equal, were more likely to get loans than women. This reflected historical societal biases, not true financial metrics. We could detect and mitigate this bias using the IBM Cloud Pak for Data solution. But if such bias goes undetected, an AI system will learn from the data and perpetuate it, meaning fewer women will continue to get loan approvals.
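The kind of check described above can be sketched with a simple fairness metric. The snippet below is a minimal illustration, not IBM’s actual tooling, and the loan records are invented. It computes the disparate impact ratio – the approval rate of the unprivileged group divided by that of the privileged group – a metric that IBM’s AI Fairness 360 toolkit exposes as `disparate_impact()`:

```python
# Minimal sketch of a disparate-impact check on hypothetical loan data.
# Each record: (gender, approved). The data below is invented for illustration.
records = [
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
]

def approval_rate(records, group):
    """Share of applicants in `group` whose loans were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# Disparate impact: unprivileged approval rate / privileged approval rate.
# A value near 1.0 suggests parity; the common "80% rule" flags values below 0.8.
di = approval_rate(records, "female") / approval_rate(records, "male")
print(f"Disparate impact: {di:.2f}")  # 0.25 / 0.75 -> 0.33
```

In this toy dataset the ratio is well below 0.8, so the check would flag the data for mitigation before any model is trained on it.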
A lack of diversity within teams developing AI makes it difficult for developers to anticipate bias and its potential impact. Put together diverse teams and you will reduce blind spots and increase the chances of detecting bias. Educating and training developers is essential – not just in tools and methodologies, but also in awareness of their own biases.
Another way to mitigate bias is to make sure that AI decisions are transparent and explainable. For example, if a patient or a medical professional wants to know how an AI system came to a given conclusion regarding diagnosis or treatment, this should be explained in language and terms that are clear to whoever is asking.
The tools are here
Guidance and tools are available to help companies assess, audit and mitigate risks, including bias. KPMG uses IBM’s AI Fairness 360 Toolkit as part of its AI services for clients in the financial sector. The toolkit checks datasets and machine learning models for unwanted bias and provides algorithms to mitigate it. Companies are also using IBM’s AI Explainability 360 Toolkit to generate explanations of AI decisions. And IBM Research has developed and put into use the concept of AI FactSheets. Similar to nutrition labels on foods, AI FactSheets are designed to support businesses’ internal and external transparency and compliance with regulations.
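One common mitigation technique of the kind such toolkits implement is reweighing: giving each (group, outcome) combination a training weight so that group membership and outcome become statistically independent in the weighted data. AI Fairness 360 offers this as its `Reweighing` preprocessing algorithm; the sketch below is an illustrative pure-Python version of the idea, using made-up data rather than any real toolkit API:

```python
from collections import Counter

# Hypothetical training rows: (group, label). Invented for illustration.
rows = [
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
]

n = len(rows)
group_counts = Counter(g for g, _ in rows)   # marginal counts per group
label_counts = Counter(y for _, y in rows)   # marginal counts per label
pair_counts = Counter(rows)                  # joint counts per (group, label)

def weight(group, label):
    """Expected probability under independence divided by observed probability."""
    p_expected = (group_counts[group] / n) * (label_counts[label] / n)
    p_observed = pair_counts[(group, label)] / n
    return p_expected / p_observed

# Over-represented combinations (e.g. approved men) get weight < 1;
# under-represented ones (e.g. approved women) get weight > 1.
for (g, y), w in sorted({(row, weight(*row)) for row in rows}):
    print(f"{g}, label={y}: weight={w:.2f}")
```

Training a model on these weighted rows reduces the incentive to use group membership as a proxy for the outcome, without dropping or altering any records.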
But tools cannot do it alone
Of course, achieving AI fairness is not just a technical problem; it also requires the right governance structures, engagement from company leadership and a drive to do the right thing. IBM has established an internal AI ethics board. Co-led by our Chief Privacy Officer and our Global AI Ethics Leader, it supports initiatives to operationalize our principles of trust and transparency. We believe in the power of trusted partnerships. We were one of the first signatories of the Vatican’s “Rome Call for AI Ethics” to advance a more human-centric AI. We were part of the European Commission’s High-Level Expert Group on AI, tasked with delivering ethical guidelines for trustworthy AI in Europe. And we co-chair the Global AI Action Alliance, a new organization announced earlier this year at Davos that will set standards and provide tools to guide the responsible development and deployment of AI worldwide.
Doing the right thing is good business
Increasing the level of trust in AI systems isn’t just a moral imperative, it’s good business sense. Companies and governments are using IBM Watson to build a trusted data and AI foundation that will improve their customers’ experiences, empower their employees and increase their operational excellence.
If clients, citizens, employees and other stakeholders don’t trust AI, our society cannot reap the benefits it can offer. It’s an opportunity we must not miss.
If you want to find more resources on AI fairness and ethics, take a look at the IBM AI Ethics website, where you will find links to tools, policies, recommendations for managers and developers, and more.