The four keys to trustworthy AI
Every organization needs trustworthy AI. Here’s how you can build it.
Artificial intelligence is a major factor in people’s lives. It influences who gets a loan, how companies hire and compensate employees, how customers are treated, even where infrastructure and aid are allocated. It is already deeply embedded in our businesses, organizations and governments; IBM Watson alone accounts for 40,000 client engagements across 20 industries in 80 countries. As the world increasingly relies on AI to help make major predictions and decisions, it becomes essential that people can trust the process and results of that AI. IBM is working on building that trust.
Organizations that neglect their ethical duties in AI can face lawsuits, regulatory fines, angry customers, embarrassment, reputational damage, and destruction of shareholder value. For example, consider the fairness of your organization’s hiring practices. If your HR department uses an existing machine-learning-based application to score prospective employees, how do you ensure trustworthy implementation of this technology?
From a technical perspective, governed data and AI technology should meet your criteria of transparency, explainability, fairness, robustness and privacy. For your hiring application to be fair, it must counter human biases and promote inclusivity and equitable treatment. But that’s not enough. You must also be ready to provide explanations to hiring managers. Your application must work well in exceptional conditions, withstand threats, and correct for drift. And you must keep applicant data private and secure to prevent inappropriate use.
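To make the fairness criterion concrete, one widely used screen is the four-fifths rule, which compares selection rates across groups of applicants. The sketch below is a minimal, self-contained illustration with invented data and function names; it is not an IBM tool or a complete fairness audit.

```python
# Illustrative only: a minimal four-fifths-rule check on hypothetical
# hiring decisions, grouped by a protected attribute. The data and
# the function name are invented for this sketch.

def disparate_impact(decisions):
    """decisions: list of (group, hired) tuples, hired being True/False.
    Returns the ratio of the lowest to the highest group selection rate."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    rates = {g: hires[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact(sample)
print(f"disparate impact ratio: {ratio:.2f}")  # a ratio below 0.8 flags potential bias
```

In this toy sample, group A is selected at a 75% rate and group B at 25%, so the ratio of 0.33 falls well below the 0.8 threshold and would warrant investigation and mitigation.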
A major obstacle to the widespread deployment of AI is a lack of trust. According to Morning Consult, 77% of global IT professionals report that it is critical to their business that they can trust the AI’s output to be fair, safe and reliable. The way to gain that trust is to earn it: build AI models and implementations that are fair, robust, explainable, transparent and privacy-preserving. Leveraging IBM’s leadership, expertise, tools and governance frameworks offers a clear path to the trustworthy AI on which we can build our future.
Widespread adoption of AI across an enterprise can be achieved by building systems that deliver understandable and trusted outcomes. However, designing and implementing trustworthy AI solutions first requires a deep understanding of the human problems we want to solve, as well as the business needs behind them. Keeping in mind, throughout the entire solution design cycle, who you are trying to create value for is crucial to delivering trusted outcomes that users rely on. This can be accomplished by using a framework, such as Enterprise Design Thinking for Data and AI, that illuminates how to employ data and AI to build responsible and trusted solutions that provide business value while solving human-centric problems.
So how do you get started? Taking as an example the AI technology and solutions your company might build to assess and hire candidates, there are four key areas to consider.
Assessment, audit and risk mitigation
How do you know that the AI models your HR team is using to automate hiring practices are fair? Are you confident that the methods being used by your HR AI solutions are robust enough to stand up to scrutiny? Can you make assurances that your AI solutions can be explained?
To ensure your AI solutions are trustworthy, you need guidance and tools to help you to assess, audit and mitigate risk.
IBM is a leader when it comes to trustworthy AI, as named in a 2021 Gartner report. Our deeply experienced data science and design teams, along with our industry subject-matter expertise, position us to help you set up an assessment, audit and risk-mitigation framework, leveraging IBM Watson AI products to mitigate risk. IBM’s guidance and tools allow you to assess your current AI-enabled business processes, creating scorecards and recommendations that address the trustworthiness of your AI solutions. IBM can also help you implement continuous monitoring and mitigation of your current AI-enabled business processes to help ensure that your AI solutions are trustworthy.
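One way such an assessment scorecard might be organized is as a simple comparison of measured trust metrics against agreed thresholds. The sketch below is purely illustrative: the metric names, threshold values and structure are assumptions for this example, not IBM’s actual scoring methodology.

```python
# Hypothetical assessment scorecard: each pillar of trust gets a
# measured value and a threshold, and the scorecard reports which
# pillars need mitigation. Metrics and thresholds are invented.

THRESHOLDS = {
    "fairness (disparate impact)": (0.80, "min"),  # ratio should be >= 0.80
    "explainability (coverage)":   (0.95, "min"),  # share of decisions with explanations
    "robustness (accuracy drop)":  (0.05, "max"),  # drop under perturbation should be <= 5%
}

def build_scorecard(measurements):
    """measurements: dict of metric -> observed value. Returns pass/fail per metric."""
    card = {}
    for metric, (threshold, kind) in THRESHOLDS.items():
        value = measurements[metric]
        passed = value >= threshold if kind == "min" else value <= threshold
        card[metric] = {"value": value, "threshold": threshold, "pass": passed}
    return card

scorecard = build_scorecard({
    "fairness (disparate impact)": 0.72,
    "explainability (coverage)":   0.98,
    "robustness (accuracy drop)":  0.03,
})
for metric, result in scorecard.items():
    print(metric, "PASS" if result["pass"] else "NEEDS MITIGATION")
```

A scorecard like this turns abstract trust criteria into concrete, trackable checks, so that the output of an audit is a list of specific mitigations rather than a general concern.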
End-to-end AI lifecycle
Is your organization growing, with your HR team looking to add automation to scale your hiring practices? Are you looking to increase your HR team’s capacity by leveraging AI solutions to automate the highly manual process of reviewing resumes? How do you ensure that whatever you build is trustworthy?
As they journey toward AI, most organizations establish data science teams staffed with people skilled in ML/DL algorithms, frameworks and techniques. Yet many of those organizations struggle to make their AI projects truly relevant to the business, failing to get the projects into full production and integrated with existing applications and processes. That’s why so many line-of-business stakeholders consider only a small percentage of AI projects to be true successes.
IBM Watson provides a comprehensive point of view and an integrated set of products and services for managing the complete lifecycle of AI. Our guidance and tools can help you with planning, building, deploying and managing new AI solutions and ensuring they begin and remain trustworthy.
AI governance frameworks
As your organization begins to implement AI solutions to automate rating candidates, how can you be sure that your developers and data scientists are ensuring those solutions are fair, transparent, privacy-preserving, explainable and robust?
With AI governance frameworks in place to govern the data and model lifecycle, you can establish guardrails to keep your hiring practices, continuing our example use case, free from bias and reduce the risk of lawsuits and bad press. IBM can help you implement an AI governance framework with procedures for data and model management and internal standards and regulations, whatever the size of your organization. With an AI governance framework in place, there is transparency around whether your organization’s AI ethical guidelines are being followed. After all, you can have governance without ethics, but you can’t deliver ethics without governance.
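A governance guardrail can be as simple as a pre-deployment gate that blocks any model version whose documentation is incomplete. The sketch below is a hypothetical example: the required fields and the policy itself are assumptions for illustration, not part of any specific IBM framework.

```python
# Hypothetical governance guardrail: before a model version can be
# promoted, its metadata must document the facts the framework requires.
# The field names and policy are invented for this sketch.

REQUIRED_FACTS = {"owner", "intended_use", "training_data_source",
                  "fairness_review_date", "approved_by"}

def deployment_gate(model_metadata):
    """Returns (approved, missing facts) for a candidate model version."""
    missing = REQUIRED_FACTS - model_metadata.keys()
    return (len(missing) == 0, sorted(missing))

candidate = {
    "owner": "hr-analytics",
    "intended_use": "resume screening assistance",
    "training_data_source": "2019-2023 internal applications",
}
approved, missing = deployment_gate(candidate)
print("approved" if approved else f"blocked, missing: {missing}")
```

The value of a gate like this is less in the code than in the policy it enforces: no model reaches production without a named owner, a stated purpose and a recorded fairness review.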
Guidance and education
Are you confident your data scientists and developers have the necessary knowledge? Can they ensure that the HR AI solutions they build eliminate bias and promote inclusivity and equitable treatment?
Artificial intelligence is a cutting-edge technology that requires deep expertise. Building AI solutions with little knowledge can truly endanger your bottom line and reputation. Your organization needs to be aware of the best practices for building trustworthy AI solutions, and you need to provide education for your data scientists, developers and decision-makers. At IBM, we have encapsulated our deep AI expertise into courses and certification paths, with a strong focus on the tenets of trustworthy AI. You can share these resources with your employees, either within the solutions above or as standalone courses and certifications.
In addition to the industry-specific subject-matter expertise and guidance built into each engagement above, IBM has also developed IBM Enterprise Design Thinking for Data and AI, also known as Human-centered AI Design. With EDT for Data and AI, IBM can guide you to design AI systems properly, as a team, from the get-go. We can also guide your teams with a clear intent and a focus on human agency, not tools and technology for their own sake, and show you how to apply IBM’s standard approach to designing AI solutions.
IBM’s strategy of trustworthy AI for business is a holistic approach to help you govern your AI solutions and manage your full AI lifecycle. Drawing on expertise from IBM Research excellence and innovation, Data and AI and IBM Storage technological foundations, Expert Labs expertise, IBM Services industry expertise, and Enterprise Design Thinking for Data and AI, IBM can help you develop AI that is trustworthy — fair, robust, explainable, transparent, and privacy-preserving.