The need for trusted AI: Advancing ethics and transparency
Artificial intelligence (AI) uses computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. The technology is intended to foster logic-driven decisions, but when human bias creeps into the system, it can have unintended negative results. When the work is done to implement AI in an ethical and transparent manner, however, there are endless possibilities to extend knowledge and embrace the diversity of the thousands of human dimensions.
What we know
According to an IBM Institute for Business Value (IBV) survey of global executives, average spending on AI will likely more than double in the next three years.1 With heightened AI use comes elevated risk related to data responsibility, inclusion and algorithmic accountability. AI is powering critical workflows in financial services, human resources, customer management and healthcare, and adoption continues to accelerate rapidly, providing the opportunity for collaboration within and across organizations to put ethics and transparency at the forefront.
Consumers are troubled by how companies use their personal information: 81% say they became more concerned over the prior year with how companies use their data, and 75% are less likely to trust organizations with their personal information.1 As concerns about privacy, misuse and bias climb, companies must be vigilant in how they treat consumers’ data to build trust.
AI’s socio-technical aspects are intended to unite humans and technology, and a dedication to transparency can help companies advance this unity. It’s time for industries to move away from black-box algorithms and instead foster models and data sets that are understandable to the end user. We must be honest that insights gained from algorithms aren’t always accurate, and work toward AI that is explainable, predictable and more accurate.
Things to consider
Consider how AI is used in talent management. As every job seeker and hiring manager knows, matching a candidate’s skills and fit for a role goes well beyond an algorithm. While intended as an impartial method for organizations to narrow a pool of qualified applicants to advance to interviews, there is a risk that AI may introduce bias. AI often lacks the human element required to match the right person with the right role and, in areas requiring judgment, may adversely affect a person’s opportunity to be considered or advance.
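One common way to surface this kind of bias is a simple group-level audit of a screening tool’s outcomes, for example the “four-fifths” (disparate impact) rule used in US hiring practice: flag any group whose selection rate falls below 80% of the highest group’s rate. A minimal sketch, using hypothetical screening results:

```python
# Illustrative disparate-impact audit of an AI screening tool.
# All group names and numbers below are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (advanced, total_applicants)} -> {group: rate}"""
    return {g: advanced / total for g, (advanced, total) in outcomes.items()}

def disparate_impact(outcomes, threshold=0.8):
    """Return each group's rate relative to the top group, and the
    groups falling below the four-fifths threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    ratios = {g: rate / top for g, rate in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < threshold]
    return ratios, flagged

# Hypothetical outcomes: (candidates advanced, total applicants)
outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
ratios, flagged = disparate_impact(outcomes)
print(ratios)   # group_b advances at about 0.67 of group_a's rate
print(flagged)  # group_b falls below the 0.8 threshold
```

An audit like this is only a first check: it detects unequal outcomes, not their cause, and would feed into the human review the paragraph above calls for.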
Across the globe, the AI regulatory environment is evolving. The European Commission recently proposed new regulations and a comprehensive framework for trustworthy AI, a move expected to affect companies around the world.
In the quest for financial success enterprises may cut corners, inappropriately deploy AI and sacrifice strategic priorities — and even values — for temporary gains. To confront these potential pitfalls, a company may build a compliance apparatus to create guardrails and other reinforcement mechanisms to combat inadvertent or intentional lapses.
The impact of AI
Ethical considerations surrounding AI have never been more critical than they are today. People around the world, including business executives, front-line employees, government representatives and individual citizens, face serious decisions they wouldn’t have imagined in the past. These decisions can profoundly impact the lives of their colleagues, clients and communities. Many companies are forced to weigh difficult trade-offs between economic and health imperatives guided only by their ethics, morals and values.
Given AI’s prevalence in many high-stakes decision-making applications, it’s essential that we build AI systems that are truly fair, explainable, accountable and robust. Methods that lead to trusted data include creating a clear data lineage and provenance, and embedding responsible ambassadors in the core development of AI processes and applications.
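Data lineage and provenance can start very simply: attach to every dataset a record of where it came from, what was done to it, and a fingerprint of its current contents, so anyone downstream can trace what a model was trained on. A minimal sketch, with hypothetical field names and transformation steps:

```python
# Illustrative lineage record for a training dataset: source,
# ordered transformation history, and a content hash so consumers
# can verify the data they received. Fields are hypothetical.

import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class LineageRecord:
    source: str                                # where the raw data came from
    steps: list = field(default_factory=list)  # ordered transformations
    content_hash: str = ""                     # fingerprint of current data

    def record_step(self, description, data):
        """Append a transformation and re-fingerprint the data."""
        self.steps.append(description)
        payload = json.dumps(data, sort_keys=True).encode()
        self.content_hash = hashlib.sha256(payload).hexdigest()

rec = LineageRecord(source="hr_system_export_2024")
rec.record_step("dropped rows with missing job titles",
                [{"title": "analyst"}])
print(asdict(rec))
```

Real lineage systems add versioning, access controls and automated capture, but even a record this small makes “where did this data come from?” an answerable question.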
Hard work with a payoff
It’s time for businesses to get serious about scaling AI in an enterprise environment. Organizations must take meaningful action and proactively address misuse and consumer concerns. The hard work will pay dividends when we begin to see humans in thousands of dimensions rather than as commodities. The companies that act now have an opportunity to shape their competitive futures—and make AI more trustworthy and, ultimately, more trusted.