AI is already being used to inform decisions on whether someone should get a job, be granted credit or qualify for housing. These major, life-altering impacts make it critically important that we handle AI responsibly.
Responsible management, however, is easier said than done. Christina Montgomery, Chief Privacy and Trust Officer at IBM, is focused on keeping AI a positive force for change.
“Within IBM, we created an AI Ethics Board—that I currently co-chair—to take on this challenge,” notes Montgomery. “We’ve articulated principles around AI—that it should be transparent and explainable. That it should be privacy preserving, secure, inclusive and fair. And while the board has helped build these principles into our culture, we need more than faith to ensure that we’re holding ourselves accountable to them as a company.”
Fortunately, IBM had navigated a large influx of regulations before.
“Our GDPR [General Data Protection Regulation] compliance is probably the closest analog to what we’ve had to deal with for AI,” adds Lee Cox, Vice President of Integrated Governance, Services, and Research within the IBM Office of Privacy and Responsible Technology. “Prior to that, how we handled data protection-related compliance challenges was more local—many of our programs were regionalized. They got the job done, but we would have needed to put a fair amount of work in to scale them to meet new demands.”
He continues: “But with GDPR and other privacy regulations, we needed to start coordinating on a global level. We needed to adapt quickly as we faced more standards, more obligations, more complexity, more sensitivity about, ‘What’s happening to my data and how is it being consumed?’”
To handle this global oversight, IBM created an enterprise-wide Privacy and AI Management System (PIMS). And based on its successes, Montgomery and her team felt that IBM could augment this tool to better document and track compliance across its AI operations as well.
Much like privacy, AI had seen an avalanche of new regulations—at both the national and regional levels—over the previous few years. Various global alliances and associations had likewise developed guidelines intended to keep AI behaving ethically and responsibly. But meeting this growing list of expectations is difficult at scale.
“We’re a big company,” adds Montgomery. “We operate in 170-plus countries around the world. We consist of over 400 distinct legal entities that do business with 13,000 suppliers and 150,000 business partners. At this size, it can be difficult from a governance perspective to get all of our more than 250,000 employees on the same page.”