September 21, 2018 | Written by: Matthew Wilson
Like human beings, artificial intelligence (AI) systems don’t always know what they don’t know. That can lead to problems, such as facial recognition software that doesn’t recognize certain types of faces or voice software that can’t understand particular accents.
Biases like those have even led some organizations to be wary of AI for liability reasons.
To head off those potential issues, IBM announced this week a new, cloud-based service that provides continuous insight into how AI systems make decisions, scans for potential bias and recommends adjustments to offset it.
“It’s time to translate principles into practice,” said David Kenny, IBM senior vice president of cognitive solutions. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making.”
IBM has also launched the open source AI Fairness 360 toolkit, which includes algorithms, code and tutorials intended to help organizations detect bias themselves.
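To make the idea of bias detection concrete, here is a small, self-contained sketch in plain Python (this is not the AI Fairness 360 API; the data and function names are hypothetical). It computes "disparate impact", one family of fairness metric such toolkits report: the ratio of favorable-outcome rates between an unprivileged and a privileged group, where ratios well below 1.0 can flag biased decisions.

```python
def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups.

    A common rule of thumb treats values below ~0.8 as a sign
    the model may be disadvantaging the unprivileged group.
    """
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Made-up loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # unprivileged group: 3/8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # privileged group: 6/8 approved

di = disparate_impact(group_a, group_b)
print(f"Disparate impact: {di:.2f}")  # 0.375 / 0.75 = 0.50
```

A value of 0.50 here would prompt exactly the kind of adjustment recommendations the IBM service is described as making, such as rebalancing training data or adjusting decision thresholds.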
To learn more about IBM efforts to root out bias in AI, read the full VentureBeat article and the IBM Research blog post.