
Automated, cloud-based IBM service looks to remove AI bias


Like human beings, artificial intelligence (AI) systems don’t always know what they don’t know. That can lead to problems, such as facial recognition software that doesn’t recognize certain types of faces or voice software that can’t understand particular accents.

Biases like those have led some organizations to be wary of adopting AI for liability reasons.

To head off those issues, IBM announced this week a new, cloud-based service that provides continuous insight into how AI systems make decisions, searches for potential biases and recommends adjustments to offset them.

“It’s time to translate principles into practice,” said David Kenny, IBM senior vice president of cognitive solutions. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making.”

IBM has also launched the open source AI Fairness 360 toolkit, which includes algorithms, code and tutorials intended to help organizations detect bias themselves.
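To make the idea concrete, here is a minimal sketch, in plain Python, of two group-fairness metrics of the kind the AI Fairness 360 toolkit implements. This is an illustration only, not the toolkit's actual API; the toy hiring dataset and the variable names are invented for the example.

```python
# Illustrative sketch (not the AI Fairness 360 API): computing two common
# group-fairness metrics on a toy hiring dataset.
# Each record is (protected_group_member: bool, favorable_outcome: bool).
records = [
    (True, True), (True, False), (True, False), (True, False),
    (False, True), (False, True), (False, True), (False, False),
]

def selection_rate(records, protected):
    """Fraction of a group that received the favorable outcome."""
    outcomes = [fav for prot, fav in records if prot == protected]
    return sum(outcomes) / len(outcomes)

priv_rate = selection_rate(records, protected=False)   # 3/4 = 0.75
unpriv_rate = selection_rate(records, protected=True)  # 1/4 = 0.25

# Disparate impact: ratio of favorable-outcome rates. 1.0 means parity;
# values below ~0.8 are a common red flag (the "four-fifths rule").
disparate_impact = unpriv_rate / priv_rate

# Statistical parity difference: gap in favorable-outcome rates; 0.0 means parity.
parity_difference = unpriv_rate - priv_rate

print(f"disparate impact: {disparate_impact:.2f}")          # prints 0.33
print(f"statistical parity difference: {parity_difference:.2f}")  # prints -0.50
```

A bias-detection toolkit would compute metrics like these across many protected attributes and flag models whose scores fall outside acceptable thresholds.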

To learn more about IBM's efforts to root out bias in AI, read the full article at VentureBeat and the IBM Research blog post.
