IBM Research at the Intersection of HCI and AI

Editor’s note: This post is written by Justin Weisz, Vera Liao, Christine Wolf, and Elizabeth Daly of IBM Research.

The Twenty-Fourth ACM International Conference on Intelligent User Interfaces (IUI 2019) was held March 16-20 in Marina del Rey, CA. IUI is the premier venue where the human-computer interaction (HCI) community meets the artificial intelligence (AI) community. Work presented at IUI focuses on improving the interaction between humans and AI systems by combining HCI approaches with state-of-the-art AI techniques from machine learning (ML), natural language processing (NLP), data mining, knowledge representation, and reasoning. IBM Research has actively engaged with the IUI community for decades. This year, IBM researchers presented four papers and organized three workshops across multiple key areas of IUI, including explainable AI, conversational interfaces & human-agent interaction, and automated ML.

Explainable AI (XAI) – outstanding paper award

While AI applications have achieved dramatic success in recent years, their inability to explain their results impairs not only their effectiveness but also users' trust in adopting them. XAI, which refers to techniques that make AI systems easily understood and trusted by human users, has been a primary focus of IUI researchers, who bring together perspectives from AI, psychology, behavioral science, and design.

As part of work from IBM Research AI's key area in trusting AI, our researchers are actively working on making AI systems fair and unbiased. By providing user-friendly explanations to help people understand AI systems' decisions, we can enable effective human scrutiny and intervention for fair AI systems. Toward this goal, IBM researchers built a service that generates multiple styles of explanation and conducted a user study to unpack the complex interplay among explanation styles, types of ML fairness issues, and users' individual characteristics. This paper on XAI received an outstanding paper award at IUI this year.

XAI techniques are a growing area of research, yet a key challenge remains in understanding the unique requirements that might arise when XAI systems are deployed into complex settings of use. In another paper, an IBM researcher introduced a design method called “explainability scenarios.” These scenarios help designers understand what information people need to know about AI systems in order to act on their recommendations.

IBM researchers also participated in the 2nd annual IUI workshop on Explainable Smart Systems (ExSS), presenting findings from a user study of an explainable UI. RulesLearner is a prototype system that allows users to interact with natural language classification models expressed as linguistic logic expressions. The findings indicate that explainability and interactivity are important for improving generalizability and productivity, and suggest that hybrid human-AI intelligence methods hold great potential. We also presented findings from a study of intelligent system design, highlighting the different types of explanations workers need in order to make sense of smart outputs and successfully integrate them into their everyday work practices.
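
The RulesLearner prototype itself is not shown in this post, but the core idea of a classifier expressed as human-readable rules can be sketched minimally. The rules, labels, and matching logic below are illustrative assumptions, not RulesLearner's actual implementation; the point is that because the model is a list of plain word rules, the explanation for a prediction is simply the words that matched.

```python
import re

# Hypothetical rules: (label, set of trigger words).  A rule fires if
# any of its trigger words appears in the input text.
RULES = [
    ("billing",  {"invoice", "charge", "refund"}),
    ("shipping", {"delivery", "tracking", "shipped"}),
]

def classify(text):
    """Return (label, matched_words) for the first rule that fires,
    or ("unknown", set()) if none do.  The matched words double as
    the model's explanation for its prediction."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    for label, triggers in RULES:
        matched = words & triggers
        if matched:
            return label, matched
    return "unknown", set()

label, why = classify("Where is my refund for the double charge?")
print(label, sorted(why))  # billing ['charge', 'refund']
```

A user who disagrees with a prediction can inspect the matched words and edit the rule directly, which is the kind of interactivity the study found valuable.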

Conversational interfaces & human-agent interaction

With continued advances in natural language processing and machine learning, conversational agents have become popular for various tasks, domains, and settings. Developing new technologies and designs for conversational interfaces, and studying how people interact with agents, has become a focus of both the HCI and AI communities, with the opening keynote of IUI addressing the making of responsible and empathetic conversational agents.

In order to improve how people interact with AI, IBM researchers developed a web experience that teaches people how chatbots work and how to have better conversations with them. In an evaluative study, the researchers found that people learned lessons such as using simple language to speak with chatbots, minimizing contextual details, and not getting frustrated when a chatbot says, "I don't know." This experience demonstrated that while technological improvements to chatbots are needed, humans can also be taught to have better chatbot experiences by learning how to "speak chatbot."
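
A toy sketch can show why "speaking chatbot" helps. The intents, keywords, and confidence threshold below are invented for illustration, not taken from the study: a keyword-based matcher answers only when enough of an intent's keywords appear, and otherwise falls back to "I don't know," which is why simple, direct phrasing works better than elaborate context.

```python
import re

# Hypothetical intents, each defined by a small keyword set.
INTENTS = {
    "store_hours": {"hours", "open", "close"},
    "returns":     {"return", "refund", "exchange"},
}

def reply(text, threshold=0.5):
    """Answer with the best-matching intent, or fall back when no
    intent is matched confidently enough."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    # Confidence = fraction of an intent's keywords present in the input.
    scored = {name: len(words & kw) / len(kw) for name, kw in INTENTS.items()}
    name, conf = max(scored.items(), key=lambda kv: kv[1])
    return name if conf >= threshold else "I don't know"

print(reply("when do you open and close?"))        # store_hours
print(reply("my package vanished into the void"))  # I don't know
```

The second query fails not because the bot is broken but because it buries the request in context the matcher cannot use, which is exactly the lesson the web experience teaches.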

IBM researchers applied conversational interfaces to recommender systems by developing a career goal recommender. Through conversation, users were able to interactively improve their recommendations and bring their own preferences to the system. Placing users “in the loop,” especially for higher-stakes recommendations like career goals, increased the trust and transparency of the system.
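
The mechanics of placing a user "in the loop" of a recommender can be sketched as preference feedback that re-ranks candidates. The goals, attribute scores, and weight-update rule below are illustrative assumptions, not the actual career goal recommender from the paper.

```python
# Hypothetical career goals scored on two attributes.
goals = {
    "data scientist":  {"technical": 0.9, "managerial": 0.2},
    "product manager": {"technical": 0.4, "managerial": 0.9},
    "research lead":   {"technical": 0.8, "managerial": 0.6},
}

def rank(weights):
    """Rank goals by a weighted sum of their attribute scores."""
    score = lambda attrs: sum(weights[a] * v for a, v in attrs.items())
    return sorted(goals, key=lambda g: score(goals[g]), reverse=True)

# Start with neutral weights, then apply conversational feedback:
# the user says they want a more managerial role.
weights = {"technical": 1.0, "managerial": 1.0}
print(rank(weights))            # ['research lead', 'product manager', 'data scientist']
weights["managerial"] += 1.0    # feedback shifts the preference weights
print(rank(weights))            # ['product manager', 'research lead', 'data scientist']
```

Because each turn of dialogue visibly changes the ranking, the user can see how their stated preferences drive the recommendations, which is the source of the added trust and transparency.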

At the frontier of current research into conversational agents, we are working to develop agents that can adapt to individual users, both to satisfy different user needs and to behave in socially favorable ways. IBM researchers have been actively working on these topics and co-organized the first IUI Workshop on User-Aware Conversational Agents. The workshop brought together more than 30 researchers from the HCI, user modeling, and NLP communities, engaging in discussions to identify the central research topics in user-aware conversational agents.

Automated ML

The automation of ML and data science is an emerging topic in the IUI community. Automating data science was covered prominently not only in one of the IUI keynotes but also in a full paper exploring human-guided ML requirements and assessing existing systems in this emerging space. At IBM, we are developing a suite of AutoAI technologies to make it easier for data scientists to produce high-quality models by automating different steps of the data science pipeline: joining disparate data sets and cleaning data, engineering features, crafting neural network architectures, tuning hyperparameters, and evaluating models for fairness and robustness.
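
One of these pipeline steps, tuning a model knob automatically, reduces to searching a candidate grid and keeping the value with the lowest validation loss. The toy one-dimensional dataset and single-parameter model below are stand-ins; real AutoAI automates many more stages (cleaning, feature engineering, architecture search, fairness checks) than this sketch shows.

```python
# Toy 1-D dataset: y is roughly 2 * x with a little noise.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]

def loss(slope):
    """Mean squared error of the model y = slope * x on the data."""
    return sum((y - slope * x) ** 2 for x, y in data) / len(data)

# Candidate grid for the single knob we are tuning.
grid = [1.5, 1.8, 2.0, 2.2, 2.5]

# Automated search: evaluate every candidate, keep the best.
best = min(grid, key=loss)
print(best)  # 2.0
```

Production systems replace this exhaustive grid with smarter search strategies, but the loop of propose, evaluate, and select is the same.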

Marriage of HCI and AI

At IBM Research, we believe that AI systems will always contain a human element, what we call a human-in-the-loop, to ensure that these systems are fair and unbiased, robust and secure, and applied ethically in service to the needs of their users. HCI research is crucial for understanding how to design human-in-the-loop AI systems: its methods help us understand who we are building AI systems for and evaluate how well those systems are working.

Accepted papers at IUI

  • Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, and Casey Dugan. 2019. Explaining models: an empirical study of how explanations impact fairness judgment. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, USA, 275-285. DOI: https://doi.org/10.1145/3301275.3302310
  • Justin D. Weisz, Mohit Jain, Narendra Nath Joshi, James Johnson, and Ingrid Lange. 2019. BigBlueBot: teaching strategies for successful human-agent interactions. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, USA, 448-459. DOI: https://doi.org/10.1145/3301275.3302290
  • Christine T. Wolf. 2019. Explainability scenarios: towards scenario-based XAI design. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, USA, 252-257. DOI: https://doi.org/10.1145/3301275.3302317
  • Oznur Alkan, Elizabeth M. Daly, Adi Botea, Abel N. Valente, and Pablo Pedemonte. 2019. Where can my career take me?: harnessing dialogue for interactive career goal recommendations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, USA, 603-613. DOI: https://doi.org/10.1145/3301275.3302311

Workshops at IUI

