
IBM Research Progresses Field of Human-Computer Interaction (HCI)


The ACM CHI Conference on Human Factors in Computing Systems is the premier venue for publishing research in human-computer interaction (HCI). This year, CHI was cancelled due to COVID-19, although some workshops will be held virtually. We will add recorded video presentations from IBM authors to this post as they become available, and SIGCHI will also make recorded presentations for full papers available in the ACM Digital Library. IBM Research had a strong body of work accepted to CHI 2020, including 9 full papers, 6 late-breaking work papers, 3 courses, 1 demo, 6 co-organized workshops, a panel, and a Special Interest Group (SIG) meeting. Among them, one paper received the prestigious CHI Best Paper award and another received an Honorable Mention award. These awards represent the HCI community’s recognition of top-quality research work.

IBM Research’s contributions to CHI 2020 focus on creating and designing AI technologies that center on user needs and societal values, spanning the topics of novel human-AI partnerships, AI UX and design, trusted AI, and AI for accessibility. In this post, we highlight a selection of our work on these topics, and provide a full list of accepted IBM work at the end.

Novel Human-AI Partnerships

As AI technologies become ever more prevalent and capable, researchers at IBM are envisioning novel forms of human-AI partnerships and investigating the human needs around them.

In a paper that received a Best Paper award, IBM researchers created a human-AI cooperative word guessing game and studied how people form mental models of their AI game partner. Through two studies — a think-aloud study and a large-scale online study — researchers investigated how people come to understand how the AI agent operates.

In the game, the agent gives clues about a target word and participants must guess the target word based on those clues. The paper investigates what conceptual models of AI agents should include and describes three components vital for people to form an accurate mental model of an AI system: local behavior (what kinds of hints the AI agent is likely to give or respond best to), global behavior (how the AI agent tends to play the game overall), and knowledge distribution (whether the AI agent knows about specific people or attributes). The paper offers these categories as a guide for developing conceptual models of all kinds of AI systems. In a large-scale study, the researchers found that people with better estimates of the AI agent’s abilities are more likely to win the game, and people who lose more often tend to over-estimate the AI agent’s abilities.


Figure 1. Screenshot of the human-AI cooperative word guessing game
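To make the game mechanics concrete, here is a minimal sketch of the cooperative guessing loop in Python; the clue lists, agent policy, and win condition are hypothetical stand-ins, not the study's actual implementation.

```python
# Hypothetical clue lists; the real agent draws clues from a learned model.
CLUE_BOOK = {
    "piano": ["keys", "music", "grand"],
    "harbor": ["ships", "dock", "water"],
}

def agent_give_clue(target, turn):
    """The AI partner reveals one clue per turn about the secret target word."""
    clues = CLUE_BOOK[target]
    return clues[min(turn, len(clues) - 1)]

def play_round(target, human_guess_fn, max_turns=3):
    """Cooperative round: the team wins if the human guesses the target in time."""
    for turn in range(max_turns):
        clue = agent_give_clue(target, turn)
        guess = human_guess_fn(clue, turn)
        if guess == target:
            return True  # team wins
    return False

if __name__ == "__main__":
    # A scripted 'human' that guesses from its own mental model of the agent.
    mental_model = {"keys": "piano", "music": "piano"}
    result = play_round("piano", lambda clue, turn: mental_model.get(clue, "???"))
    print("Team won!" if result else "Team lost.")
```

In the study's terms, the scripted "human" here wins only when its mental model of the agent's clue-giving behavior matches how the agent actually behaves.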

IBM researchers, along with our academic partners in the IBM AI Horizons Network, have created a human-multi-agent negotiation platform in which people haggle with life-sized humanoid avatars in a simulated street market. These avatars can sense when they are being addressed by observing the buyer’s head orientation, and they are able to undercut one another’s offers. This work breaks new ground in several respects, including direct negotiation between humans and AI agents via natural language, the use of non-verbal human-agent communication via head orientation, and multi-lateral negotiation situated in an immersive environment. A late-breaking work paper describes this platform and associated research challenges, and early experiments indicate that human participants enjoyed the experience.

This negotiation platform provides a basis for a new international AI competition called HUMAINE 2020 (Human Multi-agent Immersive Negotiation), which is slated to be held in conjunction with IJCAI 2020. All are welcome to join the competition! Visit the website for details and to sign up, and watch a sneak preview of the immersive environment in which the competition will be held.


Figure 2. Prototype of human-multi-agent negotiation platform
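As a rough illustration of the platform's multi-lateral dynamic, the sketch below shows one way an avatar might undercut a rival's standing offer while engaging only a buyer who is facing it; the head-orientation check, margins, and thresholds are hypothetical, not the platform's actual logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    seller: str
    price: float

def is_addressed(buyer_yaw_deg: float, agent_bearing_deg: float,
                 tolerance_deg: float = 20.0) -> bool:
    """Hypothetical head-orientation check: the buyer is 'addressing' the agent
    if their head yaw points within a small tolerance of the agent's bearing."""
    return abs(buyer_yaw_deg - agent_bearing_deg) <= tolerance_deg

def respond(agent: str, best_rival: Offer, cost: float,
            margin: float = 0.05) -> Optional[Offer]:
    """Undercut the best rival offer by a small margin, but never sell below cost."""
    undercut_price = best_rival.price * (1 - margin)
    if undercut_price < cost:
        return None  # decline to undercut at a loss
    return Offer(seller=agent, price=round(undercut_price, 2))

# Example: the buyer faces agent 'Watson-A', who undercuts a rival's $10 offer.
if is_addressed(buyer_yaw_deg=3.0, agent_bearing_deg=0.0):
    print(respond("Watson-A", Offer("Rival-B", 10.00), cost=7.50))
```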

AI User Experience (AIUX) and Design 

HCI research often strives to produce design guidelines and practical approaches that help practitioners make technology more user-friendly. The era of AI requires innovation in design methods and processes that target the unique challenges inherent to user interactions with AI.

One of these challenges is explainability. Understanding AI is a universal need for people who develop, use, manage, regulate, or are affected by AI systems. However, AI technologies are complex and often difficult to comprehend, such as the high-performing yet opaque ML models built with deep neural networks. To tackle this problem, ML researchers have developed a plethora of techniques to generate human-consumable explanations for ML models. At IBM, our open-source toolkit AI Explainability 360 makes these techniques easily accessible to ML developers.
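For readers unfamiliar with these techniques, here is a minimal sketch of one model-agnostic explanation method, permutation feature importance, using scikit-learn on synthetic data; it is an illustrative stand-in and does not show the toolkit's own API.

```python
# A minimal sketch of a model-agnostic explanation: permutation feature importance.
# Synthetic data and scikit-learn stand in for a real model and the toolkit's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```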

In a paper that received an Honorable Mention award, IBM researchers studied the practices of twenty design professionals working on sixteen different AI products to understand the design space of explainable AI and the pain points in current design practices. This research produced actionable knowledge that bridges user needs with explainable AI (XAI) techniques, including the XAI Question Bank, a list of prototypical questions users ask when seeking explanations from AI systems, and design guidelines to address these user questions.
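One way a design tool could operationalize such a question bank is as a simple lookup from prototypical user questions to candidate XAI techniques; the questions and mappings below are illustrative guesses, not the paper's actual XAI Question Bank.

```python
# Hypothetical mapping from prototypical user questions to candidate XAI techniques.
# Categories and technique names are illustrative, not the paper's actual question bank.
XAI_QUESTION_BANK = {
    "Why did the model make this prediction?": ["local feature attribution", "example-based explanation"],
    "Why not a different prediction?": ["contrastive / counterfactual explanation"],
    "How would the prediction change if the input changed?": ["what-if analysis"],
    "How does the model behave overall?": ["global feature importance", "surrogate decision rules"],
    "How confident is the model?": ["calibrated confidence scores", "uncertainty estimates"],
}

def suggest_techniques(user_question):
    """Return candidate explanation techniques for a prototypical user question."""
    return XAI_QUESTION_BANK.get(user_question, ["no mapping found; consult the full question bank"])

print(suggest_techniques("Why not a different prediction?"))
```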

Researchers at IBM also actively practice innovative, user-centered design approaches in their work developing new AI systems. In a case study paper, IBM researchers applied storytelling methods to design and validate the UX of an AI assistant for the oil & gas industry. Inspired by their findings from extensive user research, the researchers created a sketch-based video detailing the user experience of the AI assistant and how it would reshape and empower knowledge workers’ everyday tasks. This work shows the benefits of this sketch-based approach compared to traditional methods for validating UX design, such as wireframes, mock-ups, and storyboards.

In a late-breaking-work paper, IBM researchers documented how humanitarian aid workers could work together with an AI system to gain insights from different data sources to support decision-making practices around the allocation of resources to aid forcibly displaced peoples. In this collaboration between IBM Research and the Danish Refugee Council (DRC), the team combined empirical data with a scenario-based design approach to derive current and future scenarios describing collaboration between humanitarian aid experts and AI systems.

Trusted AI

IBM Research is actively building and enabling AI solutions people can trust. HCI researchers play an important role in such efforts, as reflected in two late-breaking-work papers.

The number of AI models being used in high-stakes areas such as financial risk assessment, medical diagnosis, and hiring continues to grow, and with it the need for transparency and trust in such models. Although the specifics of these models differ across domains, all of them face the same challenge: how to collect and disseminate the most important information about a model, its facts, in a way that is accessible to developers, data scientists, and other business stakeholders who make decisions about that model’s use. To address this need for transparency in how machine learning models are created, IBM researchers have promoted the concept of a FactSheet, a collection of information about an AI model or service that is captured throughout the machine learning lifecycle. Through user research with data scientists and AI developers, IBM researchers developed a set of recommendations to guide the development of FactSheets, such as providing user guidance for authoring facts and for reporting those facts to the various stakeholders involved. These recommendations also give AI system builders a greater understanding of the underlying HCI challenges related to documenting AI systems.


Figure 3. Prototype of FactSheet
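As a sketch of the idea, the snippet below models a FactSheet as a simple record whose facts accumulate across lifecycle stages; the field names and stages are assumptions for illustration, not the published FactSheets schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class FactSheet:
    """Hypothetical FactSheet: facts about a model collected across the ML lifecycle.
    Field names are illustrative, not the published FactSheets schema."""
    model_name: str
    intended_use: str = ""
    training_data: str = ""
    evaluation_metrics: dict = field(default_factory=dict)
    lifecycle_events: list = field(default_factory=list)

    def record(self, stage, fact):
        """Append a fact captured at a given lifecycle stage (e.g. 'training')."""
        self.lifecycle_events.append({"stage": stage, "fact": fact})

fs = FactSheet(model_name="loan-risk-v2", intended_use="pre-screening, human in the loop")
fs.record("training", "trained on 2015-2019 application data")
fs.record("validation", "AUC 0.83 on held-out 2020 data")
print(json.dumps(asdict(fs), indent=2))
```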

Data is the foundation of derived knowledge and intelligent technologies. However, biases in data, such as gender and racial stereotypes, can be propagated through intelligent systems and amplified in end-user applications. IBM researchers developed a general methodology to quantify the level of bias in a dataset by measuring the difference between its data distribution and that of a reference dataset, using a metric called Maximum Mean Discrepancy (MMD). Evaluation results show that this methodology can help domain experts uncover different types of data biases in practice.
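As a concrete illustration, the NumPy sketch below estimates the squared MMD between a dataset and a reference dataset with an RBF kernel; the kernel choice, bandwidth, and synthetic data are assumptions, and the paper's actual procedure may differ.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of a and b.
    sq_dists = (a**2).sum(1)[:, None] + (b**2).sum(1)[None, :] - 2 * a @ b.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd_squared(x, y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between samples x and y.
    Larger values mean the two data distributions differ more."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 2))   # reference dataset
skewed = rng.normal(0.5, 1.0, size=(500, 2))      # dataset with a shifted distribution
print("MMD^2 vs reference:", mmd_squared(skewed, reference))
print("MMD^2 vs itself:   ", mmd_squared(reference, reference))
```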

AI for Accessibility

IBM Research is committed to creating inclusive and accessible technologies, and our efforts continue in the AI era. Accessibility researchers at IBM, together with our academic partners in the US and Japan, published two full papers and two late-breaking work papers at CHI 2020. The team has been exploring ways to help visually impaired people recognize their real-world surroundings and enhance their quality of life. Their work leverages rapidly maturing AI technologies that can understand the environment around a visually impaired person, such as friends’ facial expressions, products in a shop, and empty seats on a train. The team developed multiple systems that enable visually impaired people to be more independent on a trip, at an office, or at home: a face recognition system using wearable cameras; a smartphone-based personal object recognition system called ReCog, which can be trained by blind users; a robot navigation system that helps people find and navigate to dynamic targets in a room, such as an empty chair; and a smartphone-based navigation system that helps blind people stand in lines.


Figure 4. Interfaces of ReCog

In addition to our contributions to the CHI literature, IBM researcher Dr. Shari Trewin served as an Accessibility Co-Chair. She led an unprecedented effort to make the conference proceedings accessible, ensuring that every paper published at CHI is available in an accessible format for all members of the CHI community.

Accepted Papers

Accepted Late-breaking Work

Accepted Demo

  • Josh Andres. Neo-Noumena

Accepted Courses

  • Q. Vera Liao, Moninder Singh, Yunfeng Zhang, Rachel K.E. Bellamy. Introduction to Explainable AI. https://hcixaitutorial.github.io/
  • Yunfeng Zhang, Rachel K.E. Bellamy, Moninder Singh, Q. Vera Liao. Introduction to AI Fairness
  • Josh Andres. Inbodied Interaction 102: Understanding the Selection and Application of Non-invasive Neuro-physio Measurements for Inbodied Interaction Design

Accepted Workshops

Accepted Panel

  • Dakuo Wang. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems That Can Work Together with People

Accepted Special Interest Groups Meeting

  • Michael Muller. Queer in HCI: Supporting LGBTQIA+ Researchers and Research Across Domains. Virtual meeting 

Accepted Workshop Papers

  • Q. Vera Liao, Yunfeng Zhang. Measuring Social Biases of Crowd Workers using Counterfactual Queries. CHI Workshop on Human-Centered Approaches to Fair and Responsible AI. http://fair-ai.owlstown.com
  • Juliana Jansen Ferreira, Mateus de Souza Monteiro. Evidence-based explanation to promote fairness in AI systems. CHI Workshop on Human-Centered Approaches to Fair and Responsible AI. http://fair-ai.owlstown.com
  • Heloisa Candello. Unveiling Practices and Challenges of Machine Teachers of Customer Service Conversational Systems. CHI Workshop on Mapping Grand Challenges for the Conversational User Interface Community. http://www.speechinteraction.org/CHI2020/

 

IBM Research Staff Member

Zahra Ashktorab

IBM Research Staff Member

Justin Weisz

Manager & Strategy Lead, Human-Centered AI, IBM Research
