We have developed an AI-driven assistive smartphone app dubbed LineChaser, presented at CHI 2021, that guides a blind or visually impaired person to the end of a line. It also continuously reports the distance and direction to the last person in the line, so the user can follow along easily.
To tackle bias in AI, our IBM Research team, in collaboration with the University of Michigan, has developed practical procedures and tools to help machine learning models achieve Individual Fairness. The key idea of Individual Fairness is to treat similar individuals similarly, achieving fairness for everyone.
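The "treat similar individuals similarly" principle is often formalized as a Lipschitz condition: a model's outputs for two individuals should not differ by more than a constant times the distance between them under a chosen similarity metric. The following is a minimal sketch of such a check, not the team's actual tooling; the toy model, metric, and `is_individually_fair` helper are all illustrative assumptions.

```python
# Toy audit of the Lipschitz-style individual-fairness condition:
# a model f is individually fair if |f(x) - f(y)| <= L * d(x, y)
# for every pair of individuals x, y under a similarity metric d.
import numpy as np

def is_individually_fair(model, X, metric, L=1.0):
    """Check the Lipschitz condition over all pairs in X (hypothetical helper)."""
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(model(X[i]) - model(X[j]))
            if gap > L * metric(X[i], X[j]):
                return False
    return True

# Toy linear "risk score" model and a Euclidean similarity metric.
model = lambda x: float(np.dot(x, [0.5, 0.5]))
metric = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

X = [[0.2, 0.4], [0.25, 0.45], [0.8, 0.1]]
print(is_individually_fair(model, X, metric, L=1.0))  # True
```

Here the linear model's weight norm is below `L`, so similar inputs are guaranteed similar scores; a model that failed the check would be treating near-identical individuals very differently.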
It’s fast. It’s clever. It’s a robot – and it’s about to cross the Atlantic. Laden with 700 kg of scientific equipment, Mayflower is arguably the world’s most cutting-edge ship. Fully autonomous, solar-powered and without any crew, it’s getting ready to sail from Plymouth in the UK to Plymouth, Massachusetts, in the US. On its two-week, 3,000-mile journey, the sleek 15m-long trimaran will study the ocean, its inhabitants, and its water composition.
To better simulate how the human brain makes decisions, we’ve combined the strengths of symbolic AI and neural networks. Specifically, we combined the learned representations that neural networks create with symbol-like entities represented by high-dimensional, distributed vectors. The idea is to guide a neural network to represent unrelated objects with dissimilar high-dimensional vectors.
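A property that makes high-dimensional vectors attractive as symbol-like entities is that independently drawn random vectors are almost orthogonal, so each object can get its own code with little risk of accidental similarity. The sketch below illustrates that property in isolation; it is not the paper's architecture, and the bipolar coding scheme and dimension are assumptions for the demonstration.

```python
# In high dimensions, random vectors are nearly orthogonal, so assigning
# each unrelated object a random hypervector keeps their codes dissimilar.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # a typical hyperdimensional-computing scale (assumed)

def random_code():
    """A random bipolar (+1/-1) hypervector standing in for one object."""
    return rng.choice([-1.0, 1.0], size=DIM)

def cosine(a, b):
    """Cosine similarity between two hypervectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat, car = random_code(), random_code()
print(cosine(cat, cat))            # 1.0: an object matches its own code
print(abs(cosine(cat, car)) < 0.1) # True: unrelated codes are near-orthogonal
```

With 10,000 dimensions, the cosine similarity of two independent bipolar vectors concentrates around zero (standard deviation about 0.01), which is what lets dissimilar vectors reliably represent unrelated objects.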
We've developed speech synthesis technology that emulates the kind of expressiveness humans naturally deploy in face-to-face communication. In our recent paper, Supervised and Unsupervised Approaches for Controlling Narrow Lexical Focus in Sequence-to-Sequence Speech Synthesis, presented at the IEEE Spoken Language Technologies Workshop in Shenzhen, China, we describe a system that can emphasize or highlight certain words to improve the expressiveness of a sentence or help resolve contextual ambiguity.
In a new paper published in Frontiers in Digital Health journal, we present the first empirical evidence of tablet-based automatic assessments of patients using speech analysis — successfully detecting mild cognitive impairment (MCI), the transitional stage between normal aging and dementia.
Our study "Comparison of methods to reduce bias from clinical prediction models of postpartum depression” examines healthcare data and machine learning models routinely used in both research and application to address bias in healthcare AI.
At the 2021 virtual edition of the ACM International Conference on Intelligent User Interfaces (IUI), researchers at IBM will present five full papers, two workshop papers, and two demos.
Using novel deep learning architectures, we have developed an AI that could help organizations, enterprises, and data scientists easily extract data from vast collections of documents. Our technology allows users to quickly customize high-quality extraction models. It transforms the documents, making the text they contain available for downstream processes such as building a knowledge graph from the extracted content.
Unveiled at the two-year anniversary of the IBM Research AI Hardware Center, the AI Hardware Composer for analog AI hardware lets users explore and accelerate AI hardware technology to power more sustainable AI models. It’s one of many upcoming developments from the AI Hardware Center, launched in 2019 to drive innovation across materials, devices, architectures, and algorithms.