
Daily chats with AI could help spot early signs of Alzheimer’s


There’s no cure for Alzheimer’s disease. But the earlier it’s diagnosed, the better the chances of slowing its progression.

Our joint team of researchers from IBM and the University of Tsukuba has developed an AI model that could help detect the onset of mild cognitive impairment (MCI), the transitional stage between normal aging and dementia, by asking older people typical daily questions. In a new paper published in the journal Frontiers in Digital Health, we present the first empirical evidence that tablet-based automatic assessment of patients using speech analysis can successfully detect MCI.

Unlike previous studies, our AI-based model analyzes speech responses to daily-life questions collected with a smartphone or tablet app. Such questions can be as simple as asking someone about their mood, their plans for the day, their physical condition, or yesterday’s dinner. Earlier studies mostly focused on analyzing speech responses during cognitive tests, such as asking a patient to “count down from 925 by threes” or “describe this picture in as much detail as possible.”

We found that the detection accuracy of tests based on answers to simple daily-life questions was comparable to that of cognitive tests, detecting signs of MCI with an accuracy of nearly 90 percent. This means such an AI could be embedded in smart speakers or similar commercially available smart-home technology for health monitoring, helping detect early changes in cognitive health through daily use.

Our results are particularly promising because conducting cognitive tests is much more burdensome for participants. Such tests force them to follow complicated instructions and often impose a heavy cognitive load, which prevents the frequent assessments needed for timely, early detection of Alzheimer’s. Relying on more casual speech data, by contrast, could allow much more frequent assessments at lower operational and cognitive cost.

For our analysis, we first collected speech responses from 76 Japanese seniors, including people with MCI. We then analyzed multiple types of speech features, such as pitch and how often people paused while talking.
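
As a rough illustration of what such feature extraction can look like, here is a minimal sketch in Python using the open-source librosa library. The thresholds and feature names here are our own illustrative assumptions, not the exact pipeline from the paper.

```python
# Sketch: extract simple pitch and pause statistics from one spoken answer.
# Assumes a mono recording; thresholds (30 dB silence cutoff, 0.2 s minimum
# pause) are illustrative choices, not values from the paper.
import librosa
import numpy as np

def extract_speech_features(path):
    y, sr = librosa.load(path, sr=16000)

    # Pitch: fundamental frequency estimated with probabilistic YIN,
    # keeping only the voiced frames.
    f0, voiced_flag, _ = librosa.pyin(
        y, sr=sr,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
    )
    voiced_f0 = f0[~np.isnan(f0)]

    # Pauses: gaps between non-silent intervals of the signal.
    intervals = librosa.effects.split(y, top_db=30)
    gaps = [
        (start - prev_end) / sr
        for (_, prev_end), (start, _) in zip(intervals[:-1], intervals[1:])
    ]
    pauses = [g for g in gaps if g > 0.2]  # ignore very short gaps

    return {
        "pitch_mean_hz": float(np.mean(voiced_f0)),
        "pitch_std_hz": float(np.std(voiced_f0)),
        "pause_count": len(pauses),
        "pause_total_s": float(sum(pauses)),
        "segments_per_second": len(intervals) / (len(y) / sr),
    }
```

Statistics like these, computed per response, become one row of a participant’s feature vector.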

We knew that capturing subtle cognitive differences from casual, low-cognitive-load conversations would be tricky. For each speech feature, the differences between people with MCI and healthy people tend to be smaller than those found in responses to cognitive tests.
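
One common way to quantify how subtle those per-feature differences are is an effect size such as Cohen’s d between the two groups. This is a hypothetical sketch; the paper’s exact statistics may differ.

```python
# Sketch: per-feature effect size between MCI and healthy groups.
# `mci` and `healthy` are assumed to map feature names to arrays of
# per-participant values.
import numpy as np

def cohens_d(a, b):
    # Difference of group means, scaled by the pooled standard deviation.
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(
        ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
        / (na + nb - 2)
    )
    return (np.mean(a) - np.mean(b)) / pooled_sd

# effects = {name: cohens_d(mci[name], healthy[name]) for name in mci}
```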

We overcame this challenge by combining responses to multiple questions designed to capture changes in memory and executive function, in addition to language function, that are associated with MCI and dementia. For example, the AI-based app would ask: “What did you eat for dinner yesterday?” A senior with MCI could respond: “I had Japanese noodles with tempura: shrimp, radish, and mushroom tempura.”

There may seem to be no problem with this response. But the AI can capture differences in paralinguistic features such as pitch, pauses, and other acoustic characteristics of the voice. We discovered that, compared with cognitive tests, daily-life questions elicit weaker but statistically discernible differences in speech features associated with MCI. Our AI detected MCI with a high accuracy of 86.4 percent, statistically comparable to the model using responses to cognitive tests.
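
To make the combination step concrete, here is a minimal sketch of how features extracted from several daily-life questions could be pooled per participant and evaluated. The classifier choice and the leave-one-out evaluation are assumptions for illustration, not necessarily the paper’s exact setup.

```python
# Sketch: classify MCI vs. healthy from pooled per-question speech features.
# X has one row per participant, concatenating the feature vectors from each
# daily-life question; y is 1 for MCI, 0 for healthy.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate(X, y):
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # Leave-one-out cross-validation suits a small cohort (here, n = 76):
    # each fold holds out a single participant.
    scores = cross_val_score(model, X, y, cv=LeaveOneOut())
    return scores.mean()  # fraction of held-out participants classified correctly
```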

Our research follows another promising study by IBM researchers on the use of AI and speech analysis to predict the onset of Alzheimer’s.


Yamada, Y., Shinkawa, K., Kobayashi, M., et al. Tablet-Based Automatic Assessment for Early Detection of Alzheimer’s Disease Using Speech Responses to Daily Life Questions. Front. Digit. Health, 17 March 2021. https://doi.org/10.3389/fdgth.2021.653904

 


Yasunori Yamada

Research Staff Member & Technical Lead for Digital Health, IBM Research - Tokyo

Kaoru Shinkawa

Digital Health, Accessibility & Healthcare, IBM Research - Tokyo

Masatomo Kobayashi

Research Staff Member, Healthcare Research, IBM Research - Tokyo
