Chieko Asakawa has dedicated her career as a computer scientist to helping blind people like herself live life to the fullest. In the 1980s, she developed a widely used word processor for Braille and a digital library for Braille documents.
In the 1990s, she created a voice browser that enables blind people to access the Internet using text-to-speech translations. Now, working with scientists at Carnegie Mellon University, the IBM Fellow is developing technologies aimed at helping the blind “see” and interact more fully with the world around them.
“I want technology to substitute for vision to help the blind manage in the real world,” says Chieko. “Blind people want to be independent, and I want to help achieve that goal.”
She wants to be able to recognize friends who are walking toward her and greet them by name as they approach. She wants to know people’s emotional states when she speaks to them. And she wants to be able to walk onto a stage to give a presentation without needing to hold someone’s elbow for guidance.
In essence, she and her CMU colleagues are combining Internet of Things sensors with smartphones and cognitive technologies to create digital guide dogs for the blind.
Chieko at CMU
Their goal is not only to develop systems that harness the combined power of these technologies but to create a platform of open source software that developers can use to add a new generation of accessibility capabilities for a wide range of situations and environments: shopping malls, airports, hospitals, stadiums, offices, and more. The technologies could also serve fully sighted people, perhaps as a Take Me Home app for elderly people who get lost or a semi-autonomous wheelchair for people who have difficulty walking. The team will share its tools publicly to explore ideas further with researchers, developers and users.
Prof. Martial Hebert, director of The Robotics Institute at CMU, who is collaborating with Chieko, says the project is a natural extension of what he and his colleagues do every day. “One of the most important uses for robots will be operating alongside people, helping people and living with people–and that will include guiding blind people,” he says.
Chieko was born with normal sight but a swimming accident when she was 11 years old left her completely blind by the time she was 14. She remembers the frustration she felt at having to ask her brothers to read to her, and she has been determined all of her life to invent tools that would help her and others like her live fuller lives.
Two years ago, Chieko saw the potential for advances in mobile computing, Internet of Things sensor networks and computer vision to usher in a new generation of accessibility aids for the blind. Looking for collaboration partners, she visited CMU’s Robotics Institute and was impressed by its ideas and projects, especially those addressing computer vision. “They made me believe that my dream to see the real world again may come true,” she says. RI faculty members shared her enthusiasm, and they began collaborating late last year. Now she’s working on the CMU campus in Pittsburgh as a visiting faculty member.
The Robotics Institute is a living laboratory–with robots of different types and with different purposes roaming the hallways and inhabiting offices and classrooms. The initial focus for Chieko and her colleagues there is to make CMU’s campus, and, in particular, its sprawling computer science building, accessible to the blind.
To lay the groundwork, they installed a network of Bluetooth Low Energy (BLE) beacons on the campus. The beacons broadcast Bluetooth radio signals that provide mobile devices, including smartphones, with detailed digital maps of physical environments. By combining the beacon network with GPS technologies, the video cameras in smartphones, and computer image-recognition software, the scientists are developing applications that help blind people navigate indoors and out. The applications map routes to destinations, read signs, spot obstacles and help users avoid colliding with other people or being struck by cars or bicycles.
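The article doesn’t describe NavCog’s actual localization pipeline, but the core idea behind beacon-based positioning can be sketched in a few lines. The snippet below is a simplified illustration, not the project’s implementation: it converts each beacon’s received signal strength to a rough distance using a standard log-distance path-loss model (the `tx_power` and `path_loss_exp` values are illustrative assumptions), then averages the known beacon positions, weighting nearer beacons more heavily.

```python
def rssi_to_distance(rssi, tx_power=-59, path_loss_exp=2.0):
    """Convert a received signal strength (dBm) to an approximate distance
    in meters with a log-distance path-loss model. tx_power is the assumed
    RSSI measured at 1 m from the beacon (an illustrative value here)."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def estimate_position(beacons):
    """Weighted-centroid position estimate from BLE beacon readings.

    beacons: list of ((x, y), rssi) pairs, with beacon positions known
    from a site survey. Beacons that appear closer (stronger signal)
    receive proportionally higher weight."""
    weights = [1.0 / rssi_to_distance(rssi) ** 2 for _, rssi in beacons]
    total = sum(weights)
    x = sum(w * bx for ((bx, _), _), w in zip(beacons, weights)) / total
    y = sum(w * by for ((_, by), _), w in zip(beacons, weights)) / total
    return (x, y)
```

A production system would go much further, fusing beacon readings with dead reckoning and map constraints, filtering the noisy RSSI values over time, and calibrating the path-loss parameters for each building.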
One of the pilot apps, NavCog, tells blind people about the world around them by whispering into their ears through earbuds.
Here’s a demo video showing NavCog in action.
The open source toolkit the team is producing includes, for starters, a navigation app, a map editing tool, a tool for sampling beacon signals, and a localization component. There’s more to come. Chieko wants to connect blind people with rich sources of data about what’s going on in the world around them. For instance, she imagines them being able to go shopping in a mall and tap into information about what’s for sale in the stores they pass. “In this way, a blind person would be able to go window shopping,” she says.
When Chieko was a child, before her accident, she frequently watched a TV cartoon show that featured a boy and his pet robotic bird, which would sit on his shoulder wherever he went. His mother would communicate with him telepathically through the bird, giving him advice and warning him of impending danger as he battled the forces of evil.
Chieko has long dreamed of creating the real-world equivalent of the bird to help blind people. Now her lifelong dream is coming true. “My motto is, ‘making the impossible possible by never giving up,’” she says.
This story originally appeared on the THINK Blog on Oct. 15, 2015, but has been updated to include Chieko’s powerful and recent TED presentation.