Posted in: Accessibility, Cognitive Computing, IBM Research-Tokyo

Realizing a barrier-free society

Field experiment at a busy shopping district in Tokyo

Tokyo’s underground is a maze of pedestrian walkways, extensive shopping arcades and subway lines, stretching for hundreds of kilometers and connecting more than 200 subway stations. Even with a map and a good sense of direction, it is not an easy place to navigate. And for the visually impaired, the challenge is even greater.

So, this month, a unique voice navigation field experiment led by civil engineering and general contracting firm Shimizu Corporation, IBM Research and real estate developer Mitsui Fudosan is taking place in an underground pedestrian walkway and the COREDO Muromachi shopping park buildings 1, 2, and 3 in Nihonbashi-Muromachi, a popular downtown district with a history that dates back to the Edo period of the 1600s.

Shimizu and IBM researchers (including myself) developed a high-precision voice navigation system that takes advantage of the IBM-Carnegie Mellon University pilot smartphone app called NavCog, which was developed to help people overcome difficulties with exploring the world around them — whether because of visual impairment, wheelchair use or lack of familiarity with their location or the local language.

This experimental high-precision indoor/outdoor voice navigation system combines Shimizu’s spatial information database and indoor positioning infrastructure with IBM’s indoor localization technology and voice dialog. The team tested the system on the Carnegie Mellon University campus, and at a facility Shimizu built for this purpose within its Tokyo lab, called the “Shinsetsu ni Sasayaku Ba” (kindly whispering place). After extensive testing in both locations, we are now experimenting in the real world.

Navigating the underground, one building at a time

To conduct the real-world field experiment in Nihonbashi-Muromachi, Shimizu and IBM researchers devised a unique method for placing the beacons that help the system recognize a user’s location. First, Shimizu researchers drew on their architectural expertise to identify ideal spots in ceilings and narrow gaps for the 224 beacons, avoiding the need for renovations. This allowed them to install all 224 beacons in a single night. Next, IBM researchers used a remote-controlled wheelchair equipped with sensors to measure radio signals from the beacons without closing any passageways. This work was completed within a few days, about twenty times faster than conventional manual surveying in our test environment.

We then used machine learning algorithms to create a probabilistic model (a statistical tool that estimates, on the basis of event frequency, the probability of an event occurring again) that associates radio signal readings with likely pedestrian locations. Combined with the smartphone’s built-in sensors, such as the accelerometer, gyroscope and barometer, this enables accurate navigation. Using this method, the team installed the system in the buildings in a limited amount of time without disturbing store business.
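The idea of associating measured radio signals with likely locations is the basis of classic radio fingerprinting. The sketch below is only an illustration of that general approach, not the production model: the beacon IDs, coordinates and RSSI values are invented, and it uses a simple weighted nearest-neighbor estimate rather than the full probabilistic model described above.

```python
import math

# Hypothetical survey data: at each surveyed location (x, y in meters),
# the mean RSSI (dBm) observed from each beacon during the survey run.
fingerprints = [
    ((0.0, 0.0), {"b1": -50, "b2": -70, "b3": -90}),
    ((5.0, 0.0), {"b1": -70, "b2": -50, "b3": -80}),
    ((5.0, 5.0), {"b1": -85, "b2": -60, "b3": -55}),
]

MISSING = -100  # RSSI assumed when a beacon is not heard at all


def signal_distance(obs, ref):
    """Euclidean distance between two RSSI vectors over the union of beacons."""
    beacons = set(obs) | set(ref)
    return math.sqrt(sum((obs.get(b, MISSING) - ref.get(b, MISSING)) ** 2
                         for b in beacons))


def estimate_position(obs, k=2):
    """Weighted k-nearest-neighbor estimate of the user's (x, y) position."""
    ranked = sorted(fingerprints, key=lambda fp: signal_distance(obs, fp[1]))[:k]
    weights = [1.0 / (signal_distance(obs, fp[1]) + 1e-6) for fp in ranked]
    total = sum(weights)
    x = sum(w * fp[0][0] for w, fp in zip(weights, ranked)) / total
    y = sum(w * fp[0][1] for w, fp in zip(weights, ranked)) / total
    return (x, y)


# A reading taken near the first survey point maps close to it.
print(estimate_position({"b1": -52, "b2": -71, "b3": -88}))
```

In a deployed system, estimates like this would typically be fused with the smartphone’s accelerometer, gyroscope and barometer readings to smooth the trajectory; that step is omitted here.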

Map showing the field experiment supported area


The approximately 21,000-square-meter field experiment area consists of Mitsui Fudosan’s COREDO Muromachi shopping park buildings 1, 2, and 3, which are connected to the underground pedestrian walkway and house an extensive number of shops, restaurants, and a movie theater.

Beacon system


Three modes are available for the field experiment, as shown in the photo below.


“Take me to the movie theater,” said IBM Fellow Chieko Asakawa, who leads the project with Shimizu and Mitsui Fudosan, to her smartphone* as she arrived near the Nihonbashi Information Center on the first basement level of COREDO Muromachi during a test of the system. Her smartphone displayed a route map on screen and, at the same time, quickly processed the route information and began relaying location details to Chieko through her earbuds. Because she was using NavCog’s visual impairment mode, the app already knew that she is blind and needs detailed verbal navigation instructions.


IBM Fellow Chieko Asakawa interacts with a new voice navigation system developed by Shimizu Corporation and IBM Research to help people overcome difficulties with exploring the world around them — whether because of visual impairment, wheelchair use or lack of familiarity with their location or the local language. Here, the system is set to visually impaired user mode, and Chieko’s smartphone is speaking at the faster speed she prefers. According to research, people with visual impairment can comprehend speech at 25 syllables per second, compared to 10 for sighted people.

The system gives Chieko the shortest route while avoiding escalators, which can be confusing because the up and down escalators sit side by side. It also delivers detailed information by voice in near real time, such as “obstacle on your right” and “braille block will end soon,” or tells Chieko that she is about to reach a fork in the passageway and needs to go left.
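One way to picture how such timely announcements could be produced is a simple trigger that fires when the user’s progress along the route comes within range of a cue point. This is a simplified illustration only, not the actual NavCog logic; the cue texts and distances are made up.

```python
# Hypothetical cues along a route, keyed by the distance (meters from
# the route start) at which each announcement should be spoken.
route_cues = [
    (3.0, "obstacle on your right"),
    (8.0, "braille block will end soon"),
    (12.0, "fork ahead, keep left"),
]


def due_announcements(progress_m, spoken, trigger_radius=1.0):
    """Return cues whose trigger point lies within trigger_radius of the
    user's current progress and that have not been spoken yet."""
    due = []
    for at_m, text in route_cues:
        if text not in spoken and abs(progress_m - at_m) <= trigger_radius:
            due.append(text)
            spoken.add(text)  # never repeat an announcement
    return due


spoken = set()
print(due_announcements(2.5, spoken))  # approaching the first cue point
print(due_announcements(8.4, spoken))  # approaching the second cue point
```

Tracking what has already been spoken matters here: position estimates jitter, so without it the same warning could repeat every time the estimate crosses back into a trigger zone.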

To build a detailed and useful map, collaboration was key. Mitsui Fudosan shared COREDO Muromachi’s floor plans and other data so we could create a map with the details that matter to people like Chieko: where an automatic glass door is, whether the user is approaching an escalator or an elevator, and whether an obstacle lies ahead, such as a free-standing store signboard or a long line of people in front of the shop the user wants to visit. Through this experience, I have come to feel strongly that a mechanism for collaboratively sharing data is needed to better serve diverse needs and make our real world more accessible to everyone.

With that desire to foster accessibility, the team also created features for wheelchair users and for visitors from other countries, adding English as a navigation language option.

Japanese mode screen image


As the field experiment continues, the three companies will collect survey results to analyze localization precision, voice guidance timing, and whether the guidance is easy to understand and appealing. Providing appropriate navigation guidance at just the right moment is key. The researchers want to ensure that everyone can walk around town with ease.

“With cognitive assistant technologies, which help restore access to information by augmenting missing or weakened abilities, I hope those who are blind, like myself, can benefit and enjoy exploring different parts of Tokyo,” said Chieko, who was recently elected a foreign member of the National Academy of Engineering. “Yet technology alone cannot create a barrier-free city. All the constituents need to collaborate to overcome mobility barriers. I look forward to collaborating with as many people as possible, and we plan to share our findings on an open platform.”

The underground pedestrian walkway and the COREDO Muromachi shopping park buildings 1-3 are part of the Nihonbashi area, one of 38 regional plans for the capital established by Japan’s Ministry of Land, Infrastructure, Transport and Tourism (MLIT). Thanks to MLIT, Chuo City of Tokyo, Nihonbashi Muromachi Area Management, and the Japan Braille Library for their cooperation in this ongoing field experiment to help realize a barrier-free society.

*iPhone6 or later (iOS10 or later, except for iPhone SE)



Hiro Takagi

Senior Technical Staff Member, Cognitive Computing, IBM Research - Tokyo