
Puzzle Solving with Computer Vision and Watson Services

Who doesn’t love the challenge of solving a puzzle? Jigsaw puzzles are a popular hobby for all ages and an important tool in a child’s physical, cognitive, and emotional development. By completing puzzles, children develop physical skills such as hand-eye coordination; cognitive skills such as shape recognition, memory, and problem solving; and emotional skills such as goal setting and patience. But what if you are a visually impaired or blind parent? How can you help your child accomplish this important developmental task?

Puzzle Solving Toolkit

At IBM Research – Ireland, our team built a 3D computer vision-driven task completion prototype called the Puzzle Solving Toolkit, which interacts with visually impaired users to guide them through solving a jigsaw puzzle in a natural and intuitive way. We achieved this by combining computer vision algorithms with Watson services.

Our toolkit recognizes the puzzle pieces from camera input and computes the best strategy for solving the puzzle. The system interacts with the user through the Watson Speech to Text and Text to Speech services, communicating which piece to pick up next and adapting to the user’s feedback.
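
The prototype’s integration code isn’t published here, but purely as an illustrative sketch, the snippet below shows how a similar spoken instruction-and-feedback loop could be wired up with the ibm-watson Python SDK. The API key, service URLs, file names, and the speak_instruction/hear_reply helpers are assumptions for illustration, not part of the toolkit itself.

```python
# Illustrative sketch only: a minimal voice interaction loop built on the
# ibm-watson Python SDK. Credentials, URLs, and audio handling are placeholders.
from ibm_watson import SpeechToTextV1, TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential

stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url("https://api.eu-gb.speech-to-text.watson.cloud.ibm.com")

tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url("https://api.eu-gb.text-to-speech.watson.cloud.ibm.com")


def speak_instruction(text: str, out_path: str = "instruction.wav") -> str:
    """Synthesize a spoken instruction (e.g. which piece to pick up next)."""
    audio = tts.synthesize(text, voice="en-US_AllisonV3Voice",
                           accept="audio/wav").get_result().content
    with open(out_path, "wb") as f:
        f.write(audio)
    return out_path  # play this file through the headset speaker


def hear_reply(wav_path: str) -> str:
    """Transcribe the user's spoken feedback from a recorded WAV clip."""
    with open(wav_path, "rb") as f:
        result = stt.recognize(audio=f, content_type="audio/wav").get_result()
    transcripts = [r["alternatives"][0]["transcript"]
                   for r in result.get("results", [])]
    return " ".join(transcripts).strip()


if __name__ == "__main__":
    speak_instruction("Please pick up the piece closest to your right hand.")
    print(hear_reply("user_reply.wav"))  # hypothetical recorded reply
```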

Watch our video, where we explain how our Puzzle Solving Toolkit prototype works. In the video, we also share additional research projects our team is developing on Indoor Positioning Systems and Visual Recognition Systems.

Components of the toolkit

We use a head-mounted Intel 3D camera for video input, along with a microphone and speaker, to achieve natural human-computer interaction. The video input is fed into a deep learning pipeline that identifies the puzzle pieces in a given scene and matches them to the original image of the puzzle. Once the pipeline completes, the core puzzle-solving algorithm computes the solution to the puzzle and coordinates the instructions that must be communicated to the user to complete it successfully.
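
The post does not detail the pipeline itself, so as a rough illustration of the general idea, the sketch below uses a classical OpenCV approach (contour detection plus ORB feature matching) rather than learned models to find piece candidates in a frame and estimate where each one belongs in a reference image of the completed puzzle. All thresholds, file names, and helper functions are assumptions, not the prototype’s actual implementation.

```python
# Illustrative sketch only (OpenCV 4.x): classical piece detection and matching.
# The prototype uses a deep learning pipeline; this stands in for the same steps.
import cv2
import numpy as np


def detect_piece_candidates(frame_bgr, min_area=2000):
    """Return bounding boxes of likely puzzle pieces in the camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Separate pieces from an assumed plain table surface.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]


def match_piece_to_reference(piece_bgr, reference_bgr, min_good_matches=10):
    """Estimate where a cropped piece belongs in the reference image via ORB features."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_p, des_p = orb.detectAndCompute(cv2.cvtColor(piece_bgr, cv2.COLOR_BGR2GRAY), None)
    kp_r, des_r = orb.detectAndCompute(cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY), None)
    if des_p is None or des_r is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_p, des_r), key=lambda m: m.distance)
    if len(matches) < min_good_matches:
        return None  # piece could not be located confidently

    # Average location of the best-matched keypoints in the reference image
    # gives a rough target position for this piece.
    pts = np.float32([kp_r[m.trainIdx].pt for m in matches[:min_good_matches]])
    return tuple(pts.mean(axis=0))


if __name__ == "__main__":
    frame = cv2.imread("camera_frame.jpg")          # placeholder input frame
    reference = cv2.imread("puzzle_box_image.jpg")  # image of the completed puzzle
    for (x, y, w, h) in detect_piece_candidates(frame):
        target = match_piece_to_reference(frame[y:y + h, x:x + w], reference)
        print(f"Piece at ({x}, {y}) maps to reference location {target}")
```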

The core puzzle-solving algorithm and the deep learning pipeline run on an NVIDIA TK1 board, leveraging the computational speed of its GPU. This demo shows what we can offer in assisting visually impaired people with simple everyday tasks by bringing together sophisticated AI algorithms and advanced hardware.

Real-world applications

With our toolkit, we envision helping visually impaired parents, for example, guide their children through collaborative learning tasks. Our research can bring real-time, context-specific cognitive analytics to the edge, in tandem with deep device specialization and sensory augmentation, to transform the human experience.

Performing an everyday task such as cooking, navigating an indoor or outdoor environment, or attending a meeting at work is quite simple for most people. However, approximately 285 million visually impaired people, 39 million of whom are blind, face enormous challenges in accomplishing these daily tasks [1]. With the rapid convergence of hardware equipped with powerful sensors and 3D computer vision algorithms, we can now repurpose IoT devices and create AI-powered assistive technologies that interact naturally, using cognitive Watson services, to give blind and visually impaired people more independence. The spectrum of possible applications is vast, and we aim to enable people to accomplish ever more complex and challenging everyday tasks.

The goal of our research is to help people reach their full potential, accomplish more, and enjoy a better quality of life. We will continue to enhance our prototypes and combine them with the cognitive capabilities of Watson to bring visually impaired people closer to complete autonomy.

[1] World Health Organization, Fact Sheet No. 282, August 2014.

 
