
TJBot’s World Tour: Getting Built and Making Friends

TJBot — a DIY kit to build a programmable AI cardboard robot powered by Watson — made his debut at the Watson Developer Conference less than two months ago, but already he's been laser cut and 3D printed at locations spanning South Africa, Kenya, Italy, Germany, Switzerland, Pakistan, Canada and Hong Kong. Groups have reached out to collaborate on new use cases for TJBot, from creating educational curricula for cognitive computing and robotics to prototyping enterprise solutions for eldercare and conversational agents.

Instructions — known as recipes — on how to build and program TJBot have also been well received within the Instructables online maker community, generating over 21,000 views and earning a featured spot on the community website. TJBot has been adopted by the entire spectrum of makers, from beginners to experts, all creating cognitive objects that learn, reason, and interact in a natural way.

Simplified design — making for makers


TJBot 1.0 prototype: Raspberry Pi (with microphone and Bluetooth dongle attached) and a circular RGB LED.

Our original goal for TJBot was for it to act as an entry point for users to experience and experiment with ‘embodied cognition’ – the idea of embedding AI technology into devices, objects and spaces they already interact with. If the embodied cognition process was sufficiently simplified, what would users create? What design patterns would emerge? In many ways, TJBot helps answer these questions by effectively democratizing the embodied cognition innovation process.

To that end, a guiding principle in creating TJBot has been simplicity. This is reflected in our choice of hardware components as well as the programming platform. Beginning with a basic prototype kit (see figure above), we experimented with various types of LEDs, microphones, speakers and servo motors, selecting models that are compact and feature-rich but relatively easy to use. In a similar vein, the software that controls these sensors is written in Node.js, an open-source, cross-platform runtime environment for developing applications in JavaScript.

Capabilities powered by sensors and Watson services


TJBot – A network of capabilities powered by sensors and Watson services.

As a prototype, TJBot has a growing network of capabilities, such as speaking, listening, waving and dancing. Each of these capabilities is enabled by TJBot's embedded sensors combined with a set of cognitive services. For example, speaking uses the Watson Text to Speech service, which converts text to audio that is played through TJBot's speakers. Similarly, listening uses the Watson Speech to Text service to convert audio recorded from the microphones into text, which is then analyzed. These capabilities can be combined for other use cases, such as creating a virtual agent or digital assistant.
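The listen-and-respond flow described above can be sketched in a few lines of Node.js. The function names below (`transcribe`, `converse`, `speak`) are hypothetical stand-ins for calls to the Watson Speech to Text, a dialog service, and Text to Speech, not the official TJBot library API; they are passed in so the same loop works with real services or with stubs.

```javascript
// Minimal sketch of TJBot's listen -> think -> speak loop.
// `transcribe`, `converse`, and `speak` are hypothetical stand-ins
// for the Watson service calls; inject real implementations (or
// stubs, for testing) when invoking the function.
async function handleUtterance(audio, { transcribe, converse, speak }) {
  const text = await transcribe(audio); // Speech to Text: audio -> text
  const reply = await converse(text);   // analyze text, decide a response
  await speak(reply);                   // Text to Speech: play through speakers
  return { heard: text, said: reply };
}
```

Keeping the service calls injectable like this is also how the recipes stay composable: the same loop can back a voice-command demo or a full conversational agent, depending on what `converse` does.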

Additional Recipes

Currently, the TJBot GitHub repository contains three basic recipes: code that enables TJBot to respond to simple voice commands, analyze and react to emotion within tweets, and function as a conversational agent. Two additional recipes have been contributed by members of our maker community: TJWave and SwiftyTJ. TJWave is a fun recipe that shows how to control TJBot's robot arm. It also contains functions that allow TJBot to "dance" to music: the robot plays a sound file, extracts beats/peaks from it, and waves its arm to the beat. Controlling the robot arm can also be leveraged to animate voice interactions and mirror the hand gestures humans make as they speak. The SwiftyTJ recipe shows how to control the LED on TJBot using the Swift programming language. As the catalogue of TJBot recipes grows, SwiftyTJ provides a starting point for Swift developers to start coding their TJBots.
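The beats/peaks step in TJWave can be illustrated with a simple peak detector over an amplitude envelope. This is a sketch of the idea only: the sample data, threshold, minimum gap, and frame rate below are illustrative assumptions, while the actual recipe analyzes a real sound file.

```javascript
// Sketch of the TJWave idea: find beats in an amplitude envelope,
// then turn each beat into a time at which to wave the arm.
// Thresholds and frame rate are illustrative assumptions.
function findBeats(samples, { threshold = 0.6, minGap = 3 } = {}) {
  const beats = [];
  let last = -Infinity;
  for (let i = 1; i < samples.length - 1; i++) {
    // A beat is a local maximum that rises above the loudness threshold.
    const isPeak =
      samples[i] > threshold &&
      samples[i] >= samples[i - 1] &&
      samples[i] >= samples[i + 1];
    // Enforce a minimum spacing so the arm has time to finish a wave.
    if (isPeak && i - last >= minGap) {
      beats.push(i);
      last = i;
    }
  }
  return beats;
}

// Convert beat indices (frames) into wave timestamps in milliseconds,
// assuming a hypothetical analysis frame rate of 50 frames per second.
function beatsToWaveTimes(beats, frameRateHz = 50) {
  return beats.map((frame) => (frame / frameRateHz) * 1000);
}
```

The minimum-gap parameter matters in practice: a hobby servo takes a noticeable fraction of a second to sweep, so scheduling waves on every raw peak would leave the arm perpetually behind the music.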

What’s next

For 2017, we are focusing on three specific areas to advance TJBot: creation, curation and learning.

Creation: We'll be improving existing recipes as well as exploring new capabilities for our little cardboard robot. One example is ongoing work to implement vision recognition using TJBot's camera sensor, perhaps with applications for accessibility.

Curation: We are growing and curating the community of TJBot makers, introducing TJBot to new audiences and sharing new recipes, tweaks and feedback from users.

Learning: Perhaps the most important aspect of what's next is learning. This involves a research effort studying both the maker experience and end-user experiences, with a view toward contributing design patterns and guidelines for cognitive application design.

If you have an idea or have made a recipe, send us an email. We look forward to seeing, and hearing, what you create with TJBot!
