Cognitive Computing

TJBot’s World Tour: Getting Built and Making Friends

TJBot, a DIY kit for building a programmable AI cardboard robot powered by Watson, made his debut at the Watson Developer Conference less than two months ago, but he has already been laser cut and 3D printed at locations spanning South Africa, Kenya, Italy, Germany, Switzerland, Pakistan, Canada and Hong Kong. We've heard from groups interested in collaborating on new use cases for TJBot, from creating educational curricula for cognitive computing and robotics to prototyping enterprise solutions for eldercare and conversational agents.

Instructions, known as recipes, on how to build and program TJBot have also been well received in the Instructables online maker community, generating over 21,000 views and being featured on the community website. TJBot has been adopted across the entire spectrum of makers, from beginners to experts, all creating cognitive objects that learn, reason, and interact in a natural way.

Simplified design — making for makers


TJBot 1.0 prototype: Raspberry Pi (with microphone and Bluetooth dongle attached) and a circular RGB LED.

Our original goal for TJBot was for it to act as an entry point for users to experience and experiment with ‘embodied cognition’ – the idea of embedding AI technology into devices, objects and spaces they already interact with. If the embodied cognition process was sufficiently simplified, what would users create? What design patterns would emerge? In many ways, TJBot helps answer these questions by effectively democratizing the embodied cognition innovation process.

To that end, a guiding principle in creating TJBot has been simplicity. This is reflected in our choice of hardware components as well as our choice of programming platform. Beginning with a basic prototype kit (pictured above), we experimented with various types of LEDs, microphones, speakers and servo motors, selecting models that are compact and feature rich but relatively easy to use. In a similar vein, the software that controls these sensors is written in Node.js, an open-source, cross-platform runtime environment for developing applications in JavaScript.

Capabilities powered by sensors and Watson services


TJBot – A network of capabilities powered by sensors and Watson services.

As a prototype, TJBot has a growing network of capabilities, such as speaking, listening, waving and dancing. Each of these capabilities is enabled by TJBot's embedded sensors combined with a set of cognitive services. For example, speaking is enabled by the Watson Text to Speech service, which converts text to audio that is played through TJBot's speakers. Similarly, listening is enabled by the Watson Speech to Text service, which converts audio recorded from the microphones to text, which is then analyzed. These capabilities can be combined for other use cases, such as creating a virtual agent or digital assistant.
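To make the listen → analyze → act loop concrete, here is a minimal, hypothetical sketch of how such a capability dispatcher could be wired in Node.js. This is not the actual recipe code: the Watson Speech to Text and Text to Speech calls are replaced by stubs so the control flow runs anywhere, and all function names (`interpretCommand`, `handleUtterance`) are illustrative.

```javascript
// Hypothetical sketch of TJBot's listen -> analyze -> act loop.
// Real recipes call the Watson Speech to Text / Text to Speech services;
// here the hardware and services are stubbed so the flow runs anywhere.

// Map a transcribed voice command to one of TJBot's capabilities.
function interpretCommand(transcript) {
  const text = transcript.toLowerCase();
  const colorMatch = text.match(/turn the light (\w+)/);
  if (colorMatch) return { capability: 'shine', arg: colorMatch[1] };
  if (text.includes('wave')) return { capability: 'wave' };
  if (text.startsWith('say ')) return { capability: 'speak', arg: text.slice(4) };
  return { capability: 'unknown' };
}

// Stub "robot" standing in for the real hardware and Watson services.
const tj = {
  shine: (color) => `LED is now ${color}`,
  wave: () => 'arm waved',
  speak: (phrase) => `speaking: "${phrase}"`,
};

// Route one utterance to the matching capability.
function handleUtterance(transcript) {
  const cmd = interpretCommand(transcript);
  switch (cmd.capability) {
    case 'shine': return tj.shine(cmd.arg);
    case 'wave': return tj.wave();
    case 'speak': return tj.speak(cmd.arg);
    default: return 'command not understood';
  }
}

console.log(handleUtterance('Turn the light red')); // LED is now red
console.log(handleUtterance('Please wave'));        // arm waved
```

In the real recipes, the transcript would come from the Watson Speech to Text service and the `speak` stub would be replaced by a Text to Speech call played through the speaker; only the dispatching pattern is shown here.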

Additional Recipes

Currently, the TJBot GitHub repository contains three basic recipes: code that enables TJBot to respond to simple voice commands, analyze and react to emotion in tweets, and function as a conversational agent. Two additional recipes have been contributed by members of our maker community: TJWave and SwiftyTJ. TJWave is a fun recipe that shows how to control TJBot's robot arm. It also contains functions that allow TJBot to "dance" to music: the robot plays a sound file, extracts beats/peaks from it, and waves its arm to the beat. Controlling the arm can also be leveraged to animate voice interactions and to mirror the hand gestures humans make as they speak. The SwiftyTJ recipe shows how to control TJBot's LED using the Swift programming language. As the catalogue of TJBot recipes grows, SwiftyTJ provides a starting point for Swift developers to start coding their TJBots.
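The core idea behind TJWave's dancing, finding beats in a sound file and timing arm waves to them, can be sketched in a few lines. This is not the actual TJWave code, just an illustrative threshold-based peak picker over an amplitude envelope; the function names and the synthetic envelope data are made up for the example.

```javascript
// Illustrative sketch of the TJWave idea: find "beats" (local peaks above a
// threshold) in an audio amplitude envelope, then convert them into the times
// at which the servo arm should wave.

// Return the indices of local maxima that exceed `threshold`.
function findBeats(envelope, threshold) {
  const beats = [];
  for (let i = 1; i < envelope.length - 1; i++) {
    if (envelope[i] > threshold &&
        envelope[i] > envelope[i - 1] &&
        envelope[i] >= envelope[i + 1]) {
      beats.push(i);
    }
  }
  return beats;
}

// Convert beat indices to wave times in seconds, given the envelope's
// sample rate in Hz.
function beatTimes(envelope, threshold, rate) {
  return findBeats(envelope, threshold).map((i) => i / rate);
}

// Synthetic amplitude envelope with peaks at indices 2 and 6.
const envelope = [0.1, 0.2, 0.9, 0.3, 0.1, 0.4, 0.8, 0.2];
console.log(findBeats(envelope, 0.5));     // [2, 6]
console.log(beatTimes(envelope, 0.5, 10)); // [0.2, 0.6]
```

In a real recipe, the envelope would be computed from the decoded sound file, and each beat time would trigger a servo pulse to move the arm.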

What’s next

For 2017, we are focusing on three specific areas to advance TJBot: creation, curation and learning.

Creation: We'll improve existing recipes and explore new capabilities for our little cardboard robot. One example is current work to implement vision recognition using TJBot's camera sensor, perhaps with applications for accessibility.

Curation: We are growing and curating the community of TJBot makers, introducing TJBot to new audiences and sharing new recipes, tweaks and feedback from users.

Learning: Perhaps the most important aspect of what's next is learning. This involves a research effort that studies both the maker experience and end-user experiences, with a view toward contributing design patterns and guidelines for cognitive application design.

If you have an idea or have made a recipe, send us an email. We look forward to seeing, and hearing, what you create with TJBot!
