TJBot’s World Tour: Getting Built and Making Friends

TJBot — a DIY kit for building a programmable AI cardboard robot powered by Watson — made his debut at the Watson Developer Conference less than two months ago, but he has already been laser cut and 3D printed at locations spanning South Africa, Kenya, Italy, Germany, Switzerland, Pakistan, Canada and Hong Kong. Groups have approached us to collaborate on new use cases for TJBot, from creating educational curricula for cognitive computing and robotics to prototyping enterprise solutions for eldercare and conversational agents.

Instructions — known as recipes — on how to build and program TJBot have also been well received within the Instructables online maker community, generating over 21,000 views and earning a featured spot on the community website. TJBot has been adopted across the entire spectrum of makers, from beginners to experts – all creating cognitive objects that learn, reason, and interact in a natural way.

Simplified design — making for makers


TJBot 1.0 prototype: Raspberry Pi (with microphone and Bluetooth dongle attached) and a circular RGB LED.

Our original goal for TJBot was for it to act as an entry point for users to experience and experiment with ‘embodied cognition’ – the idea of embedding AI technology into devices, objects and spaces they already interact with. If the embodied cognition process were sufficiently simplified, what would users create? What design patterns would emerge? In many ways, TJBot helps answer these questions by democratizing the embodied cognition innovation process.

To that end, a guiding principle in creating TJBot has been simplicity. This is reflected in our choice of hardware components as well as the programming platform. Beginning with a basic prototype kit (pictured above), we have experimented with various types of LEDs, microphones, speakers and servo motors, selecting models that are compact and feature-rich yet relatively easy to use. In a similar vein, the software that controls these sensors is written in Node.js, an open-source, cross-platform runtime environment for developing applications in JavaScript.

Capabilities powered by sensors and Watson services


TJBot – A network of capabilities powered by sensors and Watson services.

As a prototype, TJBot has a growing network of capabilities, such as speaking, listening, waving and dancing. Each of these capabilities is enabled by TJBot’s embedded sensors combined with a set of cognitive services. For example, speaking is enabled by the Watson Text to Speech service, which converts text to audio that is played through TJBot’s speaker. Similarly, listening is enabled by the Watson Speech to Text service, which converts audio recorded by the microphone into text that is then analyzed. These capabilities can be combined for other use cases, such as creating a virtual agent or digital assistant.
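To give a feel for how such capabilities might be composed, here is a minimal Node.js sketch of a voice-command dispatcher. This is an illustration, not code from the TJBot recipes: on real hardware the transcript would come from the Watson Speech to Text service and each handler would drive the servo, LED or speaker, whereas here the handler names and their string outputs are hypothetical stand-ins.

```javascript
// Hypothetical capability handlers: on a real TJBot these would move the
// arm, light the LED, or synthesize speech; here they just describe the action.
const handlers = {
  wave: () => 'waving arm',
  shine: (color = 'white') => `shining ${color}`,
  speak: (...words) => `saying "${words.join(' ')}"`,
};

// Route a recognized transcript (e.g. from Speech to Text) to a capability.
function dispatch(transcript) {
  const [command, ...args] = transcript.trim().toLowerCase().split(/\s+/);
  const handler = handlers[command];
  return handler ? handler(...args) : `unknown command: ${command}`;
}
```

For example, `dispatch('shine red')` returns `'shining red'`, while an unrecognized transcript falls through to the `unknown command` branch.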

Additional Recipes

Currently, the TJBot GitHub repository contains three basic recipes: code that enables TJBot to respond to simple voice commands, analyze and react to emotion within tweets, and function as a conversational agent. Two additional recipes have been added by members of our maker community – TJWave and SwiftyTJ. TJWave is a fun recipe that shows how to control the robot arm on TJBot; it also contains additional functions that allow TJBot to “dance” to music. Essentially, the robot plays a sound file, extracts beats/peaks from it, and waves its arm to the beat. Controlling the robot arm can also be leveraged to animate voice interactions and mirror the hand gestures humans make as they speak. The SwiftyTJ recipe shows how to control the LED on TJBot using the Swift programming language. As the catalogue of TJBot recipes grows, SwiftyTJ provides a starting point for Swift developers to begin coding their TJBots.
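The beat-extraction step behind TJWave’s dancing can be sketched as a simple peak picker over audio amplitude samples. This is a simplified stand-in for whatever analysis the recipe actually performs; the function name, threshold and minimum-gap parameters are all illustrative.

```javascript
// Sketch of TJWave-style beat extraction: given a series of audio amplitude
// samples, return the indices of peaks above a threshold, enforcing a minimum
// gap between beats so the arm has time to swing back between waves.
function extractBeats(samples, threshold, minGap) {
  const beats = [];
  let lastBeat = -Infinity;
  for (let i = 1; i < samples.length - 1; i++) {
    const isPeak =
      samples[i] > threshold &&
      samples[i] >= samples[i - 1] &&
      samples[i] >= samples[i + 1];
    if (isPeak && i - lastBeat >= minGap) {
      beats.push(i);
      lastBeat = i; // remember where we last waved
    }
  }
  return beats;
}
```

For example, `extractBeats([0, 0.9, 0, 0, 0.8, 0, 0.95, 0], 0.5, 3)` returns `[1, 4]`: the peak at index 6 is skipped because it arrives within the minimum gap of the previous beat. On the robot, each returned index would trigger one wave of the arm, timed against playback of the sound file.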

What’s next

For 2017, we are focusing on three specific areas to advance TJBot: creation, curation and learning.

Creation: We’ll be improving existing recipes as well as exploring new capabilities for our little cardboard robot. One example is current work to implement vision recognition capabilities using the camera sensor on TJBot – perhaps with applications for accessibility.

Curation: We are growing and curating the community of TJBot makers, introducing TJBot to new audiences and sharing new recipes, tweaks and feedback from users.

Learning: Perhaps the most important aspect of what’s next relates to learning. This involves a research effort that studies maker and end-user experiences, with a view toward contributing design patterns and guidelines for cognitive application design.

If you have an idea or have made a recipe, send us an email. We look forward to seeing, and hearing, what you create with TJBot!
