TJBot — a DIY kit to build a programmable AI cardboard robot powered by Watson — made his debut at the Watson Developer Conference less than two months ago, but already he’s been laser cut and 3D printed at locations spanning South Africa, Kenya, Italy, Germany, Switzerland, Pakistan, Canada and Hong Kong. We’ve had interest from groups wanting to collaborate on new use cases for TJBot, from creating educational curricula for cognitive computing and robotics to prototyping enterprise solutions for eldercare and conversational agents.
Instructions — known as recipes — on how to build and program TJBot have also been well received within the Instructables online maker community, generating over 21,000 views and earning a featured spot on the community website. TJBot has been adopted by the entire spectrum of makers, from beginners to experts – all creating cognitive objects that learn, reason, and interact in a natural way.
Simplified design — making for makers
Our original goal for TJBot was for it to act as an entry point for users to experience and experiment with ‘embodied cognition’ – the idea of embedding AI technology into devices, objects and spaces they already interact with. If the embodied cognition process was sufficiently simplified, what would users create? What design patterns would emerge? In many ways, TJBot helps answer these questions by effectively democratizing the embodied cognition innovation process.
Capabilities powered by sensors and Watson services
As a prototype, TJBot has a growing set of capabilities, such as speaking, listening, waving and dancing. Each of these capabilities is enabled by TJBot’s embedded sensors combined with a set of cognitive services. For example, speaking is enabled by the Watson Text to Speech service, which converts text to audio that is played through TJBot’s speakers. Similarly, listening is enabled by the Watson Speech to Text service, which converts audio recorded by the microphones into text, which is then analyzed. These capabilities can be combined for other use cases, such as creating a virtual agent or digital assistant.
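To make the listen-then-act flow concrete, here is a minimal sketch in Node.js of how transcribed speech might be analyzed and routed to a capability. This is an illustrative assumption, not code from the TJBot recipes: in a real recipe the `transcript` string would come from the Watson Speech to Text service, and the returned action name would trigger the corresponding hardware behavior.

```javascript
// Hypothetical routing step for the listen → analyze → act loop.
// In a real TJBot recipe, `transcript` would be produced by the
// Watson Speech to Text service; here we simply route plain text.
function routeCommand(transcript) {
  const text = transcript.toLowerCase();
  if (text.includes('wave')) return 'wave-arm';
  if (text.includes('shine')) return 'change-led-color';
  if (text.includes('dance')) return 'dance';
  return 'unknown';
}

console.log(routeCommand('Please wave at me')); // → wave-arm
```

Keyword matching like this is the simplest possible analysis; the conversational-agent recipe replaces it with a Watson service that understands intents rather than exact words.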
Currently, the TJBot GitHub repository contains three basic recipes: code that enables TJBot to respond to simple voice commands, analyze and react to the emotion in tweets, and function as a conversational agent. Two additional recipes have been contributed by members of our maker community: TJWave and SwiftyTJ. TJWave is a fun recipe that shows how to control TJBot’s arm, and it also allows TJBot to “dance” to music: the robot plays a sound file, extracts beats/peaks from the sound file, and waves its arm to the beat. Controlling the arm could also be used to animate voice interactions or to mirror the hand gestures humans make as they speak. The SwiftyTJ recipe shows how to control the LED on TJBot using the Swift programming language. As the catalogue of TJBot recipes grows, SwiftyTJ provides a starting point for Swift developers to begin coding their TJBots.
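The beat-extraction idea behind TJWave can be sketched in a few lines. The function below is a toy illustration, not the recipe’s actual implementation: it scans an array of amplitude samples for local peaks above a threshold and returns their indices. A real recipe would obtain samples by decoding a sound file and would time the arm’s waves to the detected beats during playback.

```javascript
// Toy peak detection over amplitude samples. A sample counts as a
// "beat" if it is a local maximum and exceeds the given threshold.
function findBeats(samples, threshold) {
  const beats = [];
  for (let i = 1; i < samples.length - 1; i++) {
    const isLocalPeak =
      samples[i] > samples[i - 1] && samples[i] > samples[i + 1];
    if (isLocalPeak && samples[i] >= threshold) {
      beats.push(i); // wave the arm when playback reaches this sample
    }
  }
  return beats;
}

const amplitude = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.4];
console.log(findBeats(amplitude, 0.7)); // → [ 1, 4 ]
```

Real audio would need the threshold tuned (or computed adaptively) since raw waveforms are far noisier than this seven-sample example.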
For 2017, we are focusing on three specific areas to advance TJBot: creation, curation and learning.
Creation: We’ll be improving existing recipes as well as exploring new capabilities for our little cardboard robot. One example is current work to implement vision recognition capabilities using TJBot’s camera sensor – perhaps with applications for accessibility.
Curation: We are growing and curating the community of TJBot makers, introducing TJBot to new audiences and sharing new recipes, tweaks and feedback from users.
Learning: Perhaps the most important aspect of what’s next is learning. This involves a research effort that studies maker and end-user experiences, with a view toward contributing design patterns and guidelines for cognitive application design.
If you have an idea or have made a recipe, send us an email at firstname.lastname@example.org. We look forward to seeing, and hearing, what you create with TJBot!