Moving to a new world of cognitive manufacturing systems

14 minute read | September 27, 2016


This article is Part 1 of a series focused on how the application of cognitive IoT is triggering business and digital transformation.

By tapping the power of cognitive computing, organizations are moving from IoT vision and proof of concept to strategic deployments aimed at driving real transformation. The result is a new industrial era defined by factories, machines and parts capable of self-assessing, triggering actions and exchanging information with each other, and with the people who manufacture and maintain them.

This spring, Germany’s Hannover Fair — the largest industrial trade fair in the world — had as its overarching theme “Industry 4.0,” which is essentially the advanced computerization of forward-looking manufacturing. All over the millions of square feet of show floor, large robotic machines were noisily shaping, assembling, sintering, welding, wiring, and packaging sample products to showcase the capabilities of their suppliers in a sort of robotic three-ring circus: an industrial “Greatest Show on Earth.”

But in one area of the immense fairground, IBM’s Andy Stanford-Clark spoke in carefully enunciated English into a headset microphone. A few seconds later, his words were gracefully transcribed into the carefully wrought thick and thin brush strokes of thousand-year-old Chinese ideographs. Despite the chaos of the trade show all around him, Stanford-Clark’s display, in front of a crowd of fascinated onlookers, may have demonstrated the most essential technologies for the future of industry.

The transcription and writing were done entirely via computers, software, and a robotic arm with six degrees of freedom (Fig. 1). Stanford-Clark’s words were transmitted to the IBM Cloud, where Watson cognitive technology converted his speech — for example, “Draw ‘Good Morning’” — into English text. Another Watson service then converted the English text into the Chinese symbols for “Good Morning.” Using the Watson IoT Platform, the symbols were transmitted from the cloud to the industrial robotic arm, which was programmed to start drawing the Chinese characters as soon as it received them. The robot’s tool gripper held a wooden calligraphy pen, which it dipped in a saucer of ink and twirled around to sharpen the point before beginning, as Stanford-Clark remarks, “with lovely, graceful movements, particularly with its wrist,” to create the thin and thick varying strokes that composed the Chinese characters.
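The flow just described can be sketched as a chain of composed stages. This is a minimal illustration only: each stub stands in for a cloud service call (speech-to-text, translation, delivery over the Watson IoT Platform), and the function names, the stubbed behaviour, and the tiny translation table are all assumptions for the sake of the example.

```javascript
// Stand-ins for the three cloud stages; real systems would call the
// corresponding Watson services over the network.
const speechToText = (audio) => audio.transcript;      // speech-to-text stand-in
const translate = (text) =>                            // translation stand-in
  ({ 'Good Morning': '早上好' }[text] || text);
const sendToRobot = (chars) => `drawing: ${chars}`;    // IoT delivery stand-in

// The pipeline: spoken audio in, characters dispatched to the robot out.
function pipeline(audio) {
  return sendToRobot(translate(speechToText(audio)));
}

const result = pipeline({ transcript: 'Good Morning' });
// result === 'drawing: 早上好'
```

The point of the composition is that each stage is independent: any one service (for example, the translation target language) can be swapped without touching the others.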


Fig. 1: At Hannover Fair, this robotic arm system was busy taking dictation in English
and translating it into Chinese characters drawn with a brush and ink.

The transportable robotic arm’s hardware wasn’t specially modified for the task; it was a standard industrial product shipped directly from the manufacturer. It had six joints and could move very precisely, with very fine control. At the tip of the arm was a small metal collar, like that for a drill bit, into which the brush was inserted and then held tightly by screwing the collar down. Once the robotic arm was calibrated so that the tip of the brush sat at the correct height above the table, the location of the brush tip could be precisely known and controlled in three dimensions. “The uniqueness was in the software,” Stanford-Clark observes.

Teaching Chinese

Beneath the heavy, steel-topped table to which the robot was firmly bolted was an industrial PC that ran software for converting the Chinese characters into specific arm motions. Written by Foxconn, the robot arm’s manufacturer, the program directed the arm to make very precise movements whenever it received a message from the Watson IoT Platform to draw a specific Chinese character. The application used an MQTT messaging client for a direct Internet connection to the Watson IoT Platform. The program had a special calligraphy outline font to determine how the received characters should be formed using appropriately thin and thick brush strokes, somewhat like a child’s paint-by-numbers art set. The program would output very low-level codes — like “move joint number one through 10 degrees; then move joint number two through 17 degrees,” etc. — which the PC would then send to the robot through a wired interface.
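The final step of that chain, turning a planned stroke into low-level joint commands, might look something like the sketch below. The command-string format echoes the quoted example, but the actual Foxconn protocol is not public here, so the data shape and wording are purely illustrative assumptions.

```javascript
// Hypothetical sketch: convert a planned sequence of relative joint moves
// into the kind of low-level command strings described in the article.
// Joint numbering and command wording are illustrative, not Foxconn's API.
function toCommands(moves) {
  // Each move names a joint (1-6) and a relative rotation in degrees.
  return moves.map(({ joint, degrees }) =>
    `move joint ${joint} through ${degrees} degrees`);
}

// A (made-up) fragment of a brush-stroke plan:
const strokePlan = [
  { joint: 1, degrees: 10 },
  { joint: 2, degrees: 17 },
];
const commands = toCommands(strokePlan);
// commands[0] === 'move joint 1 through 10 degrees'
```

In practice the PC would stream such commands to the arm over the wired interface, one small motion at a time.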

Stanford-Clark notes that the high precision robot was actually very flexible: “It might be drawing Chinese letters one day; it might be picking and placing boxes on a production line the next day; and it might be turning circuit boards over ready to solder the day after that. These are very general-purpose robots that, simply through reprogramming them, can be turned to different tasks. We use the term ‘software-defined hardware’ to describe this capability. That’s one of the really important things about the new cognitive manufacturing: you can really quickly reprogram the robots to change the design or start production of a different product without having to shut down the whole factory, reprogram the robots, and bring it all back up again.”

The real ‘cognitive’ part of IBM’s approach to cognitive manufacturing isn’t in the robot. While the software that translates Chinese characters into machine instructions is extremely clever, it doesn’t rely upon cognitive techniques to determine its actions. In a sense, it is hard-wired to produce arm movements when given a particular Chinese character. The cognitive activity goes on in the IBM Cloud, where specialized software interprets English speech into text and determines how its meaning should be represented in Chinese characters (see Fig. 2).


Fig. 2: In this block diagram of IBM’s Hannover Fair demonstration, all the items shown in blue boxes reside in the cloud.

The Bluemix Hotel

Essential to the design of the IBM cognitive system shown in Fig. 2 are Bluemix and Node-RED. Stanford-Clark explains that “Bluemix is what we call a ‘Platform as a Service’ … What that essentially means is that it allows you to develop applications and run them in the cloud. I tend to think of it as like a hotel for applications: I can take the application I’ve written on my laptop and check it into the Bluemix hotel and it’ll stay there, running for a while.

“So I can then close my laptop and go somewhere and that application will still be running in the application hotel. I will be paying for the application being in the hotel, but it’ll be well looked after and get good service. If it ever stops, it’ll be restarted. And if it needs more resources, Bluemix will automatically allocate them.”

Continuing with the hotel analogy, another part of Bluemix is like the directory you might find in your hotel room, listing all of the hotel’s services: how to access the spa, the gymnasium, or room service. Similarly, a large number of Watson cognitive and analytical services are available to applications in Bluemix. So rather than having to write application code to send an SMS, for example, there’s a Bluemix service that sends the SMS for you. The application developer just selects the service, drops it into the project, and it becomes available for use.

The mix of services includes IBM-supplied services, many of which are Watson cognitive and analytical services. For instance, there are speech-to-text and text-to-speech services, and even a neural net classifier. Along with the services come instructions on how to format the data for input to the service and how it’s going to send you the output, so you can easily use it in your application.

Other services are supplied by third-party providers. One example is the SMS service, created by a company called Twilio, which offers their service through Bluemix. Stanford-Clark explains the way it works: “Every time you call the Twilio API in Bluemix, you get charged a few cents to your Bluemix account. So it’s a nice way for partners to monetize their API platforms.”


The Bluemix services shield you from having to deal with all the messy details of a task like writing an app for sending an SMS message. But sometimes you need something that glues services together or allows you to implement services that aren’t already available.


Fig. 3: Node-RED uses a graphical approach to creating applications using three basic types of
building blocks: input nodes, function or processing nodes, and output nodes.

Node-RED is a graphical tool for wiring up the Internet of Things (Fig. 3). It’s based on the idea that the basic IoT process is to:

  1. get data from somewhere
  2. do something to it, and then
  3. send it somewhere else.

With regard to the first item, Node-RED provides nodes for getting data from various sources, such as an MQTT[1] messaging system, an HTTP connection, a serial port, or a web socket. Processing, or function, nodes then manipulate the data; for instance, a node might convert input in JSON (JavaScript Object Notation) to XML (eXtensible Markup Language). A particular processing task may require someone to write a bit of JavaScript code implementing an algorithm to act on the data. The data can then flow to another processing node, which does something else to it.
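The JSON-to-XML example can be sketched as the body of a Node-RED function node. In a real flow, the editor supplies the `msg` object and the returned `msg` travels on to the next wired node; here the body is wrapped in a plain function so the logic stands alone. The `<reading>` root element and the flat-object input are illustrative assumptions.

```javascript
// A minimal Node-RED-style function node body: convert a flat JSON payload
// into a simple XML string.
function jsonToXmlNode(msg) {
  const body = Object.entries(msg.payload)
    .map(([key, value]) => `<${key}>${value}</${key}>`)
    .join('');
  msg.payload = `<reading>${body}</reading>`;
  return msg; // Node-RED passes the returned msg to the next node in the flow
}

const out = jsonToXmlNode({ payload: { temp: 21.5, unit: 'C' } });
// out.payload === '<reading><temp>21.5</temp><unit>C</unit></reading>'
```

A real deployment would handle nested objects and XML escaping; the point here is just the shape of a function node: take `msg`, transform `msg.payload`, return `msg`.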

You can have a whole network of these nodes to process your data. Nodes for most processing tasks are provided in the palette; these can simply be dragged into your Node-RED application and configured by double-clicking and filling in a few parameters. The last type of node is an output node, which could send the data on to an HTTP server, publish it to an MQTT broker, write it to a serial port, or send it to a Bluetooth device — whatever the desired destination is. The output might also go to a database, a file, or a Twitter account.

Node-RED is essentially a set of building blocks from which you can build your own services. Stanford-Clark observes that, “It’s really nice because it breaks the application down into manageable chunks. And I find it pretty easy to understand, if I return to a piece of code I wrote a while back, or that somebody else wrote, because you look at it node by node and you can actually see the structure — you don’t have to wade through page after page of code. The use of the graphical interface is very powerful, and it was done this way so that people who are not programmers can write applications.” Node-RED is now supplied as one of the programming languages on the Raspberry Pi: if you download Raspbian, the official Raspberry Pi Linux image, it comes with Node-RED pre-installed.

Bluemix, Node-RED, and Watson IoT

Watson IoT is closely linked into Bluemix. It’s very easy to get your data flowing into the Watson IoT platform using MQTT, and from there to a Bluemix application created in your favourite language, whether it’s a Java, PHP or Node-RED application.

Node-RED has a Watson IoT input node and output node. To get data from the IoT platform in Bluemix, you just drag the Watson IoT node into your application, open the node, type in your security credentials, specify which of your devices you want to receive data from, and then close it. Any data sent from the IoT devices you’re subscribed to will appear in your Node-RED application so you can process it. If you want to send data back to an IoT device, you can use Node-RED’s Watson IoT output node. Since this capability ships as part of the Raspberry Pi platform, not only is Node-RED available in Bluemix to process the data, but it’s also very easy to get your Raspberry Pi to send data to the IoT platform.
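Behind those nodes, a device identifies itself to the Watson IoT Platform with a structured MQTT client ID and publishes events on a structured topic. The helpers below sketch those two identifier layouts as they were commonly documented for the platform; treat the exact string formats, and all the sample values, as assumptions rather than a current API reference.

```javascript
// Sketch of the MQTT identifiers a device would use with the Watson IoT
// Platform. Formats follow the platform's documented conventions of the
// time; sample org/type/device names are invented for illustration.
function deviceClientId(org, deviceType, deviceId) {
  return `d:${org}:${deviceType}:${deviceId}`;
}

function eventTopic(eventName) {
  return `iot-2/evt/${eventName}/fmt/json`;
}

// A Raspberry Pi publishing a JSON status event might connect with this
// clientId and publish on this topic via any MQTT client library:
const clientId = deviceClientId('myorg', 'raspberrypi', 'pi-01');
const topic = eventTopic('status');
// clientId === 'd:myorg:raspberrypi:pi-01'
// topic === 'iot-2/evt/status/fmt/json'
```

On the Bluemix side, the Watson IoT input node subscribes to the matching event topics for you, which is why the flow editor only asks for credentials and a device selection.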

Power of IoT

To Stanford-Clark, “The real power of IoT is what you do with the data. It’s not the data itself…it’s what you do with it after it’s been published to the cloud. We make it really easy for you to get your data into the Watson IoT platform, and then you can use Bluemix services to process it. It’s really easy to do things to your data and apply analytics to get your results.”

Stanford-Clark sees ‘analytics’ as a sliding scale, from really simple statistics, such as max, min and averages, through to more complicated statistical models, to the use of AI and machine learning for what IBM calls cognitive analytics — when your computer hypothesizes about what’s going on rather than you having to tell it what to look for.

Holistic manufacturing

In Stanford-Clark’s view, this gives rise to what he refers to as holistic manufacturing. “As well as looking at individual robots, we can also step back and take an holistic view of all of the machines, of whatever type they are, on a production line, maybe in a whole factory; and say, ‘Is there anything we could do slightly better to make the process more efficient?’”

“This is where the machine learning and the real artificial intelligence come in. We look at all the data from all the devices in a factory and go, ‘Hey, what if, instead of doing A then B then C, we did A then C then B?’ That would mean that we’d save one second per object made, which would save us this many seconds every day, and save us however many million dollars every year.” With cognitive analytics, the computer finds patterns in the data that people weren’t expecting or aware of, highlighting things that are unusual, strange, or anomalous so they can be examined and understood.
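The A-then-C-then-B idea can be made concrete with a toy search over step orderings. Assume each step has a fixed duration and each transition between consecutive steps has a changeover cost; with only three steps, brute-forcing every ordering is trivial. All of the numbers below are invented for illustration.

```javascript
// Toy version of the reordering optimization: per-step times plus
// changeover costs between consecutive steps; enumerate all orderings of
// three steps and keep the cheapest. All numbers are made up.
const stepTime = { A: 4, B: 3, C: 5 };
const changeover = {
  'A>B': 2, 'A>C': 1,
  'B>A': 2, 'B>C': 3,
  'C>A': 1, 'C>B': 1,
};

function totalTime(order) {
  let t = order.reduce((sum, step) => sum + stepTime[step], 0);
  for (let i = 0; i < order.length - 1; i++) {
    t += changeover[`${order[i]}>${order[i + 1]}`];
  }
  return t;
}

function permutations(items) {
  if (items.length <= 1) return [items];
  return items.flatMap((item, i) =>
    permutations(items.filter((_, j) => j !== i)).map((p) => [item, ...p]));
}

const best = permutations(['A', 'B', 'C'])
  .reduce((a, b) => (totalTime(a) <= totalTime(b) ? a : b));
// best === ['A', 'C', 'B'], with totalTime(best) === 14 versus 17 for A,B,C
```

Real factories have far too many steps for brute force, which is exactly where machine learning earns its keep: finding promising reorderings in the data without enumerating every possibility.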

This analysis can be done by getting loads and loads of data from all the devices in the factory. It might even extend to data from the trucks bringing the raw materials to a factory and those taking finished products to the warehouse. “The whole supply chain could be optimized by looking at the data and making it all flow more smoothly,” says Stanford-Clark.

Furthermore, increased factory automation using robots can be a way to liberate people from jobs that are “3D” (Dull, Dangerous, and Dirty) so they can perform higher, more rewarding functions. As Stanford-Clark observes, “Part of the vision that’s cognitive manufacturing, is to have humans and robots working side by side — humans doing what they’re good at, robots doing what they’re good at — and actually having a much more efficient factory as a consequence of humans and machines working next to each other.” As shown in the Hannover Fair demonstration, it could be as simple as letting humans and robots converse in natural language. For instance, a worker could say to a robot, “Okay, pick up that box and move it over there” when finished working on it, rather than requiring the robot to somehow sense whether the worker has finished. Humans and robots could thus collaborate to produce a more efficient and satisfying production flow.

Another benefit of cognitive manufacturing that Stanford-Clark sees is the ability to satisfy the growing demand for mass customization. As an example, he notes that today’s production lines get their efficiency by producing millions of exactly the same thing, whereas people increasingly want their things to be different — unique to them in some way. The ability to customize each thing that you make, on the fly, can’t normally be done with robots, but humans excel at it. “So one thing that there’s a lot of research into, is how to do this better.”

Lessons learned in Hannover

Asked how he thinks the demonstration went at the show, Stanford-Clark replies he thinks it was “extremely well-received … people could immediately see that you could control a whole factory with this technology, and that was a real eye-opener for them.” He himself found the show very interesting because, while almost everybody at the Hannover Fair was using the Internet of Things for manufacturing, “what was apparent to me was that a lot of companies were talking about their existing hardware and applying the Internet of Things to connect it to the network, making it smarter.

“IBM’s story was that we’re doing the Internet of Things across a whole range of different industries, whether it’s healthcare, vehicle telematics, agriculture, controlling a wind turbine, or whatever it is. And these industrial robots are just another “Thing” on the Internet. From all the analytics capabilities, all the device management, the technology we have for managing millions of connected vehicles; to monitoring elevators and escalators all over the world in hotels and airports…the same technology can now be applied to the manufacturing processes in a factory. And it was because we were coming top-down rather than bottom-up. The more I explained it to people, the more it became apparent to me that people are seeing that as a real differentiator in IBM’s approach.”

This article is Part 1 of a series focused on how the application of cognitive IoT is triggering business and digital transformation. In the next instalment, we will take a deeper dive into how Schaeffler, one of the world’s leading automotive and industrial suppliers, is pioneering the development of innovative ‘mechatronic’ solutions that combine mechanical, electronic, and software capabilities into individual components and systems able to monitor, report, and manage their own performance. The article will explore how even the humble ball bearing can have built-in intelligence and sensory capabilities.

To get a preview of what’s coming next, please watch the video to learn more about how Schaeffler will be using Watson’s cognitive intelligence to interconnect digitally enabled components and create virtual models of entire industrial systems.

Additional information:

Learn more about Watson IoT Platform

Start using Watson IoT Platform for free

Get started using Watson IoT and Node-RED (Recipe)

Engage Machine Learning for detecting anomalous behaviors of things (Recipe)

Get certified on Watson IoT Platform with the Developer’s Guide to IoT

[1] MQTT is an ISO-standard messaging protocol for the IoT, designed for ease of use on small devices, and widely used throughout the industry.