Why data science experimentation drags out production optimization programs


Production optimization programs aim to increase production throughput and eliminate waste by leveraging data insights. But what happens when data identification – and creating analytical models relevant to use cases – becomes a job in its own right? Can out-of-the-box templates both support production optimization and reduce the need for data science experimentation? This blog explores the value of customizable, ready-to-go use case templates for manufacturing plants.

The challenges facing production optimization programs

Production optimization programs generally have two key objectives: to increase production throughput, and to eliminate waste. To achieve these objectives, we must identify, predict and pinpoint the production losses that contribute to waste and lower throughput. Here, machine learning and Artificial Intelligence (AI) have an important role to play – predicting production losses and recommending optimized action to mitigate them.

However, the task of identifying the data and creating analytical models relevant to use cases makes for an involved job – and an approach that is hard to scale.

One solution is to consider using out-of-the-box use case templates. Because many use cases in manufacturing plants are recurring, templates can be an efficient production optimization tool. Let’s look at two in particular: failure prediction and anomaly detection.

Use case #1: failure prediction

One example of an often-occurring use case is failure prediction, whereby historic data on known failures and a technique known as ‘auto-classification’ are used to predict possible faults in machines, quality or process.

Creating a template for a failure prediction use case requires:

(a) An auto-classification analytics model pipeline – to flexibly choose the best-fit algorithms based on input data

(b) A notebook template to configure the pipeline for specific use cases

(c) UX widgets to show the results.

With these three components, a process engineer will be able to apply the failure prediction use case template to specific machines and processes, and realize the use cases in their particular plant.
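To make the auto-classification idea concrete, here is a minimal sketch of what such a pipeline might do internally: score several candidate classifiers on the historic failure data and keep the best one. This is an illustrative example using scikit-learn, not the actual IBM Production Optimization implementation; the function name and model choices are assumptions.

```python
# Hypothetical "auto-classification" sketch: given labelled historic failure
# data, try several candidate classifiers and keep the one with the best
# cross-validated score. Not the IBM Production Optimization API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def select_best_classifier(X, y):
    """Return the candidate with the highest mean cross-validated F1 score."""
    candidates = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=100),
        GradientBoostingClassifier(),
    ]
    scored = [
        (cross_val_score(m, X, y, cv=5, scoring="f1").mean(), m)
        for m in candidates
    ]
    best_score, best_model = max(scored, key=lambda pair: pair[0])
    return best_model.fit(X, y), best_score

# Toy data standing in for sensor features and a known failure label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic failure label
model, score = select_best_classifier(X, y)
```

In a real template, the candidate list, scoring metric and cross-validation strategy would be the parts a process engineer configures through the notebook, rather than writing model-selection code by hand.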


Figure 1. Production Optimization delivers out-of-the-box Industry 4.0 use cases to business users on IBM technology.

Use case #2: anomaly detection

The other common plant-floor use case is anomaly detection. Anomaly detection generates an early warning when one or more dependent variables are trending towards anomalous conditions that will result in a failure. As with failure prediction, this use case can also be ‘templated’ with the following tools:

(a)   An ‘anomaly detection’ analytical model pipeline able to choose the best-fit algorithms from available input data

(b)   A notebook to configure the pipeline for a specific use case

(c)    UX widgets to show the results
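As a rough illustration of the early-warning idea, the sketch below flags a reading as anomalous when it drifts far from its recent rolling baseline. This is a simplified stand-in, assuming a basic rolling z-score check; the real pipeline would choose among richer algorithms based on the input data.

```python
# Illustrative early-warning check (not the IBM pipeline): flag an anomaly
# when a monitored variable moves more than `threshold` standard deviations
# away from its recent rolling baseline.
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=50, threshold=3.0):
    history = deque(maxlen=window)  # rolling baseline of recent readings

    def check(value):
        """Return True if value is anomalous relative to the rolling window."""
        anomalous = False
        if len(history) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalous = True
        history.append(value)
        return anomalous

    return check

check = make_anomaly_detector()
# Stable readings around 20, then a sudden spike to 35.
readings = [20.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
flags = [check(v) for v in readings]  # only the final spike is flagged
```

The window size and threshold are exactly the kind of parameters the notebook component of the template would expose for per-process tuning.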

Customizable templates for further use cases

In response to the need for scalable, tried-and-tested production optimization tools, IBM has templated a set of standard plant floor use cases. We can configure the templates, which are part of the Industry Solution offering, IBM Production Optimization, to apply each use case to a plant’s individual processes and assets.

In this way, they offer a swift, robust remedy to common issues, while being flexible enough to adapt to a plant’s particular infrastructure. There will always be exceptions, but we believe these templates can be used with minimal customization effort in about 70% of cases.

This approach reduces the time and effort spent on data science ‘experimentation’ and accelerates time to value. It also puts process engineers in the driver’s seat, allowing them to interactively configure and tune the use cases. Finally, the out-of-the-box approach allows the use cases to scale to tens or hundreds of processes and assets on the plant floor.

Discover more about IBM Production Optimization

Take a look at our blog to discover how IBM Production Optimization can drive down equipment and process-related losses, and explore this offering in further depth on our website.

Don’t forget to look out for the next installment of our ‘Manufacturing Mondays’ series!

