Industry 4.0: Digitally transformed manufacturing

We are now entering the Fourth Industrial Revolution (or Industry 4.0). Like its three predecessors, Industry 4.0 brings about new levels of productivity in manufacturing through applying technology and automation to both production and management processes.

The concepts of Industry 4.0 bring together low-cost sensors, edge computing, large-scale Internet of Things (IoT), and machine-to-machine (M2M) communications with data processing, analytics, and artificial intelligence (AI) to digitally transform all parts of manufacturing.

This provides unprecedented levels of information and insights, enabling a wide range of optimization and automation to be applied across the entire value chain, including manufacturing components, factory floors, the supply chain, engineering, sales, operations, and customers.

Industry 4.0 scenario: Manufacturing Edge

One Industry 4.0 scenario is the digital enablement of hundreds of manufacturing plants across the multi-tier manufacturing topology. This enables data and insights to be gathered across plants and facilitates agile, controlled configuration and software rollouts.

This integrates the Edge (Shop Floor), Plant, and Enterprise layers, providing a single digital view of the manufacturing enterprise while also enabling analysis and automation to occur at each level. Digital enablement starts at the Edge: the shop floors where activities related to products are performed and where most of the valuable data is generated. This equates to Levels 0, 1, and 2 of the Purdue Model for Computer Integrated Manufacturing.

The Purdue model describes five levels at which processing capabilities can be positioned, ranging from physical processes at Level 0 through to business logistics systems at Level 4 (a small data-structure sketch follows the list):

  • Level 0 — The physical process
  • Level 1 — Intelligent devices: Sensing and manipulating the physical processes. Process sensors, analyzers, actuators, and related instrumentation.
  • Level 2 — Control systems: Supervising, monitoring, and controlling the physical processes. Real-time controls and software; DCS, human-machine interface (HMI), and supervisory control and data acquisition (SCADA) software.
  • Level 3 — Manufacturing operations systems: Managing production workflow to produce the desired products. Batch management; manufacturing execution/operations management systems (MES/MOMS); laboratory, maintenance and plant performance management systems; data historians and related middleware. Time frame: shifts, hours, minutes, seconds.
  • Level 4 — Business logistics systems: Managing the business-related activities of the manufacturing operation. ERP is the primary system, establishing the basic plant production schedule, material use, shipping, and inventory levels. Time frame: months, weeks, days, shifts.
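
The levels above are often useful as metadata when moving data up the stack. Purely as an illustration (the class and field names below are hypothetical, not part of any standard or product API), a minimal Python sketch might represent the Purdue levels and tag each reading with its level of origin:

```python
from dataclasses import dataclass
from enum import IntEnum


class PurdueLevel(IntEnum):
    """The five levels of the Purdue Model for Computer Integrated Manufacturing."""
    PHYSICAL_PROCESS = 0          # Level 0 - the physical process
    INTELLIGENT_DEVICES = 1       # Level 1 - sensors, analyzers, actuators
    CONTROL_SYSTEMS = 2           # Level 2 - DCS, HMI, SCADA
    MANUFACTURING_OPERATIONS = 3  # Level 3 - MES/MOMS, historians
    BUSINESS_LOGISTICS = 4        # Level 4 - ERP, scheduling, inventory


@dataclass
class Measurement:
    """A single reading tagged with the plant, asset, and Purdue level it came from."""
    plant_id: str
    asset_id: str
    level: PurdueLevel
    name: str
    value: float
    unit: str


# Example: a temperature reading originating from a Level 1 process sensor
reading = Measurement(
    plant_id="plant-eu-01",
    asset_id="press-17",
    level=PurdueLevel.INTELLIGENT_DEVICES,
    name="bearing_temperature",
    value=74.2,
    unit="degC",
)
print(reading)
```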

Connecting operational technology (OT) and information technology (IT)

Usually, Levels 0 to 2 are referred to as Operational Technology (OT), and Levels 3 and above as Information Technology (IT). There is a significant gap between OT and IT, and one of the main challenges of Industry 4.0 is to manage the convergence between the two.

One of the ways to address this challenge is to put in place an integration layer between IT and OT at each plant level. This layer bridges OT field protocols such as Modbus and PROFINET on one end and exposes general IT standards such as MQTT and REST on the other, enabling the connection to the upper IT levels.
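
As a concrete illustration of such a bridge, the sketch below polls a single Modbus TCP holding register and forwards the reading as JSON to an IT-side REST endpoint. It is a minimal sketch using only the Python standard library; the device address, register number, scaling factor, asset name, and endpoint URL are illustrative assumptions rather than values taken from any particular product.

```python
"""Minimal OT-to-IT bridge sketch: poll one Modbus TCP holding register and
forward the reading to an IT-side REST endpoint as JSON."""
import json
import socket
import struct
import time
import urllib.request

MODBUS_HOST = "192.168.1.50"   # assumed PLC/gateway address
MODBUS_PORT = 502              # standard Modbus TCP port
UNIT_ID = 1
REGISTER = 100                 # assumed holding register containing a temperature
REST_ENDPOINT = "https://plant-integration.example.com/api/telemetry"  # hypothetical


def read_holding_register(host: str, port: int, unit: int, address: int) -> int:
    """Read a single holding register (Modbus function 0x03) over Modbus TCP."""
    # MBAP header: transaction id, protocol id (0), remaining length, unit id; then the PDU.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, address, 1)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
        response = sock.recv(256)
    # Response layout: 7-byte MBAP header, function code, byte count, register data.
    byte_count = response[8]
    (value,) = struct.unpack(">H", response[9:9 + byte_count])
    return value


def publish_reading(value: int) -> None:
    """POST the reading to the IT-side integration layer as JSON."""
    payload = json.dumps({
        "asset": "press-17",
        "metric": "temperature",
        "value": value / 10.0,       # assumed scaling: register stores tenths of a degree
        "timestamp": time.time(),
    }).encode("utf-8")
    request = urllib.request.Request(
        REST_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)


if __name__ == "__main__":
    reading = read_holding_register(MODBUS_HOST, MODBUS_PORT, UNIT_ID, REGISTER)
    publish_reading(reading)
```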

The integration layer provides a service platform for local integration, orchestration, analysis, and Plant-level visibility and connects to the Enterprise level. Leveraging services exposed by this platform, applications can be developed without having to cope with the complexity of integrating the OT infrastructure.

Applications can then be deployed at any of the levels — Enterprise, Plant level, or Edge — depending on the degree of control (strategic, tactical, or operational) of the different stakeholders.

With OT infrastructure becoming more software-defined, programmable, and intelligent, there is a major opportunity to rethink the development, deployment, and management of OT services. This gives rise to a model where OT infrastructure services can be encapsulated, offering an approach that treats "OT Infrastructure as Code." Such a paradigm is now underway and is facilitated by modern cloud deployment models through containerization, API-driven process integration, and deployment.

The digitally enabled manufacturing enterprise is aiming to improve key manufacturing KPIs. The gold standard here is Overall Equipment Effectiveness (OEE), which identifies the percentage of manufacturing time that is truly productive by combining three factors (a short calculation sketch follows the list):

  • Industrial equipment reliability
  • Volume of production
  • Quality of production
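
These three factors correspond to the availability, performance, and quality components that are conventionally multiplied together to compute OEE. The following minimal Python sketch shows the calculation; the shift figures are made-up illustrative numbers:

```python
def oee(planned_time_min: float, run_time_min: float,
        ideal_cycle_time_min: float, total_count: int, good_count: int) -> float:
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    availability = run_time_min / planned_time_min                       # equipment reliability
    performance = (ideal_cycle_time_min * total_count) / run_time_min    # volume of production
    quality = good_count / total_count                                   # quality of production
    return availability * performance * quality


# Example shift: 480 planned minutes, 420 minutes actually running,
# 1.0-minute ideal cycle time, 380 parts produced, 361 of them good.
print(f"OEE: {oee(480, 420, 1.0, 380, 361):.1%}")   # -> roughly 75.2%
```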

With direct visibility and control on the OT layers, it is possible to implement advanced applications, such as smart asset management, predictive maintenance and maintenance management, equipment efficiency control, production quality improvement, production optimization, and overall production line configuration and re-configuration.

This enables enterprises to optimize their OEE. For example, unplanned downtime can be reduced by using telemetry from shop-floor sensors combined with machine learning (ML) to predict asset failures, prescribe maintenance strategies, and optimize maintenance schedules.
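
To illustrate the prediction step, the sketch below trains a classifier on historical telemetry labeled with subsequent failures and flags high-risk assets for maintenance. It assumes scikit-learn and pandas are available; the file names, feature columns, label, and risk threshold are hypothetical placeholders, not part of the article or any specific product.

```python
"""Minimal predictive-maintenance sketch: learn from labeled historical
telemetry, then score the latest readings to flag assets at risk of failure."""
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed historical dataset: one row per asset per hour, with a label that
# records whether the asset failed within the following 24 hours.
history = pd.read_csv("asset_telemetry_history.csv")  # hypothetical file
features = ["vibration_rms", "bearing_temp_c", "motor_current_a", "runtime_hours"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["failed_within_24h"], test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")

# Score the latest readings and flag assets that should be scheduled for maintenance.
latest = pd.read_csv("asset_telemetry_latest.csv")    # hypothetical file
latest["failure_risk"] = model.predict_proba(latest[features])[:, 1]
print(latest.loc[latest["failure_risk"] > 0.7, ["asset_id", "failure_risk"]])
```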

Industry platform: Red Hat OpenShift and IBM Cloud Pak® solutions

The combination of Red Hat OpenShift and the IBM Cloud Pak solutions, which run on OpenShift, provides the complete platform for developing, deploying, managing, and securing the Industry 4.0 three-tier architecture and its applications on a hybrid, multicloud basis.

The IBM Cloud Pak for Integration, IBM Cloud Pak for Data, and IBM Cloud Pak for Automation provide the ability to integrate the Industry 4.0 layers, analyze data, carry out machine learning, drive actions and automations, and build enterprise-wide and industry-specific applications. Additionally, the IBM Cloud Pak for Security provides end-to-end security and management for multiple, repeatable deployments to the Edge, Plant, and Enterprise layers across a hybrid cloud infrastructure.

IDC’s 2020 Enterprise IT Infrastructure Survey shows that 35% of IT infrastructure spend is on private clouds (compared to 21% for public clouds) as enterprises adopt hybrid cloud strategies. Additionally, a Gartner (2019) survey of enterprises utilizing public clouds shows that 81% are working with two or more providers. Adopting a hybrid, multicloud strategy has the potential to reduce vendor lock-in, reduce service disruption risks, enable better disaster recovery, and provide the core cloud benefits of agility, scalability, and elasticity.

Reaching the full potential of a hybrid multicloud strategy is, however, not without challenges. A number of the benefits require application portability: deploying workloads to multiple clouds to avoid service disruption and enable disaster recovery; placing workloads to meet locality, security, compliance or data governance requirements; or migrating workloads as part of a cloud adoption roadmap. Application portability issues mean that “few applications ever move once they have been deployed in production and adopted by the business” (Gartner, 2020).

Kubernetes is increasingly seen as the ubiquitous application portability layer for containerized, cloud native applications. It is becoming widely adopted, with 70% of enterprises expected to standardize on Kubernetes over the next one to three years (451 Research, 2019). However, using Kubernetes as a standardized container orchestration and management platform provides only part of the application portability solution.

Additional application portability challenges persist because any given cloud platform extends beyond Kubernetes itself. Each managed Kubernetes service provides a subtly different flavor due to the infrastructure implementation on which it runs and the vendor-specific capabilities, services, and APIs it provides. The use of capabilities like serverless functions, managed services (e.g., databases, datastores, and messaging services), load-balancing and proxy services, identity and access management, and observability all adds friction and barriers to application portability.

This is a challenge that both OpenShift and the Cloud Pak solutions recognize, and they combine to abstract the Kubernetes container management control plane and the catalog of enterprise-grade services from the infrastructure implementation. This makes it possible to have a true application portability layer for hybrid multicloud strategies, including those that extend to the edge.

OpenShift and the Cloud Pak solutions provide the complete set of capabilities and services across DevOps, applications, security, management, integration, data and AI, and automation, and they provide an open platform for adopting and utilizing the more than 1,500 technologies beyond Kubernetes in the Cloud Native Computing Foundation (CNCF) Landscape.

This provides the foundation to transform to Industry 4.0: building, deploying, managing, and securing a data-driven, digitally enabled manufacturing platform.

Find out more about the Industry 4.0 Architecture Patterns on the IBM Cloud Architecture Center or read the Industry 4.0 & Cognitive Manufacturing white paper.
