
What is operational technology (OT)?

Operational technology, explained

Operational technology (OT) is the hardware and software that directly monitors, controls and automates physical processes and infrastructure in industrial settings.

Whereas information technology (IT) focuses on data processing and storage, OT aims to ensure that physical devices run safely and reliably in asset-heavy environments.

Operational technology supports critical infrastructure, including power grids, water treatment plants, transportation systems, healthcare networks and industrial manufacturing facilities. All the systems and layers used to manage industrial operations—the sensors, controllers, networks and control software that work alongside valves, motors, pumps, robotic arms and other equipment—comprise OT.

Though OT systems are different from traditional IT components, they are increasingly connected to IT networks and the internet. This convergence helps businesses optimize operational technology management (OTM) practices, but it also makes OT systems more complex and more vulnerable to cyberthreats.

According to the IBM X-Force Threat Intelligence Index 2026, the manufacturing sector is the most attacked industry, representing 27.7% of all incidents within top industries. More than 20% of businesses experienced an industrial cybersecurity incident in the past year, with four in 10 incidents disrupting operations.

Disruptions to OT-driven infrastructure can quickly cascade and affect large populations, so vigilant OT management must be a strategic priority for businesses looking to optimize and secure OT ecosystems.

Typically, that means relying on advanced OTM tools.

OT management tools provide enterprises with a centralized platform for tracking assets, monitoring system health and security operations in real time and coordinating responses when something goes wrong.

OTM tools can, for example, automatically discover new devices on industrial networks and build an up‑to‑date inventory of controllers and other OT assets. They can even map how devices are connected and which production processes they support, so engineers get a complete picture of the impact of system events and interactions.

With the help of OTM solutions, OT enables businesses to coordinate machines and automation workflows across locations and environments so they can maximize productivity and ensure consistent, high-quality output in critical business sectors.

Components of operational technology

Operational technology includes a range of components called industrial control systems (ICSs). This umbrella term covers all the control systems that directly run industrial processes, including controllers, sensors, networks, actuators, supervisory systems and supporting infrastructure.

Sensors and other field devices

As the eyes and ears of OT, sensors detect physical conditions—such as temperature, pressure, flow, vibration or position—and convert them into electrical or digital data signals that control systems can understand.

Sensors are typically installed directly on or near industrial equipment (pipelines, generators and production machinery), so they can provide real‑time data on operational processes.​ OT-driven industrial environments use several specialized sensor types, each with a specific purpose or matched to a specific process variable.

Temperature sensors, such as resistance temperature detectors (RTDs) and thermocouples, measure process temperatures. Pressure sensors convert force per unit area into electrical signals to help managers and teams monitor tank pressure, hydraulic systems and compressed air networks.

Flow sensors measure how much liquid or gas passes through a pipe in a given time period, making them essential for dosing, batching and process control. And, using floats, radar and ultrasonic technologies, level sensors can detect how full tanks and silos are.

Every sensor performs two core functions: detecting changes in the environment and converting those changes into a usable output signal.

A temperature sensor might change its electrical resistance or generate a small voltage when the temperature rises. That analog change is then conditioned and scaled by a transmitter (another type of field device) into a standard signal. The conditioning process in OT systems filters out noise, amplifies weak signals and standardizes outputs, helping ensure that data from many different field devices can be reliably read and compared by ICSs.
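The scaling step can be reduced to a few lines of code. The sketch below (function name and ranges are purely illustrative) converts a conditioned 4-20 mA transmitter signal, a common standard output in industrial instrumentation, into an engineering value:

```python
def scale_current_loop(milliamps, lo_eng, hi_eng):
    """Convert a conditioned 4-20 mA transmitter signal to engineering units.

    4 mA maps to the low end of the instrument's range, 20 mA to the high end.
    """
    if not 4.0 <= milliamps <= 20.0:
        raise ValueError(f"signal out of range: {milliamps} mA")
    fraction = (milliamps - 4.0) / 16.0
    return lo_eng + fraction * (hi_eng - lo_eng)

# A 12 mA reading on a 0-200 °C temperature transmitter is mid-scale
print(scale_current_loop(12.0, 0.0, 200.0))  # → 100.0
```

An out-of-range reading (below 4 mA or above 20 mA) often indicates a broken wire or failed instrument, which is why the sketch rejects it rather than scaling it.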

In closed‑loop control systems, sensors provide feedback that tells controllers whether process variables match the desired setpoint, which enables actuators to make automatic adjustments when necessary. In safety systems, dedicated sensors monitor critical limits (such as steam line pressure or inflow levels) and trigger alarms or shutdowns when they detect hazardous conditions.
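That kind of safety check is conceptually simple. The sketch below (with purely illustrative thresholds, not real engineering limits) classifies a pressure reading against alarm and trip limits:

```python
def check_safety_limits(pressure_kpa, high_trip=850.0, high_alarm=800.0):
    """Classify a steam-line pressure reading against alarm and trip limits.

    Returns "trip" (initiate shutdown), "alarm" (warn operators) or "ok".
    The threshold values here are illustrative, not real engineering limits.
    """
    if pressure_kpa >= high_trip:
        return "trip"
    if pressure_kpa >= high_alarm:
        return "alarm"
    return "ok"
```

In real safety systems, this logic runs on dedicated, certified hardware that operates independently of the regular control system, so a control failure cannot disable the protection.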

Other types of field devices—including meters, transmitters, switches and monitoring probes—play a similar role, measuring specific variables and feeding that information into the OT network.

Controllers

Controllers are specialized computers that run the control logic for industrial processes. There are three common types of controllers: programmable logic controllers (PLCs), remote terminal units (RTUs) and distributed control systems (DCSs). They do similar basic jobs but are optimized for different situations.

  • A PLC is an industrial computer designed to control machines and processes in real time. It reads inputs from sensors, runs an engineer‑written program and drives outputs to actuators, motors and valves. PLCs are well suited for high‑speed, discrete operations (packaging lines, bottling machines), where on-off signals and sequences must be processed very quickly and consistently.
  • An RTU is a controller designed for remote or widely distributed assets that must be controlled over long distances. It performs local data collection from field instruments, runs control logic and transmits the data back to a central supervisory system using radio, cellular, satellite or fiber-optic links. RTUs are frequently used in pipelines, wellheads, water and wastewater networks and electrical substations, where sites might be unattended or exposed to extreme temperatures.
  • DCS controllers are part of plant‑wide systems designed for large, continuous processes in oil refineries, chemical plants and power stations. Instead of using one big controller, DCSs rely on multiple controller nodes distributed around the plant. Each node handles a group of control loops and devices, but every node is integrated into a single, unified system with common engineering tools, and all configurations are stored in the same database.

In many plants, the three controller types coexist. PLCs handle machine‑level or package‑unit control, RTUs look after remote assets and a DCS coordinates larger processes and provides the main operator interface.

Actuators and final control elements

Actuators and final control elements receive commands from control systems and physically move or change something in the process (opening a valve or starting a motor, for instance). They form the muscles and hands of OT, turning digital decisions into mechanical actions that directly affect production.​

Actuators are the mechanisms that receive control signals and convert them into mechanical motions, such as linear movements or rotations, for final control elements. When engineering teams design a control loop or piece of equipment, they choose which actuator type will drive a final control element based on the force, speed, accuracy or safety protocols the process requires.

If an engineer needs to push a valve stem, they might use a pneumatic actuator, which forces compressed air into a diaphragm or piston. If a process requires high torque or thrust, the engineer might deploy hydraulic actuators, which use pressurized fluid to generate force.

Final control elements—which include pipe control valves, air duct dampers, variable‑speed pumps and motor‑driven devices (conveyors, mixers)—are devices that directly change the process variable in response to a controller’s output signal. These components sit directly in the production line or on equipment, so when they move or change state, they immediately affect how much material or energy goes into, out of or around the process.

In a control loop, changing the position of final control elements enables changes to the manipulated variable that move the variable back toward the desired setpoint.
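Putting the pieces together, one scan of a simple proportional control loop might look like the following sketch (the gain and units are illustrative; real loops typically use full PID control):

```python
def proportional_step(setpoint, measured, valve_pos, gain=0.05):
    """One scan of a proportional controller: nudge the valve (the final
    control element) in proportion to the error, clamped to 0-100 %."""
    error = setpoint - measured
    new_pos = valve_pos + gain * error
    return max(0.0, min(100.0, new_pos))

# Flow below setpoint, so the controller opens the valve further
pos = proportional_step(setpoint=120.0, measured=100.0, valve_pos=50.0)
print(pos)  # → 51.0
```

Run repeatedly (typically many times per second), small corrections like this drive the measured variable back toward the setpoint.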

Supervisory systems

Supervisory systems sit one level above controllers and provide centralized monitoring, high‑level control and data management for entire processes. They don’t complete tasks directly; rather, they oversee the components that do (controllers).

Supervisory systems gather information from several PLCs, RTUs and DCSs and present it to operators through graphical screens. Operators can then send high‑level commands (start, stop, change setpoint, switch mode) back down to the process.

A classic example of a supervisory system is the supervisory control and data acquisition (SCADA) framework, which combines:

  • Software servers that collect and store data from remote or local controllers and sensors
  • Communication links to PLCs and RTUs
  • Human-machine interface (HMI) screens that show process graphics, alarms and trends

With supervisory systems such as SCADA, teams get an array of services that help consolidate data, run control applications and interface with high-level business and engineering systems.

Human-machine interfaces (HMIs) and operator workstations

HMIs and operator workstations—such as control room screens or panel displays—are visual and interactive touchpoints that let human operators see process information in real time.

A human-machine interface is a graphical user interface that shows teams key process data (such as temperatures or equipment status) and lets them send commands to controllers and other devices. HMIs translate complex data from controllers and supervisory systems into intuitive visualizations (process diagrams, trend charts, gauges, colored status indicators) so that operators can quickly grasp an asset’s condition.

Operator workstations, which typically run in control rooms or on the factory floor, are the computers or dedicated industrial PCs that host HMI software. These workstations connect to the OT network and pull live data from controllers and supervisory systems, displaying multiple HMIs simultaneously and enabling one operator to oversee an entire unit, process area or facility.

Communication networks and protocols

OT communication networks connect sensors, actuators, controllers, HMIs and supervisory systems, so data and commands can flow efficiently across the industrial environment. They provide the physical and logical infrastructure that carries signals between OT devices.

Physically, networks rely on copper Ethernet cables, serial wiring, fieldbus cables, fiber‑optic links and wireless connections to link devices. Logically, OT networks can take on a range of topologies (including stars, buses, meshes and hybrid topologies), often using routers and switches to add redundancy to the network and help ensure that control traffic is appropriately prioritized.

Whereas networks provide the “roads” for OT data transmission, communication protocols provide the rules. Communication protocols (such as Ethernet/IP and Modbus) are defined sets of rules for how devices structure messages, address other devices, detect errors and confirm data delivery.​ They help ensure that:

  • Equipment from different vendors can interoperate
  • Data arrives in the right format
  • Both sender and recipient devices know whether a message is a command, a response or a simple status update

Many industrial protocols are designed for deterministic or near‑deterministic timing, where messages arrive within predictable time windows to help ensure stable process control.  
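Modbus is simple enough to illustrate what a protocol actually defines. The sketch below builds a Modbus RTU request for function code 0x03 (read holding registers), including the CRC-16 error check that lets the receiving device detect corrupted frames. It is a minimal illustration, not a production driver:

```python
def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16 as used by Modbus RTU (polynomial 0xA001, little-endian result)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")

def read_holding_registers(unit_id: int, start: int, count: int) -> bytes:
    """Build a Modbus RTU request frame for function code 0x03."""
    body = bytes([unit_id, 0x03]) + start.to_bytes(2, "big") + count.to_bytes(2, "big")
    return body + modbus_crc16(body)

# Ask device 1 for 10 registers starting at address 0
print(read_holding_registers(1, 0, 10).hex())  # → 010300000000a... structure: id, code, addr, count, crc
```

Every field in the frame—device address, function code, register range, checksum—is defined by the protocol, which is exactly how equipment from different vendors can interoperate.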

Industrial Internet of Things (IIoT) and other connected devices

IIoT devices—a subset of Internet of Things (IoT) devices—are network‑connected sensors, actuators, edge devices and smart equipment that extend OT visibility, often adding advanced diagnostics capabilities or cloud connectivity.

Unlike basic OT sensors, which only send raw signals to a local PLC, IIoT devices:

  • Collect high-volume, high-frequency data from equipment
  • Process the data locally (typically at the edge of the network)
  • Add timestamps or metadata
  • Push the data to upstream analytics platforms for processing
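A minimal sketch of that pipeline might look like this (the payload field names are illustrative; real platforms define their own schemas):

```python
import json
import statistics
import time

def edge_batch(readings, device_id):
    """Aggregate a burst of high-frequency sensor readings at the edge, then
    emit a compact JSON payload (with metadata) for an upstream analytics
    platform, instead of streaming every raw sample over the network."""
    payload = {
        "device": device_id,
        "ts": time.time(),          # timestamp metadata added at the edge
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    }
    return json.dumps(payload)
```

Summarizing at the edge like this is one way IIoT devices reduce bandwidth strain while still feeding enterprise systems useful telemetry.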

In modern production environments, IIoT devices feed both local controllers (for real-time control) and enterprise systems (for business intelligence), blurring the lines between OT and IT. For example, conveyor belts with load sensors can automatically make throughput optimizations, and remote cameras can inspect infrastructure and feed the data back into tracking software.

Connected devices also include smart versions of traditional field devices (wireless vibration sensors or pressure transmitters with IP addressing, for example) and newer devices—such as RFID tags—that communicate over Wi-Fi, cellular or Bluetooth signals.

Operational technology vs. information technology

Though OT and IT intersect in myriad ways, they represent two distinct technological domains with fundamentally different objectives, architectures and operational demands. IT primarily deals with the management, processing, storage and distribution of data to support key business functions. OT focuses on the direct monitoring and management of physical devices, processes and industrial equipment.

IT environments are typically office-based or cloud-centric. They rely on general-purpose hardware (servers, desktops, laptops and mobile devices) that runs on standard operating systems and enterprise software for email services, databases and analytics. OT environments, however, are ruggedized for harsh industrial settings. They use specialized, proprietary hardware and software designed for longevity and reliability, often with legacy systems that operate for 10 to 20 years (or more) without regular updates.

This difference extends to network design. IT networks prioritize high bandwidth and connectivity for data sharing, but OT networks emphasize determinism (the guaranteed, predictable delivery of data packets with minimal latency or jitter) for precise timing in control loops.

Data handling in IT revolves around discrete transactions, documents and reports, where delays of seconds or minutes rarely cause serious issues (other than, perhaps, some disappointed users). OT data, by comparison, is continuous, streaming from sensors to actuators for immediate physical responses (stopping a malfunctioning turbine, for instance). In these situations, even a millisecond’s delay can lead to equipment failure or safety hazards.

Security is integral to both OT and IT, but the goals differ. In IT, the goal is maintaining data confidentiality and integrity to prevent breaches; in OT, the goal is maintaining system safety and resilience to prevent injuries, asset loss, environmental incidents and widespread disruptions (like blackouts).

Change management practices in IT also differ significantly from those in OT. Culturally, IT fosters innovation and rapid iteration lifecycles, while OT values stability and predictability above all.

IT systems undergo frequent patching, upgrades and deployments. These systems incorporate downtime—within limits—to integrate new features or security fixes under the supervision of software developers, network engineers and cybersecurity teams.

OT demands extreme caution with changes. Because of stringent uptime needs, OT device modifications often require extensive testing in isolated environments to prevent disruptions to 24/7 operations. Control engineers and technicians who have expertise in industrial protocols typically manage the process.

Despite their differences, OT and IT continue to converge, driven by advancements in IIoT and the proliferation of Industry 4.0 (manufacturing and industrial operations that use advanced digital technologies such as AI, big data, cloud computing and intelligent automation).

However, convergence exposes traditionally air-gapped OT systems to the entire IT threat landscape, creating extensive attack surfaces where none previously existed. Consequently, successful OT networks now require hybrid security strategies that balance the priorities and risks of both OT and IT paradigms.

Aspect | IT characteristics | OT characteristics
Primary focus | Digital transformation, data processing and storage | Physical process control and real-time industrial operations
Response time | Seconds to minutes acceptable for most tasks | Milliseconds critical for safety and control
Security priority | Confidentiality and data integrity | Availability, safety and system reliability
Change control | Frequent updates; planned downtime feasible | Infrequent updates; minimal downtime allowed
Failure impact | Financial loss, service disruption | Physical harm, environmental damage, production halts
Network design | Open, scalable for on-demand connectivity and remote access | Traditionally air-gapped, isolated for security and determinism
Personnel skills | Software development, networking, cybersecurity | Industrial engineering, PLC programming, process automation

Security concerns in OT

Today’s OT devices are more powerful than ever. They are highly connected, both to each other and to IT systems. While this connectivity opens up new possibilities, it also exposes OT systems to cyberattacks, which often originate in IT environments before moving laterally to connected OT equipment.

Complicating matters further, many ICSs run on legacy hardware and software that wasn’t built to contend with sophisticated IT security threats (or any cyberthreat, for that matter). Some PLCs and SCADA systems are directly reachable from the internet or from less trusted networks, because they lack built-in protections, such as firewalls and encryption protocols.

Many OT systems skip multifactor authentication (MFA) and reuse weak or default passwords, making it easier for attackers to steal credentials and log in as legitimate engineers. The Threat Intelligence Index reports that nearly one-third (31%) of all cyberattacks on manufacturing environments involved the use of legitimate tools.

There are hundreds—if not thousands—of known vulnerabilities that target ICSs. And because OT prioritizes continuous operation over frequent updates (by design), security vulnerabilities can go unfixed for years.

For many systems, operational technology security controls must be added manually or retrofitted externally, because they can’t be applied using native updates or patches. Such approaches—often called “compensating controls”—layer protection around the legacy equipment without altering its core software or hardware.   

For example, programmers might set up virtual local area networks (VLANs), which use switches to create virtual subnetworks on the same physical wiring, to create isolated groups or “rooms” for PLCs. Then, they can set rules that only permit specific OT protocols and block everything else.
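In pseudo-firewall terms, such a default-deny rule set might look like the following sketch. The VLAN IDs are hypothetical; the allowed ports (502 for Modbus/TCP, 44818 for EtherNet/IP) are the standard ports for those protocols:

```python
# Hypothetical policy: permit only Modbus/TCP (502) and EtherNet/IP (44818)
# from the engineering VLAN (20) to the PLC VLAN (30); drop everything else.
ALLOWED_PORTS = {502, 44818}

def permit(src_vlan: int, dst_vlan: int, dst_port: int) -> bool:
    """Default-deny rule check for inter-VLAN OT traffic."""
    if src_vlan == dst_vlan:                # traffic inside one zone passes
        return True
    if src_vlan == 20 and dst_vlan == 30:   # engineering -> PLC zone
        return dst_port in ALLOWED_PORTS
    return False                            # everything else is dropped
```

The key design choice is the final `return False`: anything not explicitly allowed is blocked, which is the opposite of how most general-purpose IT networks behave.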

Similarly, a programmer might create data diodes—hardware “one-way valves” (often optical fiber setups) that let OT data out for monitoring but block all inbound traffic—to keep cybercriminals from accessing controls.

Remote access to OT systems through tools such as remote desktops, virtual private networks (VPNs), vendor dial‑in capabilities and cloud portals also presents cybersecurity risks for plants and factories. In 2025, half of all OT-IT cybersecurity incidents started with unauthorized external access.

And post-pandemic hybrid work environments have dramatically increased remote work arrangements—and the associated remote access risks for OT systems.

Remote work generally relies on personal devices and home Wi-Fi routers that lack strong security controls and organizational oversight, making it easier for remote engineers to inadvertently introduce unmanaged endpoints and allow access points to multiply unchecked. As more and more engineers embraced remote work, attack surfaces on ICSs expanded in kind.

What’s more, industrial environments make especially desirable targets for financially motivated cybercriminals, because they support so many aspects of people’s day-to-day lives. People turn on lights and faucets every day. They require clean water for drinking, cooking and bathing. They drive cars and ride trains that require fuel and electricity to function, and their homes are full of products that are produced in factories by OT-driven equipment.

Disrupting these processes with ransomware—malicious software that encrypts data or locks systems, demanding payment for decryption—can prove quite lucrative for bad actors, because any downtime can have catastrophic consequences.

In 2025, the manufacturing sector accounted for 17% of targeted ransomware attacks, making it the most frequently targeted business sector. These attacks cost businesses an average of 4.73 million USD per incident. And because more than half of businesses don’t have a dedicated incident response plan for ransomware attacks on OT environments, they can take days—or even months—to resolve.

For example, in 2022 Toyota was indirectly impacted by a ransomware attack. Hackers compromised Toyota supplier Kojima Industries’ file servers through a third-party partner’s systems, which ultimately halted parts supply to Toyota, forced a 24-hour production shutdown at 14 Japanese factories, and affected about 13,000 vehicles.

Securing OT environments

While IT-OT convergence presents some risks, IT-based security measures can also help mitigate those risks and maintain industrial operations.

Common tools, practices and technologies that enterprises use to address OT cybersecurity gaps include:

Real-time, continuous monitoring and detection

Because active scanning comes with downtime risks, OT environments require passive, agentless tools—which can monitor systems remotely without the need to install dedicated software—to continuously track assets and network traffic.

These tools use two main methods to detect cyberthreats: signature-based detection and anomaly-based detection.

Signature-based detection matches network traffic and data payloads against predefined patterns (signatures) of known threats (specific malware code, for example). Anomaly-based detection analyzes network activity for suspicious behavior that falls outside of normal patterns.
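The difference is easy to see in a toy example. Signature-based detection is essentially pattern matching; real intrusion detection signatures (Snort rules, for example) are far richer than the raw byte substrings used here for illustration:

```python
# Toy signatures: byte patterns that indicate a known-bad payload.
# Both the names and the patterns are invented for illustration.
SIGNATURES = {
    "test-malware-beacon": b"\xde\xad\xbe\xef",
    "forced-stop-command": b"STOPCPU",
}

def signature_scan(payload: bytes):
    """Return the names of all known signatures found in a payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]
```

Signature scanning catches only threats someone has already cataloged, which is why it is paired with anomaly-based detection for novel attacks.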

Network segmentation

Network segmentation divides OT networks into zones using firewalls, DMZs—buffered security zones that sit between IT and OT networks and broker traffic between them—and data diodes to limit lateral movement.

In a flat OT network, once an attacker infiltrates one device, they can often move laterally to safety systems, HMIs and PLCs without additional security checks. This dynamic enables attacks to spread easily to other devices on the network.

Segmentation and microsegmentation split OT networks into isolated zones (as small as individual devices or workloads) and enforce security policies between them, so attacks stay confined to one network segment. Segmentation tools can also enforce least-privilege protocols (where users are granted only the bare minimum permissions needed to do their jobs) and apply whitelisting processes so that only approved data packets are permitted to traverse the network.

Defense in depth

Defense in depth (DiD) layers multiple independent, mutually reinforcing security protocols—including physical, network, endpoint and policy controls—to ensure that no single failure exposes assets.

DiD strategies enable engineers to build complex barriers around industrial assets, so attackers must breach several layers to cause harm.

For example:

The physical layer prevents easy plant-floor access to unmanaged switches, gateways and engineering workstations.

The boundary layer creates DMZs with dual firewalls facing IT and OT control networks and uses strict “allow” lists so that only specific services can move through the zone.

A segmentation layer divides the network into security “neighborhoods” to stymie unauthorized access and keep attacks from spreading.

The host and application layer hardens workstations and servers using authentication protocols, app-level allow lists and role-based access controls.

Compensating controls use virtual patching to protect legacy systems, and monitoring systems passively scan the entire network for anomalies and policy violations.

Zero-trust security

Zero-trust security practices reject the old “trust but verify” model and replace it with a “trust nothing and no one” approach. They assume that attackers are already inside, so every ICS access request from users, devices and applications—including requests that are already moving through the network—must prove itself trustworthy every single time.

When a data request arrives at a network gate, it communicates through a broker for the OT asset, instead of talking directly to the asset. Then, the request must prove its identity through authentication protocols (MFA, for example) that establish who is sending the request. Posture checks determine whether the requesting device is allowed to access OT assets and, if so, at what level they can access asset data.

The zero-trust control plane evaluates the request context, including the time, location, requested asset, protocol or command type and any expected maintenance. Then, a policy engine uses all the collected information to decide whether to approve or deny the request, which specific operations are permitted, and how long the user can stay on the network.

After the approval, firewalls and gateways enforce the decision and monitoring tools watch the session. If request posture, context or behavior changes at any point, the control plane can downgrade access or end the session.
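The decision flow above can be sketched as a toy policy engine. All of the rule names, protocols and the session lifetime below are illustrative stand-ins for what a real zero-trust control plane would evaluate:

```python
def evaluate_request(identity_ok: bool, posture_ok: bool, context: dict) -> dict:
    """Toy zero-trust policy decision: every check must pass on every request.

    Inputs and rules are illustrative, not a real policy engine's API.
    """
    if not (identity_ok and posture_ok):
        return {"decision": "deny"}
    # Context rules: writes only during an approved maintenance window,
    # and only over an expected protocol.
    if context["operation"] == "write" and not context["maintenance_window"]:
        return {"decision": "deny"}
    if context["protocol"] not in {"modbus", "opc-ua"}:
        return {"decision": "deny"}
    return {"decision": "allow", "session_ttl_s": 900}
```

Note that nothing is trusted by position on the network: even a request that passed every check yesterday is re-evaluated from scratch today.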

Security fabrics

Security fabric is an architectural approach where multiple security tools act as a coordinated mesh across IT-OT environments, cloud services, API endpoints and network edges.

Fabrics tightly integrate different security components—such as firewalls, network access controls, VPNs, identity and access management (IAM) systems and endpoint detection and response (EDR) tools—so that every device and technology on the network can share security policies and enforcement.

With a security fabric, enterprises get end-to-end protection of industrial operations and a central nervous system for security orchestration and automation, all in a single pane of glass. Fabrics can also enforce security rules in microseconds, a valuable feature for managing factory controls that require split-second response times. 

Intelligent automation in OT security

AI-driven cyberattacks are making IT-OT environments harder to defend. AI agents, which perform autonomous operations, can automatically scan networks for security vulnerabilities and generate tailored attacks, so threat detection windows are smaller than ever.

As technologies and security risks evolve, OT-driven businesses are trending toward IT-OT security stacks that extensively integrate artificial intelligence (AI) and intelligent automation (IA). IA, sometimes called cognitive automation, is the use of automation technologies—AI, business process management (BPM) and robotic process automation (RPA)—to streamline decision-making across organizations.

IA strategies, especially those that rely on AI and machine learning (ML), help OT-based businesses monitor ICSs and counteract OT’s inherent vulnerabilities (namely, 24/7 operational demands and unpatchable legacy hardware that lacks security protocols). And because AI tools can automatically analyze mixed IT-OT telemetry and proactively identify threats (including AI-based attacks), many businesses see them as integral to predicting and mitigating threats in today’s cybersecurity landscape.

AI-driven IA workflows help businesses in three key ways:

Behavioral analysis and anomaly detection

AI and ML tools can establish baselines of normal OT device behavior and flag deviations when they occur, enabling real-time anomaly detection and helping teams improve risk management practices.

Organizations can train ML algorithms on weeks, months or years of sensor telemetry to model expected patterns for specific protocols. Then, the ML algorithm can quickly detect anomalies in those patterns, such as irregular packet rates and command sequences (a communication protocol writing to a read-only register, for example).
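A drastically simplified version of that idea is a z-score check against a learned baseline; production models are far more sophisticated, but the principle (flag what deviates from normal) is the same:

```python
import statistics

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation whose z-score against the learned baseline exceeds
    the threshold (3 standard deviations is a common rule of thumb)."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) / stdev > threshold

# Packets/s observed during normal operation (illustrative values)
packet_rates = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_anomalous(packet_rates, 250))  # → True: a sudden burst stands out
print(is_anomalous(packet_rates, 101))  # → False: within normal variation
```

Because the baseline is learned from the plant's own traffic, this approach can catch novel attacks that no signature database has seen before.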

OT devices also benefit from AI-assisted network tapping, a passive monitoring technique that creates exact copies of OT traffic flows and sends them to AI platforms for analysis. AI-based taps operate fully out-of-band to prevent any latency and packet loss, enabling delivery of real-time anomaly insights while upholding the strict timing and stability requirements of industrial operations.

To further optimize anomaly detection in OT, engineers can use AI to create digital twins—virtual replicas of physical assets—to run process simulations in parallel to real operations. Digital twins can integrate multidomain simulations (for mechanical, electrical and fluid dynamics components, for example), calibrated with historical sensor data, to detect firmware alterations and conduct “what-if” threat testing.

Predictive analytics and incident response

AI integration can help shift OT defense from signature-based detection limited to known threats to predictive analytics strategies that anticipate threats across the network.

In OT security, predictive analytics uses AI to proactively identify and prioritize cyber risks, modeling potential attack scenarios on detailed network topologies and comprehensive asset catalogs. In threat hunting, for instance, proactive hunts can deploy AI agents to simulate cyberattacks on live asset inventories and map exploitable attack paths.

AI also supports graceful degradation of OT devices during cyberattacks so that the business can maintain at least partial operations.

For example, AI tools can trigger adaptive controls to shed nonessential loads and sustain core functions (such as power delivery) during ransomware attacks. And if a malware attack occurs, AI can help reroute traffic around compromised HMIs while keeping safety interlocks in place. 

Edge AI and decision-making

Adopting edge AI in OT environments can shift computational intelligence from distant cloud servers to onsite hardware (PLCs, industrial gateways, edge servers and even embedded controllers) directly on the plant floor or near critical processes. Edge AI deploys compact, optimized AI algorithms on edge hardware to process sensor data and drive autonomous actions, enabling engineers to turn rigid control loops into adaptive, intelligent systems.

AI-driven OT devices can make split-second, autonomous decisions at the point of action, where sensors capture raw data from motors, conveyors, valves and robots. These capabilities help teams minimize the process delays, bandwidth strain and single points of failure that occur when device data must continuously travel to and from cloud servers.

For instance, next-generation PLCs embed lightweight AI models in edge hardware for closed-loop control, automatically adjusting feed rates and pressures based on live telemetry and without external signaling from engineering personnel. And data processing units (DPUs)—programmable chips that sit between network ports and servers—can use AI to implement zero-trust security policies on a packet-by-packet basis without slowing down operational decision-making.

Author

Chrystal R. China

Staff Writer, Automation & ITOps

IBM Think
