Partitions: Virtual images of a mainframe or LinuxONE system

A partition is a virtual representation of the hardware resources of an IBM Z® or LinuxONE system. A partition is the runtime environment for either a hypervisor and its guest operating-system images, each with their own applications; or a single operating system and its applications, which are sometimes called the workload.

The system planners at your company order and configure mainframe or LinuxONE systems according to their plan for the business applications that each system will support. This plan determines the system on which you configure your Linux® server and its workload, and determines which system resources are available when you configure a partition.

The following operating systems and hypervisors can run in a partition on a DPM-enabled system:
  • Various Linux distributions, which are listed on the IBM® tested platforms page for Linux environments. These distributions include supported versions of Red Hat® Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu Server (KVM or LPAR DPM).
  • z/VM® 7.1 or later. z/VM is supported as a virtualization hypervisor on which you can run multiple Linux images.

DPM also supports Secure Service Container, which is a container technology through which you can more quickly and securely deploy firmware and software appliances. Unlike most other types of partitions, a Secure Service Container partition contains its own embedded operating system, security mechanisms, and other features that are specifically designed for simplifying the installation of appliances, and for securely hosting them.

Figure 1 illustrates the physical and virtual resources of a mainframe or LinuxONE system, along with the firmware components that are used to manage these resources. Systems can be configured to run in either standard Processor Resource/Systems Manager (PR/SM) mode or IBM Dynamic Partition Manager (DPM) mode. DPM uses PR/SM functions but presents a simplified user interface for creating partitions and managing system resources through tasks in the Hardware Management Console (HMC) / Support Element (SE).

Figure 1. Partitions configured on a DPM-enabled system
This diagram illustrates several partitions as virtual copies of physical hardware.
In Figure 1, several partitions are configured on a DPM-enabled system. Each partition hosts either a hypervisor or an operating system, and has virtual system resources that represent its share of physical resources: processors, memory, and adapters.
  • Partitions A through C each host one Linux operating system image.
  • Partition D hosts one z/VM image and its multiple Linux guests.
  • Partition E hosts one Linux hypervisor (for example, Ubuntu KVM) and its multiple guests.
  • Partition F is a Secure Service Container partition that hosts a supported software appliance.
Note that DPM does not manage any hypervisor guests, or any appliances that run in a Secure Service Container partition.

Partition properties and configuration settings

A partition definition contains the specific properties and configuration settings for one partition on a DPM-enabled system. You use the New Partition task to create a partition definition; through that task, you specify how many processors, how much memory, and which adapters to use.

When you use the New Partition task to create a partition definition, DPM indicates which system resources are available for your partition to use, and also shows the current usage or reservation of system resources by active (started) partitions or by partitions with reserved resources. You can define more resources than are currently available, and you can specify whether DPM is to reserve those resources for the partition. When you specify that the system resources for a partition are to be reserved, DPM does not allocate them to any other partitions. This reservation means that your partition is guaranteed to be startable; in contrast, partitions without reserved resources might fail to start if sufficient resources are not available.
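The reservation rule above implies a simple accounting model: because reserved resources are never allocated to other partitions, the sum of all reservations cannot exceed the entitled total. The following sketch illustrates that arithmetic for processors only; it is a simplified illustration, not DPM's actual accounting, which also considers active (started) partitions, memory, and adapters.

```python
def can_reserve(entitled: int, reserved_by_others: int, requested: int) -> bool:
    """Return True if 'requested' processors can still be reserved.

    Simplified model: reservations by all partitions on a system
    cannot exceed the number of entitled processors. Illustrative
    only; real DPM accounting covers more resource types.
    """
    return requested <= entitled - reserved_by_others
```

For example, on a system with 10 entitled IFLs of which 6 are already reserved, a new partition can reserve at most 4 more.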

The following list describes key properties and configuration settings of partitions on a DPM-enabled system. The list labels correspond to navigation labels or individual fields in the New Partition task, and the Partition Details task, through which you can modify an existing partition definition. For a complete list of the partition properties and settings, see the online help for either task.

Name
A partition name must be unique among all partitions that are defined on the same system. On a DPM-enabled system, you can define a name for your partition that is 1 - 64 characters in length. Supported characters are alphanumerics, blanks, periods, underscores, dashes, or at symbols (@). Names cannot start or end with blank characters. This partition name is shown in HMC task displays that contain information about system partitions.

A partition also has a short name, which is a name by which the operating system can identify the partition. By default, DPM automatically generates a partition short name that you can modify.
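The naming rules above can be expressed as a single regular expression, which is useful for pre-checking names in automation scripts. This is an illustrative sketch, not an official validation routine; DPM performs its own checks when you create the partition definition.

```python
import re

# Allowed characters: alphanumerics, blanks, periods, underscores,
# dashes, and at symbols (@); length 1 - 64; no leading or trailing
# blanks. Illustrative only -- DPM enforces its own rules.
_EDGE = r"A-Za-z0-9._@\-"      # characters allowed at the ends (no blank)
_INNER = r"A-Za-z0-9._@\- "    # characters allowed in the middle
NAME_RE = re.compile(rf"^[{_EDGE}](?:[{_INNER}]{{0,62}}[{_EDGE}])?$")

def is_valid_partition_name(name: str) -> bool:
    """Return True if name satisfies the DPM partition-name rules."""
    return bool(NAME_RE.match(name))
```

The pattern anchors a non-blank character at each end and allows up to 62 characters (including blanks) in between, for a maximum total length of 64.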

Partition type
Administrators can choose one of the following partition types for a new partition. Through the partition type, DPM can optimize the partition configuration for a specific hypervisor or operating system.
Linux
In this type of partition, you can install and run a Linux distribution as a single operating system, or as a hypervisor for multiple guests.
z/VM
In this type of partition, you can install and run z/VM as a hypervisor for multiple Linux guests.
Secure Service Container
This type of partition is a Secure Service Container, in which you can run only specific software appliances that the Secure Service Container supports.
Processors
Most DPM-enabled systems support one type of processor: Integrated Facility for Linux (IFL). In some cases, a system might also support an additional type: Central Processor (CP).

Each partition on a system can either have exclusive use of a specific number of physical processors installed on the system, or can share processor resources from the pool of physical processors that are not dedicated to other partitions on the same system. The number of available processors is limited to the number of entitled processors on the system. Entitled processors are processors that are licensed for use on the system; the number of entitled processors might be less than the total number of physical processors that are installed on the system.

When you create a new partition on a DPM-enabled system:
  • You can select which processor type to use only if both types are installed on the system. Generally, IFLs are the most appropriate choice for Linux servers. If you want to enable simultaneous multithreading for this partition, you must select the IFL processor type.
  • You can specify the number of processors to assign to the partition, and view how your selection affects the processing resources of other partitions on the system. The number of processors that you can assign ranges from a minimum value of 1 to a maximum value of the total number of entitled processors on the system.
Memory
Each partition on a DPM-enabled system has exclusive use of a user-defined portion of the total amount of entitled memory that is installed on the system. Entitled memory is the amount of memory that is licensed for use, which might be less than the total amount of memory that is installed on the system. The amount of memory that a specific partition requires depends on the storage limits of the operating system that will run in it, on the storage requirements of the applications that run on the operating system, and on the size of the I/O configuration.

When you define the amount of memory to be assigned, or allocated, to a specific partition, you specify an initial amount of memory, and a maximum amount that must be equal to or greater than the initial amount. The partition receives its initial amount when it is started. If the maximum amount of memory is greater than the initial amount, you can add memory up to this maximum to the active partition, without stopping and restarting it.

Secure Service Container partitions require an initial amount of at least 4096 MB (4 GB).
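The memory rules above (maximum must be greater than or equal to initial, and Secure Service Container partitions need at least 4096 MB initially) can be captured in a short validation helper. This sketch encodes only the rules stated here; DPM enforces additional constraints, such as allocation granularity and system limits.

```python
SSC_MIN_INITIAL_MB = 4096  # Secure Service Container minimum (4 GB)

def check_memory(initial_mb: int, maximum_mb: int,
                 partition_type: str = "linux") -> list[str]:
    """Return a list of problems with a proposed memory configuration.

    Encodes only the rules described in the text; DPM enforces more
    (for example, granularity and entitled-memory limits).
    """
    problems = []
    if maximum_mb < initial_mb:
        problems.append("maximum memory must be >= initial memory")
    if partition_type == "ssc" and initial_mb < SSC_MIN_INITIAL_MB:
        problems.append(f"Secure Service Container partitions need "
                        f"at least {SSC_MIN_INITIAL_MB} MB initially")
    return problems
```

Setting the maximum higher than the initial amount is what permits adding memory to the active partition later without a restart.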

Network
Network interface cards (NICs) provide a partition with access to internal or external networks that are part of or connected to a system. Each NIC represents a unique connection between the partition and a specific network adapter that is defined or installed on the system.

You need to define a NIC for each network connection that is required for the operating system or hypervisor that runs on this partition, or for the applications that the operating system or hypervisor supports. DPM supports several types of network adapters, including Open Systems Adapter-Express (OSA-Express) features, IBM HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) Express features.

Note: Starting with DPM R5.2, you can create and manage HiperSockets only through the Configure Partition Links task. Although you can use the Network section of the New Partition and Partition Details tasks to manage NICs for OSA and RoCE adapters, you cannot use that section to manage HiperSockets NICs. For details, see The HiperSockets user experience with DPM R5.2.

Secure Service Container partitions require at least one NIC for communication with the Secure Service Container web interface.

Storage
Storage groups or tape links provide a partition with access to internal storage devices, or to external storage area networks (SANs) and devices that are connected to a system. A storage group is a logical group of storage volumes that share certain attributes, such as the type or size. A tape link defines the attributes of a connection that one or more partitions can use to access one FCP tape library in the SAN.

System administrators create storage groups or tape links for partitions to use. The system administrators work together with storage administrators to correctly configure a storage group or tape link and its associated devices for use.

For partitions to access storage, you attach one or more storage groups or tape links to the partition. Through storage groups and tape links, partitions can access the following types of storage:
  • Fibre Connection (FICON®) extended count key data (ECKD) direct-access storage devices (DASD), and Fibre Channel Protocol (FCP) Small Computer System Interface (SCSI) disk storage devices, including FCP tape libraries. These devices are physically located in the SAN. FICON and FCP storage groups and FCP tape links can be defined as either dedicated for use by only one partition, or shared by multiple partitions.
  • Non-Volatile Memory Express® (NVMe) solid state drives, which are installed in a system. Only one partition can use an NVMe storage group at any given time; an NVMe storage group cannot be shared. However, a partition that has attached NVMe storage groups can also have attached FICON and FCP storage groups, and FCP tape links.
Cryptos
The term cryptos is a commonly used abbreviation for adapters that provide cryptographic processing functions. DPM supports various Crypto Express features.

Crypto features are optional and, therefore, might not be installed on the system. If these features are installed, your decision to enable your partition to access them depends on your company's security policies, and the workload that your partition will support. Your system planner or security administrator can advise you about the use of available crypto features.

Partition links
Partition links interconnect two or more partitions that share the same network configuration and reside on the same system. Through the Configure Partition Links task, you can quickly configure network connections among partitions on the same system to improve performance.

The New Partition task (in both basic and advanced modes) and the Partition Details task contain a section for partition links. However, this section is read-only, because you can specify the partitions that use a partition link only through the Configure Partition Links task.

Boot options
When you define a partition with a type of Linux or z/VM, you can specify the boot option through which DPM locates and installs the executables for the hypervisor or operating system to be run in the partition. You can choose one of several different options, including booting from a storage device, network server, FTP server (with your choice of protocol), and Hardware Management Console removable media.

DPM automatically sets the boot option for the first-time start of Secure Service Container partitions.

Note: Starting with DPM R4.0, you can select options to validate the operating system image that you boot from a volume in a storage group. For more information, see Validating boot images of operating systems.

Creating, starting, and managing a partition

To create a partition, you use the New Partition task, through which you define the hardware resources that the partition can use: processors, memory, adapters, and so on. The end result of the task is a partition definition, which you can modify through the Partition Details task, or use to start the partition through the Start task. When you start a partition, DPM uses the partition definition to determine which hardware resources to allocate to the partition, and starts the initialization process.

After the partition definition exists, you can use the Partition Details task to modify it; note that you cannot change the partition type after you create the partition definition. You can also use the Stop task to stop a partition, or the Delete Partition task to delete it. You can accomplish these tasks programmatically as well, through the Hardware Management Console Web Services application programming interfaces (APIs) for DPM.
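For the programmatic path, a create-partition request sends a JSON body of partition properties to the Web Services APIs. The sketch below builds such a body; the property names used here (`ifl-processors`, `initial-memory`, `maximum-memory`) are assumptions that follow the API's naming conventions, so verify them against the Web Services API documentation for your HMC level before use.

```python
import json

def build_create_partition_body(name: str, ifl_processors: int,
                                initial_memory_mb: int,
                                maximum_memory_mb: int) -> str:
    """Build an illustrative JSON body for a create-partition request.

    Property names are assumptions modeled on the HMC Web Services
    API conventions; check the API documentation for exact names.
    """
    body = {
        "name": name,
        "type": "linux",                       # cannot change after creation
        "ifl-processors": ifl_processors,      # shared IFL processors
        "initial-memory": initial_memory_mb,   # allocated at start
        "maximum-memory": maximum_memory_mb,   # must be >= initial-memory
    }
    return json.dumps(body)
```

Note that the `type` value reflects the rule stated above: the partition type cannot be changed after the partition definition is created.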

To check on the status of partitions, select the Systems Management node in the HMC navigation pane, and select the Partitions tab. The Status column for each partition contains one of the following values.
Active
Indicates that the partition has successfully started and is operating normally.
Communications not active
Indicates a problem with the communication between the Hardware Management Console (HMC) and the Support Element (SE).
Degraded
Indicates that the partition successfully started and is operating, but fewer physical resources are available to it than the partition definition requires. This status might be acceptable, for example, for partitions that do not have reserved resources.
Paused
Indicates that, because a user has stopped all processors, the partition is not running its workload. In this case, because the partition was successfully started, its resources are shown as active and are still associated with this partition.
Reservation error
Indicates that the availability of physical resources does not match the reserved resources that are stated in the definition for this partition. The partition cannot start until sufficient resources are available.
Starting
Indicates the transitional phase between Stopped state and Active state, as the result of a Start task issued against this partition.
Status check
Indicates that the current status of the partition is unknown. This condition usually occurs under one of the following circumstances:
  • When the SE is starting up; in this case, this partition status is temporary.
  • When the SE and the DPM-enabled system to which it is attached cannot communicate.
Stopped
Indicates that the partition has normally ended its operation, and exists only as a partition definition.
Stopping
Indicates the transitional phase between Active state and Stopped state, as the result of a Stop task issued against this partition.
Terminated
Indicates that all of the processors for this partition are in a disabled wait state, or a system check stop occurred. The partition is not running its workload. In this case, because the partition was successfully started, its resources are shown as active and are still associated with this partition.

You can create as many partition definitions as you want, but the system limit determines the maximum number of partitions that can be active at any given time. Practical limitations of memory size, I/O availability, and available processing power usually reduce the number of concurrently active partitions to less than the system maximum. Moreover, conditions on the system might prevent a partition from starting successfully, or change its status after it starts. You can view the status of a partition through the Partition Details task, or use the Monitor System Events task to set notifications for specific partition events, such as a change in status.
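For event-driven automation, the status values listed above can be grouped by whether they normally warrant operator attention. The grouping below is an illustrative sketch only; which statuses count as "needing attention" (for example, whether Paused does) depends on your site's operating policy.

```python
# Partition status values from the list above, grouped for monitoring.
# Illustrative grouping; adjust to your site's policy.
TRANSITIONAL = {"Starting", "Stopping"}
NORMAL = {"Active", "Stopped", "Paused"}
ATTENTION = {"Communications not active", "Degraded",
             "Reservation error", "Status check", "Terminated"}

def needs_attention(status: str) -> bool:
    """Return True for statuses that usually warrant investigation."""
    if status in ATTENTION:
        return True
    if status in TRANSITIONAL or status in NORMAL:
        return False
    raise ValueError(f"unknown partition status: {status!r}")
```

A monitoring script could call this helper for each partition listed on the Partitions tab, or in response to status-change events from the Monitor System Events task.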

For more details about working with partitions, see Basic tasks for Linux administrators.