Chapter 1. Overview
Logical partitions divide the physical resources of a single IBM Z machine amongst multiple logical machine images, or partitions. A logical partition is a subset of the processor hardware that is defined to support the operation of a system control program.
Each partition looks and operates like its own physical machine. It operates independently of, and without knowledge of, other partitions. Logical partitions can be defined with different configurations of processors, storage, I/O, and local time. Each logical partition runs its own operating system.
Chapter 2. How logical partitions are used
Logical partitions are used for hardware consolidation and workload balancing. Partitions are managed by the Processor Resource/Systems Manager (PR/SM), which is built into an IBM Z machine. The system operator defines the resources that are to be allocated to each logical partition. Most resources can be reconfigured to other partitions nondisruptively (that is, without requiring a power-on reset).
Once a General, Linux-only, SSC, or z/VM logical partition is defined and activated, you can load a supported control program into that logical partition. Firmware is automatically loaded and started in a Coupling Facility mode logical partition when it is activated.
Central storage and storage class memory
Central storage is defined to a logical partition before activation. When a logical partition is activated, the storage resources are allocated in contiguous blocks. These allocations can be dynamically reconfigured. Sharing of allocated central storage among multiple logical partitions is not allowed.
Storage class memory is a second type of memory that you can purchase and use. The physical memory in your IBM Z machine is divided between central storage and storage class memory, and the amount of each type can be changed only through a formal machine configuration change. Every logical partition requires central storage. Storage class memory is optional and is not supported by all operating systems.
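As a rough illustration of the contiguous-block allocation described above, the following minimal Python sketch hands out central storage to partitions at activation. The partition names, sizes, and total are hypothetical, not taken from any real configuration:

# Minimal sketch: central storage is handed out in contiguous blocks,
# one block per activated logical partition. All values are hypothetical.
CENTRAL_STORAGE_GB = 256          # total installed central storage (assumed)
allocations = {}                  # partition name -> (start_gb, end_gb)
next_free = 0                     # allocation proceeds contiguously

def activate(partition, size_gb):
    """Allocate one contiguous block of central storage at activation."""
    global next_free
    if next_free + size_gb > CENTRAL_STORAGE_GB:
        raise MemoryError("not enough central storage for " + partition)
    allocations[partition] = (next_free, next_free + size_gb)
    next_free += size_gb

activate("LP01", 64)
activate("LP02", 32)
print(allocations)                # {'LP01': (0, 64), 'LP02': (64, 96)}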
Figure 1. Example of a logical partition
Chapter 3. Defining logical partitions
At least one logical partition is required on a machine. Up to 85 partitions can be defined and active at any given time on some of the larger machines, with fewer allowed on other machines. (The maximum number of logical partitions depends on your server; consult your server's documentation for this information.)
Logical partition names and associated I/O are defined via the Input/Output Configuration Program (IOCP). The complete I/O definition results in an I/O configuration data set (IOCDS), which is used to initialize the machine's I/O configuration during power-on reset. The following IOCP input statement shows an example of defining three logical partitions in the configuration:
RESOURCE PARTITION=(CSS(0),(LP01,1),(LP02,2),(LP03,3))
In this statement, LP01, LP02, and LP03 are the partition names, each paired with its MIF image ID, in channel subsystem 0. The rest of the logical partition configuration is defined in activation profiles. Each activation profile contains information such as the number and type of processors and the amount of storage.
Activation of the logical partition is done manually via the Hardware Management Console (HMC), or automatically at Power-On Reset (POR).
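To make the contents of an activation profile concrete, here is a minimal Python sketch; the field names and example values are illustrative assumptions, not an actual HMC data format:

from dataclasses import dataclass

# Minimal sketch of what an activation profile captures; the field
# names are illustrative, not an actual HMC data format.
@dataclass
class ActivationProfile:
    name: str                  # partition name, matching the IOCP RESOURCE statement
    partition_type: str        # "General", "Linux only", "z/VM", "SSC", or "Coupling facility"
    initial_processors: int    # processors configured on at activation
    reserved_processors: int   # extra processors that can be added later
    central_storage_gb: int    # central storage is required for every partition
    weight: int                # processing weight, used for shared processors

lp01 = ActivationProfile("LP01", "General",
                         initial_processors=2, reserved_processors=4,
                         central_storage_gb=64, weight=200)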
Partition types
There are several configuration options to consider when defining logical partitions. One is the partition type. Depending on which partition types are available on your IBM Z server, you can define a logical partition as any one of the following:
- General
- Coupling facility
- Linux only
- z/VM
- Secure Services Container (SSC)
Processor types
Another configuration option is the definition of processor types. The types are:
- General processor (represented as GP in the figures)
- zIIP - z Integrated Information Processor
- IFL - Integrated Facility for Linux
- ICF - Internal Coupling Facility

Shared and dedicated processors
Physical processors are either:
- Shared amongst all logical processors of the same processor type in any partition
Using shared processors is the best way to maximize machine utilization: excess processor resource from one partition can be used by another partition. This sharing can be limited, however, by per-partition capping.
- Dedicated to a single logical processor in a single partition
Dedicating a processor provides the best performance. However, it does not allow excess processor time to be used by other logical processors.

Figure 4. LP02 and LP04 have shared physical processors
Figure 5. Dedicated physical processors. This figure shows how LP01 has dedicated general purpose and zIIP processors
Processing weights
For shared processors, the amount of physical processor time given to a partition is based on the partition's logical processor weight, relative to the rest of the active partitions. For example, LP01 is defined to have a weight of 200, and LP02 has a weight of 100. If both are then activated:
- LP01 gets up to 200/300 of the total processing power of the machine, and
- LP02 gets up to 100/300 of the total processing power of the machine.
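As a quick check of this arithmetic, the following minimal Python sketch computes each active partition's share from the weights in the example above (the second step, where a weight is changed, previews the redistribution shown later in Figure 11):

# Minimal sketch: a shared partition's share of processor time is its
# weight divided by the sum of the weights of all active partitions.
def shares(weights):
    total = sum(weights.values())
    return {lp: w / total for lp, w in weights.items()}

active = {"LP01": 200, "LP02": 100}
print(shares(active))    # LP01: 0.67, LP02: 0.33

# Changing a weight redistributes the shares (compare Figure 11):
active["LP02"] = 200
print(shares(active))    # LP01: 0.5, LP02: 0.5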
Chapter 4. Managing logical partitions
There are various ways to manage logical partitions.
Dynamically adding or removing resources
You can dynamically add processors or storage to a logical partition, or remove them from it, by configuring resources on or off. This requires specifying reserved resources at partition activation so that processors or storage can be added later as needed. For example, you might initially need two general processors but want to reserve the ability to add up to four more as the workload requires.
Newer servers, such as z10 EC and BC, allow you to change a partition definition for processors dynamically, so that pre-planning is not required.
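The following minimal Python sketch models this initial-plus-reserved scheme; the class and the error behavior are assumptions for illustration only:

# Minimal sketch: a partition activates with its initial processors and
# can configure more online, up to initial + reserved.
class PartitionProcessors:
    def __init__(self, initial, reserved):
        self.online = initial            # configured on at activation
        self.limit = initial + reserved  # ceiling set in the activation profile

    def configure_on(self, count=1):
        if self.online + count > self.limit:
            raise ValueError("exceeds the reserved processor limit")
        self.online += count

lp01 = PartitionProcessors(initial=2, reserved=4)
lp01.configure_on(3)     # workload grew: 5 processors now online
print(lp01.online)       # 5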


Adjusting a partition's weight
A partition's weight can be adjusted while the partition is active; PR/SM then redistributes shared processor time according to the new relative weights.

Figure 11. Redistribution of general processors after a partition weight change.
Changing shared and dedicated processors
In Figure 12, LP04 is changed from a shared ESA/390 partition to a dedicated Coupling Facility partition. Upon reactivation, the dedicated ICF is added to LP04. LP01 and LP02 make use of the general processor that becomes available, according to their share percentages.
Figure 12. Changing LP04 to a dedicated ICF processor.
Defining LPAR group capacity limits
Customers can define LPAR group capacity limits, specifying one or more groups of LPARs on a server, each with its own capacity limit. This allows z/OS to manage the groups in such a way that the sum of the LPARs' CPU utilization within a group does not exceed the group's defined capacity. Each logical partition in a group can still, optionally, define an individual logical partition capacity limit.
LPAR group capacity limits require that all logical partitions managed in the group run z/OS V1.8 or higher. LPAR group capacity limits can help provision a portion of a server to a group of logical partitions, allowing CPU resources to float more readily between those logical partitions without exceeding the group capacity in total. This results in more productive use of "white space" and higher server utilization, and can be an effective tool for service bureaus or for sub-capacity software pricing.
Figure 13 shows GROUP1 defined as a group of three partitions that are managed as an entity. Because LP05, LP06, and LP07 are managed as a group, LP05 could use all of the time allotted to the group, leaving nothing for LP06 and LP07.
Figure 13. Limiting groups of partitions as an entity.
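The starvation scenario that Figure 13 warns about can be illustrated with a minimal Python sketch; the capacity units, values, and first-come grant order are hypothetical simplifications of what z/OS actually does:

# Minimal sketch: a group cap bounds the combined consumption of the
# group's members; each member may also carry an individual cap.
GROUP_CAP = 300                   # capacity allotted to GROUP1 (hypothetical units)

def allocate(demands, individual_caps=None):
    individual_caps = individual_caps or {}
    remaining, grants = GROUP_CAP, {}
    for lp, demand in demands.items():    # toy first-come order, not real WLM policy
        grant = min(demand, individual_caps.get(lp, demand), remaining)
        grants[lp] = grant
        remaining -= grant
    return grants

# LP05 demands everything first and starves LP06 and LP07:
print(allocate({"LP05": 300, "LP06": 100, "LP07": 100}))
# {'LP05': 300, 'LP06': 0, 'LP07': 0}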
Chapter 5. Using Workload Manager (WLM)
Automatic adjustments can be made using Workload Manager (WLM). You can:
- Add/remove partition processors (up to a reserved amount)
- Shift weight between members of a sysplex on the same machine (a cluster)
Partition capping is mutually exclusive with WLM CPU Management.

Figure 14. Shifting of weight between members of a sysplex on the same cluster.
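A minimal Python sketch of the weight shift in Figure 14 follows; the member names and the shifted amount are hypothetical, and real WLM derives the shift from service goals rather than taking it as an input:

# Minimal sketch: WLM shifts weight between sysplex members on the same
# machine; the cluster's total weight stays constant.
def shift_weight(weights, donor, receiver, amount):
    if weights[donor] < amount:
        raise ValueError("donor has insufficient weight")
    weights[donor] -= amount
    weights[receiver] += amount
    return weights

cluster = {"LP01": 200, "LP02": 100}             # total weight: 300
print(shift_weight(cluster, "LP01", "LP02", 50))
# {'LP01': 150, 'LP02': 150}; the total is still 300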
Chapter 6. Additional resources
The following publication provides detailed information about logical partitions.
- PR/SM Planning Guide
Hardware Management Console (HMC) and Support Element (SE) information can be found in the console help system.