Starting a partition and its operating system or hypervisor
This procedure provides step-by-step instructions for starting a partition with a type of Linux or z/VM, and its operating system or hypervisor.
Before you begin
- For partitions with a type of Secure Service Container, see the appropriate edition of Secure Service Container User's Guide for information about starting and managing Secure Service Container partitions and their appliances. This book is available at https://www.ibm.com/docs/en/systems-hardware.
- Make sure that you log in to the Hardware Management Console (HMC) with a user ID that has authorization to use the Start task to start a partition. You can use either a customized user role that is authorized to this task, or one of the default user IDs listed for the Start task in DPM task and resource roles.
Procedure
Results
- Success indicates partitions that have started.
- Failed indicates partitions that failed to start.
- Cancelled indicates partitions for which the start operation was canceled.
For any result other than Success, use the information in the Details column to diagnose and correct the problem.
What to do next
- To log in to a Linux® operating system, use the Operating System Messages task or the Integrated ASCII Console task. The Integrated ASCII Console task must be enabled through the operating system before you can use it.
- To log in to a z/VM® hypervisor that is hosting multiple Linux guests, use the Integrated 3270 Console task.
- Configuring partition resources on the operating system
- If this is the first time that you have started this partition, you need to configure the partition resources (processors, memory, and adapters) through configuration files on the operating system. The suggested practice is to open the Partition Details task and use it as a reference as you create or modify the appropriate configuration files on the operating system. Depending on the version of Linux that you have installed, the operating system might automatically configure some resources.
- Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) devices are automatically configured when you are running any of the minimum supported Linux versions. For recommended Linux on IBM Z® and LinuxONE distribution levels, see the IBM® tested platforms at: https://www.ibm.com/support/pages/linux-ibm-z-tested-platforms
- Auto-configuration of other devices requires a version of the Linux operating system that supports auto-configuration. These other devices include Fibre Channel connections (FICON®) in Fibre Channel Protocol (FCP) mode, IBM HiperSockets, and Open Systems Adapter-Express (OSA-Express) devices. See the Red Hat®, SUSE, or Ubuntu product information page to determine which RHEL, SLES, or Ubuntu Server version provides this support.
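On a distribution that supports auto-configuration, you can check which devices the operating system already brought online with the lszdev command from the s390-tools package. This is a sketch that runs only on Linux on IBM Z or LinuxONE; the device number 0.0.1100 is a hypothetical example.

```shell
# List all I/O devices that are currently configured online
# (s390-tools; output varies by distribution and hardware):
lszdev --online

# Show the active and persistent configuration of one OSA (qeth) device;
# 0.0.1100 is a hypothetical device number:
lszdev qeth 0.0.1100 --info
```

If a device that you expect is missing from the list, it was not auto-configured and you need to bring it online manually, as described in the following sections.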
- Ensuring that FICON or FCP storage groups are visible to the Linux operating system
- After you start the new partition, you might need to enter Linux commands to make FICON or FCP storage groups available to the operating system that the partition hosts. NVMe storage groups are automatically detected by the operating system, so you do not need to enter Linux commands to make that type of storage group available. Similarly, tape devices that are available through attached tape links are automatically detected by the operating system, so you do not need to enter Linux commands for tape devices either.
Typically, the operating system stores the FCP HBA or FICON volume configuration so it can automatically bring the devices online on the next reboot, so you need to take action only for the initial boot of the operating system.
- When attaching a storage group in Complete state
- For an FCP storage group:
- If the storage group contained the boot volume, the operating system brings online all of the HBAs for this storage group, and all volumes in the storage group are available. No action is required unless you have attached other storage groups.
- If the storage group does not contain the boot volume, and the operating system is not configured to bring HBAs online automatically, you need to issue the chccwdev command to bring online all of the HBAs.
- For a FICON storage group, the operating system brings online only the boot volume. You need to issue the chccwdev command to bring online all of the remaining volumes in the storage group that contains the boot volume, as well as the volumes in any other storage groups that you attached.
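The steps above can be sketched with the chccwdev command. All device numbers shown here are hypothetical; take the real HBA device numbers and FICON volume device numbers from the tasks described later in this section. These commands run only on Linux on IBM Z or LinuxONE.

```shell
# FCP storage group without the boot volume: bring the HBAs online
# (0.0.1f00 and 0.0.1f01 are hypothetical HBA device numbers):
chccwdev -e 0.0.1f00,0.0.1f01

# FICON storage group: the boot volume is already online; bring the
# remaining volumes online (0.0.9001-0.0.9003 is a hypothetical
# device number range):
chccwdev -e 0.0.9001-0.0.9003
```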
- When attaching an unfulfilled storage group that becomes Complete as the partition is running
- For an FCP storage group:
- If adapters were assigned to HBAs while the partition is running, you need to use the chchp command to activate the channel paths for those new adapters.
- To access the volumes in the storage group, you need to issue the chccwdev command to bring online all of the HBAs.
- For a FICON storage group:
- If the adapters connecting the storage group to the storage subsystem were assigned while the partition is running, use the chchp command to activate the channel paths for those new adapters.
- All volumes are offline. You need to issue the chccwdev command to bring online all of the volumes in the storage group.
To find the IDs that you need to use for the Linux commands, use the following tasks.
- HBA device numbers are available in the Host Bus Adapters (HBA) table when you expand the storage group table entry in the Storage section of the Partition Details task.
- Channel path IDs for FCP adapters are shown in the Host Bus Adapters (HBA) table when you expand the storage group table entry in the Storage section of the Partition Details task.
- Channel path IDs for FICON adapters are shown on the ADAPTERS tab of the Storage Group details; open the Configure Storage task and select the storage group in the Storage Overview to open the Storage Group details page.
- FICON volume device numbers are shown on the VOLUMES tab of the Storage Group details page; open the Configure Storage task and select the storage group in the Storage Overview to open the Storage Group details page.
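For the running-partition case above, the sequence is to activate the channel path first and then bring the devices online. The channel path ID 50 and the device number 0.0.1f00 are hypothetical values standing in for the IDs obtained from the tasks just listed; the commands run only on Linux on IBM Z or LinuxONE.

```shell
# Vary the channel path for a newly assigned adapter logically online
# (50 is a hypothetical channel path ID, in hexadecimal):
chchp -v 1 50

# Then bring the HBAs (FCP) or volumes (FICON) that use this path online
# (0.0.1f00 is a hypothetical device number):
chccwdev -e 0.0.1f00
```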
- Attaching partition links to a stopped or active partition
- Use the Configure Partition Links task to attach a partition link to one or more stopped or started partitions. To open the Configure Partition Links task to the partition links overview, you do not need any specific task authorization; however, to view any partition links in the overview, you must have object access to one or more partition links. To create, delete, or edit a partition link, you also must have a customized user ID with the predefined System Programmer Tasks role or equivalent permissions.
To attach a partition link to a stopped or active partition:
- You need to have the correct authorization to either create or edit the partition link, and you need object access to the partition.
- When you either create a new or edit an existing partition link through the Configure Partition Links task, you add the partition during the create or edit process.
- When you create a new partition link or save your changes to an existing partition link, DPM asynchronously attaches the partition link to the partitions that you added as part of the create or edit request. For partitions in Active state, DPM dynamically configures the partition link as soon as it is created.
For more information about required authorization, and for instructions for creating or editing a partition link, see the online help for the Configure Partition Links task.
- Verifying that the partition resources are online
- To verify that the partition resources are online, use the appropriate Linux commands, samples of which are displayed in the following list.
- To display information about processor resources, use the lscpu command. The following screen shows a sample display that results from entering this command.
Figure 1. Sample displays resulting from the lscpu command 
- To display information about memory resources, use the lsmem command. The following screen shows a sample display that results from entering this command.
Figure 2. Sample displays resulting from the lsmem command 
- To display information about adapters, use the appropriate command for the device type. For example, to view Open Systems Adapter-Express (OSA-Express) features, use the lsqeth, lscss, and lschp device driver commands. The following screens show sample displays that result from entering these commands.
Figure 3. Sample displays resulting from the lsqeth and lscss commands 
Figure 4. Sample displays resulting from the lschp command
To display information about adapters that use the PCI Express (PCIe) protocol, such as Non-Volatile Memory Express® (NVMe) adapters, use the lspci command, as shown in Figure 5.
Figure 5. Sample displays resulting from the lspci command
To display information about FCP tape storage, use the lscss command to display the HBA device numbers that the partition is using for tape links, and the lstape command to list the available tape drives. Figure 6 shows sample results from the lstape command.
Figure 6. Sample displays resulting from the lstape command
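Because the sample screens are not reproduced here, the following commands illustrate the kind of checks described above. The lscpu and lsmem commands run on any Linux system; the exact fields vary by distribution and kernel.

```shell
# Summarize the processors that the partition sees
# (architecture, online CPU count, topology):
lscpu

# Summarize online and offline memory ranges and the memory block size:
lsmem

# List PCIe functions, such as NVMe or RoCE devices (if any are assigned):
lspci
```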
- Specifying the relative port number of an OSA device
- If the partition is connected to a network through an OSA-Express adapter port other than port 0, you need to manually specify the relative port number through a Linux qeth device driver command before entering the Linux command to bring the device online. The following sample commands show how to create a device group, specify the relative port number and layer mode, and bring the group of devices online. The second command specifies the port number; it writes the value 1 to the portno attribute.
echo 0.0.1100,0.0.1101,0.0.1102 > /sys/bus/ccwgroup/drivers/qeth/group
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.1100/portno
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.1100/layer2
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.1100/online
- Configuring secure execution for a Linux hypervisor
- If the IBM Secure Execution for Linux feature is enabled on this system, you can configure a Linux operating system that functions as a hypervisor for secure execution, which isolates and protects any guests that run on the hypervisor by restricting host access to guest workloads and data.
- To determine whether the IBM Secure Execution for Linux feature is enabled on the system, go to the General section of the System Details task and check the values displayed for the Secure Execution field.
- To configure Linux for secure execution, see the product documentation for the Linux distribution that you are using as a hypervisor.
- To determine whether the Linux hypervisor that runs on a partition is configured for secure execution, go to the General section of the Partition Details task and check the value displayed for the Secure Execution field.
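From inside the Linux hypervisor itself, you can also check whether the kernel started as a secure-execution (protected virtualization) host. The sysfs path below is exposed by recent kernels on IBM Z with Ultravisor support and is absent on other platforms; treat this sketch as an assumption to verify against your distribution's documentation.

```shell
# Prints 1 if the kernel started as a secure-execution host;
# the file exists only on IBM Z kernels with Ultravisor support:
cat /sys/firmware/uv/prot_virt_host
```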
- Finding additional information about operating system or hypervisor commands
- For more information about using Linux commands to work with partition resources and adapters, see the Device Drivers, Features, and Commands documentation for the Linux kernel version that you are using. This documentation, which also describes commands and parameters for configuring Linux on IBM Z and LinuxONE servers, is available in IBM Documentation at https://www.ibm.com/docs/en/linux-on-systems?topic=overview-device-drivers-features-commands
- For information about using z/VM commands to work with partition resources and adapters, see the z/VM: CP Commands and Utilities Reference for the z/VM version that you are using. The z/VM library is available in IBM Documentation at https://www.ibm.com/docs/en/zvm