HMC command line

Updated 4/12/13, 4:25 PM by BrianRapp


The management of a virtualized operating environment is normally done using the Web-based System Management Interface (WebSM) or the Remote WebSM client on systems connected to a Hardware Management Console (HMC), or using the Integrated Virtualization Manager (IVM) web-based front end if you don't use (or have, or want) an HMC. Both methods are based on a graphical user interface (GUI) and both environments are enabled for remote management - i.e. you don't have to sit in front of the machine to manage the logical partitions.

But because of this feature, both management GUIs rely on a certain amount of available network bandwidth - the Remote WebSM client more than the IVM GUI - and this tends to become an issue if you are not on the local LAN but, let's say, somewhere else (patience is a virtue).

And sometimes it is simply bothersome to click through all those colorful menus just to get some information or do a simple task. Sometimes you could do a job faster, and automatically, with scripts - but how do you write a script for a GUI...

Regardless of whether you use an HMC or the IVM to manage POWER5/5+ based systems, there is a command line interface (CLI) available for both, and I want to show you some of the possibilities for using it.


Please note...

The following examples are mainly valid for use in an HMC-based environment. Most of the commands are available on the IVM, too, but the syntax and the possible options may differ.

Enabling the Remote Command Line Execution

By default, remote command line execution is enabled on an IVM-based system - as soon as you log in to the Virtual I/O Server (VIOS) running as LPAR1, either using telnet, ssh or a local console, you can invoke the commands.

In an HMC-based environment it is different - here you must explicitly enable remote command line execution.

Enabling Remote Command Line Execution on the HMC

On the HMC GUI (either locally or remotely) click on HMC Management in the Navigation area, then on HMC Configuration and finally on Enable/Disable Remote Command Execution. That's it - simple. Now you can try to log in to the HMC using an SSH client of your choice.


Please note...

The login name is hscroot, not root!

Enabling SSH access without password

Now that you can access the HMC using SSH, it might be useful to allow access from certain workstations without being prompted for a username and a password - this is quite useful, especially when using scripts. If you don't need it, just skip the following steps.

The first step is to generate a public/private key pair on your client. Here's an example from my Linux workstation.

[pjuerss@ankh-morpork ~]$ ssh-keygen -f /home/pjuerss/.ssh/id_dsa -q -t dsa -N ""

The keys are stored at /USER/HOME/.ssh/id_dsa (private key) and /USER/HOME/.ssh/id_dsa.pub (public key).

Now you must tell the HMC to accept this key using the mkauthkeys command.

[pjuerss@ankh-morpork ~]$ ssh hscroot@hmc-570 "mkauthkeys --add '[key-string from /home/pjuerss/.ssh/id_dsa.pub]'"
hscroot@hmc-570's password:

That's it - now try it with some command like...

[pjuerss@ankh-morpork ~]$ ssh hscroot@hmc-570 date
Fri Sep 15 16:49:22 CEST 2006

...to see if it works.


Please note...

On an IVM-based system there is no mkauthkeys command, so you have to append the generated public key to

/home/padmin/.ssh/authorized_keys2 manually. Please be aware that this is not supported by IBM!

So do it at your own risk!

The Command Line Interface

Now that you can use SSH to connect to an HMC, you are ready to start using the well documented and easy to use command line interface.

Well, it's true that the documentation of the command syntax is good - run a command without any options and you will get all possible and required fields. In addition, the man page of each command gives you more information and some usage examples, which are a great help.

In the following sections I will show you some commands I found quite useful - but there are more of them.

Information about the various commands and who can use them is available at the IBM System p Hardware Information Center:

HMC Command Line Reference

IVM Command Line Reference

Here is - I hope - a list of all HMC-related commands available in the restricted shell:

bkconsdata    bkprofdata
chaccfg       chhwres            chsysstate
chcod         chled              chusrtca
chcuod        chlparutil         chvet
chhmc         chsacfg            chsyscfg
chhmcusr      chsvcevent         chsyspwd
hmcshutdown   hmcwin
installios    osinstall
lscuod        lshwinfo     lslparutil   lssvcevents  lsvet
lsdump        lshwres      lsmediadev   lssyscfg
lshmc         lsled        lspartition  lssysconn
lsaccfg       lshmcusr     lslic        lsrefcode    lssysplan
lscod         lshsc        lslock       lssacfg      lsusrtca
migrcfg       mkaccfg     mkhmcusr    mksysconn   mkvterm
mkauthkeys    mksyscfg    mksysplan
rmaccfg      rmlparutil   rmsysplan    rstprofdata
rmhmcusr     rmsyscfg     rmvterm      runlpcmd
rmlock       rmsysconn    rsthwres

At the risk of repeating myself: each command comes with good man page documentation, and using a command without additional options will give you an impression of how to use it.

The sad news is that some of the commands tend to be - well - long and sometimes not really self-explanatory.

Getting information about the universe and the rest of it

One of the first things you might want to know is which systems are connected to your HMC, how many LPARs are already defined on them, etc. To get the desired information you can use the lssyscfg command.

hscroot@hmc-op:~> lssyscfg -r sys

As you might have noticed, the output is a comma-separated values (CSV) list, which is not really nice to read (from a human point of view).
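If you want something friendlier to read, you can pipe the CSV output through standard tools. A minimal sketch, using a hypothetical sample record instead of a live HMC connection (note that real lssyscfg output can contain quoted multi-value fields, which a plain tr would split as well):

```shell
#!/bin/sh
# Hypothetical sample of 'lssyscfg -r sys' output - the attribute names and
# values are illustrative, not captured from a real HMC.
csv='name=op710-2-SN1008B2A,type_model=9110-510,serial_num=1008B2A,state=Operating'

# Turn one CSV record into one attribute per line for easier reading.
echo "$csv" | tr ',' '\n'
```

This prints one name=value pair per line, which is much easier to scan than one long record.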

OK, most of the time and for most other commands you don't need the full-blown output. But you must know the name of the system you're working on - this is important! Try this.

hscroot@hmc-op:~> lssyscfg -r sys -F name

As you can see, the option -F limits the output to the fields you're interested in. Nevertheless, the command without -F is useful to see all the field names available.

Now let's have a look at a specific system, the LPARs defined on it, and the status of each LPAR.

hscroot@hmc-op:~> lssyscfg -m op710-2-SN1008B2A -r lpar -F name,lpar_id,state

As you see, you must specify a system with -m to get a list of the LPARs on that system - which makes sense somehow.


Please note...

For most commands the syntax is (command) -m (systemname) (some other options). Because it is possible to attach more than one server to an HMC, you must specify the system you're planning to work on. This is true for IVM-based servers, too.

And finally it might be interesting to see which profiles are defined for a specific partition.


Please note...

The output of the "real" command shown below is one large line. The following example uses line breaks to be more readable.

Nevertheless, it is a point to remember: all commands - no matter how long they get - are written on one line!

hscroot@hmc-op:~> lssyscfg -m op710-1-SN1008B1A -r prof --filter "lpar_ids=1"
name=normal,lpar_name=op710-1-VIO-Server,lpar_id=1,lpar_env=vioserver,all_resources=0, \
min_mem=1024,desired_mem=1024,max_mem=2048,proc_mode=shared,min_proc_units=0.1, \

You might have noticed the option --filter at the end of the command. This little helper is - guess what - a filter, because sometimes it doesn't make sense to scroll through a very long list just to get the information for one LPAR.
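Since every record is a list of name=value pairs, picking out a single attribute is also easy with awk. A sketch using a shortened, made-up profile record in place of live lssyscfg output:

```shell
#!/bin/sh
# Hypothetical, shortened profile record; in real life this would come from
# 'lssyscfg -r prof' on the HMC.
prof='name=normal,lpar_name=op710-1-VIO-Server,lpar_id=1,desired_mem=1024'

# Extract a single attribute (here: desired_mem) from the name=value list.
echo "$prof" | tr ',' '\n' | awk -F= '$1 == "desired_mem" {print $2}'
```

This prints just 1024 - handy in scripts where you need one value rather than the whole record.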

Now let's have a look at the attention LEDs of the LPARs. This example is from an IVM system.

$ lsled -r sa -t virtuallpar
lpar_id=1,lpar_name=IBM VIO 1.3-FP8.0,state=off
$ lsled -r sa -t phys

Nice, huh? To see how to manipulate the LED state, read on. But first let's have a look at the installed hardware resources in a system - and please forgive me for not showing all possible queries, just some nice ones.

hscroot@hmc-op:~> lshwres -r mem -m op720-1-SN100486A --level sys

hscroot@hmc-op:~> lshwres -m op720-1-SN100486A -r io --rsubtype slot -F description,unit_phys_loc,bus_id,phys_loc,lpar_id
Universal Serial Bus UHC Spec,U787B.001.DNW1733,2,T7,none
Other Mass Storage Controller,U787B.001.DNW1733,2,T16,none
PCI Fibre Channel Disk Controller,U787B.001.DNW1733,2,C3,1
PCI RAID Disk Unit Controller,U787B.001.DNW1733,2,C4,2
PCI Fibre Channel Disk Controller,U787B.001.DNW1733,2,C5,2
PCI 10/100/1000Mbps Ethernet UTP 2-port,U787B.001.DNW1733,3,T9,1
PCI RAID Disk Unit Controller,U787B.001.DNW1733,3,T14,1
PCI 10/100/1000Mbps Ethernet UTP 2-port,U787B.001.DNW1733,3,C1,2
PCI 10/100/1000Mbps Ethernet UTP 2-port,U787B.001.DNW1733,3,C2,none

hscroot@hmc-op:~> lshwres -m op720-1-SN100486A -r io --rsubtype slot -F description,drc_index,lpar_id
Universal Serial Bus UHC Spec,21010002,none
Other Mass Storage Controller,21020002,none
PCI Fibre Channel Disk Controller,21030002,1
PCI RAID Disk Unit Controller,21040002,2
PCI Fibre Channel Disk Controller,21050002,2
PCI 10/100/1000Mbps Ethernet UTP 2-port,21010003,1
PCI RAID Disk Unit Controller,21020003,1
PCI 10/100/1000Mbps Ethernet UTP 2-port,21030003,2
PCI 10/100/1000Mbps Ethernet UTP 2-port,21040003,none

The first example is self-explanatory. The second and third may look less useful at first, but you will need this information when you plan to assign a hardware resource to a partition. You'll see.

And last but not least one of my favourite commands.

hscroot@hmc-op:~> lpar_netboot -M -n -t ent "linux_test" "normal" "op710-2-SN1008B2A"
# Connecting to linux_test
# Connected
# Checking for power off.
# Power off complete.
# Power on linux_test to Open Firmware.
# Power on complete.
# Getting adapter location codes.
# Type  Location Code          MAC Address
ent U9123.710.1008B2A-V7-C4-T1 0a67e0007004

Now you can use the MAC address in your dhcpd.conf. By the way, -n means "do not actually boot the LPAR".
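Such a dhcpd.conf host entry might look like the following sketch - the host name matches the example LPAR, but the IP addresses and boot file name are made up for illustration:

```
host linux_test {
    hardware ethernet 0a:67:e0:00:70:04;  # MAC reported by lpar_netboot
    fixed-address 192.168.1.50;           # made-up client address
    next-server 192.168.1.1;              # made-up install/TFTP server
    filename "install/netboot.img";       # made-up boot image path
}
```

Note that dhcpd.conf wants the MAC address in colon-separated form, so 0a67e0007004 becomes 0a:67:e0:00:70:04.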

CLI at work

OK, now we have found a lot of useful information about our systems and LPARs, but what do we do with it? By the way, the examples above show only a few queries - there are plenty of other possibilities and choices.

First let's go back to our LED example and see how to query and change the state of the different LEDs.

$ chled -r sa -t virtuallpar -o on --id 2
$ lsled -r sa -t virtuallpar
lpar_id=1,lpar_name=IBM VIO 1.3-FP8.0,state=off
$ chled -r sa -t virtuallpar -o off --id 2
$ lsled -r sa -t virtuallpar
lpar_id=1,lpar_name=IBM VIO 1.3-FP8.0,state=off

Nice. But let's do something really important - let's add the DVD-ROM drive to a partition, move it to another partition, and then remove it from there.

hscroot@hmc-op:~> lshwres -m op720-1-SN100486A -r io --rsubtype slot -F description,drc_index,lpar_id
Other Mass Storage Controller,21020002,none

OK, nobody owns the DVD-ROM drive. Let's give it to our VIOS.

hscroot@hmc-op:~> chhwres -r io -m op710-2-SN1008B2A -o a --id 1 -l 21030002
hscroot@hmc-op:~> lshwres -m op710-2-SN1008B2A -r io --rsubtype slot -F description,drc_index,lpar_id --filter "lpar_ids=1"
Other Mass Storage Controller,21030002,1

The option -l takes the drc_index, which is a convenient identifier for a device.
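When scripting such moves, you typically look up the drc_index by description first. A sketch using a made-up sample in the style of the lshwres listings above:

```shell
#!/bin/sh
# Hypothetical sample of 'lshwres -r io --rsubtype slot -F description,drc_index,lpar_id'
# output (modeled on the listings above, not live data).
slots='Universal Serial Bus UHC Spec,21010002,none
Other Mass Storage Controller,21020002,none
PCI Fibre Channel Disk Controller,21030002,1'

# Find the drc_index of the unassigned mass storage controller (the slot
# driving the DVD-ROM), ready to be fed to 'chhwres ... -l'.
echo "$slots" | awk -F, '$1 ~ /Mass Storage/ && $3 == "none" {print $2}'
```

This prints 21020002, the value you would then pass to chhwres with -l.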

Now the VIOS doesn't need the DVD-ROM anymore, but our client "op710-2-Client1-SLES9SP3" needs it - so let's move it.

hscroot@hmc-op:~> chhwres -r io -m op710-2-SN1008B2A -o m --id 1 -l 21030002 -t op710-2-Client1-SLES9SP3
hscroot@hmc-op:~> lshwres -m op710-2-SN1008B2A -r io --rsubtype slot -F description,drc_index,lpar_id --filter "lpar_ids=2"
Other Mass Storage Controller,21030002,2

The job is finished - now remove it from the partition.

hscroot@hmc-op:~> chhwres -r io -m op710-2-SN1008B2A -o r --id 2 -l 21030002    
hscroot@hmc-op:~> lshwres -m op710-2-SN1008B2A -r io --rsubtype slot -F description,drc_index,lpar_id --filter "lpar_ids=2"
No results were found.

Not bad. Please note that the partition must be up and running to assign or reassign hardware resources. And also please note that after the move you must tell the operating system in each LPAR that it has a new device - or that a device is gone.

Now let's initiate a network boot.

hscroot@hmc-op:~> lpar_netboot -t ent -m 0a67e0007004 -s auto -d auto "linux_test" "normal" "op710-2-SN1008B2A"
# Connecting to linux_test
# Connected
# Checking for power off.
# Power off complete.
# Power on linux_test to Open Firmware.
# Power on complete.
# Network booting install adapter.
# bootp sent over network.


And now let's run some commands on a VIOS using viosvrcmd.

hscroot@hmc-570:~> viosvrcmd -m Server-9110-510-SN100129A -p VIOS1.3-FP8.0 -c "mkvg -f -vg datavg hdisk2 hdisk3"
hscroot@hmc-570:~> viosvrcmd -m Server-9110-510-SN100129A -p VIOS1.3-FP8.0 -c "mklv -lv testlv datavg 10G"
hscroot@hmc-570:~> viosvrcmd -m Server-9110-510-SN100129A -p VIOS1.3-FP8.0 -c "lsvg -lv datavg"
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
testlv              jfs        160   160   1    closed/syncd  N/A


Please note...

Please note that viosvrcmd only works with non-interactive commands.

Creating, activating, deactivating and deleting partitions

The above examples are nice for gathering information about attached systems, defined LPARs, the profiles of each LPAR, working with LEDs, moving hardware resources, etc.

But sometimes I face the situation where I must create one or more LPARs for testing purposes and delete them afterwards. This can be painful when working at a remote site with the graphical WebSM tool. SSH is much faster.

There are several possibilities to create a partition using the CLI. But first let's think - just for one minute...

  • ...if you want to create a partition, it must have a name...
  • ...besides its name, the partition must have at least one profile where you assign the resources - but you can define more than one profile per partition, each with a different resource allocation...
  • ...each partition has a specified role - either aixlinux or vioserver - and be aware that this role cannot be changed afterwards...
  • ...finally, each partition needs memory, CPU and I/O...

OK, still with me? Good. You can use the mksyscfg command to create an LPAR, and you have three choices:

  • Typing the whole string at the HMC cli.
    • This is not very comfortable because this string could be very long.
  • Create one or more template file(s) and copy this/them to the HMC and use mksyscfg with the -f option.
    • This is more comfortable but the config file must be available on the HMC.
  • Use a local script and SSH.
    • Also very comfortable but a little bit tricky concerning the syntax.

The first two choices are explained in Virtualization:Creating LPAR using SSH. So let's start.

Regardless of which way you choose, the mksyscfg command requires the same information from you.

  • First of all you must specify the system where you want to create the partition.
  • Then you must specify the name, the role and the profile-name.
  • Next, specify how much memory this partition should get - desired, minimum and maximum.
  • Tell whether the partition should run with dedicated CPU resources or in a shared pool, and if it runs in a shared pool, how many capacity entitlements (CE) and virtual CPUs it should get - desired, minimum and maximum.
  • Decide how many adapters (virtual, physical) the partition should use.
  • And finally think about stuff like bootmode etc.

Create LPAR using the CLI

Let's make an example - and please note that the whole command should be written on one line! I've wrapped it the way my terminal emulation would.

hscroot@hmc-570:~> lssyscfg -r sys -F name
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name
hscroot@hmc-570:~> mksyscfg -m Server-9110-510-SN100129A -r lpar -i "name=linux_
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name        

I think you can figure out the meaning of each variable yourself, but I will go into detail on virtual_eth_adapters and virtual_scsi_adapters a little later.
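Since the full attribute string is hard to reproduce in this listing, here is a sketch of how a complete -i string might be assembled. All values are examples of my own, and the attribute names are the standard mksyscfg profile attributes - check the mksyscfg man page on your HMC before relying on them:

```shell
#!/bin/sh
# Assemble a sample -i attribute string for mksyscfg (all values are
# illustrative). On the HMC you would then run something like:
#   mksyscfg -m Server-9110-510-SN100129A -r lpar -i "$attrs"
attrs="name=linux_test,profile_name=normal,lpar_env=aixlinux,\
min_mem=512,desired_mem=512,max_mem=1024,\
proc_mode=shared,sharing_mode=uncap,uncap_weight=128,\
min_proc_units=0.1,desired_proc_units=0.4,max_proc_units=2.0,\
min_procs=1,desired_procs=2,max_procs=4,\
max_virtual_slots=10,boot_mode=norm,auto_start=0,conn_monitoring=0"

echo "$attrs"
```

Remember that on the HMC itself this whole string goes on one line (the backslashes above are only shell line continuations).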


Please note...

Please note that the maximum values for CPU (CE, virtual CPUs) and memory are only relevant for DLPAR operations.

The important values are min and desired, because the system will try to assign the desired value when a partition starts. If it can't get this value, it will try anything between desired and min. If it can't assign a value from this range, the partition will not start!
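The activation rule can be sketched as a simple comparison - the numbers here are made up for illustration:

```shell
#!/bin/sh
# Illustration of the activation rule described above: the system tries the
# desired amount first, then anything down to min; below min the partition
# will not start. Values are made up.
min=512; desired=1024; free=768

if [ "$free" -ge "$desired" ]; then
    echo "start with ${desired} MB"
elif [ "$free" -ge "$min" ]; then
    echo "start with ${free} MB"
else
    echo "partition will not start"
fi
```

With 768 MB free, the partition starts with 768 MB - less than desired, but still within the allowed range.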

Create LPAR using a configuration file

OK, as you've seen in the above example, this is not really comfortable - or at least not for me.

The second choice is to write all that stuff into a file and use scp to transfer it to your HMC.

Please note that you'll have to remove the double quotes in that file!

[pjuerss@ankh-morpork tmp]$ cat createLPAR_norm

Once again - in reality this is one large line! The line breaks in the example above come from an 80x24 terminal setting.

So transfer the file to your HMC using scp and create the LPAR using the -f option of mksyscfg.

hscroot@hmc-570:~> ll createLPAR_norm
-rw-r--r--    1 hscroot  hmc           417 Sep 19 10:39 createLPAR_norm
hscroot@hmc-570:~> mksyscfg -m Server-9110-510-SN100129A -r lpar -f ./createLPAR_norm
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name,lpar_id

Much easier I think.

Create LPAR using a SSH script

Now the last example is based on a little script. In fact I am using two files - one for the configuration values and one for the remote SSH command execution.

Here's my config file:

# ----- Config File for mksyscfg -----

# ----- HMC connection values (used by the script below) -----
USERID="hscroot"                # HMC user
HMC="hmc-570"                   # HMC hostname
SYSTEM="Server-9110-510-SN100129A" # Managed system to create the LPAR on

# ----- Client LPAR default values -----
CLIENT_NAME="linux_test"        # Name of the partition
CLIENT_PROFIL="client_default"  # Name of the profile
CLIENT_ENV="aixlinux"           # Operating environment
CLIENT_MINMEM="512"             # Minimum memory in megabyte
CLIENT_DESMEM="512"             # Desired memory in megabyte
CLIENT_MAXMEM="512"             # Maximum memory in megabyte
CLIENT_PMODE="shared"           # shared or ded
CLIENT_SMODE="uncap"            # cap or uncap
CLIENT_SWEIGHT="128"            # Value between 0 and 255
CLIENT_MINPU="0.1"              # Min processing units
CLIENT_DESPU="0.4"              # Des processing units
CLIENT_MAXPU="2.0"              # Max processing units
CLIENT_MINVP="1"                # Min virtual CPU
CLIENT_DESVP="2"                # Des virtual CPU
CLIENT_MAXVP="4"                # Max virtual CPU
CLIENT_VSLOT="10"               # Number of virtual slots
CLIENT_VETH="2/1/1//0/1"        # Virtual Ethernet adapter
CLIENT_VSCSI="3/client/1//4/1"  # Virtual SCSI client adapter
CLIENT_START="0"                # Start with managed system or not
CLIENT_BOOT="norm"              # Boot mode = normal
CLIENT_PWR="none"               # Power controlling partition
CLIENT_CON="0"                  # Connection monitoring
CLIENT_IOPOOL="none"            # IOPOOL

And here's the little script:

#! /bin/bash

. /home/pjuerss/files/scripts/hmc/lpar.conf

echo -n "Creating LPAR..."
# Attribute string rebuilt here from the variables in lpar.conf
ssh $USERID@$HMC mksyscfg -m $SYSTEM -r lpar -i \"name=$CLIENT_NAME,\
profile_name=$CLIENT_PROFIL,lpar_env=$CLIENT_ENV,min_mem=$CLIENT_MINMEM,\
desired_mem=$CLIENT_DESMEM,max_mem=$CLIENT_MAXMEM,proc_mode=$CLIENT_PMODE,\
sharing_mode=$CLIENT_SMODE,uncap_weight=$CLIENT_SWEIGHT,min_proc_units=$CLIENT_MINPU,\
desired_proc_units=$CLIENT_DESPU,max_proc_units=$CLIENT_MAXPU,min_procs=$CLIENT_MINVP,\
desired_procs=$CLIENT_DESVP,max_procs=$CLIENT_MAXVP,max_virtual_slots=$CLIENT_VSLOT,\
virtual_eth_adapters=$CLIENT_VETH,virtual_scsi_adapters=$CLIENT_VSCSI,\
auto_start=$CLIENT_START,boot_mode=$CLIENT_BOOT,power_ctrl_lpar_ids=$CLIENT_PWR,\
conn_monitoring=$CLIENT_CON\"
echo "done"

And here are both at work - by the way, it takes less than 12 seconds to finish:

[pjuerss@ankh-morpork hmc]$ ./
Creating LPAR...done

And here's the output on the HMC:

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r prof --filter "lpar_ids=2"


Please note...

To add physical devices to an LPAR you can use the io_slots attribute.

Virtual Ethernet and SCSI adapter settings explained

Now that you know how to create an LPAR, you might have asked yourself what is behind the options for virtual_scsi_adapters and virtual_eth_adapters. OK, let's have a look.

Virtual Ethernet Adapters

The syntax of the virtual Ethernet adapters is:

slot_num/is_ieee/port_vlan_id/"additional_vlan_ids"/trunk_priority/is_required

So an adapter with the setting 2/1/1//0/1 says: it is in slot_num 2, it is IEEE 802.1Q compatible, the port_vlan_id is 1, it has no additional VLAN IDs assigned, it is not a trunk adapter (priority 0) and it is required.

To create a trunk adapter with a priority of 2 which is required, has the additional VLANs 2 and 20 with a default port VLAN ID of 1, is IEEE compatible and resides in slot 90, the syntax would be: 90/1/1/"2,20"/2/1.
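If you script against these definitions, splitting the string on / gives you the individual fields. A small sketch following the field interpretation above:

```shell
#!/bin/sh
# Decode a virtual Ethernet adapter definition string into its six fields.
# The field names follow the interpretation given in the text above.
spec='2/1/1//0/1'

echo "$spec" | awk -F/ '{
    printf "slot_num=%s is_ieee=%s port_vlan_id=%s addl_vlan_ids=%s trunk_priority=%s is_required=%s\n",
           $1, $2, $3, ($4 == "" ? "none" : $4), $5, $6
}'
```

For the example adapter this prints slot_num=2 is_ieee=1 port_vlan_id=1 addl_vlan_ids=none trunk_priority=0 is_required=1.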

Virtual SCSI Adapters

Similar to the virtual Ethernet adapters, the syntax for the virtual SCSI adapters is:

slot_num/adapter_type/remote_lpar_id/remote_lpar_name/remote_slot_num/is_required

So the adapter 3/client/1/VIOS1.3FP8.0/4/1 is a client adapter in slot 3; the remote LPAR has the ID 1 and the name VIOS1.3FP8.0; that remote LPAR has a virtual SCSI server adapter in slot 4 for my client partition; and this virtual SCSI client adapter is required.

For a virtual SCSI server adapter, the definition could read like virtual_scsi_adapters=5/server/any//any/1 - which you should be able to decode yourself by now.


The virtual serial adapters are created automatically, so there is no need to specify them at creation time. But to be complete, here is the syntax for them:

slot_num/adapter_type/supports_hmc/remote_lpar_id/remote_lpar_name/remote_slot_num/is_required
Changing the configuration

Now it is quite possible that you must change the configuration of the partition profile for some reason. This can be done using the chsyscfg command. So let's assume we want to add one additional virtual SCSI adapter in slot 7.

hscroot@hmc-570:~> chsyscfg -m Server-9110-510-SN100129A -r prof -i 'name=defaul
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r prof --filter "lpar_


Please note...

Please note that in the above example it is mandatory to specify not only the new additional virtual SCSI adapter but also all previously configured adapters! Otherwise you will overwrite the existing adapter configuration in the profile - which doesn't hurt immediately, but as soon as you stop the partition and start it again, it rereads the profile information. So be careful!
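A safe pattern is therefore to fetch the current adapter list first and append the new adapter to it before calling chsyscfg. Here is a sketch of the string handling only - the adapter values are made up, and on the HMC the current value would come from something like lssyscfg -r prof --filter "lpar_ids=2" -F virtual_scsi_adapters:

```shell
#!/bin/sh
# Hypothetical current adapter list (in real life: queried with lssyscfg)
# and a hypothetical new adapter in slot 7.
current='3/client/1/VIOS1.3FP8.0/4/1'
new='7/client/1/VIOS1.3FP8.0/8/1'

# Build the complete attribute value for chsyscfg -i, keeping the old
# adapter so it does not get overwritten.
echo "\"virtual_scsi_adapters=$current,$new\""
```

The printed string (old adapter plus new one, quoted because of the comma) is what you would embed in the chsyscfg -i argument.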

Activating and deactivating a partition

Finally, after you've created the partition, you'll activate it so it can do the job it was meant for. To do so, use the chsysstate command.

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name,lpar_id,state,default_profile
linux_test,2,Not Activated,client_default
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100129A -r lpar -o on -b norm --id 2 -f client_default

The above example boots the partition in normal mode. To boot it into the SMS menu use -b sms, and to boot it to the Open Firmware prompt use -b of.

To restart a partition the chsysstate command would look like this:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100129A -r lpar --id 2 -o shutdown --immed --restart

And to turn it off - if anything else fails - use this:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100129A -r lpar --id 2 -o shutdown --immed
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name,lpar_id,state
linux_test,2,Shutting Down

Deleting a partition

Finally to delete a partition use the rmsyscfg command.

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name,lpar_id
hscroot@hmc-570:~> rmsyscfg -m Server-9110-510-SN100129A -r lpar --id 2
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name,lpar_id

Other usefull commands

Now that we've seen how to gather information about systems and LPARs, and how to create, activate, deactivate and delete LPARs, there are still some other useful commands available. I want to show you four of them that I use frequently.

Accessing the Advanced System Management Interface

In a normal setup the Service Processor of a system is only connected to one (or two) HMC(s). You can of course use the WebSM GUI to access the Web interface of the Service Processor. You can use the CLI, too!

First we must find out which IP address the system's Service Processor is using; then we can use the asmmenu command, which opens a web browser (Opera) on the HMC. If you're connected to the HMC with X11 forwarding you will get the browser window on your desktop - but be aware that this can be slow and bothersome if your connection to the HMC is slow!


Please note...

The asmmenu command is not available on IVM systems!

hscroot@hmc-570:~> lssyscfg -r sys -F name,ipaddr
hscroot@hmc-570:~> asmmenu
hscroot@hmc-570:~> asmmenu --ip

Using the Virtual Terminal

Virtual terminals are quite useful for accessing a partition, because you normally don't have a serial adapter in each partition for the initial terminal - or you may want to access the operating system shell while the network is unavailable or misconfigured.

In the WebSM GUI you can always open or close a terminal emulation, but to be honest, the Java-based terminal is not comfortable at all.

Assuming you are connected to the HMC with an SSH client of your choice, you can open a virtual terminal connection to a partition using the mkvterm command.

hscroot@hmc-570:~> mkvterm -m p5-570_Technical_Center_Stgt. --id 14
Open in progress..

 Open Completed.

Welcome to SUSE LINUX Enterprise Server 9 (ppc) - Kernel 2.6.5-7.244-pseries64 (hvc0).

570-lpar2 login:



To finish a vterm session, simply press ~ followed by a dot (~.)!

To remove a virtual terminal connection use rmvterm.

hscroot@hmc-570:~> rmvterm -m p5-570_Technical_Center_Stgt. --id 14
Sending Force close..


Please note...

On an IVM system the commands for virtual terminals are called mkvt -id (id) and rmvt -id (id).

A more comfortable way to get a virtual terminal session is the vtmenu command, which is not available on IVM-based systems. Try it.