How To
Summary
Strictly at your own risk, but if you are a UNIX guru, here are some useful techniques.
Objective
Steps
Warning: If you get the nslim or dd commands wrong then you will destroy your LU virtual disks and virtual machine content. Please tell me if you do - I enjoy a good laugh :-)
Please, test the commands & your understanding on a test SSP!
Shared Storage Pool: four ksh script commands and a program that I hope you will find useful:
- ncluster - status of all VIOS
- nlu - improved lu replacement
- npool - storage pool use
- nmap - finds if an LU is online (mapped) on any VIOS
- nslim - copy a fat LU backup to a now THIN LU - see below for details and the C source code.
- Download these commands, including a README and hints, from GitHub:
- Look for Shared-Storage-Pool-Tools
- GitHub https://github.com/nigelargriffiths or directly
- GitHub https://github.com/nigelargriffiths/Shared-Storage-Pool-Tools
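From any machine with the git command installed, one way to fetch everything in one go and then copy the files to your VIOS ("yourvios" below is just a placeholder host name):
git clone https://github.com/nigelargriffiths/Shared-Storage-Pool-Tools.git
scp Shared-Storage-Pool-Tools/* padmin@yourvios:/home/padmin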
Contents
- What are sparse files?
- Setting some useful Shell variables
- The new nslim command
- LU I/O Performance note
The ten hands-on-by-example tasks you might find useful
- 1) Rename a Thick Provisioned LU
- 2) Rename a Thin Provisioned LU
- 3) Extract a Shared Storage Pool virtual disk LU
- 4) Recover a previous Thick Provisioned Backup
- 5) Recover a previous Thin Provisioned Backup
- 6) Extract a point in time LU Snapshot
- 7) Have a look at the 1 MB blocks in use within a Thin Provisioned LU
- 8) Slimming down a Thin Provisioned LU that got too Thick (AIX only)
- 9) To move a Thin Provisioned LU from a remote SSP LU to the local SSP
- 10) New! Is a particular LU virtual disk mapped to a Virtual Machine across the whole SSP, if so where?
What are sparse files?
- UNIX has for decades supported sparse files.
- Normally in UNIX, AIX or Linux a file is written out as a stream of bytes and all the data ends up in a stream of disk blocks.
- But when a program writes to a file it can also use the lseek() system call to position the file pointer at a specific place in the file and write a block starting at that place.
- Doing your file I/O like this gets the OS to only allocate the disk blocks for the parts of the file you (or your program) have written to.
- If you know about inodes: an inode holds the file attributes and the rest is a list of block numbers recording where the data is held. In that block number list an entry of zero means that no block has yet been allocated for that part of the file. If you write to that part of the file then the OS will allocate a block, add that block number to the block number list at the right place and then do the disk write.
- This means you can have a file that is reported as, for example, 10 GB in size but has only been allocated a handful of blocks.
- If you position the file pointer to a block that has never been written and read data then UNIX knows there is no data for you, so it returns a block of zeros.
- Also, if you read the file contents with say UNIX commands like cat, cp or dd from the start to the end - it will give you zeros for all the disk parts that have never been written.
- For example, take a sparse file of 10 GB that only uses 28 x 4 KB blocks. If you use cp to make a copy of the file then the copy will be 10 GB and full of blocks (2,621,440 x 4 KB) - the copy is NOT a sparse file.
- The SSP Thin Provisioned LUs (without the -thick option) operate using sparse files, but the SSP block size is 1 MB so the 10 GB file only has 10,240 blocks. The blocks are allocated in real time as you (or your program or OS) write to the LU.
- The SSP Thick Provisioned LUs (lu -create -thick) are "full fat" i.e. all disk blocks are allocated at creation time.
- You should not create files in the SSP filesystem - only use the lu command.
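If you want to see a sparse file in action first, here is a harmless sketch on AIX using a hypothetical file /tmp/sparse10g - do NOT try this inside the SSP filesystem (see the note above):
dd if=/dev/zero of=/tmp/sparse10g bs=1m seek=10239 count=1   # write only the last 1 MB of a 10 GB file
ls -l /tmp/sparse10g    # reports the full 10 GB file size
du -sk /tmp/sparse10g   # reports only the roughly 1 MB actually allocated
rm /tmp/sparse10g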
Setting some useful Shell variables
Below we set some shell variables that are used later to save a lot of typing.
A) Save the base directory name of the SSP filesystem:
export SSP=$(df -g | grep SSP | awk '{print $1}' )
Check that it worked and it should look something like this:
$ echo $SSP
/var/vio/SSP/globular/D_E_F_A_U_L_T_061310
Notes:
- In the above, you can see my SSP cluster name is "globular" - of course, yours will be different
- The LUs are in the directory $SSP/VOL1, named with the LU name, then a full stop, followed by the hexadecimal LU UDID
- The PowerVC deployed images are in the directory $SSP/IM
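For example, listing the LU files (the LU names and UDIDs below are purely illustrative):
$ ls $SSP/VOL1
new42.10b21d70aa478960b469471a3b851567  vm61b.c3d2ab030ac0a5e2a763448037173a0c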
B) Save your Cluster name
export CLUSTER=$(cluster -list | grep CLUSTER_NAME | awk '{printf $2 }')
$ echo $CLUSTER
globular
C) Save your Pool name
export POOL=$(lssp -clustername $CLUSTER -field POOL_NAME | awk '{ print $2 }')
$ echo $POOL
Pacific
The new nslim command
To keep your Thin Provisioned Logical Unit (LU) virtual disks thin, you will need to use the nslim program as the root user.
This is a new C program that I have written specifically for this job and it is heavily optimised.
Here is the help information on running it.
# ./nslim -?
Usage: ./nslim (v4) is a filter style program using stdin & stdout
It will thinly write a file (only copy non-zero blocks)
It uses 1MB blocks
If a block is zero-filled then it is skipped using lseek()
If a block has data then it will write() the block unchanged
Example:
./nslim <AIX.lu >SSP-LU-name
Flags:
-v for verbose output for every block you get a W=write or .=lseek on stderr
-V for verbose output on each GB you get count of written or skipped blocks
./nslim -v <AIX.lu >SSP-LU-name
this gives you visual feedback on progress
-t like verbose but does NOT actually write anything to stdout
this lets you passively see the mix of used and unused blocks
./nslim -t <AIX.lu
-h or -? outputs this helpful message!
Warning:
Get the redirection wrong and you will destroy your LU data
nslim is a compiled C program
- Download to your VIOS as root
- Make it executable:
chmod u+rx nslim
- Run it as the root user (oem_setup_env)
- Don't forget that the root user's PATH will not find it in the local directory, so run it as: ./nslim
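So a typical first run looks like this (assuming nslim was downloaded to the current directory):
$ oem_setup_env
# chmod u+rx nslim
# ./nslim -?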
Using the nslim source code
- nslim-v4.c is the actual program source code - don't get excited, it is ~120 lines of very simple but optimised C code
- Compile with:
cc -g -O3 nslim_v4.c -o nslim
- I compiled it on AIX 6 using the IBM XLC C compiler to run on the current VIOS
nslim version 4
- New -V option (capital V) which outputs, once for each GB, two numbers: the number of written blocks and the number of skipped blocks.
- Much less output than thousands of "W" characters or dots.
Performance Note:
When reading and writing Shared Storage Pool LU files, using large block sizes massively increases the performance - like an order of magnitude faster.
Below, for example, we use the dd command with a block size of 64 MB.
A 1 MB block size takes twice the time and anything below that is extremely slow.
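If you want to check the difference on your own system, a harmless read-only comparison looks like this - testlu is a hypothetical LU name:
time dd bs=64m if=$SSP/VOL1/testlu.* of=/dev/null
time dd bs=1m  if=$SSP/VOL1/testlu.* of=/dev/null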
1) Rename a Thick Provisioned 64 GB LU called old99 to new42
Why?
- Perhaps you reused an LPAR and its disks but now the name is misleading
- Perhaps you made a typo in the LU name and need to fix it (been there)
- As the VIOS padmin user:
lsmap -all | more -p '/Backing device        old99'
Note the 8 spaces between "device" and the LU name. Then find the vhost adapter - below we assume it is called vhostXXX.
- If you have a second VIOS then find the vhost adapter name on that VIOS too (below we assume this is called vhostYYY).
- Create a new LU with exactly the same size:
lu -create -lu new42 -size 64GB
- Stop the client VM
- Goto root user: oem_setup_env
- Copy the old LU to the new LU:
Warning: Take care to get the if= and of= the right way around
dd bs=64m if=$SSP/VOL1/old99.* of=$SSP/VOL1/new42.*
- On the first VIOS: lu -unmap -lu old99
- On the first VIOS: lu -map -lu new42 -vadapter vhostXXX
- On the second VIOS: lu -unmap -lu old99
- On the second VIOS: lu -map -lu new42 -vadapter vhostYYY
- Boot the client VM
- Once happy the new LU is working you can remove the old LU, on either VIOS:
lu -remove -lu old99
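Tip: at any point in this procedure you can double-check which of the LUs exist and where they are mapped, for example:
lu -list | grep -e old99 -e new42
lsmap -all | grep -e old99 -e new42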
2) Rename a Thin Provisioned LU called old99 to new42
Why?
- Perhaps you reused an LPAR and its disks but now the name is misleading
- Perhaps you made a typo in the LU name and need to fix it (been there)
Notes on keeping it Thin
- Below we use the same procedure as above for a Thick Provisioned LU but with one change at the copy step
- Use the nslim program instead of dd
- Note nslim is a filter-style program and copies stdin to stdout.
- As the VIOS padmin user:
lsmap -all | more -p '/Backing device        old99'
Note the 8 spaces between "device" and the LU name. Then find the vhost adapter - below we assume it is called vhostXXX.
- If you have a second VIOS then find the vhost adapter name on that VIOS too (below we assume this is called vhostYYY).
- Create a new LU with exactly the same size:
lu -create -lu new42 -size 64GB
- Stop the client VM
- Goto root user: oem_setup_env
- Copy the old LU to the new LU:
Warning: Take care to get the < and > the right way around
nslim -v <$SSP/VOL1/old99.* >$SSP/VOL1/new42.*
- On the first VIOS: lu -unmap -lu old99
- On the first VIOS: lu -map -lu new42 -vadapter vhostXXX
- On the second VIOS: lu -unmap -lu old99
- On the second VIOS: lu -map -lu new42 -vadapter vhostYYY
- Boot the client VM
- Once happy the new LU is working you can remove the old LU, on either VIOS:
lu -remove -lu old99
3) Extract a Shared Storage Pool virtual disk LU called new99
Why?
- Perhaps you need to move an OS image disk LU or data LU to a different Shared Storage pool - useful for PowerVC images
- Perhaps you need to back up the virtual machine(s) and want to do that on the VIOS
You should prepare a fast file system - not a single internal brown spinning disk. It would work but it would be slow compared to the SSP.
You could use the default Virtual Optical Media Library space /var/vio/VMLibrary = not recommended but it is normally large - we assume this below
- Check that /var/vio/VMLibrary is a mounted filesystem or you will fill up /var and that can cause major problems, use:
df /var/vio/VMLibrary
Stop the client VM - so you get a consistent set of disk blocks with no filesystem issues.
- Copy the old LU to a file:
Warning: Take care to get the if= and of= the right way around
dd bs=64m if=$SSP/VOL1/new99.* of=/var/vio/VMLibrary/new99.lu
- Start the client VM
- This will create a file which is the whole LU size (Thick Provisioned) regardless of whether it was originally SSP Thin or Thick Provisioned.
- You could compress the file "on the fly" but this can take about 1 minute per GB with gzip, i.e. it is pretty slow:
dd bs=64m if=$SSP/VOL1/new99.* | gzip --fast --force --stdout >/var/vio/VMLibrary/new99.lu
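If you later need to restore from that compressed copy, it has to go back through gunzip - a sketch, assuming the compressed backup was written to /var/vio/VMLibrary/new99.lu as above and the client VM is stopped (pipe into nslim instead of dd if you want the LU Thin again, as in 5 below):
gunzip --force --stdout /var/vio/VMLibrary/new99.lu | dd bs=64m of=$SSP/VOL1/new99.*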
4) Recover a previous Thick Provisioned Backup
Why? Something ghastly happened and you want that VM back ASAP
- Stop the client VM
- Copy the backup file to the old LU:
Warning: Take care to get the if= and of= the right way around
dd bs=64m if=/var/vio/VMLibrary/new99.lu of=$SSP/VOL1/new99.*
- Start the client VM
5) Recover a previous Thin Provisioned Backup
Why? Something ghastly happened and you want that VM back ASAP
- Stop the client VM
- Next we have to remove, create and reattach the LU - this maximises free space and makes sure you have no disk blocks full of old leftover data.
- Unmap the original LU - if Dual VIOS, from both of them:
lu -unmap -lu NAME
- Remove the original LU - on one VIOS:
lu -remove -lu NAME
- Create a new Thinly Provisioned LU with the same name:
lu -create -size XXXG -lu NAME
- Reconnect the new LU to the client VM vSCSI virtual adapters - if Dual VIOS, on both of them:
lu -map -lu NAME -vadapter vhostXXX
- Copy the backup file to the new LU:
Warning: Take care to get the < and > the right way around
nslim -v </var/vio/VMLibrary/new99.lu >$SSP/VOL1/new99.*
- Start the client VM
6) Extract a point in time LU Snapshot called SNAP101 of a LU called new88
Why?
- This allows a live capture of a running OS - hopefully in a quieter period of the day
- Note: on recovery it will have to do a filesystem clean up (fsck and AIX JFS2 log replay).
- No downtime but a longer recovery (which you hope you never have to do, so that might be acceptable). Old guys will "sync" the disks before the snapshot command.
You should prepare a fast file system - not a single internal brown spinning disk. It would work but it would be slow compared to the SSP.
You could use the default Virtual Optical Media Library space /var/vio/VMLibrary = not recommended but it is normally large - we assume this below
Check that /var/vio/VMLibrary is a mounted file system or you will fill up /var and that can cause major problems, use:
df /var/vio/VMLibrary
Create the snapshot as the padmin user:
snapshot -create SNAP101 -lu new88 -clustername $CLUSTER -spname $POOL
- Copy the Snapshot to a file:
Warning: Take care to get the if= and of= the right way around
dd bs=64m if=$SSP/VOL1/new88.*@SNAP101 of=/var/vio/VMLibrary/new88.snapshot
- Then remove the snapshot as that releases SSP disk space back to the pool:
Warning: this will also remove any snapshots for this LU taken after SNAP101 was created.
snapshot -remove SNAP101 -lu new88 -clustername $CLUSTER -spname $POOL
Notes:
- The file $SSP/VOL1/new88.*@SNAP101 will NOT be found if you use the "ls" command - this is normal.
- This snapshot backup file can be used to recover your VM LU in exactly the same way as the two previous Recover procedures, including Thin or Thick provisioning.
- I would first remove any other Snapshots for that LU, for your sanity.
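For example, a Thick style recovery from that snapshot backup file - a sketch assuming the LU new88 still exists and the client VM is stopped (use the nslim method from 5 above if you want it Thin):
dd bs=64m if=/var/vio/VMLibrary/new88.snapshot of=$SSP/VOL1/new88.*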
7) Have a look at the 1MB blocks in use within a Thin Provisioned LU called new77
Why?
- This will output for each 1MB block a W if it has non-zero content that would need to be written and "." (dot) for each zero-filled block
- -t will not actually write anything
- -v for verbose does the normal writing to standard out AND shows you on your terminal the W and "." as it writes or skips blocks.
# ./nslim -t <$SSP/VOL1/new77.*
Processing
W..........................................................................................................
.................................................................................................W...............W...WWWWWWWWW..
...........................................................................................................
..................................................................W...WWWWWWWWWWWWWWWWWWWWWWWWW
WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW.WWWWWWWWWWW.....WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW
WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW...................................................
............................................................................................................
.............................................................................................................
.....................................................................
...............................................................................................................
Done
- It seems AIX tends to use the blocks in the middle of the virtual disk first - this was best practice years ago on physical disks.
8) Slimming down a Thin Provisioned virtual disk LU that got too Thick (AIX only)
Why?
- Perhaps you have a large Thin Provisioned LU that you filled up by accident and would like to make it Thin again to free up SSP disk space.
8.1) Free up the no longer required disk space
- Assuming your LPAR/virtual machine is running AIX, you need to release the now unused disk blocks:
- Shrink your filesystems (like: chfs -a size=1G /home) so AIX no longer uses the disk space - this shrinks the LV as well
- Remove any filesystems no longer needed
- Shrink or remove any raw Logical Volumes that are not needed
- Try to make the lsvg rootvg output FREE PPs number as large as possible
- Assuming the SSP is RAIDed (it should be), you could even remove your AIX-level Logical Volume mirrors
8.2) Zero fill that new space
- Next you need to zero-fill that newly freed up disk space - we use a crude trick (see the sketch after this list)
- Create a new Logical Volume called JUNK_lv that is as large as possible = all of the Volume Group's free PPs
- Then we use the dd command to fill the JUNK_lv Logical Volume with zeros
- Use the command:
dd bs=1m if=/dev/zero of=/dev/rJUNK_lv
- Let that run until it fails as it fills up the JUNK_lv Logical Volume - this is normal
- Next delete the JUNK_lv Logical Volume
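Putting 8.2 together, the whole zero-fill sequence on the AIX client looks something like this - the 100 free PPs is just an illustrative number, use the FREE PPs value that lsvg reports on your system:
lsvg rootvg                               # note the FREE PPs value, here we assume 100
mklv -y JUNK_lv rootvg 100                # create a Logical Volume using all the free PPs
dd bs=1m if=/dev/zero of=/dev/rJUNK_lv    # runs until the Logical Volume is full, then fails - this is normal
rmlv -f JUNK_lv                           # remove the Logical Volume afterwards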
8.3) Recreate the Thin LU
- If you don't mind the LU having a different name to the original
- Follow the Rename a Thin Provisioned LU section above.
- Making a copy of the LU to a new one using nslim to maximise the unallocated disk space
- If you demand the same LU name
- Follow the Rename a Thin Provisioned LU section above Twice
- That is:
- Copy the LU to a temporary LU name and remove the original LU
- Copy the temporary LU back to a newly created Thin LU with the original name
9) Move a Thin Provisioned LU called vm61 on a remote SSP to an LU called xx61 in the local SSP
Why?
- You need to move a virtual machine between two Shared Storage Pools
- If you are using PowerVC perhaps you want every SSP to have the same OS deployable images
- You could use 3) Extract a LU on the source SSP, then SFTP to the target VIOS and follow that using 4) or 5) to add it to the target SSP.
- The trick here is NOT to write the LU to a file (because that takes disk space and I/O time) but straight between the two SSP LU files
- We are going to use the secure copy command scp - as using ftp is insane in the 21st century!
- We have to get the remote VIO Server's SSP directory name, which includes the remote SSP name
- I set this to the shell variable SSPremote - see above A)
- Next I have to get to the remote VIOS as root, as padmin can't read the SSP files
- If you have security certificates set up between the two VIOS on the different SSPs then you are "good to go"
- If you don't then it gets tricky. I used a ghastly temporary work around that is rather embarrassing:
- I recorded the old root password entry (from /etc/security/passwd)
- Temporarily set the root password to allow scp access
- Later I put the old root password back
- Use scp to copy the file from the remote machine (redvios1) to the local one as the local root user:
scp root@redvios1:$SSPremote/VOL1/vm61.c3d2ab030ac0a5e2a763448037173a0c \
  $SSP/VOL1/xx61.10b21d70aa478960b469471a3b851567
The authenticity of host 'redvios1 (9.137.62.37)' can't be established.
RSA key fingerprint is f8:28:bb:65:dd:f6:2b:13:a8:67:69:ee:34:8f:60:44.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'redvios1,9.137.62.37' (RSA) to the list of known hosts.
root@redvios1's password:
vm61.c3d2ab030ac0a5e2a763448037173a0c    7%  2422MB  40.9MB/s   12:21 ETA
...
- That took about 14 minutes for my 32 GB LU on my 1 Gb/s network = roughly 40 MB/s - the security of scp does hammer a whole CPU.
- This gets the job done for a Thick Provisioned LU
- If the original was Thin Provisioned you need to make it Thin Provisioned again so there is a further step ...
- Use nslim to then move the data from this local LU to a different Thin Provisioned LU - that took 80 seconds at 400 MB/s
One-stop scp straight to a Thin Provisioned LU
- An alternative is using scp and nslim in one go BUT scp insists on writing to a file, so we can't pipe the output of scp into nslim
- Or can we??? - I had a problem with the below so it is not recommended until I can investigate it.
- We could use an old UNIX guru trick of using a FIFO file to do it in one go, but it is largely pointless if you have a spare 32 GB in the pool.
- Chain up the commands and pipes like this scp --> FIFO file --> nslim --> SSP LU file
- It would involve
mkfifo FIFO
nslim <FIFO >$SSP/VOL1/xx61.* &
scp root@redvios1:$SSPremote/VOL1/vm61.c3d2ab030ac0a5e2a763448037173a0c FIFO
- If you don't understand the above - don't do it but get some UNIX education :-)
- What I recommend is to scp into a temporary LU of the same size and then nslim it into the real target LU - see the sketch below.
- With the FIFO approach the scp speed is still the limiting factor, but we don't need that temporary disk or SSP space.
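A sketch of that recommended two-step approach - tmp61 is a hypothetical temporary LU, the target LU xx61 has already been created with lu -create as in the earlier examples, and SSPremote is already set to the remote SSP directory (for example by running step A) above on the remote VIOS and copying the value over):
# as padmin: create a temporary LU the same size as the original
lu -create -lu tmp61 -size 32GB
# as root (oem_setup_env), with SSP and SSPremote set as above:
scp root@redvios1:$SSPremote/VOL1/vm61.c3d2ab030ac0a5e2a763448037173a0c $SSP/VOL1/tmp61.*
./nslim -V <$SSP/VOL1/tmp61.* >$SSP/VOL1/xx61.*
# back as padmin: remove the temporary LU
lu -remove -lu tmp61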
10) New! Is this LU virtual disk (called vm61) mapped to a Virtual Machine, and if so where?
Why? You are performing maintenance on your Shared Storage Pool and suspect some LUs are no longer in use.
- First check that the SSP is in good health using the padmin command and my SSP cluster called globular:
$ cluster -status -clustername globular
Cluster Name         State
globular             DEGRADED

    Node Name        MTM                 Partition Num  State  Pool State
    indigovios1      8231-E1C020659FDR   1              OK     OK
    rubyvios1        8408-E8E0221D494V   1              DOWN
    emeraldvios1     8286-42A02100EC7V   2              OK     OK
    emeraldvios2     8286-42A02100EC7V   3              OK     OK
    rubyvios2        8408-E8E0221D494V   2              DOWN
    purplevio1       9117-MMB02100525P   3              OK     OK
    purplevio2       9117-MMB02100525P   4              OK     OK
    limevios1        8284-22A02215296V   1              OK     OK
    limevios2        8284-22A02215296V   4              OK     OK
    greenvios1       8231-E2B0206FC44P   10             OK     OK
    greenvios2       8231-E2B0206FC44P   1              OK     OK
$
- Note I have a machine offline so two VIO Servers are DOWN
- Get to the root user: oem_setup_env
- And use the clcmd command to run a sub-command on all the VIOS in the SSP cluster. Here we use the lsmap command with some formatting options and a grep to pick out the results we need:
# clcmd /usr/ios/cli/ioscli lsmap -all -field SVSA backing Physloc -fmt : 2>/dev/null | grep -e NODE -e vm61
NODE greenvios2.aixncc.uk.ibm.com
NODE greenvios1.aixncc.uk.ibm.com
NODE limevios2.aixncc.uk.ibm.com
vhost1:U8284.22A.215296V-V4-C10:vm61b.10b21d70aa478960b469471a3b851567
NODE limevios1.aixncc.uk.ibm.com
vhost1:U8284.22A.215296V-V1-C14:vm61b.10b21d70aa478960b469471a3b851567
NODE purplevio2.aixncc.uk.ibm.com
NODE purplevio1.aixncc.uk.ibm.com
NODE rubyvios2.aixncc.uk.ibm.com
NODE emeraldvios2.aixncc.uk.ibm.com
NODE emeraldvios1.aixncc.uk.ibm.com
NODE rubyvios1.aixncc.uk.ibm.com
NODE indigovios1.aixncc.uk.ibm.com
#
- Notes:
- Be careful with false positive results if your LU names are short and not specific. The above would have also listed <anything>vm61<anything>
- The vm61b LU is mapped on limevios2 vhost1 vSCSI slot C10 and on limevios1 vhost1 slot C14
- Also, vm61a and vm61c are not mapped at all, so I can lu -remove them and know I will NOT affect a virtual machine
- The "2>/dev/null" removes remote execution errors from the two VIO Servers that are offline
- This can be made much simpler using the new nmap ksh command
$ nmap blueroot
Seach the SSP for blueroot
NODE limevios2.aixncc.uk.ibm.com
NODE limevios1.aixncc.uk.ibm.com
NODE emeraldvios2.aixncc.uk.ibm.com
0x0000000c:vhost15:U8286.42A.100EC7V-V3-C17:blueroot.ecb6aad83bf1377ba2aaa8b8f6c54537:blueweb.baf929635dafa79359b2279a3d5e28c1:blueback.eebce38c64259a40d48126ecc1f4bfb9:bluescratch.693fdb8af4eba3d54a14337645120f9f
NODE emeraldvios1.aixncc.uk.ibm.com
0x0000000c:vhost19:U8286.42A.100EC7V-V2-C22:blueroot.ecb6aad83bf1377ba2aaa8b8f6c54537:blueweb.baf929635dafa79359b2279a3d5e28c1:blueback.eebce38c64259a40d48126ecc1f4bfb9:bluescratch.693fdb8af4eba3d54a14337645120f9f
NODE rubyvios2.aixncc.uk.ibm.com
NODE rubyvios1.aixncc.uk.ibm.com
NODE indigovios1.aixncc.uk.ibm.com
NODE greenvios2.aixncc.uk.ibm.com
NODE greenvios1.aixncc.uk.ibm.com
- Notes: this nmap v2 also outputs the Client LPAR ID number - unfortunately this is in hexadecimal - 0x0000000c = decimal 12, so on the HMC this is LPAR/Virtual Machine ID = 12.
- Running the same command without the grep at the end will give you a complete mapping of the SSP and it nicely shows the VIOS vSCSI slots with nothing attached = another wasted resource you can clean up.
A subset of the output:
...
-------------------------------
NODE emeraldvios1.aixncc.uk.ibm.com
-------------------------------
vhost0:U8286.42A.100EC7V-V2-C4:vm27a.7f98da6ba062c131a866191e054d6da3:vm27b.be2d179f8c92399243ad44d941f059e7
vhost1:U8286.42A.100EC7V-V2-C7:volume-vm114-45d46a8f-00000023-828642A_100EC7V-boot-0.8c89df91894af8f4b359a86ec95abda2
vhost2:U8286.42A.100EC7V-V2-C15:
vhost3:U8286.42A.100EC7V-V2-C6:vm21a.7aa7b65d9e2333d1efe274ecd9229aeb
vhost4:U8286.42A.100EC7V-V2-C16:
vhost5:U8286.42A.100EC7V-V2-C17:
vhost6:U8286.42A.100EC7V-V2-C23:ruby32boot.fe3bb16ff06efb4c5095d9e767ef7983:ruby32data1.6048acb6185ddc0faa2f83263d72c20b
vhost7:U8286.42A.100EC7V-V2-C10:volume-orange5_data1.15703ed231219103bb4fa4da58b8cfc5:orange5a.09dffc708885652b209076de34212ed2
vhost8:U8286.42A.100EC7V-V2-C11:vm21_SLES113.97b617baae60787c4a6a589e7a092085
vhost10:U8286.42A.100EC7V-V2-C9:/var/vio/VMLibrary/Ubuntu_14_10_LE:volume-vm23boot.cadf3bbdfc678ff2ee6ce2b7594e2ad8
vhost12:U8286.42A.100EC7V-V2-C19:
vhost13:U8286.42A.100EC7V-V2-C20:
vhost14:U8286.42A.100EC7V-V2-C13:volume-boot-828642A_100EC7V-vm181f9e83c1217b43b789320b2ceea74a99.e1af0a6671166c030e84d190d72b8530
vhost15:U8286.42A.100EC7V-V2-C12:volume-boot-828642A_100EC7V-vm1911ece18ee70d40f1af5b379c50557d12.068fcd38497a125e3c7eecba5808d8bd
vhost16:U8286.42A.100EC7V-V2-C5:volume-boot-828642A_100EC7V-vm88fba5f87eef6146e59aa2182bdd969db8.d9b77edcf41160f71a72f599a3fcf43d
vhost17:U8286.42A.100EC7V-V2-C21:
vhost18:U8286.42A.100EC7V-V2-C8:volume-boot-828642A_100EC7V-vm16ae7facc034c3465eaa801b9766e054fd.89223c52b9a0bec0aa9d667459664fb0
...
- So vSCSI slots C15, C16, C17, C19, C20 and C21 are not supporting any resources for the virtual machine and are candidates to be removed.
- This can be made much simpler using the new command: nmap ALL
Note: nmap is a ksh command downloadable from GitHub
Warning: If you get these commands wrong, you will destroy your LU virtual disks and virtual machine content
Additional Information
Other places to find content from Nigel Griffiths IBM (retired)
Document Location
Worldwide
Document Information
Modified date:
20 December 2023
UID
ibm11116285
