I don't often get the chance to update this blog (it has been over 12 months), but I did find something of interest this week while installing a new POWER7+ 750 machine.
One thing I wanted to evaluate was enabling Favour Performance mode with IBM EnergyScale.
Details of what it is and how it works can be found here:
From the HMC, launch the ASM, select System Configuration, Power Management, Power Mode Setup and select “Enable... [More]
If you run into any issues with DLPAR not working correctly, try the steps below to restart the RMC service. We have seen this procedure rectify the problem in the past.
On the HMC command line, first check the DCaps value of the LPAR. If the value is DCaps "<0x0>", there is an issue with the RMC connection on that LPAR.
# lspartition -dlpar
<#1> Partition:<6*9117-MMA*xxxxxx, , 10.x.x.x>
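When an LPAR shows DCaps "<0x0>", the usual recovery is to recycle RMC on the affected LPAR (not on the HMC). A sketch of the procedure, to be verified against your RSCT level:

```shell
# Run these on the LPAR whose RMC connection is broken:
/usr/sbin/rsct/bin/rmcctrl -z   # stop the RMC subsystem and its daemons
/usr/sbin/rsct/bin/rmcctrl -A   # add the RMC entry back to /etc/inittab and start it
/usr/sbin/rsct/bin/rmcctrl -p   # enable remote client connections (the HMC)

# Wait a few minutes, then re-run "lspartition -dlpar" on the HMC
# and confirm the DCaps value is no longer <0x0>.
```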
Recently I was involved in changing the Ethernet bonding configuration for an IBM Storwize V7000 Unified. This particular V7000 Unified is connected to a Cisco network switch (4500 family). I decided to create a blog entry for this topic as it seems the information provided in the IBM information center under “changing a bond network interface” is not correct for a V7000 Unified. See the link below. http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000.unified.doc/mng_t_pub_netw_chg_nwbond.html The link above describes a... [More]
Remote replication using V7000 has been very popular, and there are many clients happily replicating without FC connectivity between the sites. I have written about SVC and V7000 low bandwidth Global Mirror before (link below), but it's worth writing up how to get the FCIP connectivity working. https://www.ibm.com/developerworks/mydeveloperworks/blogs/talor/entry/low_bandwidth_global_mirror_svc_v70009?lang=en In the typical environment you would need: - Two SAN switches at the Production site, making up two redundant fabrics, e.g. IBM SAN80B... [More]
We have a number of clients who have a single AIX server in their environment. There is no NIM server, and no other AIX system to send mksysb backups to. One way of backing up the system is to back up to a local tape drive or a DVD drive. This is fine until you want some backup software to take care of the backups for you. This will do a great job of backing up any applications or files on the AIX system; however, some backup products have the capability to back up the operating system (e.g. Tivoli Storage Manager for System Backup... [More]
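For reference, a bootable local system backup on AIX can be as simple as the following (device names are examples; check what your system presents):

```shell
# Bootable mksysb of rootvg straight to tape; -i regenerates /image.data first
mksysb -i /dev/rmt0

# Bootable system backup written to DVD media instead
mkdvd -d /dev/cd0
```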
One question I have been asked lately is how to replicate an AIX system's rootvg to another AIX system using storage system replication. Can I just boot from the rootvg LUN on another system? Well... unfortunately it's not as simple as remote mirroring the rootvg and booting off it at the DR site. This will work, but it's not a good idea, and I'm not sure it's exactly supported. Non-rootvg disks are simple: you can just replicate them via your storage system, and in a DR scenario map the target volumes (or a snapshot of them) to the... [More]
Tivoli Storage Manager from 6.3 onwards has a feature called node replication. This brings a new command, “replicate node”. The only prerequisites are that you have two TSM servers on version 6.3 or above and that server-to-server communication is configured. The replicate node command can replicate an individual node or a group of nodes. I would strongly recommend using node groups: you can have a single group containing many nodes, then a schedule to replicate the group. Having a schedule for each node to be... [More]
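A minimal sketch of the node-group approach, run through the TSM administrative client; the group and node names (PRODGRP, NODE1, NODE2) and credentials are examples only:

```shell
# Create a node group and add the nodes you want replicated together
dsmadmc -id=admin -password=secret "define nodegroup PRODGRP"
dsmadmc -id=admin -password=secret "define nodegroupmember PRODGRP NODE1,NODE2"

# A single scheduled command can then replicate the whole group
dsmadmc -id=admin -password=secret "replicate node PRODGRP"
```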
This week I found that after updating our HMC and VIOS, our VIO mksysb (backupios) backups started failing. Our HMC is on 7.7.3 code. Our VIOS is on 126.96.36.199 code. The mksysb backups are scripted from our NIM server, where we do the following to perform the backup: - Set up SSH keys to the HMC - Mount our /export/nim/mksysb filesystem on the VIO server as /mksysb - Run the "viosvrcmd" command on the HMC to run the "backupios" command on each VIO server - Define the mksysb backup as a NIM resource - Unmount the NFS... [More]
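The HMC-driven backup step looks roughly like this; the HMC hostname, managed system name, VIOS partition name and output path are examples only:

```shell
# From the NIM server, with SSH keys already exchanged with the HMC:
# viosvrcmd runs the given command on the VIO server via the HMC's RMC connection.
ssh hscroot@hmc01 'viosvrcmd -m Server-8233-E8B-SN12345 -p vios1 \
    -c "backupios -file /mksysb/vios1.mksysb -mksysb"'
```

The `-mksysb` flag makes backupios write a plain mksysb file rather than the full `nim_resources.tar` package, which suits defining it as a NIM mksysb resource afterwards.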
There have been a number of times where I've needed to take the zoning from one fabric and re-create it on another. Typically this is when doing switch migrations where I am going to create one or more new fabrics, re-create the zoning, then move the host and storage FC cables one fabric at a time. This happened recently when I was migrating to new switches and the old switches didn't have a license to create an ISL. This is also a handy way to keep a backup of your zoning configuration. I have a Linux VM running on my laptop with ksh... [More]
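Capturing the zoning as text is a one-liner per fabric on Brocade switches; the switch names below are examples:

```shell
# cfgshow dumps the full defined and effective zoning configuration as text.
# Run it against one switch in each fabric to get a per-fabric backup.
ssh admin@fabric_a_switch "cfgshow" > fabric_a_zoning.txt
ssh admin@fabric_b_switch "cfgshow" > fabric_b_zoning.txt
```

The text output can then be edited into alicreate/zonecreate/cfgcreate commands for the new fabric, or simply kept as a point-in-time zoning backup.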
As part of an LVM migration to new storage, we found that our mksysb backups started failing. This was also our NIM master, which held mksysb images for all the LPARs in our environment, so this needed to be fixed quickly. We used unmirrorvg and mirrorvg to migrate to the new disks, for example moving our rootvg from hdiskY to hdiskX:
# extendvg rootvg hdiskX
# mirrorvg -S -m rootvg hdiskX
# bosboot -ad hdiskX
# bootlist -m normal hdiskX
# unmirrorvg rootvg hdiskY
# reducevg rootvg hdiskY
# rmdev -dl hdiskY
After this was done, we saw the below error... [More]
I wrote up an entry on SVC and V7000 compression recently. We updated to 6.4 code and the results were very impressive; the link to it is here. For those who have an SVC or V7000, there is a 45-day free-of-charge evaluation from IBM: you can upgrade your V7000 or SVC to 6.4 code, add compressed volume copies and see what compression savings you will achieve, as well as monitor performance to ensure there is no degradation of your storage system while compression is turned on. The 45 Day... [More]
From PowerHA 7.1.1 you are able to move the CAA repository to another disk. This would really only be used if you lost access, for whatever reason, to the LUN that is used for the CAA repository. As part of testing a cluster build, it is worthwhile testing that the CAA disk can be moved to another LUN without issue. I ran through the steps on my cluster in the office, and everything worked OK. First, check the cluster level: root@ha71_node1:/home/root # halevel -s 7.1.1 SP1 I am going to replace hdisk50 (the existing repository disk)... [More]
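Assuming PowerHA 7.1.1 or later, the replacement itself can be driven with clmgr; hdisk51 is an example new repository LUN, and the exact syntax should be verified against your PowerHA level's documentation:

```shell
# Identify the current repository disk (CAA keeps it in caavg_private)
lspv | grep caavg_private

# Move the CAA repository onto the new LUN
clmgr replace repository hdisk51

# Confirm the cluster now sees the new disk as its repository
lscluster -d
```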
In a number of situations you will have an NFS filesystem that a number of hosts need root access to. A simple example of this is an NFS mount with all of your install media. You would export the filesystem and specify all the hosts that are allowed root access. In the case of an AIX environment I have been working on, I had a number of LPARs on a single system, with a private network set up for backups and for the LPARs to communicate without going outside of the POWER7 frame. When I added new LPARs, I didn't want to have... [More]
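For reference, granting root access to an explicit list of hosts on AIX is done in /etc/exports; the path and hostnames below are examples:

```shell
# Example /etc/exports entry: root access and mount access for three LPARs
# /export/media -root=lpar1:lpar2:lpar3,access=lpar1:lpar2:lpar3

# After editing /etc/exports, re-export everything:
exportfs -a
```

The pain point the post describes is exactly this: every new LPAR means editing that host list again, which is what the rest of the entry sets out to avoid.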
As part of an HDLM device driver upgrade, we had to remove any native MPIO disks from a Virtual I/O Server, install the driver and re-create the mappings. Since this was a new HDLM install, we had to do the following to get HDLM installed: - Disable paths for VIO #1 on our client LPAR. - Record the VIO mappings. - Remove any MPIO hdisk devices. Luckily our VIO servers boot from internal disk. - Install HDLM. - Put the VIO mappings back. - Enable paths for VIO #1 on our client LPAR. - Repeat for VIO #2. The plan was to do one VIO server at a... [More]
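Recording the mappings before removing the hdisks is just a matter of saving the lsmap output on each VIO server; the file names here are examples:

```shell
# As padmin on the VIO server, before removing any MPIO devices:
lsmap -all > /home/padmin/vscsi_mappings_before.txt        # virtual SCSI mappings
lsmap -all -npiv > /home/padmin/npiv_mappings_before.txt   # NPIV mappings, if used
```

With that output saved, the mkvdev commands to rebuild each vhost mapping can be reconstructed after the HDLM devices appear.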
Typically when a disk fails in a V7000, you just go into Events and follow the procedure to replace the disk, and the drive is rebuilt automatically. I have done this before without issue on firmware older than 6.4, so possibly this is a 6.4 code issue. Today we had a disk fail, but when we looked in Events we could see that a spare disk was in use, yet we had no procedure to replace the failed drive. Under the internal storage tab, the drive was offline. The fix? Work out which drive is offline, in the... [More]
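From the V7000 CLI, identifying the offline drive can be sketched like this (run over ssh to the cluster as a CLI user; the drive ID is an example):

```shell
# List only drives that are currently offline
lsdrive -filtervalue status=offline

# Detailed view of a specific drive: shows its enclosure and slot,
# which tells you physically which drive to pull
lsdrive 12
```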
Recently I was looking at a TSM server and could see that TSM database backups to tape were working and expiring without issue; however, database backups taken to disk were not being removed and were filling up the filesystem. There are three types of backups you can take of the TSM database: - Full. This is a backup of the entire TSM database, and it will truncate the TSM server active and archive logs. - Incremental. This backs up the changes in the TSM database between the current point in time and the last full database backup. -... [More]
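Database backups are not removed by normal expiration; they are pruned from the volume history. A hedged sketch via the administrative client, where the device class name, credentials and 7-day retention are all examples:

```shell
# Full database backup to a FILE (disk) device class
dsmadmc -id=admin -password=secret "backup db devclass=DBBACK_FILE type=full"

# Prune disk database backups older than 7 days from the volume history,
# which also deletes the underlying files for a FILE device class
dsmadmc -id=admin -password=secret \
    "delete volhistory type=dbbackup devclass=DBBACK_FILE todate=today-7"
```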
For the last few years, I have been using TPC (typically the TPC for Disk component) to manage IBM storage, ranging from DS4000s, DS5000s, DS8000s and SVC to, more recently, V7000s. In terms of functionality TPC is great. The statistic collection and report generation are good, and the ease of use is okay, but in comparison to the GUI that the storage systems have, it's nothing spectacular. That is, until now: version 5.1 has just been released. IBM have put the XIV-style GUI into TPC, and it looks awesome. The other thing that I noticed in TPC... [More]
Recently IBM have introduced compression into the SVC and V7000 storage systems for block volumes. Our business runs on V7000, so we updated it to 6.4 code (required for compression) and I had a play around with it today. Compression is a licensed feature, and on a V7000 it is licensed per tray of disk. There are two ways to use it: 1. Create a new compressed volume. 2. Add a compressed copy to an existing volume, and then remove the original non-compressed copy. Creating a compressed volume is easy; here is how. What's... [More]
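Both approaches can also be done from the SVC/V7000 CLI; a sketch assuming an example pool called Pool0 and example volume names and sizes:

```shell
# 1. Create a new 100 GB compressed volume (thin-provisioned, auto-expanding)
svctask mkvdisk -mdiskgrp Pool0 -size 100 -unit gb \
    -rsize 2% -autoexpand -compressed -name vol_comp

# 2. Add a compressed copy to an existing volume...
svctask addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -compressed vol_existing

# ...and, only once the new copy has finished synchronising,
# remove the original non-compressed copy (copy 0 here as an example)
svctask rmvdiskcopy -copy 0 vol_existing
```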
If you have a PowerHA cluster that contains multiple resource groups that are related in some way and need to always exist on the same node, it is best to have dependencies configured to ensure that when you fail over, both resource groups are active on the same node. I came across this today and opened up # smitty cm_rg_dependencies_menu, and there were two ways to go about it: 1. Have a parent/child dependency, where one resource group is a child of the other. 2. Configure an online-on-same-node dependency. ... [More]
I had to create an AIX volume group and some filesystems on an AIX LPAR using NPIV connected to an HDS VSP storage array. The first thing I did was install the ODM drivers, then HDLM, set the queue depth on my LUNs, and I was ready to create an AIX volume group.
# mkvg -S -f -y my_vg -s 256 -P 64 hdisk1 hdisk2
Changing the PVID in the ODM.
Changing the PVID in the ODM.
/usr/sbin/varyonvg: The varyonvg failed because the volume group's