
IBM EnergyScale Favour Performance mode on Power7+ with AIX
Hi, I don't often get the chance to update this blog (it has been over 12 months), but I did find something of interest this week while installing a new Power7+ 750 machine. One thing I wanted to evaluate was enabling Favour Performance mode with IBM EnergyScale. Details of what it is and how it works can be found here: http://public.dhe.ibm.com/common/ssi/ecm/en/pow03039usen/POW03039USEN.PDF From the HMC, launch the ASM, select System Configuration, Power Management, Power Mode Setup, and select “Enable Dynamic Power Saver (favor... [More]
Tags:  more_gigawatts aix power7 energyscale |
DLPAR not working fix - Restart RMC (Resource Monitoring and Control)
Hi All, If you run into any issues with DLPAR not working correctly, try the steps below to restart the RMC service. We have seen this procedure rectify the problem in the past. On the HMC command line, first check the DCaps value of the LPAR. If the value is DCaps:<0x0>, there is an issue with the RMC connection on that LPAR. # lspartition -dlpar <#1> Partition:<6*9117-MMA*xxxxxx, , 10.x.x.x> Active:<0>, OS:<, , >, DCaps:<0x0> ,... [More]
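The restart itself is usually done with the RSCT `rmcctrl` utility on the affected LPAR; a sketch of the commonly used sequence (paths are the standard AIX RSCT locations):

```shell
# On the AIX LPAR: stop the RMC subsystem, re-add it to /etc/inittab
# and restart it, then enable remote client connections.
/usr/sbin/rsct/bin/rmcctrl -z
/usr/sbin/rsct/bin/rmcctrl -A
/usr/sbin/rsct/bin/rmcctrl -p
```

Give RMC a few minutes to reconnect, then re-run lspartition -dlpar on the HMC; a healthy LPAR reports a non-zero DCaps value.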
Tags:  rmc dlpar |
Change ethernet aggregation (bond mode) on V7000 Unified 1GbE Interface
Recently I was involved in changing the Ethernet bonding configuration for an IBM Storwize V7000 Unified. This particular V7000 Unified is connected to a Cisco Network Switch (4500 family). I decided to create a blog entry for this topic because the information provided in the IBM information center topic “changing a bond network interface” does not appear to be correct for a V7000 Unified. See the link below. http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000.unified.doc/mng_t_pub_netw_chg_nwbond.html The link above describes a... [More]
Tags:  v7000 unified ethernet aggregation bond mode configuration change |
How to configure your SAN for Global Mirror using FCoIP with Brocade Multiprotocol Devices
Remote replication using V7000 has been very popular, and there are many clients happily replicating without FC connectivity between the sites. I have written about SVC and V7000 low bandwidth Global Mirror before (link below), but it's worth writing up how to get the FCoIP connectivity working. https://www.ibm.com/developerworks/mydeveloperworks/blogs/talor/entry/low_bandwidth_global_mirror_svc_v70009?lang=en In a typical environment you would need: - Two SAN switches at the production site, making up two redundant fabrics, e.g. IBM SAN80B... [More]
Tags:  v7000 integrated_routing fabric global_mirror svc san zoning fcoip storwize brocade |
When would you backup an AIX system to a CIFS share?
We have a number of clients who have a single AIX server in their environment. There is no NIM server and no other AIX system to send mksysb backups to. One way of backing up the system is to a local tape drive or a DVD drive. This is fine until you want backup software to take care of the backups for you. This will do a great job of backing up any applications or files on the AIX system; however, some backup products also have the capability to back up the operating system (e.g. Tivoli Storage Manager for System Backup... [More]
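On AIX the CIFS client can mount the share directly, so a mksysb image can be written straight to it. A minimal sketch, assuming the bos.cifs_fs fileset is installed; server, user, workgroup and share names are placeholders:

```shell
# Mount the CIFS share (fileserver/backupuser/secret and /backups are examples)
mkdir -p /mnt/mksysb
mount -v cifs -n fileserver/backupuser/secret -o "wrkgrp=MYDOMAIN" /backups /mnt/mksysb

# Write the mksysb image to the share; -i regenerates /image.data first
mksysb -i /mnt/mksysb/$(hostname).mksysb
```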
Tags:  cifs aix iso mksysb |
Replicate an AIX rootvg for DR purposes
One question I have been asked lately is how to replicate an AIX system's rootvg to another AIX system using storage system replication. Can I just boot from the rootvg LUN on another system? Well... unfortunately it's not as simple as remote mirroring the rootvg and booting off it at the DR site. This will work, but it's not a good idea, and I'm not sure it's exactly supported. The non-rootvg disks are simple: you can just replicate them via your storage system, and in a DR, map the target volumes (or a snapshot of them) to the... [More]
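A cleaner way to get a bootable rootvg copy onto a LUN that can then be replicated is `alt_disk_copy`, which the tags above hint at. A sketch, with a hypothetical target disk:

```shell
# Clone the running rootvg onto hdisk9 (an example target LUN);
# -B leaves the bootlist pointing at the current rootvg
alt_disk_copy -B -d hdisk9

# The clone shows up as altinst_rootvg
lspv | grep altinst_rootvg
```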
Tags:  rootvg alt_rootvg_op alt_disk_copy storage disaster_recovery replication aix |
TSM 6.3 Node Replication - Using nodegroups to make it easy.
Tivoli Storage Manager from 6.3 onwards has a feature called node replication. This brings a new command, “replicate node”. The only prerequisites are that you have two TSM servers on version 6.3 or above and that server-to-server communication is configured. The replicate node command can replicate an individual node or a group of nodes. I would strongly recommend using node groups, because you can have a single group containing many nodes, then a schedule to replicate the group. Having a schedule for each node to be... [More]
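From a dsmadmc session the group-based setup looks roughly like this sketch; group, node and schedule names are examples:

```
/* Group the nodes, then replicate the whole group */
DEFINE NODEGROUP PRODGROUP
DEFINE NODEGROUPMEMBER PRODGROUP NODE1,NODE2,NODE3
REPLICATE NODE PRODGROUP

/* Drive it nightly with an administrative schedule */
DEFINE SCHEDULE REPL_PROD TYPE=ADMINISTRATIVE CMD="replicate node PRODGROUP" ACTIVE=YES STARTTIME=21:00 PERIOD=1 PERUNITS=DAYS
```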
Tags:  tsm node_replication tsm_6.3 nodegroup tivoli_storage_manager |
Timeouts when using viosvrcmd on the HMC to launch ios backups
This week I found that after updating our HMC and VIOS, our VIO mksysb (backupios) backups started failing. Our HMC is on 7.7.3 code. Our VIOS is on 2.2.1.4 code. The mksysb backups are scripted from our NIM server, where we do the following to perform the backup: - Set up SSH keys to the HMC - Mount our /export/nim/mksysb filesystem on the VIO server as /mksysb - Run the "viosvrcmd" command on the HMC to run the "backupios" command on each VIO server - Define the mksysb backup as a NIM resource - Un-mount the NFS... [More]
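For reference, the HMC step looks like the sketch below; the managed system and VIOS partition names are examples:

```shell
# Run backupios on the VIOS via the HMC; the image lands in the
# NFS-mounted /mksysb filesystem
viosvrcmd -m Server-8233-E8B-SN061AA6P -p vios1 -c "backupios -file /mksysb/vios1.mksysb -mksysb"
```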
Tags:  viosvrcmd ios_mksysb vio aix hscl2970 hmc mksysb backupios |
Generate zoning commands from zoneshow output on brocade switches
There have been a number of times where I've needed to take zoning from one fabric and create it on another fabric. Typically this is when doing switch migrations where I am going to create one or more new fabrics, re-create the zoning, then move the host and storage FC cables one fabric at a time. This happened recently when I was migrating to new switches and the old switches didn't have a license to create an ISL. This is also a handy way to keep a backup of your zoning configuration. I have a Linux VM running on my laptop with ksh... [More]
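As a sketch of the idea, here is a small shell function that turns the common `zone: name` / indented-member layout of `zoneshow` output into `zonecreate` commands. It skips `cfg:` and `alias:` stanzas; real-world output may need more handling:

```shell
# zoneshow2cmds: read 'zoneshow' output on stdin, emit one zonecreate
# command per zone on stdout (a sketch; assumes the usual layout of a
# "zone:  name" header followed by indented member lines).
zoneshow2cmds() {
  awk '
    function flush() {
        if (name != "") printf "zonecreate \"%s\", \"%s\"\n", name, members
        name = ""; members = ""
    }
    /^[ \t]*zone:/        { flush(); name = $2; next }  # start a new zone
    /^[ \t]*(cfg|alias):/ { flush(); next }             # skip other stanzas
    name != "" && NF > 0 {
        gsub(/;/, "")                                   # drop separators
        for (i = 1; i <= NF; i++)
            members = (members == "" ? $i : members ";" $i)
    }
    END { flush() }
  '
}

# Example: replay zoning captured from the old fabric
# zoneshow2cmds < old_fabric_zoneshow.txt > recreate_zones.sh
```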
Tags:  shell_script zoning san brocade |
mksysb issues after re-mirroring the rootvg
As part of an LVM migration to new storage, we found that our mksysb backups started failing. This was also our NIM master, which held mksysb images for all the LPARs in our environment, so this needed to be fixed quickly. We used unmirrorvg and mirrorvg to migrate to the new disks, for example moving our rootvg from hdiskY to hdiskX: # extendvg rootvg hdiskX # mirrorvg -S -m rootvg hdiskX # bosboot -ad hdiskX # bootlist -m normal hdiskX # unmirrorvg rootvg hdiskY # reducevg rootvg hdiskY # rmdev -dl hdiskY After this was done, we saw the below error... [More]
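Before kicking off the next mksysb, it is worth sanity-checking the result of the re-mirror with a couple of harmless read-only commands:

```shell
lsvg -p rootvg          # only the new disk (hdiskX) should remain
bootlist -m normal -o   # confirm the boot list points at the new disk
```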
Tags:  mksysb aix rootvg lvm |
45 Day SVC/V7000 Compression Evaluation
I wrote up an entry on SVC and V7000 compression recently. We updated to 6.4 code and the results were very impressive. The link to it is here. For those who have an SVC or V7000, there is a 45-day free-of-charge evaluation from IBM, where you can upgrade your V7000 or SVC to 6.4 code, add compressed volume copies and see what compression savings you will achieve, as well as monitor performance to ensure that there is no performance degradation of your storage system while compression is turned on. The 45 Day... [More]
Tags:  comprestimator svc compression v7000 |
Live Replacement of CAA Repository Disk
From PowerHA 7.1.1 you are able to replace the CAA repository disk with another disk. This would really only be used if you lost access, for whatever reason, to the LUN that is used for the CAA repository. As part of testing a cluster build, it is worthwhile confirming that the CAA disk can be moved to another LUN without issue. I ran through the steps on my cluster in the office and everything worked OK. First, check the cluster level: root@ha71_node1:/home/root # halevel -s 7.1.1 SP1 I am going to replace hdisk50 (existing repository disk)... [More]
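On PowerHA 7.1.1 and later the replacement itself can be driven from clmgr; a sketch with an example disk name:

```shell
# Replace the repository with a new candidate disk (hdisk51 is an example)
clmgr replace repository hdisk51

# Confirm CAA now sees the new repository disk
lscluster -d
```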
Tags:  hacmp powerha aix |
AIX NFS Export over a Hypervisor Network
In a number of situations you will have an NFS filesystem that you allow a number of hosts root access to. A simple example of this is an NFS mount with all of your install media. You export the filesystem and specify all the hosts that are allowed root access. In the case of an AIX environment I have been working on, I had a number of LPARs on a single system, with a private network set up for backups and for the LPARs to communicate without going outside of the POWER7 frame. When I added new LPARs, I didn't want to have... [More]
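For reference, the per-host form in /etc/exports looks like the sketch below (hostnames are examples); it is exactly this list that grows every time a new LPAR is added:

```shell
# /etc/exports entry: root access for the named backup-network hosts
/export/media -root=lpar1-bkp:lpar2-bkp,access=lpar1-bkp:lpar2-bkp

# Re-export after editing
exportfs -a
```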
Tags:  powervm aix export nfs |
Re-Create VIO Virtual SCSI Devices
As part of an HDLM device driver upgrade, we had to remove any native MPIO disks from a Virtual I/O Server, install the driver and re-create the mappings. Since this was a new HDLM install, we had to do the following to get HDLM installed: - Disable paths for VIO #1 on our client LPAR. - Record VIO mappings. - Remove any MPIO hdisk devices (luckily our VIO servers are booting from internal disk). - Install HDLM. - Put the VIO mappings back. - Enable paths for VIO #1 on our client LPAR. - Repeat for VIO #2. The plan was to do one VIO server at a... [More]
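The record/re-create steps map onto standard VIOS (padmin) commands; a sketch with example device names:

```shell
lsmap -all                    # record the vhost-to-backing-device mappings first
rmvdev -vtd vtscsi0           # remove each virtual target device
# ...remove the hdisks, install HDLM, rediscover the disks (cfgdev)...
mkvdev -vdev hdisk10 -vadapter vhost0 -dev vtscsi0   # put the mapping back
```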
Tags:  vio hdlm vscsi powervm |
V7000 Drive Replacement Issue
Typically when a disk fails in a V7000, you just go into Events and follow the procedure to replace the disk, and the drive is rebuilt automatically. I have done this before without issue on firmware older than 6.4, so possibly this is a 6.4 code issue. Today we had an issue where a disk failed, but when we looked in Events we could see that a spare disk was in use, yet we had no procedure to replace the failed drive. Under the Internal Storage tab, the drive was offline. The fix? Work out which drive is offline, in the... [More]
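On the Storwize CLI the manual route looks roughly like the sketch below; the drive ID is an example, and IDs can change after the physical swap:

```
lsdrive                  # identify the offline drive and note its ID
chdrive -use unused 7    # release the failed drive so it can be pulled
# after the physical swap the new drive appears as a candidate:
chdrive -use spare 7
```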
Tags:  v7000 drive failure storwize |