AIX Down Under
AnthonyEnglish
In September 2009 Rob McNelly wrote on his AIXChange blog about Migrating from the IVM to the HMC. I have documented my own experience of this procedure. You can download it from here, at a very affordable price of USD 0.00 (no refunds).
The IVM, or Integrated Virtualization Manager, is a browser interface to the VIO server on smaller systems. It has HMC-like functionality, such as dynamic LPAR and the ability to configure LPARs, stop and start them, and so on.
The HMC (Hardware Management Console, as you know) is able to manage several physical servers and is mandatory for larger systems. It can also be used for smaller systems, and is a worthwhile investment, in my view, once you get beyond a single small server.
Two servers, two IVMs
I had a client who had bought a Power6 550 for production and a Power6 520 for dev and test. After some months of discussion, their Business Partner convinced them of the benefit of investing in an HMC to manage these two systems with their growing number of LPARs. The challenge was migrating each of the servers from being IVM-managed to the HMC. I have put together a document of my own experience of the migration. It doesn't attempt to be a step-by-step guide; it's more of a diary for my own benefit, but you may find it useful.
Forward planning brings us unstuck
We thought we were being safe by getting some work done ahead of the outage time. We racked and cabled the HMC and put it on the network, in preparation for the scheduled outage two weeks hence. Problem was, no one told the HMC the planned go-live date. To our surprise, it immediately discovered the two servers. The HMC reported the two servers as being in "Recovery" state, but it wouldn't take further control of the systems or their LPARs until the outage, which was scheduled for after a huge month end. The IVM had been effectively disabled, so any IVM-specific commands were out of bounds. No profile backups, no DLPAR, no shutdown and activation of LPARs were permitted, either from the IVM or from the HMC. Nothing would undo it - not even powering off the HMC and disconnecting it from the network.
We had a VIO server, but no IVM and no HMC that we could do anything useful with. It was the technological equivalent of a hung parliament.
All's well that ends well
In the end, it all worked, and the customer has been running happily on the HMC for many months now. Still, it was a challenge. You can find my comments about the migration in IVM to HMC Migration - A Customer's Experience.
Looking back, it was quite funny, I suppose. As long as you weren't me.
Preventing SPOF attacks
When you think about it, our first job is to keep our systems up and running. Or at least walking, crawling, even staggering. But keep them up we must. The mail must get through, the system must stay up. We build redundancy at all sorts of levels into our systems because we want to avoid that dreaded disease known as SPOF - the Single Point of Failure.
Outages are expensive
Unscheduled outages bring your computer system to its knees. They can have a similar effect on your nervous system. Then there is the post-mortem, where you hope to find the root cause - and to prove it wasn't you.
But even scheduled outages - the ones we plan weeks ahead - are costly. There is the planning, begging for permission for the outage, negotiating a time which most suits the users (and most disrupts your family), the co-ordinating of key stakeholders to be on standby at an ungodly hour. Maybe there are fallback plans which need to be in place and then there is that intangible user perception that the system is unstable because they keep having to work around another weekend of downtime.
So before you plan to inconvenience maybe thousands of users by shutting down your server, ask yourself one question:
Do you really want an outage?
Even if you get agreement from the business that you can have an outage on your Power server, think of whether having a Hardware Management Console (HMC) as part of your configuration can help you to reclaim your weekends or sleep time. Here are some ways the HMC allows you to skip several of those outages:
On smaller systems, you may not need a dedicated HMC for management. Even so, it is worth thinking about having one anyway, especially when you consider the benefits of not having to schedule system outages for many key tasks. If you don't have an HMC and you use the Integrated Virtualization Manager (IVM), many of the tasks mentioned above will still require an outage. For example, with the IVM, every firmware update is disruptive - it requires a reboot of all LPARs on the system. And if you want dual VIO servers for redundancy, you must have an HMC.
Let's look at how two VIO servers can work together to prevent unnecessary outages.
Dual VIO servers
On an HMC-managed system, you can set up SEA Failover for the network and even have SEA Failover with VLAN tagging. When VIO Server 1 goes down, traffic can still go through VIO 2. Similarly, you can configure multiple paths to storage, as explained in this excellent presentation by Janel Barfield on Advanced Power6 VIO configurations.
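As an illustration (not from the original post), the SEA Failover setup might look something like this on the VIOS command line; the adapter names ent0, ent2 and ent3 and the PVID are placeholders for whatever lsdev shows on your own system:

```shell
# On VIO server 1, as padmin. ent0 is the physical Ethernet adapter,
# ent2 the virtual trunk adapter (created with trunk priority 1 on the
# HMC), and ent3 the control channel adapter used by the two VIO servers
# to decide who is primary. ha_mode=auto enables SEA Failover.
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3

# VIO server 2 runs the same command against its own adapters; its trunk
# adapter is created with trunk priority 2, so it becomes the standby.
```

When VIO server 1 goes down, the control channel heartbeat stops and the SEA on VIO server 2 takes over the bridging, so client LPARs keep their network without any reconfiguration.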
Converting from IVM to HMC
Although there is no official procedure to migrate from IVM to HMC, I have done it successfully for a p6-520 and a p6-550. Rob McNelly has a post on IVM to HMC Migrations. If you're scheduling lots of outages for tasks which could be done during business hours if only you had your systems connected to an HMC, you might find the total savings of avoiding those outages more than justifies the purchase of a single HMC to manage multiple Power systems.
Who's afraid of the VIO?
Pretty much everybody, at least initially. I can think of two good reasons for that. The first is that your VIO server may have been set up long ago - maybe by someone else when your system was first built. You never touch it. You never need to. It does its magic to provide disk and network traffic for your LPARs and you prefer to leave it alone.
Lost in translation
The second reason is that those familiar with AIX commands find the VIO commands very similar, but different enough to be confusing. Your knowledge of one can cloud your grasp of the other. Like the Italian fellow in a Spanish restaurant who wanted to butter his toast with burro (exhibit 1).
Even those of us for whom AIX is our bread and burro (Italian meaning) found the Virtual I/O server a bit daunting when we first came across it. Sure, it's got a restricted shell with a very limited command set, so it should be easy enough to learn. But the syntax of the VIO server commands (when you log in as padmin) is sufficiently different from AIX that it can be a bit scary.
For example, on AIX this command changes the reservation policy on an MPIO LUN:

chdev -l hdisk0 -a reserve_policy=no_reserve

but on the VIO server the syntax is slightly different:

chdev -dev hdisk0 -attr reserve_policy=no_reserve
There is also a VIO server command called rmvdev (that's r-m-v-dev, not to be confused with rmdev, which, in turn, has different syntax from the same command on AIX). It's tempting for AIX old hands to log in as padmin, then drop to the full AIX shell (oem_setup_env) and follow the ways we know and love.
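To make the difference concrete, here's a hedged sketch of mapping a disk to a client LPAR and then removing that mapping as padmin; the device names hdisk5, vhost0 and vtscsi0 are examples only, not from the original post:

```shell
# Map physical disk hdisk5 to the virtual SCSI server adapter vhost0,
# naming the virtual target device vtscsi0:
mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi0

# Later, remove that mapping with rmvdev. Note the -vtd flag naming the
# virtual target device - quite unlike AIX's rmdev -l:
rmvdev -vtd vtscsi0
```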
Proceed with caution
It's a good thing that people are cautious and even nervous about the VIO server. After all, it's a central component for disk and network traffic. So yes, it's important that your VIO remains stable, but it's worth being ready if you should ever need to build a new one, rebuild an existing one or just add a new LPAR to your system. The more you steer away from dedicated adapters for your LPARs, the easier your configuration will become, but that does require a bit of leg work getting a grasp of what the VIO is all about.
Let's look briefly at rebuilding the VIO server. Then we can look at where we can go to have a snoop around it. We won't change anything ... just dip our toes in.
Actually, building the VIO server from scratch is easier than doing a new install of an AIX LPAR. The installation of the VIO server is simply a mksysb restore and can be done off the installation media or NIM. Depending on your system configuration, you should be able to have a VIO server up and running in an hour with a vanilla installation.
The VIO server is all about providing devices, so it's quite specific to the hardware configuration of the server it's running on. If you're cloning to new hardware, for example for a Disaster Recovery run, you may well prefer to install the VIO server from scratch and build the device mappings yourself rather than restoring from a backup. If you're restoring to the same physical hardware, you can use the VIOS backup which you hopefully have created using the backupios command or the more recent viosbr command. Chris Gibson shows on his blog how he has used viosbr to back up the VIO server.
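As a sketch of those two backup commands (the file names are examples, not from Chris's post), run as padmin:

```shell
# Full mksysb-style backup of the VIO server's rootvg to a file,
# restorable via NIM or installation media:
backupios -file /home/padmin/vios_backup.mksysb -mksysb

# Back up just the virtual and logical device configuration (SEAs,
# virtual SCSI mappings, etc.) with viosbr:
viosbr -backup -file vios_config

# List what a viosbr backup contains before restoring from it:
viosbr -view -file vios_config.tar.gz
```

The two are complementary: backupios captures the operating system, viosbr captures the device configuration you'd otherwise rebuild by hand.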
VIO for smarties
If you want a higher-level understanding of the VIO server and its benefits, check the Virtual I/O (VIO) and Virtualization wiki, which also has lots of links to valuable documentation. The New Virtualisation Features page on the IBM Wiki Movies site gives more recent information; you may find movie 52 - VIOS 2.1 Features - particularly helpful.
If you don't have an HMC, then as soon as you install the VIO server you should have access to the Integrated Virtualization Manager (IVM). This is a browser interface to the VIO server which allows you to configure your storage and network via a GUI. See the IVM section on the IBM Wiki Movies.
Since May 2009, the HMC and VIOS releases have provided an HMC GUI for VIOS commands (you may need to bring your HMC up to date first). Although this is slower than the command line, it's probably suitable if you're building a single LPAR or making some simple configuration changes.
Whether or not you've got an HMC, you should be able to log in to the VIO server command line and run some basic commands for managing storage or managing networks (via the SEA) on the VIO server. If you're using Active Memory Sharing, you will probably already know about using the VIO server for paging devices.
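For that first snoop around, here's a sketch of a few read-only padmin commands that change nothing on the system:

```shell
ioslevel            # show the VIO server software level
lsmap -all          # virtual SCSI mappings: vhost adapters to backing disks
lsmap -all -net     # Shared Ethernet Adapter (SEA) mappings
lsdev -virtual      # all virtual devices the VIO server knows about
lsdev -type disk    # disks visible to the VIO server
```

Run these on your own VIO server and you'll have a picture of how storage and network reach your LPARs before you ever need to change anything.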
VIO for dummies
We'll look at the VIO server in more detail on this blog in the future, but for the time being it's worth knowing: