Brian Smith's AIX / UNIX / Linux / Open Source blog
|Modified on by brian_s|
Oftentimes you'll find a command line that works perfectly when you run it locally on a server but doesn't work when you run it remotely over SSH. Usually the problem is related to double quotes or backticks in the command. In this post we will go over problems with double quotes, but the same issue applies to command lines with backticks in them. In this example, we are running a command locally on an HMC:
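For illustration, assume the local command is a simple lssyscfg query that needs double quotes (the frame name and field list here are made up, standing in for whatever command you are running):

```
lssyscfg -r lpar -m Frame1 -F "name,state"
```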
If I decide to run this command over SSH (perhaps through a script), it won't work:
What's going on here? Well, the problem is the way the quote marks are processed by the shell running the SSH command. We can see what is happening by changing the "ssh" part of the command to "echo", which will show what the shell is doing to the quote marks:
So what we need to do is tweak our "echo" command line until what is echoed back to the screen matches the original command that worked when run locally:
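As a concrete illustration (the lssyscfg command and hostname are made up, standing in for whatever you are running), replacing "ssh" with "echo" shows the difference between unescaped and escaped double quotes:

```shell
# Unescaped: the local shell consumes the inner double quotes,
# so the remote command never sees them
echo hscroot@hmc1 "lssyscfg -r lpar -m Frame1 -F "name,state""
# prints: hscroot@hmc1 lssyscfg -r lpar -m Frame1 -F name,state

# Escaped with backslashes: the quotes survive and reach the remote shell
echo hscroot@hmc1 "lssyscfg -r lpar -m Frame1 -F \"name,state\""
# prints: hscroot@hmc1 lssyscfg -r lpar -m Frame1 -F "name,state"
```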
Now that the command echoes back the exact command that works locally, it should also work over SSH:
Another option would have been to use single quotes; however, with single quotes you'll have problems if you are trying to use shell variables within the command line, which is very common when scripting something like this. This is why I prefer to use double quotes and just escape them. Without variables, this command with single quotes will work as well:
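A quick demonstration of the variable problem, again using "echo" in place of "ssh" (the frame name and command are made up):

```shell
frame=Frame1

# Single quotes: $frame is NOT expanded by the local shell
echo hscroot@hmc1 'lssyscfg -r lpar -m $frame -F "name,state"'
# prints: hscroot@hmc1 lssyscfg -r lpar -m $frame -F "name,state"

# Double quotes with escaped inner quotes: $frame is expanded before ssh runs
echo hscroot@hmc1 "lssyscfg -r lpar -m $frame -F \"name,state\""
# prints: hscroot@hmc1 lssyscfg -r lpar -m Frame1 -F "name,state"
```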
I recently received an email from someone who said they needed to change the CPU pool on hundreds of LPARs, and they asked if I had any suggestions to make the process easier.
There are a couple of ways this could be automated. One option might be to create a script that would generate the commands needed to make the change. But what I would probably do in this instance is just use a spreadsheet and a specially crafted formula to generate the command needed to make the change.
Basically, you create a spreadsheet with four columns:
Column A: LPAR Name
Column B: Frame Name
Column C: Desired CPU Pool
Column D: Our special formula to generate the commands
The formula in Column D needs to be something like this for Row 2 of the spreadsheet:
This formula builds the command line by pulling the LPAR Name, Frame Name, and CPU Pool name out of columns A, B, and C of Row 2 (Row 1 being a header).
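As a sketch, the Row 2 formula could look something like the line below. The exact HMC flag syntax here is written from memory as an illustration and should be verified against your HMC's documentation (the matching profile-save command could be generated the same way in a second formula column):

```
="chhwres -r proc -m " & B2 & " -o s -p " & A2 & " -a shared_proc_pool_name=" & C2
```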
You then go to the bottom-right corner of the formula cell until your mouse pointer turns into a cross, then click and drag the formula down into all of the cells below it in column D. Now you have your formula set up in Column D all the way down the spreadsheet, and you just need to fill in columns A, B, and C with your LPAR, Frame, and CPU Pool details. Then simply copy/paste the generated commands into your HMC to make the changes.
The commands generated by the formulas look like the lines below. Basically it is the command to DLPAR change the CPU pool, followed by a command to force-overwrite the current profile with the running configuration so that the CPU pool change gets updated in the profile as well:
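The same command generation can also be sketched in plain shell instead of a spreadsheet. The CSV layout and the exact chhwres/mksyscfg flags below are written from memory as an illustration; verify them against your HMC level before pasting anything:

```shell
# Read "lpar,frame,pool" lines and emit the DLPAR pool change plus a
# profile save for each LPAR
gen_pool_cmds() {
    while IFS=, read lpar frame pool; do
        printf 'chhwres -r proc -m %s -o s -p %s -a shared_proc_pool_name=%s\n' \
            "$frame" "$lpar" "$pool"
        printf 'mksyscfg -r prof -m %s -o save -p %s --force\n' \
            "$frame" "$lpar"
    done
}

gen_pool_cmds <<'EOF'
lpar01,Frame1,ProdPool
lpar02,Frame1,DevPool
EOF
```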
IBM recently released a draft Redbook covering the upcoming HMC Version 8 Release 8.8.1.
I've read through the Redbook, and here are the no nonsense highlights I noticed:
POWER5 servers won't be supported in HMC Version 8
Only POWER6/POWER7/POWER8 servers will be supported. This caught me by surprise, and I am hoping that IBM will change this and end up supporting POWER5 on HMC Version 8 at some point in the future. If you still have POWER5 servers in your environment, make sure you let IBM know that you want POWER5 support on HMC Version 8.
Your old HMCs might not be compatible with HMC Version 8
You need a rack-mounted CR5 or later HMC, or a desktop C08 or later HMC, with at least 2 GB of memory to run HMC Version 8. This means people with 7042-CR4 HMCs or older will not be able to upgrade to HMC Version 8.
Running HMC Version 8 as a Virtual Machine still not supported
Totally absent from the Redbook draft is any mention of running HMC Version 8 as a virtual machine (under VMware, for example). This is disappointing because with the short-lived SDMC, IBM supported running it in a virtual environment. Hopefully this will change and IBM will one day support running the HMC as a virtual machine.
New Performance and Capacity Monitor
A very cool new feature in HMC Version 8 is an integrated performance and capacity monitor. It will graph information about CPU usage, memory usage, network throughput, and storage throughput, and it will support POWER6 and later servers. In previous HMC versions we had to use 3rd-party software like LPAR2RRD for this kind of functionality. I'm looking forward to trying it out.
Further SR-IOV Support
HMC Version 8 will add further support for virtualizing adapters with SR-IOV. This is similar in concept to the old IVE (Integrated Virtual Ethernet) adapters in that it lets you take a single physical port and assign logical ports to multiple LPARs. SR-IOV works independently of the VIO server and doesn't require a VIO server at all. You can create up to 48 logical ports per physical adapter. It is high performing and also supports QoS (quality of service). However, the big drawback to SR-IOV is that it doesn't support Live Partition Mobility (LPM), suspend/resume, or remote restart. One possible way around this limitation is to assign an SR-IOV logical port to a VIO server and create a SEA adapter out of it, but I'm not sure of a practical scenario in which someone would do that.
You might have a multiple step upgrade process to get to HMC 8
You can only upgrade to HMC Version 8 from 7.7.8 (with MH01402) or from 7.7.9. So if you are running an older version, you'll need to do a multi-step upgrade: first upgrade to one of these levels, and then to HMC Version 8. Not a big deal, but something people need to be aware of so that they can get all the correct media and allocate enough time for a multi-step upgrade.
Partition Remote Restart enhancement
Remote Restart is a very cool feature that allows LPARs to automatically come back up on another frame in the event of an outage on the frame they were originally running on. This is super handy since you can't use LPM if the source server is down. Previously, you could only enable Remote Restart on an LPAR at the time the LPAR was created. With HMC Version 8 this limitation has been removed, and it can now be enabled without having to re-create the LPAR. Awesome!
Other Miscellaneous Improvements
Here are some other improvements:
Post a comment if I missed any other big new features in HMC Version 8.
This post is about a script I wrote for building filesystems on AIX. It automates the process of creating logical volumes, creating and mounting filesystems, setting user/group owners, and setting permissions. It can be used to create large numbers of filesystems quickly, and it is also handy if you need to create the same filesystems across multiple servers.
Start by creating a CSV file based on this example/template (the first line is the header line). Simply copy and paste this into a new file and name it with a .csv extension:
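A hypothetical template along these lines (the column names here are my assumption, inferred from the description below; adjust them to match what the script expects):

```
LV_Name,VG_Name,Size_MB,Mount_Point,Owner,Group,Permissions,Mount_Options,Log
datalv01,datavg,1024,/data01,oracle,dba,755,rw,
datalv02,datavg,2048,/data02,oracle,dba,755,rbrw.cio,loglv01
```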
Open up this CSV file in your favorite spreadsheet application (I'm using LibreOffice in this example, but Excel should work as well). Once in the spreadsheet, edit the CSV file to specify which filesystems you want to create:
The columns are pretty self-explanatory. "Mount Options" is optional (if you specify multiple mount options, separate them with a period, e.g. rbrw.cio.dio). "Log" is also optional (if you don't specify it, it will default to an existing log in the volume group).
Once you are done editing the file in the spreadsheet, save it in CSV format. It MUST be CSV to work. To make sure, transfer the file to your AIX server and "cat" the file; you should see something similar to this:
Now run the script and specify the CSV file as a parameter. By default, the script doesn't make any changes or actually do anything at all other than show the commands that would be run to create the filesystems:
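The generated output is a series of standard AIX commands, along these lines (the logical volume, volume group, mount point, owner, and sizes here are hypothetical):

```
mklv -y datalv01 -t jfs2 datavg 8
crfs -v jfs2 -d datalv01 -m /data01 -A yes
mount /data01
chown oracle:dba /data01
chmod 755 /data01
```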
Review the output to make sure everything looks good. If you want to actually run the generated commands, you can either redirect the output to a file and run that file as a script, or you can just run the scriptfs script and pipe it to "ksh", which will run the commands and actually create the filesystems:
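Assuming the script is saved as scriptfs and your CSV is named filesystems.csv (both names are just examples), the options look like this:

```
# Dry run: just print the commands
./scriptfs filesystems.csv

# Option 1: save to a file, review it, then run it
./scriptfs filesystems.csv > makefs.sh
ksh makefs.sh

# Option 2: pipe straight to ksh to execute immediately
./scriptfs filesystems.csv | ksh
```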
When this ran, it created the logical volumes and filesystems, mounted them, changed the user/group owners, and set the permissions.
Here is the script:
I was asked how to connect to a POWER5 server through a serial cable, and I made a quick video that shows what you need, how to connect to the server and access the text-based ASMI, and how to boot into SMS and then boot from a CD, all through a serial cable using PuTTY.
This isn't the best quality video, but I thought I would put it out here in case it can help someone else who is having problems accessing their server through a serial cable.
There are several storage-related settings in AIX that cannot be changed while the device is active. These include "fast_fail", dynamic tracking (dyntrk), "num_cmd_elems" for HBAs, and the queue depth for hdisks.
Your options to set these are either to make the device inactive (usually by taking redundant paths offline) and then make the change, or to use the "-P" flag on chdev and then reboot the server so the change takes effect at the next boot.
The "-P" option on chdev has one major drawback, however. As soon as you make the change with chdev "-P", it appears that the setting is active right away, even before the reboot. If you check with "lsattr", it will look as if the setting has taken effect, but it actually won't take effect until the next reboot. What has essentially happened is that the running configuration is out of sync with the ODM: the ODM reflects the updated settings, but they can't be applied to the running configuration of the AIX kernel until the next reboot.
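For example (device name and value are hypothetical; this only runs on AIX):

```
# Stage a queue_depth change in the ODM; it takes effect at the next reboot
chdev -l hdisk0 -a queue_depth=32 -P

# lsattr -El reads the ODM, so it already reports 32 even though the
# running kernel still has the old value
lsattr -El hdisk0 -a queue_depth
```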
Until recently, the only way to really verify whether these kinds of attributes were actually active was to check in KDB. Last year I even wrote a script that would check this in KDB and report differences (see post: Script to show if your AIX HBA / hdisk settings are actually in effect).
Very recent versions of AIX have made a major improvement to "lsattr": there is now a "-P" flag to show what is actually currently active. Chris Gibson did a very good write-up on this on his blog (see his post: Thanks kdb but lsattr's got me covered!).
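With the new flag, the running and ODM values can be compared directly (the device name here is hypothetical, and this needs one of the AIX levels mentioned below):

```
lsattr -Pl fcs0 -a num_cmd_elems   # running (active) value
lsattr -El fcs0 -a num_cmd_elems   # ODM (next-boot) value
```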
I wrote a script that will go through every device on your AIX server, compare "lsattr -Pl" (running config) with "lsattr -El" (ODM config), and show you all devices that have differences. If the script finds any differences, it will show which attributes differ between the ODM and the running configuration. If everything is in sync, there isn't any output.
Since the script relies on "lsattr -Pl", you must be at AIX 6.1 TL9 or AIX 7.1 TL3 or later for this script to work! If you are running an older AIX version, check out my previous script that uses KDB (Script to show if your AIX HBA / hdisk settings are actually in effect).
Here is some example output from the script that shows fcs0's num_cmd_elems and hdisk0's queue_depth attributes are changed in the ODM but not on the running/active system. The ODM configuration for num_cmd_elems is 199 but the running configuration is 200. Likewise, the queue depth ODM configuration is 19 but the running configuration is 20.
Here is the script (again, note that it needs AIX 6.1 TL9 or AIX 7.1 TL3 or later to work):
I recently got an email from Dan Aldridge with some information about a very handy AIX command, "chdef". I wasn't familiar with this command before, and it is super useful, so I thought I would write a quick post about it.