Brian Smith's AIX / UNIX / Linux / Open Source blog
Oftentimes you'll find a command line that works perfectly when you run it locally on a server but doesn't work when you run it remotely over SSH. Usually the problem is related to double quotes or backticks in the command. In this post, we will go over problems with double quotes, but the same issue applies to command lines with backticks in them. In this example, we are running a command locally on an HMC:
If I decide to run this command over SSH (perhaps through a script), it won't work:
What's going on here? Well, the problem is the way the quote marks are processed by the shell running the SSH command. We can see what is happening by changing the "ssh" part of the command to "echo". This will show what the shell is doing to the quote marks:
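For example, using a hypothetical lssyscfg command with a managed system name containing a space (a stand-in for the command in the original screenshots), substituting echo for ssh shows the local shell consuming the quotes:

```shell
# The double quotes are processed by the LOCAL shell before anything is
# sent to the remote side. lssyscfg and "My Frame" are hypothetical
# stand-ins for the command shown in the original post's screenshots.
echo lssyscfg -r lpar -m "My Frame" -F name
```

The quotes around My Frame are gone by the time echo (or ssh) sees the arguments, so the remote shell would split the frame name into two separate arguments and the command would fail.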
So what we need to do is tweak our "echo" command line until what is echoed back to the screen matches the original command that worked when run locally:
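With the inner quotes backslash-escaped (again using a hypothetical lssyscfg command as a stand-in for the original screenshots), the echo output matches the local command:

```shell
# Backslash-escaping makes the local shell pass the quote marks through
# intact, so the echoed line matches the command that works locally.
# (lssyscfg and "My Frame" are hypothetical stand-ins.)
echo lssyscfg -r lpar -m \"My Frame\" -F name
```

Swapping echo back out for ssh then gives the working remote form, e.g. `ssh hscroot@hmc lssyscfg -r lpar -m \"My Frame\" -F name` (where hscroot@hmc is a placeholder for your HMC user and host).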
Now that the command echoes back the exact command that works locally, it should also work over SSH:
Another option would have been to use single quotes; however, with single quotes you'll have problems if you are trying to use shell variables within the command line, which is very common when scripting something like this. This is why I prefer to use double quotes and just escape them. Without variables, this command with single quotes will work as well:
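A quick demonstration of why single quotes and shell variables don't mix (the lssyscfg command and names are hypothetical):

```shell
FRAME="My Frame"
# Single quotes: $FRAME is NOT expanded by the local shell, so the
# remote side would receive the literal string $FRAME:
echo 'lssyscfg -r lpar -m "$FRAME" -F name'
# Escaped double quotes: $FRAME IS expanded locally, and the inner
# quote marks still survive for the remote shell:
echo "lssyscfg -r lpar -m \"$FRAME\" -F name"
```

The first echo prints the dollar sign literally; the second prints the expanded frame name with the quotes intact, which is what you want when scripting.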
Oftentimes when searching for information on the internet you'll find a link to exactly what you are looking for... and then you click on the link and see this:
Here are a couple of ideas on how to still see the site...
#1 - Google Cache
If you found the link on a Google search, simply hit the little down arrow next to the link, and click "cache":
This will show Google's cache of the page even if the website is currently down. A banner at the top shows the date that Google cached the page:
#2 - Internet Archive Wayback Machine
Google Cache works great most of the time, but what if the dead link you are trying to get to isn't in Google's cache, or what if you need an older version of the website than what is in Google's cache? That is where the Internet Archive Wayback Machine comes in very handy.
Simply go to the Wayback Machine website and enter the URL of the dead link you are trying to get to into the search box.
You'll see which dates the Wayback Machine has snapshots of the site, and you can select one to see what the site looked like on that date. A bar at the top of the screen lets you switch between dates:
Note that generally all of the snapshots in the Wayback Machine are over 6 months old, but it has saved me many times by letting me find sites that disappeared from the web years ago.
I recently received an email from someone who said they needed to change the CPU pool on hundreds of LPARs, and they asked if I had any suggestions to make the process easier.
There are a couple of ways this could be automated. One option might be to create a script that would generate the commands needed to make the change. But what I would probably do in this instance is just use a spreadsheet and a specially crafted formula to generate the command needed to make the change.
Basically, you create a spreadsheet with four columns:
Column A: LPAR Name
Column B: Frame Name
Column C: Desired CPU Pool
Column D: Our special formula to generate the commands
The formula in Column D needs to be something like this for Row 2 of the spreadsheet:
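The original formula appeared as a screenshot. Something along these lines would do the job; note that the chhwres/mksyscfg syntax below is an assumption based on the post's description of the generated commands, so verify it against your HMC's documentation before using it:

```
="chhwres -r proc -m "&B2&" -o s -p "&A2&" -a shared_proc_pool_name="&C2&" ; mksyscfg -r prof -m "&B2&" -o save -p "&A2&" --force"
```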
This formula builds the command line by pulling the LPAR Name, Frame Name, and CPU Pool name out of columns A, B, and C in Row 2 (Row 1 being the header).
You then go to the bottom right of the formula cell until your mouse pointer turns into a cross, then click and drag the formula down into all of the cells below it in column D. Now you have your formula set up in Column D all the way down the spreadsheet, and you just need to fill in columns A, B, and C with your LPAR, Frame, and CPU Pool details. Then simply copy/paste the generated commands into your HMC to make the changes.
The commands generated by the formulas look like the lines below. Basically it is the command to DLPAR change the CPU pool, followed by a command to force overwrite the current profile with the running configuration so that the CPU pool change gets updated in the profile as well:
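The generated output was shown as a screenshot in the original post. As a sketch, a shell loop producing the same two commands per LPAR would look like this (the LPAR, frame, and pool names are made up, and the exact chhwres/mksyscfg syntax is an assumption based on the post's description, so check your HMC's documentation):

```shell
# Emit the same two commands the spreadsheet formula builds, for a
# couple of hypothetical rows: a DLPAR pool change followed by a
# forced save of the running configuration into the profile.
for ROW in "lpar01:Frame1:ProdPool" "lpar02:Frame1:DevPool"; do
    LPAR=${ROW%%:*}; REST=${ROW#*:}
    FRAME=${REST%%:*}; POOL=${REST#*:}
    echo "chhwres -r proc -m $FRAME -o s -p $LPAR -a shared_proc_pool_name=$POOL"
    echo "mksyscfg -r prof -m $FRAME -o save -p $LPAR --force"
done
```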
IBM recently released a draft Redbook covering the upcoming HMC Version 8 Release 8.8.1.
I've read through the Redbook, and here are the no nonsense highlights I noticed:
POWER5 servers won't be supported in HMC Version 8
Only POWER6, POWER7, and POWER8 servers will be supported. This caught me by surprise, and I am hoping that IBM will change this and end up supporting POWER5 on HMC Version 8 at some point in the future. If you still have POWER5 servers in your environment, make sure you let IBM know that you want POWER5 support on HMC Version 8.
Your old HMCs might not be compatible with HMC Version 8
You need a rack-mounted CR5 or later HMC, or a desktop C08 or later HMC, with at least 2 GB of memory to run HMC Version 8. This means people with 7042-CR4 or older HMCs will not be able to upgrade to HMC Version 8.
Running HMC Version 8 as a Virtual Machine still not supported
Totally absent from the Redbook draft is any mention of running HMC Version 8 as a virtual machine (under VMware, for example). This is disappointing because with the short-lived SDMC, IBM supported running it in a virtual environment. Hopefully this will change and IBM will one day support running the HMC as a virtual machine.
New Performance and Capacity Monitor
A very cool new feature in HMC Version 8 is an integrated performance and capacity monitor. This will graph information about CPU usage, memory usage, network throughput, and storage throughput. It will support POWER6 and later servers. In previous HMC versions we had to use third-party software like LPAR2RRD for this kind of functionality. This is a very cool feature and I'm looking forward to trying it out.
Further SR-IOV Support
HMC Version 8 will add further support for virtualizing adapters with SR-IOV. This is similar in concept to the old IVE (Integrated Virtual Ethernet) adapters in that it lets you take a single physical port and assign logical ports to multiple LPARs. SR-IOV works independently of the VIO server and doesn't require a VIO server at all. You can create up to 48 logical ports per physical adapter. It is high performing and also supports QoS (quality of service). However, the big drawback to SR-IOV is that it doesn't support Live Partition Mobility (LPM), suspend/resume, or remote restart. One possible way around this limitation is to assign an SR-IOV logical port to a VIO server and create a SEA adapter out of it, but I'm not sure of a practical scenario in which someone would do that.
You might need a multi-step upgrade process to get to HMC 8
You can only upgrade to HMC Version 8 from 7.7.8 (with MH01402) or from 7.7.9. So if you are running a version older than this, you'll need to do a multi-step upgrade: first to one of these levels, and then to HMC Version 8. Not a big deal, but something people need to be aware of so that they can get all the correct media needed for the upgrade and allocate enough time for a multi-step upgrade.
Partition Remote Restart enhancement
Remote Restart is a very cool feature that allows LPARs to automatically come back up on another frame in the event of an outage on the frame they were originally running on. This is super handy since you can't use LPM if the source server is down. Previously you could only enable Remote Restart on an LPAR at the time the LPAR was created. With HMC Version 8 this limitation has been removed, and it can now be enabled without having to re-create the LPAR. Awesome!
Other Miscellaneous Improvements
Here are some other improvements:
Post a comment if I missed any other big new features in HMC Version 8.
This post is about a script I wrote for building filesystems on AIX. It automates the process of creating logical volumes and filesystems, mounting them, setting user/group ownership, and setting permissions. It can be used to create large numbers of filesystems quickly, and it is also handy if you need to create the same filesystems across multiple servers.
Start by creating a CSV file based on this example/template (the first line is the header line). Simply copy and paste this in to a new file and name it with a .csv extension:
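The original template was shown as an image. A hypothetical layout consistent with the columns the post goes on to describe (the author's actual header and columns may well differ) could look like this:

```
VG,LV,LPs,MountPoint,Owner,Group,Perms,MountOptions,Log
datavg,lvapp01,8,/app01,appuser,staff,755,rw.cio,
datavg,lvapp02,4,/app02,appuser,staff,755,,
```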
Open up this CSV file in your favorite spreadsheet application (I'm using LibreOffice in this example, but Excel should work as well). Once in the spreadsheet, make changes to your CSV file to specify which filesystems you want to create:
The columns are pretty self-explanatory. The "Mount Options" column is optional (if you specify multiple mount options, separate them with periods, e.g. rbrw.cio.dio). The "Log" column is also optional (if you don't specify one, it will default to an existing log in the volume group).
Once you are done editing the file in the spreadsheet, save it in CSV format. It MUST be CSV to work. To make sure, transfer the file to your AIX server and "cat" the file; you should see something similar to this:
Now run the script and specify the CSV as a parameter. By default, the script doesn't make any changes or actually do anything at all other than show the commands that need to be run to create the filesystems:
Review the output to make sure everything looks good. If you want to actually run the commands generated, you can either redirect the output to a file and run that file as a script, or you can just run the scriptfs script and pipe it to "ksh" which will cause it to run the commands and actually create the filesystems:
When this ran, it created the logical volumes and filesystems, mounted them, changed the user/group ownership, and set the permissions.
Here is the script:
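The script itself was attached to the original post and isn't reproduced here. As a minimal sketch of the approach it describes, using a hypothetical CSV layout of VG,LV,LPs,MountPoint,Owner,Group,Perms,MountOptions,Log (the real scriptfs almost certainly differs), the core loop might look like:

```shell
#!/bin/sh
# Minimal sketch of the CSV-driven approach -- NOT the author's actual
# scriptfs script. Assumed column layout (header line first):
#   VG,LV,LPs,MountPoint,Owner,Group,Perms,MountOptions,Log
# A sample CSV is generated here so the sketch is self-contained; in
# practice you would point it at your own file.
cat > /tmp/fs.csv <<'EOF'
VG,LV,LPs,MountPoint,Owner,Group,Perms,MountOptions,Log
datavg,lvapp01,8,/app01,appuser,staff,755,rw.cio,
EOF

# Skip the header, then print (not run) the AIX commands for each row.
tail -n +2 /tmp/fs.csv | while IFS=, read -r VG LV LPS MP OWNER GROUP PERMS OPTS LOG; do
    [ -z "$VG" ] && continue
    echo "mklv -y $LV -t jfs2 $VG $LPS"
    if [ -n "$OPTS" ]; then
        # Periods in the mount-options column become commas for crfs
        echo "crfs -v jfs2 -d $LV -m $MP -A yes -a options=$(printf '%s' "$OPTS" | tr '.' ',')"
    else
        echo "crfs -v jfs2 -d $LV -m $MP -A yes"
    fi
    echo "mount $MP"
    echo "chown $OWNER:$GROUP $MP"
    echo "chmod $PERMS $MP"
done
```

Printing the commands first and then piping the output to ksh once you've reviewed it matches the dry-run-then-execute workflow described above.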