Brian Smith's AIX / UNIX / Linux / Open Source blog
|Modified on by brian_s|
Update 10/24/13: See also Version 2 of script to show recent Error Report entries on AIX
Here is a script that will show you recent Error Report (errpt) entries on AIX. As an argument to the script you specify the number of minutes you want to go back, and the script will only show errpt entries that have occurred within that many minutes from now.
This can be helpful as a standalone utility, or as part of a monitoring script that would automatically notify you if a new errpt entry came up within the last few minutes.
For example, to only show errors that have occurred within the last 15 minutes:
Or the last hour:
Or the last day:
Here is a screenshot:
One common mistake I have personally made and seen others make while shell scripting is setting a variable to the contents of a file or the output of a multiline command, and then echoing the variable to process it further with grep, awk, a while loop, etc. Depending on how you do this, you might get some unexpected results, with missing newlines and spaces in your output.
In this example, we have a file named "testfile" that contains 5 lines. I then set the testvar variable to contain the contents of the file by running "testvar=`cat testfile`". However, when I attempt to "echo $testvar" all the lines of the file are shown on one line! If I try to grep for "test line 2" it still sees everything on one line.
The same problem happens when trying to set testvar to be the output of "ls -al" and then echoing the variable:
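A quick way to reproduce both cases:

```shell
# A 5-line file collapses onto one line when echoed unquoted:
printf 'test line 1\ntest line 2\ntest line 3\ntest line 4\ntest line 5\n' > testfile
testvar=`cat testfile`
echo $testvar
# test line 1 test line 2 test line 3 test line 4 test line 5

# The same collapse happens with command output:
testvar=$(ls -al)
echo $testvar    # the whole listing runs together on one line
```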
So what is going on here? It all has to do with how command line parameters are parsed by the shell and provided to the echo command.
Here is an example that illustrates what is happening:
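For instance (the extra spacing here is deliberate):

```shell
echo hello      world          again
# hello world again
```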
Notice how the echo command removed all the extra spaces out of what I had typed? It is doing this because the shell is parsing each word as an argument and providing the arguments to the echo command:
As part of parsing the arguments, the shell removes all of the extra spaces before they are passed to the echo command. This is the same thing that was happening when our newlines were removed in the first examples: the shell was parsing everything as arguments and collapsing all of the extra newlines and spaces.
How to fix this? It is very easy... Just use double quotes so that a single parameter is passed to the echo command:
By putting quotes around the text, it causes the shell to pass the entire text within the quotes as a single argument to echo which preserves the spaces.
This same technique works when echoing a variable that contains the output of a command or the contents of a file:
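For example, continuing with the testfile idea (recreated here so the snippet stands alone):

```shell
printf 'test line 1\ntest line 2\ntest line 3\n' > testfile
testvar=`cat testfile`
echo "$testvar"
# test line 1
# test line 2
# test line 3
echo "$testvar" | grep "test line 2"
# test line 2
```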
Hopefully this post helped you understand how commands are parsed by the shell and why you can sometimes see echo commands unexpectedly removing spaces and newlines.
I have always really liked the AIX sddpcm multipathing software. It is easy to use, and easy to gather information from. However, one thing I have wanted to do in the past is run a command that just shows the hdisk device and the corresponding SAN serial number.
The closest thing SDDPCM has to this is the "pcmpath query device" command which shows a bunch of information for each SAN LUN:
If you want to filter this down and only show the hdisk device and serial number, it can be a little tricky. Normally something like this would be as simple as a grep and an awk to print the fields you want; however, in this case the information we want to pull out is on two different lines.
We can do an "egrep" and get all the lines that start with "DEV" or "SERIAL":
This is closer to what we want, but still has a lot of extra information and doesn't have the hdisk name and serial number on the same line.
The trick to fix this is the "paste" command. If we add another pipe on the end of the command, and pipe to "paste - -" it will merge every other line together:
This is very close to what I originally wanted. Now all we need to do is add an awk command at the end to only print the fields we want:
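Putting it all together, using a simulated pcmpath output so the example is self-contained (the awk field numbers, $5 and $12, depend on the exact column layout of your SDDPCM version, so verify them against real output):

```shell
# Simulated "pcmpath query device" output for two LUNs:
pcmpath_sample() {
cat <<'EOF'
DEV#:   0  DEVICE NAME: hdisk0  TYPE: 2107900  ALGORITHM:  Load Balance
SERIAL: 75065711000
DEV#:   1  DEVICE NAME: hdisk1  TYPE: 2107900  ALGORITHM:  Load Balance
SERIAL: 75065711001
EOF
}

# egrep keeps the DEV#/SERIAL lines, paste merges each pair of lines,
# and awk prints just the hdisk name and the serial number:
pcmpath_sample | egrep "^DEV#|^SERIAL" | paste - - | awk '{print $5, $12}'
# hdisk0 75065711000
# hdisk1 75065711001

# On a real system:
#   pcmpath query device | egrep "^DEV#|^SERIAL" | paste - - | awk '{print $5, $12}'
```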
If you are not familiar with "expect", it is a script/programming language that is designed to automate interactive processes. For example, suppose you need to install a piece of software on many UNIX/Linux servers. The installation program runs from the command line, but must be run interactively. When run, it interactively asks the user for several pieces of information, and then installs. With expect, you could write a script to automate this task so that when the installation program prompts for information expect supplies the information, essentially simulating a user and automating the task.
Expect allows you to turn a normally interactive-only process into a completely non-interactive, automated task.
The author of the expect language, Don Libes, wrote the definitive book on expect. The book is called "Exploring Expect" by O'Reilly. I would highly recommend the book to anyone wanting to learn expect.
Here is a great picture from the "Exploring Expect" book that sums up the power of Expect:
In this posting I will cover when you should use expect and when you should avoid it.
When approaching a problem, my first rule of thumb when it comes to expect is to avoid using expect unless it is the only solution. Why? Expect is an amazing tool, but it can be complicated to use and very fragile. The basic premise of expect is to set it up to look for certain strings of text ("expect"ing them), and when it sees certain text, respond in a certain way. For example, you could write a script that expects the text "Specify directory to install application to: " and, when it sees this, types back "/opt/software/". However, if in the next version of the software the text of the installer changes slightly (e.g. "Specify directory location to install application to: "), your expect program will no longer see what it is looking for and will fail to work. If there is a different way to get something done other than using an expect script, it is usually a better option in my experience. It would be possible to write an expect program to automate opening a file in vi, editing it, and then saving and exiting vi. But it would be much easier and more reliable to use a tool such as sed, awk, or perl to edit the file. Make sure you are using the right tool for the job.
My second rule of thumb is to not use expect to automate tasks that deal with passwords. When you look around for expect information and examples, a lot of them deal with automating things like SSH, SFTP, SCP, etc. For example, the Wikipedia page on Expect lists 4 example scripts for using expect. They all deal with automating logging in to a service with password authentication (telnet, ftp, sftp, and ssh). I would not recommend using expect to automate anything like this. There is a very good reason why tools like SFTP and SSH don't allow you to script using a password without a tool like expect: it is a very bad idea! The first problem is that you generally need to include the clear-text password in your expect script. If anyone gets access to the script, they have access to the password as well. The second big problem when using expect with passwords is the risk that expect will "type" in the password at the wrong time. For example, if you have an expect script set up to expect certain things and then send the password, and something goes wrong, the script might send the password too early or too late. The password might then end up somewhere inappropriate, visible to other users, or in a shell history file. The bottom line is: if you need to automate running remote commands or copying files around, use SSH keys. SSH keys are much safer than passwords for a variety of reasons, and there are several things you can do to make them a good option for automated tasks. If you MUST use expect to automate a password-related task, one method to help the situation would be to have the expect script prompt for the password when beginning the task and have the user type it in each time. This way the password is not stored in the script file.
Another password-related example of what NOT to do with expect: automate changing passwords. Expect even includes an example script with it named "passmass" that will change your password on multiple servers. From a security perspective I think this is a really, really bad idea for the reasons I specified in the previous paragraph. The right tool for this kind of job is the "chpasswd" command. The chpasswd utility even allows you to specify a password hash ("encrypted" password) when setting a user's password, which makes it more secure to script. chpasswd isn't perfect, but in my opinion it is a much better option than expect when it comes to automating password changes.
So when should you consider using expect? You should think about expect anytime you have a manual task that needs to be repeated and that only provides an interactive interface to the user. We already covered the example of an interactive software installation program. Another example is any proprietary software that forces you to go through a text-based menu to do something. Using expect, you could write a script to navigate the menu and automate the task.
Another extremely good way to use expect is when you need to automate an appliance or other closed system that doesn't have the ability to be scripted. To do this, you use expect on a Linux/UNIX machine to connect to the appliance or closed system, and then complete a task. For example, you could write an expect script that would connect to a Cisco switch and run a series of commands on the switch.
Expect is also a good option when creating test cases. If you need to routinely test software functionality then expect might make your life easier.
You can also use expect not to fully automate tasks, but just to assist with manual tasks. This is because expect allows you to partially automate a task while still leaving parts to be manually completed by a real person. An example of this is the AIX command "mkdvd", which burns mksysb images onto DVDs. When you run this command it writes the first DVD, and then if needed it will prompt you to insert additional DVDs. With expect, you could write a script that would email you or page you whenever it was time to put in the next DVD or when the mkdvd command was completed. Such a script would just need to be customized with the commands used to email or page you.
An mkdvd expect script like this helps with a manual process, and that is not something you could do without a tool like expect.
Please post comments with some creative examples of when you have used expect, or horror stories of other people using expect when they shouldn't have :)
2012 was a great year. I really enjoyed getting more involved with the online AIX community and making some contributions. Here are some of the AIX related projects I released last year. If you haven't checked them out - take a look. They might save you some time and make your life easier. If you have any suggestions on these or ideas for future projects I would love to hear from you.
prdiff - Shows differences between LPAR profiles and their running configs. You should run this before you shut down any LPARs to ensure the LPAR profile is in sync with the running configuration. http://prdiff.sourceforge.net/
pslot - Validates and optionally visualizes Virtual Fibre Channel and Virtual SCSI slots. If you have a medium to large PowerVM environment this will almost certainly find issues with your configuration that you want to fix. It can optionally produce nice diagrams of your virtual slot layouts as well. http://pslot.sourceforge.net/
EZH - The Easy HMC command line interface. If you would like to use the HMC command line interface more instead of the HMC GUI but you are frustrated with the native HMC command line interface then EZH is for you. It provides a very simple command line interface to the most common HMC functions and also provides a lot of new functionality not available in the native HMC command line interface. http://ezh.sourceforge.net/
npivgraph - This utility produces detailed visual diagrams of your PowerVM NPIV environment. This is extremely useful for troubleshooting, understanding, and architecting NPIV systems. Plus the graphs will totally impress your boss! http://npivgraph.sourceforge.net/
graphlvm - This one generates visual diagrams of your physical volumes, volume groups, logical volumes, and filesystems. Makes it very easy to see where your data is stored and to understand how everything relates to each other in the LVM. http://graphlvm.sourceforge.net/
vethgraph - This utility visualizes your Virtual Ethernet VLANs. Again, these graphs will totally impress your boss and make it easier for you to troubleshoot the environment. http://vethgraph.sourceforge.net/
This posting is all about return codes, the test command, and the if statement.
Here is a video covering this material as well:
Every command you run in a Linux/UNIX environment has a "return code" when the command completes. A zero (0) return code generally means the command was successful, and a non-zero return code generally means failure. You can check the special $? variable to see the return code of the last command that was run.
Here are some examples of commands that return a zero return code (aka success):
$ ls testfile #Listing a file that exists
Here are some examples of commands that have a non-zero return code (aka failure)
$ ls aoeu #Listing a file that doesn't exist
The exclamation point, "!", reverses the return code. If you run "! false", the return code is 0. If you run "! true", the return code is 1. This is useful when you want to test for a situation that is not true.
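For example:

```shell
true;    echo $?   # 0
false;   echo $?   # 1
! false; echo $?   # 0 (reversed)
! true;  echo $?   # 1 (reversed)
```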
The test Command:
You don't see the command "test" in scripts very often, but you frequently see statements such as: if [ "$var" == "value" ]; then ... The bracket "[" is just a shortcut for referring to the test command. The if [ "$var" == "value" ]; then ... is equivalent to if test "$var" == "value"; then ... (Strictly speaking, portable test syntax uses a single "=" for string comparison; "==" works in shells like bash and ksh.) So if you are trying to remember what options you have for comparisons, just run "man test" to list them.
The test command plays by the same rules as other commands, it evaluates the statement and exits with a return code of either zero for success (the statement in question was true), or non-zero for failure (the statement in question was false). Here are some examples, using both the "test" syntax and the bracket "[" syntax:
$ test 4 -lt 6 #Using "test" syntax, is 4 less than 6?
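And the same checks with both syntaxes side by side:

```shell
test 4 -lt 6; echo $?        # 0 (true)
[ 4 -lt 6 ];  echo $?        # 0 (true)
[ 4 -gt 6 ];  echo $?        # 1 (false)
test "abc" = "abc"; echo $?  # 0 (the strings match)
```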
The "if" statement
There is a misconception about the "if" statement in shell scripting. Many people incorrectly think that you always have to do a bracket ("[") comparison when using the if statement (i.e. if [ 5 -lt 6 ]; then ...). In fact, the if statement is much more flexible than that. The if statement runs whatever command is specified, and if the command's return code is zero, the if statement's code block is run; if the return code is non-zero, the code block is skipped.
A lot of scripts use something like this:
A simpler way to do it is just to put the grep command right in the if statement line. The following is simpler and easier to read, and does the exact same thing:
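For example (grepping /etc/passwd for the root user purely as an illustration):

```shell
# The long-winded pattern many scripts use:
grep -q "^root:" /etc/passwd
if [ $? -eq 0 ]; then
    echo "root is in /etc/passwd"
fi

# Simpler: let if evaluate grep's return code directly
if grep -q "^root:" /etc/passwd; then
    echo "root is in /etc/passwd"
fi
```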
I've always wanted to be able to write and run scripts directly on the HMC instead of having to set up SSH keys and run scripts from another server. Using SSH keys from another server is OK, but it adds complexity, and it can really slow things down if you need to run a lot of HMC commands (for example, when iterating over every LPAR in every frame and running multiple commands per LPAR). Also, at some companies it might be difficult to get permission to set up SSH keys for the HMC.
I've found a method that allows you to write and run scripts directly on the HMC, and it is all within the HMC restricted shell and using only HMC supported commands.
To write and edit the scripts, use the "rnvi -f" HMC command (for example "rnvi -f testscript"). This will open a vi editor that allows you to create or edit any file in your home directory. Don't be alarmed if you see messages from rnvi about "Error: stderr: Bad file descriptor": according to the rnvi manual page this is normal.
Your script can run any HMC commands. Your script will still be in the restricted shell, but there is still quite a bit that you can do.
To run the script, use the "source" command. For example "source testscript". This will run the script in your current shell.
To test this method out, I created a quick example script using the command "rnvi -f testscript":
This example script loops through every managed system and every LPAR. For each LPAR it gets the CPU, Virtual CPU, and Memory settings from the profile and the OS version from lssyscfg. It then prints it out in a formatted output (this HMC only has 1 managed frame so the output is pretty brief):
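A rough sketch of that kind of script (the lssyscfg attribute names here come from the standard HMC CLI, but verify them against your HMC level; an LPAR with multiple profiles would also need a profile_names filter):

```shell
# For every managed system and every LPAR, print the profile's
# CPU entitlement, virtual CPUs, and memory, plus the OS version.
report_lpars() {
    for sys in $(lssyscfg -r sys -F name); do
        for lpar in $(lssyscfg -r lpar -m $sys -F name); do
            prof=$(lssyscfg -r prof -m $sys --filter lpar_names=$lpar \
                -F desired_proc_units,desired_procs,desired_mem)
            os=$(lssyscfg -r lpar -m $sys --filter lpar_names=$lpar -F os_version)
            printf "%-15s %-15s %-20s %s\n" "$sys" "$lpar" "$prof" "$os"
        done
    done
}

report_lpars
```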
If you are running HMC 188.8.131.52 SP1 or similar versions, it looks like there might be an issue with the "rnvi" command. If you get the message "nvi: error while loading shared libraries: libdb-4.5.so: cannot open shared object file: No such file or directory" when you try using rnvi, it is caused by a missing library in the directory rnvi chroots to. The fix is "cp /usr/lib/libdb-4.5.so /opt/hsc/rnvi/lib/", but this must be run with root permissions... If you run into this, you probably want to open a ticket with IBM and see what they recommend.
If you like this, you might also be interested in my previous posting Easy console access from the HMC command line, which uses this same method to create an alias that allows you to very easily open a virtual console on LPARs from the HMC command line.
The "ed" editor has for the most part been forgotten as a piece of UNIX history. But it can still be very handy when scripting file edits, especially when the script must work across multiple UNIX variants.
ed is a "line editor". What this basically means is that it was designed to work on a teletype (a keyboard with a printer). That makes it challenging to use as an interactive editor, but perfect for scripting. ed is part of the Single UNIX Specification, so it should exist on any system claiming to be UNIX, which makes it a good utility to rely on if your scripts are cross-platform.
Many systems such as Linux support the "sed" command with the "-i" flag, which specifies that the file should be edited in place. Unfortunately, systems such as AIX don't support an in-place edit mode with sed, so you either have to create temporary files (ugly in my opinion) or use something different, such as ed.
All ed commands specify a range of lines (or a "," for the entire file), and then a command. For example, within ed you can display lines 1 through 5 with the command "1,5l", or you can display the entire file with ",l".
Here are some example command lines to edit files that can be used from scripts.
First off, we will edit the "sshd_config" file and replace the line "#Port 22" with "Port 222". To do this we use printf to pipe the commands we want to run into ed, separating each ed command with a "\n" newline.
$ cat sshd_config | grep "Port "
So in this example we are telling ed to run ",s/^#Port 22$/Port 222/", followed by "w" and "q". The leading comma means apply the command to all lines. The "s" is the substitute command. Between the first pair of slashes, "^#Port 22$" is the pattern to search for: a line that begins (^) and ends ($) with "#Port 22". Between the second pair of slashes, "Port 222" is the replacement text. The "w" means write the file, and the "q" means quit.
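The full command looks like this (shown against a small stand-in sshd_config so the snippet is self-contained; ed's -s flag just suppresses the byte counts it normally prints):

```shell
printf '#Port 22\n#Protocol 2\n' > sshd_config    # stand-in file for the demo
printf ',s/^#Port 22$/Port 222/\nw\nq\n' | ed -s sshd_config
grep "Port" sshd_config    # now shows: Port 222
```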
Here is an example of adding a line to a file. Let's say we want to add the comment line "Updated port to 222 on 8/19/12" to the top of the sshd_config file:
The "1" means to go to line 1, the "i" means insert mode, the "#Updated port to 222 on 8/19/12" is the text we are adding, the "." on the line by itself means to go back to command mode, the "w" is write, and the "q" is quit.
You can also easily delete lines. Let's suppose, for example, we decide to delete the "Port 222" line from the file:
$ cat sshd_config | grep ^Port
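The deletion itself can be done with ed's "d" command; "g/^Port 222$/d" deletes every line matching the pattern (shown against a stand-in file so the snippet is self-contained):

```shell
printf 'Port 222\nProtocol 2\n' > sshd_config    # stand-in file
printf 'g/^Port 222$/d\nw\nq\n' | ed -s sshd_config
cat sshd_config    # only "Protocol 2" remains
```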
As you can see ed is a powerful utility to edit files from scripts quickly and easily.
One of the coolest features of AIX's smit command is the ability to hit F6 and see the command that was run. I use this all of the time to see how something is done so that the next time I can automate it rather than going through the smit menus.
The main problem with the "show command" is that the command it shows usually isn't straightforward or easy to understand. A lot of the time when you hit F6 you will see something like this:
But how do you decode this to see the simple view of what is really happening? It is actually quite easy: simply take the "show command" output and create a new file from it. Find the command that is actually doing the work, which is often the last line of the function, and put an "echo" in front of it. In this example, it's the "crfs" line.
So the new file we created would look like this:
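Here is a simplified, hypothetical stand-in for that file (real F6 output is usually a wrapper function with much more shell plumbing, and the crfs options shown are illustrative only); with the "echo" added, running the file prints the underlying crfs command instead of executing it:

```shell
# Hypothetical smit "show command" output with echo added in front
# of the crfs line so it prints instead of runs:
smit_cmd() {
    # ...smit's variable setup elided...
    echo /usr/sbin/crfs -v jfs2 -g rootvg -m /testfs -a size=1G -A yes
}
smit_cmd
# /usr/sbin/crfs -v jfs2 -g rootvg -m /testfs -a size=1G -A yes
```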