Brian Smith's AIX / UNIX / Linux / Open Source blog
Oftentimes you'll find a command line that works perfectly when you run it locally on a server, but doesn't work when you run it remotely over SSH. Usually the problem is related to double quotes or backticks in the command. In this post we will go over problems with double quotes, but the same issue applies to command lines with backticks in them. In this example, we are running a command locally on an HMC.
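The exact command doesn't matter; as a hypothetical stand-in (the managed system and LPAR names are made up), here is a command where the double-quoted, multi-word grep pattern is the part to watch:

$ lssyscfg -r lpar -m Server1 -F name,state | grep "Not Activated"
lpar2,Not Activated
lpar3,Not Activated

Run locally on the HMC, the double quotes keep "Not Activated" together as a single grep pattern.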
If I decide to run this command over SSH (perhaps through a script), it won't work:
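$ ssh hscroot@hmc "lssyscfg -r lpar -m Server1 -F name,state | grep "Not Activated""
grep: Activated: No such file or directory

The inner quotes collided with the outer quotes, so the remote grep was handed "Not" as its pattern and "Activated" as a file name.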
What's going on here? Well, the problem is the way the quote marks are processed by the shell running the SSH command. We can see what is happening by changing the "ssh" part of the command to "echo". This will show what the shell is doing to the quote marks:
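$ echo hscroot@hmc "lssyscfg -r lpar -m Server1 -F name,state | grep "Not Activated""
hscroot@hmc lssyscfg -r lpar -m Server1 -F name,state | grep Not Activated

The shell has stripped the quote marks around "Not Activated" away entirely before running anything.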
So what we need to do is tweak our "echo" command line until what is echoed back to the screen matches the command that worked when run locally. Escaping the inner quotes with backslashes gets us there:
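$ echo hscroot@hmc "lssyscfg -r lpar -m Server1 -F name,state | grep \"Not Activated\""
hscroot@hmc lssyscfg -r lpar -m Server1 -F name,state | grep "Not Activated"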
Now that the command echoes back the exact command that works locally, it should also work over SSH:
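$ ssh hscroot@hmc "lssyscfg -r lpar -m Server1 -F name,state | grep \"Not Activated\""
lpar2,Not Activated
lpar3,Not Activated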
Another option would have been to use single quotes; however, with single quotes you'll have problems if you are trying to use shell variables within the command line, which is very common when scripting something like this. This is why I prefer to use double quotes and just escape them. Without variables, this command with single quotes will work as well:
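$ ssh hscroot@hmc 'lssyscfg -r lpar -m Server1 -F name,state | grep "Not Activated"'
lpar2,Not Activated
lpar3,Not Activated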
Oftentimes system administrators need to copy a file to remote servers with root authority. One example might be pushing out an updated resolv.conf file to all servers. However, remote root logins are often disabled, which makes it difficult to copy files to remote servers with root authority.
Here is a quick and easy method to copy files using root authority even if remote root logins are disabled. For this method to work, you will need the following setup: a regular (non-root) user that can SSH to each remote server, and sudo configured so that user can run commands as root without being prompted for a password (the SSH session's standard input will be carrying the file, so an interactive sudo prompt gets in the way).
Once this is done, you can use a combination of "cat", "sudo", and "ssh" to easily push out a file to multiple servers with root authority.
Here is an example of pushing out a new resolv.conf file to all servers listed in the "serverlist" file:
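$ for server in `cat serverlist`; do cat /etc/resolv.conf | ssh $server 'sudo sh -c "cat > /etc/resolv.conf"'; done

(This loop is a sketch: it assumes "serverlist" holds one hostname per line and the passwordless sudo setup described above, and it pushes out our own local /etc/resolv.conf.)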
It works by catting the file to be transferred and piping it to the SSH connection. Within the SSH connection, we sudo to root and then use cat to write the standard input out to the file.
This will also work with binary files - not just text files.
I had a new article published on IBM developerWorks today: Getting started with Nmap for system administrators
The latest development versions of PuTTY have an awesome new feature called "Share SSH Connections" (connection sharing).
I did a video that covers the basics of this feature and shows a demo of it in action:
Check out my latest article on Power IT Pro, "Compare Files Easily with the comm Command": http://poweritpro.com/system-admin/compare-files-easily-comm-command
Being careful not to shoot yourself in the foot when cleaning up users and home directories on AIX and Linux
It is possible on Linux or AIX to have users that share a home directory. For example, you might have user1, user2, and user3 all have their home directories set to /sharedhome.
Anytime you are deleting users and home directories, you need to keep this in mind. You might only need to delete "user1" but want to leave "user2" and "user3" unaffected. But if you delete user1 and its home directory, you might end up deleting the shared home directory, which would have a big impact on "user2" and "user3".
Let's look at this situation:
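$ grep sharedhome /etc/passwd
user1:!:2001:100::/sharedhome:/usr/bin/ksh
user2:!:2002:100::/sharedhome:/usr/bin/ksh
user3:!:2003:100::/sharedhome:/usr/bin/ksh

(The entries above are illustrative; the point is that all 3 users have /sharedhome as their home directory.)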
AIX and Red Hat both have a "userdel" command that will optionally erase the user's home directory as well (the "-r" flag).
On most versions of AIX, if you did a "userdel -r" on any one of the 3 users, it would delete the /sharedhome directory, impacting the remaining 2 users that you didn't want to affect.
On Red Hat Linux, "userdel" tries to be smarter, and verifies that the owner of the home directory matches the user that is being deleted. Thus, if you did a "userdel -r user1" it would happily wipe out /sharedhome, but if you did a "userdel -r user2" or "userdel -r user3" it wouldn't delete /sharedhome because the owner of the directory doesn't match the user being deleted.
Here is a one-liner that will do a better job checking to see if a given directory is a shared home directory between multiple users:
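$ awk -F: -v dir=/sharedhome '$6 == dir { n++ } END { if (n > 1) print dir " is a SHARED home directory used by " n " users"; else print dir " is not a shared home directory" }' /etc/passwd

(This awk version is a sketch along those lines: it counts how many /etc/passwd entries have the directory set as their home directory.)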
Change "/sharedhome" to whatever directory you would like to check. If this comes back and says it is a shared home directory, then you need to do more research before attempting to delete the home directory.
So anytime you are cleaning up users and home directories, always keep in mind that users might have shared home directories and that under some circumstances AIX and Linux will wipe out these shared home directories which might affect other users on the system.
One common mistake I have personally made, and seen others make, while shell scripting is setting a variable to the contents of a file or the output of a multiline command, and then echoing the variable to process it further with grep, awk, a while loop, etc. Depending on how you do this, you might get unexpected results because newlines and spaces can go missing from your output.
In this example, we have a file named "testfile" that contains 5 lines. I then set the testvar variable to contain the contents of the file by running "testvar=`cat testfile`". However, when I attempt to "echo $testvar", all the lines of the file are shown on one line! If I try to grep for "test line 2", it still sees everything on one line.
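Here is what that session looks like:

$ cat testfile
test line 1
test line 2
test line 3
test line 4
test line 5
$ testvar=`cat testfile`
$ echo $testvar
test line 1 test line 2 test line 3 test line 4 test line 5
$ echo $testvar | grep "test line 2"
test line 1 test line 2 test line 3 test line 4 test line 5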
The same problem happens when trying to set testvar to be the output of "ls -al" and then echoing the variable:
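$ testvar=`ls -al`
$ echo $testvar
total 8 drwxr-xr-x 2 brian staff 256 Jan 10 09:15 . drwxr-xr-x 14 brian staff 448 Jan 10 09:15 .. -rw-r--r-- 1 brian staff 60 Jan 10 09:15 testfile

(The directory listing is illustrative.) Instead of the nicely formatted multiline ls output, everything is mashed onto one line.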
So what is going on here? It all has to do with how command line parameters are parsed by the shell and provided to the echo command.
Here is an example that illustrates what is happening. Watch what echo does when I type extra spaces between words:
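$ echo this    text    has    extra    spaces
this text has extra spaces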
Notice how the echo command removed all the extra spaces from what I had typed? It is doing this because the shell parses each word as a separate argument and provides those arguments to the echo command.
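A tiny helper script (hypothetical, named showargs.sh here) makes the argument splitting visible:

$ cat showargs.sh
#!/bin/sh
# Print each argument on its own line to show how the shell split the input
for arg in "$@"; do
    echo "arg: $arg"
done
$ sh showargs.sh this    text    has    extra    spaces
arg: this
arg: text
arg: has
arg: extra
arg: spaces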
As part of parsing the command line, all of the extra spaces are removed by the shell before the arguments are handed to echo. This is the same thing that was happening when our newlines were removed in the first examples: the shell parsed everything as arguments and threw away the extra newlines and spaces in the process.
How to fix this? It is very easy... Just use double quotes so that a single parameter is passed to the echo command:
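$ echo "this    text    has    extra    spaces"
this    text    has    extra    spaces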
By putting quotes around the text, it causes the shell to pass the entire text within the quotes as a single argument to echo which preserves the spaces.
This same technique works when echoing a variable that contains the output of a command or the contents of a file:
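$ testvar=`cat testfile`
$ echo "$testvar"
test line 1
test line 2
test line 3
test line 4
test line 5
$ echo "$testvar" | grep "test line 2"
test line 2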
Hopefully this post helped you understand how commands are parsed by the shell and why you can sometimes see echo commands unexpectedly removing spaces and newlines.
Check out my latest article over at Power IT Pro: "Understanding SSH Authentication Key Basics", http://poweritpro.com/security/understanding-ssh-authentication-key-basics
Many people are not aware that you can specify multiple file names on the command line for most commands that manipulate files.
For example, if there were several files I wanted to change permissions on, I can list all the file names on a single "chmod" command line:
$ chmod 400 /tmp/file1 /home/file5 /etc/testfile
This is more efficient than running 3 separate commands:
$ chmod 400 /tmp/file1
$ chmod 400 /home/file5
$ chmod 400 /etc/testfile
This same technique works for almost any command that deals with files, such as: ls, chown, chgrp, grep, cat, which, tail, etc.
Understanding the concept by understanding wildcards
Most UNIX/Linux admins know that you can run a command such as chmod 400 * with the * wildcard. When you do this, the shell expands the * wildcard into a list of the file names in the directory before it ever runs the chmod command, and then passes that list to chmod as arguments. So when you do "chmod 400 *", the chmod command doesn't know that you specified a * wildcard; all it sees is the list of file names that the shell provided as arguments. This can be illustrated by running this command:
$ echo ls -al *
ls -al file1 file2 file3 file4 file5 file6
You can see that when I ran the command echo ls -al *, the shell translated the * into the list of file names. So when you run ls -al *, the command that actually ends up being run is ls -al file1 file2 file3 file4 file5 file6.
Moving and copying files
If you need to move or copy multiple files into a directory, you can easily do it in a single command. For example, if you wanted to back up several files into the /tmp directory, you can run a command such as:
$ cp /etc/passwd /etc/group /etc/resolv.conf /tmp
This will copy the 3 files into the /tmp directory.
When you stop and think about it, this makes perfect sense. If you were to run a command such as mv * /tmp, then as we have already covered, the shell changes the * wildcard into the list of file names in the directory before ever calling mv. Here is another example using the echo command to show what is really being run:
$ echo mv * /tmp
mv file1 file2 file3 file4 file5 file6 /tmp
Killing processes and editing files with vi
The kill command supports specifying multiple PIDs on a single command line, e.g.:
$ kill 3419 456 532
Even the vi command supports specifying multiple files to edit on a single command line:
$ vi file1 file2 file3
Once in vi, you can run ":n" to move to the next file.
But Don't Go Overboard
One thing to keep in mind is that there is a limit to how long your command line can be. If you start running into this limit, you need to start looking at utilities such as xargs, which can help.
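For example, xargs can read a huge list of file names on standard input and run chmod on them in batches that stay under the limit (the paths here are just for illustration):

$ find /data -type f -name "*.log" | xargs chmod 400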
If you are not familiar with "expect", it is a script/programming language that is designed to automate interactive processes. For example, suppose you need to install a piece of software on many UNIX/Linux servers. The installation program runs from the command line, but must be run interactively. When run, it interactively asks the user for several pieces of information, and then installs. With expect, you could write a script to automate this task so that when the installation program prompts for information expect supplies the information, essentially simulating a user and automating the task.
Expect allows you to turn a normally interactive-only process into a completely non-interactive, automated task.
The author of the expect language, Don Libes, wrote the definitive book on expect. The book is called "Exploring Expect" by O'Reilly. I would highly recommend the book to anyone wanting to learn expect.
There is a great picture in the "Exploring Expect" book that sums up the power of Expect.
In this posting I will cover when you should use expect and when you should avoid it.
When approaching a problem, my first rule of thumb when it comes to expect is to avoid using expect unless it is the only solution. Why? Expect is an amazing tool, but it can be complicated to use and very fragile. The basic premise of expect is to set it up to look for certain strings of text ("expect"ing them) and to respond in a certain way when it sees them. For example, you could write a script that expects the text "Specify directory to install application to: " and, when it sees this, types back "/opt/software/". However, if in the next version of the software the text of the installer changes slightly (i.e. "Specify directory location to install application to: "), your expect program will no longer see what it is looking for and will fail to work.

If there is a way to get something done other than an expect script, it is usually the better option in my experience. It would be possible to write an expect program to automate opening vi, editing a file, and then saving and exiting vi. But it would be much easier and more reliable to use a tool such as sed, awk, perl, etc. to edit the file. Make sure you are using the right tool for the job.
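As a sketch, the installer example above might look like this (the installer name and prompt text are hypothetical):

$ cat autoinstall.exp
#!/usr/bin/expect
# Hypothetical sketch: drive an interactive installer non-interactively
spawn ./install.sh
expect "Specify directory to install application to: "
send "/opt/software/\r"
expect "Proceed with installation? (y/n): "
send "y\r"
expect eof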
My second rule of thumb is to not use expect to automate tasks that deal with passwords. When you look around for expect information and examples, a lot of them deal with automating things like SSH, SFTP, SCP, etc. For example, the Wikipedia page on Expect lists 4 example scripts, and they all deal with automating logging in to a service with password authentication (telnet, ftp, sftp, and ssh). I would not recommend using expect to automate anything like this. There is a very good reason why tools like SFTP and SSH don't allow you to script a password without a tool like expect: it is a very bad idea!

The first problem is that you generally need to include the clear text password in your expect script; if anyone gets access to the script, they have access to the password as well. The second big problem is the risk that expect will "type" in the password at the wrong time. If something goes wrong, the script might send the password too early or too late, and the password might end up somewhere inappropriate, visible to other users, or in a shell history file.

The bottom line is, if you need to automate running remote commands or copying files around, use SSH keys. SSH keys are much safer than passwords for a variety of reasons, and there are several things you can do to make them a good option for automated tasks. If you MUST use expect to automate a password related task, one method to help the situation is to have the expect script prompt for the password when beginning the task and have the user type it in each time. This way the password is not stored in the script file.
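Setting up key-based SSH for an automated task takes only a couple of commands (the user and host names here are examples):

$ ssh-keygen -t rsa
$ ssh-copy-id brian@server1
$ ssh brian@server1 "uptime"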
Another password related example of what NOT to do with expect: automating password changes. Expect even ships with an example script named "passmass" that will change your password on multiple servers. From a security perspective I think this is a really, really bad idea, for the reasons I gave in the previous paragraph. The right tool for this kind of job is the "chpasswd" command. The chpasswd utility even allows you to specify a password hash ("encrypted" password) when setting a user's password, making it more secure to script. chpasswd isn't perfect, but in my opinion it is a much better option than expect when it comes to automating password changes.
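For example, on Linux, chpasswd reads user:password pairs on standard input, and with the -e flag the password field is a pre-computed hash (the hash below is just a placeholder):

# echo 'user1:$6$examplesalt$examplehash' | chpasswd -e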
So when should you consider using expect? You should think about expect anytime you have a manual task that needs to be repeated and that only provides an interactive interface to the user. We already covered the example of an interactive software installation program. Another example is any proprietary software that forces you to go through a text based menu to do something. Using expect, you could write a script to navigate the menu and automate the task.
Another extremely good way to use expect is when you need to automate an appliance or other closed system that doesn't have the ability to be scripted. To do this, you use expect on a Linux/UNIX machine to connect to the appliance or closed system, and then complete a task. For example, you could write an expect script that would connect to a Cisco switch and run a series of commands on the switch.
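A sketch of that idea, which also follows the earlier advice by prompting for the password at run time instead of storing it (the host name, prompts, and commands are hypothetical):

$ cat switchcmds.exp
#!/usr/bin/expect
# Hypothetical sketch: run a command on a switch that only offers an
# interactive CLI. The operator types the password in at run time.
stty -echo
send_user "Switch password: "
expect_user -re "(.*)\n"
set pw $expect_out(1,string)
stty echo
send_user "\n"
spawn ssh admin@switch1
expect "assword:"
send "$pw\r"
expect ">"
send "show version\r"
expect ">"
send "exit\r"
expect eof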
Expect is also a good option when creating test cases. If you need to routinely test software functionality then expect might make your life easier.
You can also use expect not to fully automate tasks, but to assist with manual tasks, because expect allows you to partially automate a task while still leaving parts of it to be manually completed by a real person. An example of this is the AIX command "mkdvd", which burns mksysb images onto DVD. When you run this command it writes the first DVD, and then, if needed, it prompts you to insert additional DVDs. With expect, you could write a script that emails or pages you whenever it is time to put in the next DVD or when the mkdvd command has completed. Such a script just needs to be customized with the 2 command lines that email/page you.
This mkdvd expect script helps with a manual process and this is not something you could do without a tool like expect.
Please post comments with some creative examples of when you have used expect, or horror stories of other people using expect when they shouldn't have :)
Guns and UNIX admins have some similarities. A gun, when properly used, is a very powerful tool that can do a lot of good, much like a sysadmin. However, if you are careless as a sysadmin, or careless with a gun, the results can be devastating and even irreversible.
One of the basic NRA gun safety rules is to "ALWAYS keep your finger off the trigger until ready to shoot." People who break this rule often end up accidentally shooting family members, because when startled or surprised your instinct is to clench, and if your finger is on the trigger you are going to pull it whether you are ready or not.
This same concept directly applies to being a UNIX/Linux sysadmin. Being logged in to a root user prompt is equivalent to having your finger on the trigger. One mistake at this point, and you can cause major damage. Commands such as "rm" are extremely unforgiving and very easy to make mistakes with.
I have seen sysadmins who are in the habit of switching over to the root account the moment they log in to a server, regardless of what they need to accomplish. This is a horrible habit to get into. You should only switch to the root prompt when absolutely necessary. If you can perform whatever task you need to do as a regular user, then do that. If only part of what you need to do requires root access, do only that one part from the root prompt and then go back to the normal user account.
You can perform a ton of functions when logged in as a regular user, such as viewing almost all of the system configuration, information about filesystems, running processes, performance information, etc. And when logged in as a regular user, there is very little damage you can do to the system if you make a mistake. If being logged in as the root user is like having your finger on the trigger of a loaded gun, then being logged in as a regular user is like holding a toy Nerf gun - you probably aren't going to do much harm.
This especially applies if you are writing and testing scripts. Never test a script as the root user unless you are on an isolated, throwaway lab server. Testing scripts as the root user is equivalent to playing with a gun - something or someone is probably going to get hurt. One method that can help with this is writing scripts that don't actually do anything: have the script echo each destructive command instead of running it, and only let it run for real once you have reviewed the output.
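A minimal sketch of that technique (the paths and file names are made up):

$ cat cleanup.sh
#!/bin/sh
# Dry-run sketch: print each destructive command instead of executing it.
# After reviewing the output, remove the "echo" to run it for real.
for file in /home/olduser/*.tmp; do
    echo rm "$file"
done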
So remember the next time you see the root # prompt... You essentially have a loaded gun in your hand with your finger on the trigger. You need to be careful and make sure you are ready to fire and think through each step of what you are doing.