Generating AIX audit reports

Audit filter

Using AIX audit produces a lot of records, triggered by the events configured on the system. These records need to be kept for an external audit reviewer. However, for day-to-day internal reports, many of these events can be filtered out, and the remaining records can be used to produce a more focused daily audit report. AIX provides the auditselect utility to extract records; however, if you know which record types you need extracted, the awk utility suffices.


David Tansley (david.tansley@btinternet.com), System Administrator, Ace Europe

David Tansley is a freelance writer. He has 15 years of experience as a UNIX administrator, using AIX for the last eight years. He enjoys playing badminton, then relaxing watching Formula 1, but nothing beats riding and touring on his GSA motorbike with his wife.



28 June 2011


Introduction

Be sure to have a separate filesystem to hold all your audit logs for ease of log file maintenance.

In IBM Systems Magazine (August 2009) I wrote an article about 'Monitoring Events with AIX Audit' (see Resources). What I did not cover in that article was generating audit reports. AIX provides the auditselect utility to select event records from the audit log. However, when running audit in stream mode, you can use the sed and awk utilities to generate formatted reports. In this article, I discuss auditing events and demonstrate how to produce daily audit reports.


Audit overview

AIX audit can be configured to operate in three modes: stream, bin, or stream and bin. Stream mode is my personal choice because it offers real-time viewing of audit events, since the audit log file is written in text mode. Stream mode writes to a circular buffered file called stream.out. Thus, if the filesystem that contains the audit log files fills up, audit continues writing events, wrapping back to the beginning of the log file.

When audit is used in bin mode, data is written in binary format, and this is generally considered the preferred way if you want to collect and keep the audit event records over a long period. However, with good administration practice, stream mode can also be used to keep a history of your audit events.


Audit configuration files

When you start configuring auditing, it is pretty much a case of trial and error when you initially start collecting audit events, as system administrators who have audit running will no doubt verify. Getting the right events monitored requires periodic changes to the /etc/security/audit/config file, so that the correct events are monitored for your system. For a full list of events that can be monitored, see the Resources section for the Redbook on 'Accounting and auditing'. Once you are satisfied with the events being monitored, you can then do bespoke monitoring on other files, for example:

/etc/sudoers
/etc/ssh/sshd_config
/etc/syslog.conf

The ad hoc or system-specific files to be monitored are placed in the /etc/security/audit/objects file.

In the objects file, you specify which filename (called an object) should be monitored for either a read, write, or execute operation. A typical layout to monitor the files (sudoers, sshd_config, syslog.conf) for any write operations could be:

/etc/sudoers.tmp:
 w = "SUDO_WRITE"

/etc/syslog.conf:
 w = "SYSLOG_WRITE"

/etc/ssh/sshd_config:
 w = "W_SSHD_CONFIG_IBM"

The format of the objects file entries in the previous example is:

full path name of file:
 <access mode> = "tag name"

Where access mode is r for read access, w for write access, and x for execute access. The tag name is a unique name for the file that is being monitored. You should ensure that the tag name is descriptive of the access mode being given. Also, note that you can only have one entry on each line for each access mode. So, to monitor /etc/ssh/sshd_config for both read and write access, you could have the following entry:

/etc/ssh/sshd_config:
w = "W_SSHD_CONFIG_IBM"
r = "R_SSHD_CONFIG_IBM"

For write access, my tag name starts with "W_", and for read access my tag name starts with "R_". This helps me identify straight away, when viewing the audit log or the audit report, whether there has been a read or a write access on the file.

For audit to understand how to print each object as a record to the audit log, it needs a corresponding entry in the /etc/security/audit/events file.

A simple printf statement will suffice. For example, to print /etc/ssh/sshd_config with the tag name W_SSHD_CONFIG_IBM, as defined in the /etc/security/audit/objects file shown previously, you could have:

* /etc/ssh/sshd_config
W_SSHD_CONFIG_IBM = printf "%s"

In this example, the line starts with a '*'. The full path name of the file to be monitored is given, followed by the tag name from the objects file, and a printf statement that prints the string of the command that is passed.

The objects and events files are already populated with the standard system files to monitor; you can add your own files that you want to monitor.

To get audit to print any options passed as part of the command being monitored, you need to ensure audit prints the trail part of the record. So, in the /etc/security/audit/streamcmds file, make sure you have the '-v' flag as part of the auditpr command. Here's an example:

/usr/sbin/auditstream | auditpr -v > /audit/stream.out &
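Because stream mode writes the audit log as plain text, you can watch events arrive in real time. For example, assuming the stream.out location shown above:

# watch audit events as they are written (path as set in streamcmds)
tail -f /audit/stream.out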

The audit events to monitor are placed in the /etc/security/audit/config file. Here the classes are created that contain the event(s) to monitor. The contents of the config file are shown below. A class of events is then typically assigned to users. The default class, that is, the class assigned to each user on the system, is also given. A user can belong to one or many classes. Notice that in the class called "general" I am only monitoring for user attribute changes, for example, password changes, group changes, chuser, rmuser, adduser, and su attempts. Your configuration may well be different.

# cat /etc/security/audit/config

start:
        streammode = on
        binmode = off

bin:
  trail = /audit/trail
  bin1 = /audit/bin1
  bin2 = /audit/bin2
  binsize = 25000
  cmds = /etc/security/audit/bincmds

stream:
        cmds = /etc/security/audit/streamcmds

classes:
        general = File_Write,PASSWORD_Change,USER_Change,USER_Remove,USER_Create,GROUP_Change,GROUP_Create,GROUP_Remove,USER_SU,PASSWORD_Flags

users:
        default = general

Using the lsuser command to view the current audit classes assigned to users, we can see that all users are assigned to the audit class general. This is the default class assigned to everybody, as denoted in the users stanza of the config file. However, as previously mentioned, a user can belong to more than one class (an example of assigning a second class follows the listing).

# lsuser -a auditclasses ALL
root auditclasses=general
daemon auditclasses=general
bin auditclasses=general
sys auditclasses=general
adm auditclasses=general
lpd auditclasses=general
lp auditclasses=general
invscout auditclasses=general
snapp auditclasses=general
ipsec auditclasses=general
ftp auditclasses=general
dxtans auditclasses=general
ukazap auditclasses=general
euazap auditclasses=general
auazap auditclasses=general
beazap auditclasses=general
laazap auditclasses=general
...
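To place a user in more than one class, list the classes in the user's auditclasses attribute. A minimal sketch, assuming a second class named "objects" has been defined in the config file (that class name is hypothetical):

# assign user dxtans to the general class and a second, hypothetical "objects" class
chuser auditclasses=general,objects dxtans
# confirm the assignment
lsuser -a auditclasses dxtans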

To start the audit subsystem, type:

# audit start

To stop the audit subsystem, type:

# audit shutdown

To view your current settings, including the defined classes and their events, plus the available events on the system and the objects being monitored (check that audit is currently running first), use:

# audit query
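Note that audit reads its configuration files when it starts, so after editing the config, objects, or events files, cycle the subsystem for the changes to take effect. A minimal sketch:

# restart audit so the configuration changes are picked up
# (remember this recreates stream.out, so save a copy first if you need it)
/usr/sbin/audit shutdown
/usr/sbin/audit start
# confirm the new settings
/usr/sbin/audit query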

If the server is rebooted, or the audit service is stopped, the stream.out audit log will be overwritten when audit is restarted. So, be sure to have a mechanism in place that saves a copy of the current stream.out contents to a new filename (this issue is discussed more shortly). To make sure audit is restarted on a reboot, be sure to have the following line in your /etc/rc.tcpip file:

# start audit
/usr/sbin/audit start 1>&- 2>&-

If the system is rebooted, ensure that you have a command or script that copies the stream.out log file. These commands should go before the audit start command in the /etc/rc.tcpip file so that the log file does not get overwritten when audit starts.
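Putting these together, a minimal sketch of the relevant /etc/rc.tcpip lines might look like the following; the saved filename is an assumption, so adapt it to your own naming scheme:

# save the previous audit log before audit recreates stream.out
[ -f /audit/stream.out ] && cp -p /audit/stream.out /audit/stream.out.`date +%m%d`
# start audit
/usr/sbin/audit start 1>&- 2>&-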


Audit logging

When audit reports an event, it reports it against the initial login-id of the user and not the user's current su identity. This is true for local connections only. With remote connections, like SSH, the login-id used on the connection is reported; AIX audit does not know about any su done prior to the SSH connection.

A typical audit log file on my system is shown in Listing 1; it shows the trail record, as well:

Listing 1. stream.out
event           login    status      time                     command
--------------- -------- ----------- ------------------------ -------------------------------
USER_SU         dxtans   OK          Mon Jan 10 19:05:13 2011 su
        root
PASSWORD_Change dxtans   OK          Mon Jan 10 19:06:27 2011 passwd
        alpha
PASSWORD_Flags  dxtans   OK          Mon Jan 10 19:06:33 2011 pwdadm
        alpha -c
USER_SU         bravo    FAIL        Mon Jan 10 19:07:01 2011 su
        root
GROUP_Change    dxtans   OK          Mon Jan 10 19:07:33 2011 chgroup
        admin users=dxtans,charlie
PASSWORD_Change root     OK          Mon Jan 10 19:08:31 2011 tsm
        charlie
USER_SU         charlie  OK          Mon Jan 10 19:08:36 2011 su
        root
USER_Change     charlie  OK          Mon Jan 10 19:08:54 2011 chuser
        zulu rlogin=true

Looking more closely at Listing 1, we see from the first few entries that user dxtans has su-ed to root. He then changed the password of user alpha and the pwdadm flags of that user. Because we have told audit to print the trail part of the record, we can see that the command options passed to pwdadm were 'alpha -c'. If no trail record were printed, this information would not be present. User bravo has also attempted to su to root, but has failed. This could be because user bravo either does not know the password or is not authorized, through not being in root's sugroups. In this demonstration, user bravo is not authorized to access root, so this is a violation.
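Failed events such as this su attempt are usually the first thing to check. As a quick spot check on the raw log, the status is the third column of each event line, so you could list just the failed records with:

# list only failed audit events, for example failed su attempts
awk '$3 == "FAIL"' /audit/stream.out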

Please note that when the trail part of the record is printed, it is printed on a separate line. When generating a report, both parts of the record will have to be joined side by side to make it presentable as a report. This also brings the report into columns, where awk can then be used to extract details.


Preparing to report

To generate a report, the first task is to take a copy of stream.out to work on. As you will discover when viewing the stream.out file, there can be more than one entry for a write on a file. This is especially true when recording writes to the /etc/sudoers file, so be sure to pass the file through uniq to strip out duplicate entries. Then sed is used to pull the paired record lines together onto the same line; we also need to produce a header for the report.

The following piece of code achieves this, assuming the stream.out file is passed as the first ($1) parameter and the resulting output is redirected to the holdf file (the $host variable holds the output of the hostname command):

cat $1 | uniq > holdf
mv holdf $1
sed '1i\
'$host' P-Series Audit Report on User Account Changes
/command/a\
 ---------------------------------------------------------
$!N;s/\n/ /
# pull in the last column (the whitespace below is literal; adjust as required)
s/				 //g' $1 >holdf
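After the join, each event and its trail sit on one line. Taking the first record from Listing 1 as an example, the joined output would look something like this:

USER_SU         dxtans   OK          Mon Jan 10 19:05:13 2011 su root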

The contents of the log now have the following format in columns:

  • Column 1 is the actual event, for example, a USER_Change.
  • Column 2 is the login-id of the user.
  • Column 3 is the exit status of the command issued, either OK or FAIL.
  • Columns 4 - 8 are the date/time of the event, for example, Tue Jan 11 11:42:02 2011.
  • Column 9 is the command executed, for example, chuser.
  • Column 10 onwards is any other information passed to the command, that is, the information (if any) from the trail record, which is generally the parameters passed.
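With the records in this column layout, awk can pick out whichever fields you need. For example, to print just the event, login-id, status, and command from the joined holdf file:

# print selected columns from the joined records
awk '{print $1, $2, $3, $9}' holdf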

Now we can get down to extracting the information required for a daily audit report, extracting what is relevant to your own audit security policy: for instance, monitoring unauthorised account switches, or changes to user account attributes that have not been authorised via an incident ticket or change request on the system.

For consideration, you may not care about the system administrator su-ing to certain accounts, as this may be due to normal administration work. You may not care about users changing their own passwords, but you would certainly want to see passwords that are changed by root, no matter which user-id is affected. Once this extraction is completed, the generated report can be sent via email to the administration manager or system administrator for review and justification, via incident tickets or change requests.

I find it more manageable to have a separate awk statement for each condition I want met. This makes it easier to change the rules or patterns on what gets extracted, and indeed for other system administrators to change the rules if you are not around. By using the NOT operator with awk, you can specify which records should NOT appear in the report. Each awk statement is then AND-ed with the other awk statements contained in the script. For example, to exclude the event where a user tries to su to themselves by mistake, we could have:

!($1 =="USER_SU" && $2 ==$10)

Or, to exclude the user root su-ing to the user-id poppy, we could have:

!($1=="USER_SU" && $2=="root" && $9=="su" && $10=="poppy")

To ignore users who change their own password, we could have:

!($1=="PASSWORD_Change" && $2 == $10)

By AND-ing these statements (and other awk statements), the output contains only the records that have not been excluded. For example:

awk '
!($1 =="USER_SU" && $2 ==$10) &&
!($1=="USER_SU" && $2=="root" && $9=="su" && $10=="poppy") &&
!($1=="PASSWORD_Change" && $2 == $10) &&
...more awk statements...
...

If you do not care about any user su-ing to a particular user, these events can also be excluded. For example, assume genrep1 is a generic user-id and you do not require these events in your report; you could use:

!($1=="USER_SU" && $10=="genrep1")

If you wish to ignore a whole record based on one pattern search, use the pattern-match statement:

 /<pattern>/

To ignore all records that contain the string 'xntpd', one could use the following awk statement:

!/xntpd/

Once you are satisfied with the records or events you do not want published on a daily basis, use the awk END block to print an "end of report" statement. To filter events out of the holdf file, you could use:

awk '
!($1 =="USER_SU" && $2 ==$10) &&
!($1=="USER_SU" && $2=="root" && $9=="su" && $10=="poppy") &&
!($1=="PASSWORD_Change" && $2 == $10) 
END {print "\t\t\t\t--- end of report ---"}' holdf

Once the report has been generated, the script could then email the report to the system administrators or manager for review.

Listing 2 shows a script with events that could be ignored from the stream.out audit log file. The pattern rules contained in the listing are the ones just demonstrated. The awk statements contained in Listing 2 give you some idea of how to apply the awk patterns against the stream.out file, allowing you to take it further and personalize the report to your needs. Once the report is generated, it is emailed to the email list: rs6admins.

Listing 2. auditfilter
#!/bin/sh
# auditfilter

if [ $# != 1 ]
 then
  echo "`basename $0` <audit_log>"
  exit 1
fi
 if [ ! -f $1 ]
  then
   echo "file does not exist"
   exit 1
 fi

holdf=/tmp/holdf
host=$(hostname)
# extract the date part of the filename for the report
report=$(basename $1)
# filename ie: audit0118.log
date_rep=$(echo $report | sed -e 's/audit//g' -e 's/.log//g')
mth=$(echo $date_rep | cut -c 1,2)
day=$(echo $date_rep | cut -c 3,4)
datex="$day / $mth"
# email list here
list="rs6admins "

mailit2()
{
sendmail -t <<mayday
To:$list
Subject:audit report on $host
Content-Type: text/html
Content-Transfer-Encoding: 7bit
<body bgcolor="#C0C0C0">
Generated: `date`
<br>
Date of Audit : $datex
<hr>
<br>
<pre>
$(cat $1)
</pre>
</body>
mayday
}



cat $1 | uniq > $holdf
mv $holdf $1
# do header and join lines
sed '1i\
                   '$host' P-Series Audit Report on User Account Changes
/command/a\
                   ---------------------------------------------------------
$!N;s/\n/ /
# get rid of the ** line
s/-*//g
# pull in the last column, these are spaces - adjust as required in sed
s/                   //g' $1 >$holdf
# filter out
awk '
!($1 =="USER_SU" && $2 ==$10) &&
!($1=="USER_SU" && $2=="root" && $9=="su" && $10=="poppy") &&
!($1=="PASSWORD_Change" && $2 == $10) &&
!($1=="USER_SU" && $10=="operator") &&
!($1=="USER_SU" && $10=="genrep1") &&
!/xntpd/
END {print "\t\t\t\t--- end of report ---"}' $holdf >$1.rep
# cat $1.rep
mailit2 $1.rep

Listing 3 calls the auditfilter script. Once the run completes, two files have been generated. One contains the formatted audit report, the filename being in the format:

 audit<month><day>.log.rep

The other is a saved copy of the current stream.out file, the filename being in the format:

 audit<month><day>.log

Listing 3. auditroll
#!/bin/sh
# auditroll
OUTFILE=/audit/audit`date +%m%d`.log
#
# Shut down auditing
/usr/sbin/audit shutdown
sleep 3
#
# Move contents of capture file
mv /audit/stream.out $OUTFILE
#
# Restart auditing
/usr/sbin/audit start
#
# Kill off processes using the audit file
fuser -k $OUTFILE
#
# Filter contents of audit file
/audit/auditfilter $OUTFILE

Looking more closely at Listing 3, a new filename is created for the copied stream.out file, using the date command as part of its name. Audit is then shut down. The stream.out file is then moved to the new (OUTFILE) filename in readiness for generating a new audit report file. Audit is then restarted; this creates a new stream.out file for that day's audit activity. To err on the safe side and to make sure no processes are still running against the OUTFILE after the audit shutdown, fuser is used to kill any such processes.

When the auditfilter script is executed, it produces a formatted report contained within an email similar to Figure 1.

Figure 1. Audit report
Screen shot showing an audit report example

The auditroll script could be executed every weekday to produce a report on the previous day's audit activities.
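A crontab entry similar to the following would do this; the 06:00 run time is only an assumption, and auditroll is assumed to live in /audit alongside the auditfilter script:

# run auditroll at 06:00 Monday to Friday
0 6 * * 1-5 /audit/auditroll >/dev/null 2>&1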


Conclusion

In my opinion, using audit on AIX is a must: it allows you to monitor the security-related events on your system. When collecting audit reports within an enterprise environment, I suggest collating all the reports into one email for review.
