Creating a resource

Tivoli® SA MP provides a framework to automatically manage the availability of hardware or software resources, such as a software application or a network interface card. Resources can be fixed or floating; floating resources can move between nodes. You need to define the Db2® Data Management Console server as a Tivoli SA MP resource with a script that includes start, stop, and status commands. You also need a definition file.

Procedure

To create a resource for Db2 Data Management Console:

  1. On both the primary and secondary nodes (dmc-1 and dmc-2), create a script with the name dmc. The script includes the Tivoli SA MP definitions of the start, stop, and status actions that align with the Db2 Data Management Console start, stop, and status scripts. The script also starts and stops the data sync script as part of starting and stopping the Db2 Data Management Console server, respectively, so that data sync occurs only one way, from the online node to the offline node.
  2. Save the script in a directory, such as /etc/init.d/, as illustrated in this example. In this example, Db2 Data Management Console is installed in /usr/local/src/ibm-datamgmtconsole; adjust the paths in the script if your installation directory differs (root authority might be needed). You can run the script manually to verify its return codes, as shown after the listing.
    vi /etc/init.d/dmc
    cat /etc/init.d/dmc
    
    #!/bin/bash
    # Operational states that the Tivoli SA MP monitor command must return
    OPSTATE_ONLINE=1
    OPSTATE_OFFLINE=2
    Action=${1}
    case ${Action} in
      start)
        # Start the Db2 Data Management Console server, then the data sync script
        /usr/local/src/ibm-datamgmtconsole/bin/startup.sh >/dev/null 2>&1
        echo -e "$(date): dmc - server started" >> /root/syncup/setup.log
        /etc/init.d/datasync start >/dev/null 2>&1
        echo -e "$(date): dmc - datasync started" >> /root/syncup/setup.log
        RC=0
        ;;
      stop)
        # Stop the Db2 Data Management Console server, then the data sync script
        /usr/local/src/ibm-datamgmtconsole/bin/stop.sh >/dev/null 2>&1
        echo -e "$(date): dmc - server stopped" >> /root/syncup/setup.log
        /etc/init.d/datasync stop >/dev/null 2>&1
        echo -e "$(date): dmc - datasync stopped" >> /root/syncup/setup.log
        RC=0
        ;;
      status)
        # Online only if status.sh reports no "no" lines (all components running)
        active=`/usr/local/src/ibm-datamgmtconsole/bin/status.sh | grep "no" | wc -l`
        if [ "$active" -eq "0" ]
        then
          RC=${OPSTATE_ONLINE}
        else
          RC=${OPSTATE_OFFLINE}
        fi
        ;;
    esac
    echo "RC:"${RC}
    exit $RC
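
    After you save the script, you can run it by hand to confirm that the monitor return codes match the operational states that Tivoli SA MP expects. A minimal check, assuming the script is saved as /etc/init.d/dmc and is executable:

    /etc/init.d/dmc status
    echo $?     # expect 1 (OPSTATE_ONLINE) while the console is running, 2 (OPSTATE_OFFLINE) otherwise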
    
  3. Only on the primary node (dmc-1), create a definition file named dmc.def. You can change the timing values. In this example, Tivoli SA MP checks the Db2 Data Management Console server's status every 5 seconds.
  4. Save the file in the same directory as the dmc script, which is /etc/init.d/.
    cat /etc/init.d/dmc.def
    
    PersistentResourceAttributes::
    Name="dmc"
    StartCommand="/etc/init.d/dmc start"
    StopCommand="/etc/init.d/dmc stop"
    MonitorCommand="/etc/init.d/dmc status"
    MonitorCommandPeriod=5
    MonitorCommandTimeout=5
    NodeNameList={"dmc-1","dmc-2"}
    StartCommandTimeout=180
    StopCommandTimeout=90
    UserName="root"
    ResourceType=1
    
  5. Run the export command on the primary node (dmc-1) (root authority needed):
    [root@dmc-1 ~]# export CT_MANAGEMENT_SCOPE=2 
    [root@dmc-1 ~]# echo 'export CT_MANAGEMENT_SCOPE=2' >> ~/.bash_profile
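
    To confirm that the variable is set in the current shell, you can echo it:

    [root@dmc-1 ~]# echo $CT_MANAGEMENT_SCOPE
    2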
    
  6. Change the file permissions:
    cd /etc/init.d/
    chmod u=rwx dmc.def (on primary node)
    chmod u=rwx dmc (on both nodes)
    
  7. Generate the Db2 Data Management Console resource on the primary node by running the following commands in the directory where dmc.def is located (root authority needed). The lsrsrc command verifies that the resource was created:
    [root@dmc-1 init.d]# mkrsrc -f dmc.def IBM.Application
    [root@dmc-1 init.d]# lsrsrc -s "Name='dmc'" IBM.Application
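
    If you need to change dmc.def later, the resource can typically be removed and re-created from the updated file. A sketch, assuming the resource was created with the name dmc as shown above:

    [root@dmc-1 init.d]# rmrsrc -s "Name='dmc'" IBM.Application
    [root@dmc-1 init.d]# mkrsrc -f dmc.def IBM.Application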
    

  8. On both nodes, create a script with the name datasync. The script contains the start, stop, and status actions. Save the script in a directory of your choice; in this example, the script is saved in the /etc/init.d directory. The /root/syncup directory is where syncup.sh is created in the subsequent step. You can check the script manually, as shown after the listing.
    #!/bin/bash
    OPSTATE_ONLINE=1
    OPSTATE_OFFLINE=2
    Action=${1}
    case ${Action} in
            start)
              # remove crontab entries that scheduled periodical backup of /logs to offline node
              crontab -l | grep -v "ibm-datamgmtconsole" | crontab -
              echo -e "$(date): datasync - crontab scheduled backup removed" >> /root/syncup/setup.log
              nohup bash /root/syncup/syncup.sh >> /root/syncup/setup.log 2>&1 &
              echo -e "$(date): datasync - syncup started" >> /root/syncup/setup.log
              RC=0
              ;;
            stop)
              killall inotifywait
              echo -e "$(date): datasync - inotify stopped" >> /root/syncup/setup.log
              # remove crontab entries that scheduled periodical backup of /logs to offline node
              crontab -l | grep -v "ibm-datamgmtconsole" | crontab -
              echo -e "$(date): datasync - crontab scheduled backup removed" >> /root/syncup/setup.log
              RC=0
              ;;
            status)
              # Online only when the inotifywait process is running and both crontab backup entries exist
              ps ax | grep -v "grep" | grep inotifywait > /dev/null
              if [ $? -eq 0 ]
              then
                if [ `crontab -l | grep "ibm-datamgmtconsole" | wc -l` -eq "2" ]
                then
                  RC=${OPSTATE_ONLINE}
                else
                  RC=${OPSTATE_OFFLINE}
                fi
              else
                RC=${OPSTATE_OFFLINE}
              fi
              # echo -e "$(date): datasync - status check.  RC="${RC} >> /root/syncup/setup.log
              ;;
    esac
    exit $RC
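
    After syncup.sh is created in the next step, you can exercise the datasync script by hand to confirm that the status action reports the expected state. A minimal check, assuming the script is saved as /etc/init.d/datasync:

    /etc/init.d/datasync status; echo $?              # expect 1 while data sync is active, 2 otherwise
    crontab -l | grep "ibm-datamgmtconsole" | wc -l   # expect 2 scheduled backup entries while data sync is active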
    
  9. Create a script that is named syncup.sh and save it on both the primary node (dmc-1) and the secondary node (dmc-2). Save the syncup.sh script in a directory of your choice but be sure to refer to the proper location in subsequent steps. In this example, the script is saved in /root/syncup.

    Replace the value that is assigned to inotifyDir with the directory where the inotifywait command resides on the primary and secondary nodes.

    The following command can be used to locate the directory:

    [root@dmc-1 ~]# locate inotifywait

    In this environment, it is in /usr/local/bin/.
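
    If the locate command is not available or its database has not been built, which or command -v can be used instead. For example:

    [root@dmc-1 ~]# which inotifywait
    /usr/local/bin/inotifywait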

  10. Replace the value that is assigned to destIP with the IP address of the partner server: on the primary node, destIP is the IP address of the secondary node; on the secondary node, destIP is the IP address of the primary node. A sketch of the variable settings for the secondary node is shown after the script listing.

    Replace the value that is assigned to srcDir with the directory where Db2 Data Management Console is installed on the node where data is synced from. Replace the value that is assigned to destDir with the directory where Db2 Data Management Console is installed on the node where data is synced to.

    As shown in the script, several pieces of data that Db2 Data Management Console uses must be synced from the online node to the offline node by using the inotifywait and rsync mechanisms. Logs are also periodically backed up and synced to separate folders on the offline node through crontab scheduling.

    In this example, the Db2 Data Management Console base logs folder is synced every 5 minutes and the logs for configuration-based alerts are synced every hour. You can adjust these intervals as needed; see the example after the script listing.

    Create a directory for syncup:
    mkdir /root/syncup
    Example of syncup.sh on primary node:
    vi /root/syncup/syncup.sh
    #!/bin/bash
    # *** Need to verify the path to inotify in your environment
    inotifyDir="/usr/local/bin"
    scriptDir="/root/syncup"
    
    # Source install path
    srcDir="/usr/local/src/ibm-datamgmtconsole"
    
    # *** Need to change destination info according to your environment, especially the IP
    destIP="9.30.34.212"
    destDir="/usr/local/src/ibm-datamgmtconsole"   # Target install path
    destlogsDir="logsB"  # backup copy of logs folder
    dmclogs="logs" # default location for DMC logs.  If this location has been customized, change this variable accordingly
    
    drSDir="/addons/drs/drs-agent"
    
    dir=""
    action=""
    file=""
    subDir=""
    
    # purge all logs
    truncate -s 0 ${scriptDir}/syncup.log
    truncate -s 0 ${scriptDir}/eventsync.log
    truncate -s 0 ${scriptDir}/eventnotsync.log
    truncate -s 0 ${scriptDir}/logsync.log
    
    # module to sync file changes to other node
    rsyncfile()
    {
       if [[ $action == DELETE* ]]
         then
           echo -e "$(date) \n Warning: You've tried to delete important file $dir$file. It has been recovered from standby server." >> ${scriptDir}/eventsync.log
           rsync -avzP root@$destIP:$destDir$subDir$file $dir >> ${scriptDir}/eventsync.log
         else
           echo -e "\n $(date) Sync change of $action $dir$file to $destIP:$dir" >> ${scriptDir}/eventsync.log
           rsync -avzP --delete $dir$file root@$destIP:$destDir$subDir >> ${scriptDir}/eventsync.log
       fi    
    }   
    
    # Periodically copy/backup the logs file to the other node using crontab
    #debug
    echo -e "$(date): Setting up crontab for backing up $srcDir/$dmclogs/ to root@$destIP:$destDir/$destlogsDir/ and $srcDir$drSDir/logs/ to root@$destIP:$destDir/$drSDir/$destlogsDir/" >> ${scriptDir}/setup.log
    # write out current crontab
    crontab -l > cronlist 2>/dev/null
    # Add new cron into temp cron file to make an exact copy of all files in logs directory inside the logsB directory for every 5 minutes
    echo "*/5 * * * * rsync -avzP --delete $srcDir/$dmclogs/ root@$destIP:$destDir/$destlogsDir/ >> ${scriptDir}/logsync.log" >> cronlist
    # Add another cron to backup logs directory in DrS to logsB directory in the other node for every hour at first minute
    echo "1 */1 * * * rsync -avzP $srcDir$drSDir/logs/ root@$destIP:$destDir$drSDir/$destlogsDir/ >> ${scriptDir}/logsync.log" >> cronlist
    #install new cron file
    crontab cronlist
    rm cronlist
    
    # Wait for change events to the main directory and its subdirectory except the logs directory; and then process these change events
    #debug
    echo "$(date): Setting up inotifywait process" >> ${scriptDir}/setup.log
    $inotifyDir/inotifywait --exclude 'logs' -rmq -e modify,create,delete,attrib,move ${srcDir}/ | while read event
      do
        # debug
        echo -e "\n $(date) $event" >> ${scriptDir}/syncup.log
    
        # parse event record which should contain the directory, followed by action and file
        dir=$(echo ${event}|cut -d ' ' -f1)
        action=$(echo ${event}|cut -d ' ' -f2)
        file=$(echo ${event}|cut -d ' ' -f3)
        subDir=${dir#*$srcDir}  # Extract sub directory after source install directory
    
        #debug
        echo -e "dir:$dir" >> ${scriptDir}/syncup.log  
        echo -e "action:$action" >> ${scriptDir}/syncup.log
        echo -e "file:$file" >> ${scriptDir}/syncup.log
        echo -e "subDir:$subDir" >> ${scriptDir}/syncup.log
        
        case "$subDir" in
    
           "/Config/"* | "/wlp/usr/servers/dsweb/resources/security/" | "$drSDir/insightdb/")
            if [[ $file == ""  ||  $file == .*  ||  $file == .swp  ||  $file == .swx ]]
            # not valid file to be synced
            then
              echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
            else 
              rsyncfile
            fi 
            ;;
    
          "/wlp/usr/servers/dsweb/")
            if [[ $file == "bootstrap.properties"  ||  $file == "server.env"  ||  $file == "jvm.options"  ]]
              then
                rsyncfile
              else
                echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
            fi
            ;;
    
          # File in DrS add-on directory
          "$drSDir/")
          #"/usr/local/src/ibm-datamgmtconsole/addons/drs/drs-agent/")
            if [[ $file == ".env"  ||  $file == "config.yaml"  ]]
              then
                rsyncfile
              else
                  echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
            fi
            ;;
    
          # Otherwise
          *)
          echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
          ;;
    
        esac
    
      done
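
    On the secondary node (dmc-2), syncup.sh is identical except for the destination address. A sketch of the variable section for dmc-2 follows; the IP address shown is a hypothetical placeholder for the primary node (dmc-1), so replace it with the real address of dmc-1 in your environment:

    inotifyDir="/usr/local/bin"
    scriptDir="/root/syncup"
    srcDir="/usr/local/src/ibm-datamgmtconsole"
    destIP="9.30.34.211"                           # hypothetical address of the primary node (dmc-1)
    destDir="/usr/local/src/ibm-datamgmtconsole"   # Target install path
    destlogsDir="logsB"
    dmclogs="logs"
    drSDir="/addons/drs/drs-agent"
    # ... the rest of the script is the same as on the primary node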
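
    The backup intervals are standard cron schedules, so they can be adjusted by editing the two echo ... >> cronlist lines in syncup.sh. For example, to back up the base logs folder every 10 minutes instead of every 5 minutes (an assumed interval, shown only as an illustration):

    # Sync the base logs folder every 10 minutes instead of every 5
    echo "*/10 * * * * rsync -avzP --delete $srcDir/$dmclogs/ root@$destIP:$destDir/$destlogsDir/ >> ${scriptDir}/logsync.log" >> cronlist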