News
Abstract
This document contains the installation procedure to update the IBM Systems Director 6.3.2.1, the Common Agent (CAS) 6.3.2, and the DB2 software on the IBM Smart Analytics System 5600 V3.
Content
Prerequisites and co-requisites
1. Download the IBM Smart Analytics System 5600 V3 User Guide, version -03.
Note: An IBM ID is required.
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=idwbcu&lang=en_US
2. Verify that the updates are required by checking whether the IBM Systems Director and the Common Agent (CAS) software are at the following versions, which require the update:
- IBM Systems Director manager: Version 6.3.2.1
- Common Agent (CAS): Version 6.3.2
a. Verify that the IBM Systems Director manager software version is 6.3.2.1 by running the following command on the management node as root:
smcli lsver
b. Verify that the Common Agent (CAS) software version is 6.3.2 by running the following command on the management node as root:
xdsh bcucore cat /opt/ibm/director/agent/version.key |grep "version=" | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
version=6.3.2
codebase_version=6.3.2
prodversion=BASE
3. Verify that the WASAPP and DB2APP resources are running on the management node. If these resources are offline, then bring them online. If these resources cannot be started on the management node, contact IBM Support. Run the following command on the management node as root:
hals -mgmt
The following example output is a result of running the previous command:
MANAGEMENT DOMAIN
+============+==========+==========+==========+=================+=================+=============+
| COMPONENT | PRIMARY | STANDBY | CURRENT | OPSTATE | HA STATUS | RG REQUESTS |
+============+==========+==========+==========+=================+=================+=============+
| WASAPP | host01 | host02 | host01 | Online | Normal | - |
| DB2APP | host01 | host02 | host01 | Online | Normal | - |
| XCAT | host01 | host02 | host01 | Online | Normal | - |
+============+==========+==========+==========+=================+=================+=============+
4. Stop the IBM InfoSphere Optim Performance Manager and the IBM InfoSphere Warehouse software, and the warehouse tools resources. Run steps 1 and 2 in the procedure "Stopping the IBM Smart Analytics System" of the IBM Smart Analytics System 5600 V3 User Guide version -03.
5. Verify that the xCAT resources are running on the management node by running the following command on the management node as root:
hals -mgmt
The following example output is a result of running the previous command:
MANAGEMENT DOMAIN
+============+==========+==========+==========+=================+=================+=============+
| COMPONENT | PRIMARY | STANDBY | CURRENT | OPSTATE | HA STATUS | RG REQUESTS |
+============+==========+==========+==========+=================+=================+=============+
| WASAPP | host01 | host02 | N/A | Offline | Offline | - |
| DB2APP | host01 | host02 | N/A | Offline | Offline | - |
| XCAT | host01 | host02 | host01 | Online | Normal | - |
+============+==========+==========+==========+=================+=================+=============+
The example output table shows that the xCAT resources are currently running on the primary management node (host01), the OPSTATE is Online, and the HA STATUS is Normal.
Installation overview
Important: Plan to install the updates during a scheduled maintenance window because it requires an outage of the core warehouse.
To install the updates for the IBM Systems Director 6.3.2.1, the Common Agent (CAS) 6.3.2, and the DB2 software, complete the following tasks in order:
A. Removing existing IBM Systems Director discovery entries
B. Installing the updates on management and standby management nodes
C. Installing the updated bcucore image for core warehouse nodes
D. Updating the DB2 software on all nodes
Installation procedures
A. Removing existing IBM Systems Director discovery entries
Before installing the updates, you must delete the existing discovery entries from the IBM Systems Director inventory.
Procedure
Run all of the steps of this procedure as the root user.
1. Remove the node discovery entries and Integrated Management Module (IMM) discovery entries for the standby management host and for all core warehouse nodes from the IBM Systems Director inventory.
a. Verify that the IBM Systems Director is in the Active state by running the following command on the management node:
/opt/ibm/director/bin/smstatus
b. Remove the node entries for the standby management host and the core warehouse nodes from the IBM Systems Director inventory.
i.) Generate a list of the node entries for the standby management node and the core warehouse nodes. Run the following command on the management node:
/opt/ibm/director/bin/smcli lssys -t OperatingSystem |grep -v -w "<mgmt_host_name>" |grep -v ethsw01
The following example output is a result of running the previous command:
host0101
host0102
host0103
host0106
ii.) Remove each node entry that you identified in the previous step. For each node entry, run the following command on the management node:
/opt/ibm/director/bin/smcli rmsys <node_entry>
If you see an error message similar to the following message, it indicates that you are trying to remove the node entry for the management node. Do not remove the node entry for the management node.
DNZCLI0705E : (Run-time error) The system host0101(OID: 0xc5f) cannot be removed. It represents the local system of director server.
iii.) Verify that the node entries for the standby management node and the core warehouse nodes were removed from the inventory by running the following command on the management node:
/opt/ibm/director/bin/smcli lssys
c. Remove the IMM entries for the standby management node and the core warehouse nodes from the IBM Systems Director inventory.
i.) Identify the IMM entry for the management node. Run the following command on the management node:
/opt/ibm/director/bin/smcli lssys -t Server -l | grep -E "DisplayName|InstalledOSDisplayName" | grep -w -B1 <mgmt_host_name>
The following example output is a result of running the previous command:
DisplayName: IBM 7915AC1 KQ0R8AT
InstalledOSDisplayName: host01
ii.) Generate a list of the IMM entries for the standby management node and the core warehouse nodes. Run the following command on the management node, where IBM 7915AC1 KQ0R8AT represents the IMM entry for the management node that you identified in the previous step:
/opt/ibm/director/bin/smcli lssys -t Server |grep -v "IBM 7915AC1 KQ0R8AT"
The following example output is a result of running the previous command:
IBM 7915AC1 KQ0R8AX
host0101imm
host0102imm
host0103imm
host0106imm
iii.) Remove each IMM entry that you identified in the previous step. For each IMM entry, run the following command on the management node:
/opt/ibm/director/bin/smcli rmsys <IMM_entry>
If you see an error message similar to the following message, it indicates that you are trying to remove the IMM entry for the management node. Do not remove the IMM entry for the management node.
DNZCLI0705E : (Run-time error) The system IBM 7915AC1 KQ0R8AT(OID: 0xc91) cannot be removed. It represents the local system of director server.
iv.) Verify that the IMM entries for the standby management node and the core warehouse nodes were removed from the inventory by running the following command on the management node:
/opt/ibm/director/bin/smcli lssys
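Optional: if many entries are listed, the removals in steps 1.b.ii and 1.c.iii can be scripted rather than run one entry at a time. The following shell sketch is not part of the documented procedure; it reuses the lssys filter from step 1.b.i, and <mgmt_host_name> is the same placeholder. Because IMM display names can contain spaces (for example, IBM 7915AC1 KQ0R8AX), it is safer to remove the IMM entries individually with smcli rmsys as shown in step 1.c.iii.
# Hypothetical helper: remove every node (OperatingSystem) entry except the management node
# and the Ethernet switch, reading one entry name per line.
/opt/ibm/director/bin/smcli lssys -t OperatingSystem | grep -v -w "<mgmt_host_name>" | grep -v ethsw01 | \
while read -r entry; do
    /opt/ibm/director/bin/smcli rmsys "$entry"
done
Any attempt to remove the management node entry fails with the DNZCLI0705E message shown above and can be ignored.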
B. Installing the updates on management and standby management nodes
Install the updates for the IBM Systems Director and the Common Agent (CAS) software on the management node and the standby management node.
Procedure
Run all of the steps of this procedure as the root user.
1. Download the fix pack.
- For DB2 v9.7 systems, download the following fix pack: 1.3.0.1-IM-ISAS5600V3-fp001a-db2v97
- For DB2 v10.1 systems, download the following fix pack: 1.3.0.1-IM-ISAS5600V3-fp001a-db2v101
a. Verify that the /install directory on the management node has at least 4 GB of free space available to contain the fix pack.
b. Verify that the /tmp directory on the standby management node has at least 1.5 GB of free space available to contain the /install/mgmtsw directory that is copied over from the management node in step 2.
c. Click on the following link to access Fix Central:
d. To find the IBM Smart Analytics System fix packs on Fix Central, select the Information Management product group and the IBM Smart Analytics System product. Select 1.3.0.0 as the installed version and Linux 64-bit,x86_64 as the platform. On the Identify fixes page, select the Browse for fixes radio button.
e. Download the IBM Smart Analytics System 5600 V3 Fix Pack 1.3.0.1a (Fix Pack 1a) that matches the version of DB2 software that is on your system into the /install directory on the management node.
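To confirm the free space requirements in steps 1.a and 1.b before downloading, the standard df command can be used; for example:
# Run on the management node: /install needs at least 4 GB free.
df -h /install
# Run on the standby management node: /tmp needs at least 1.5 GB free.
df -h /tmp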
2. Decompress the fix pack file in the /install directory on the management node.
- For DB2 v9.7 systems, run the following command:
tar xzvf /install/1.3.0.1-IM-ISAS5600V3-fp001a-db2v97.tgz
- For DB2 v10.1 systems, run the following command:
tar xzvf /install/1.3.0.1-IM-ISAS5600V3-fp001a-db2v101.tgz
Important: Copy the contents of the /install/mgmtsw directory from the management node into the /tmp/mgmtsw directory on the standby management node.
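The document does not prescribe a copy method. One possible way, assuming root ssh access from the management node to the standby management node (the host name is a placeholder), is scp:
# Hypothetical copy: creates /tmp/mgmtsw on the standby management node.
scp -r /install/mgmtsw root@<standby_mgmt_host_name>:/tmp/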
3. Install the update. Update the management node by completing steps 3 and 4.
a. Change to the directory from which to run the update.
- For updating the management node, run the following command:
cd /install/mgmtsw/
- For updating the standby management node, run the following command:
cd /tmp/mgmtsw/
b. Install the update by running the following command:
rpm -Uvh --force ibmcim-ssl-1.0.1-sles11.i386.rpm
The following example output is a result of running the previous command:
Preparing...
########################################### [100%]
1:ibmcim-ssl
########################################### [100%]
mv: cannot stat '/etc/opt/ibm/icc/truststore//*': No such file or directory
c. Set the OpenSSL environment variables. Run the following commands:
echo "export OPENSSL_CONF=/etc/opt/ibm/icc/openssl.cnf" >> /etc/profile.local
export OPENSSL_CONF=/etc/opt/ibm/icc/openssl.cnf
KEYSTORE_PATH=/etc/opt/ibm/icc/keystore
ICC_PATH=/opt/ibm/icc
export LD_LIBRARY_PATH=/opt/ibm/icc/lib:/opt/ibm/platform/lib
echo "export LD_LIBRARY_PATH=/opt/ibm/icc/lib:/opt/ibm/platform/lib " >> /etc/profile.local
HOSTNAME=`hostname`
Note: To run the previous `hostname` command, use the back tick (`) and not the single quote (') character.
d. Verify that the OpenSSL version is 1.0.1g. Run the following command:
/opt/ibm/icc/bin/openssl version
The following example output is a result of running the previous command:
OpenSSL 1.0.1g 7 Apr 2014
4. Generate a new SSL certificate and key.
a. Run the following command:
Note: Enter the command on a single line. It might display on multiple lines in this document.
echo -e "US\nNORTH CAROLINA\nRTP\nIBM\nSTG\n$HOSTNAME\n.\n.\n" | $ICC_PATH/bin/openssl req -x509 -nodes -sha256 -days 3650 -newkey 2048 -keyout $KEYSTORE_PATH/server.key -out $KEYSTORE_PATH/server.cert
The following example output is a result of running the previous command:
Generating a 2048 bit RSA private key
.....................................+++
....................................+++
writing new private key to '/etc/opt/ibm/icc/keystore/server.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:State or Province Name (full name) []:Locality Name (eg, city) []:Organization Name (eg, company) []:Organizational Unit Name (eg, section) []:Common Name (eg, your websites domain name) []:Email Address []:
b. Restart the CIMOM server so that it uses the new key and certificate by running the following command:
service cimserverd restart
c. Verify that the SSL certificate was updated by running the following command:
/opt/ibm/icc/bin/openssl x509 -in $KEYSTORE_PATH/server.cert -noout -text
The SSL certificate is updated if the date and time (in GMT) returned as the Not Before values match the date and time (in GMT) that the new certificate was generated in step 4.a.
d. Verify that the SSL key is updated. Run the following command:
openssl s_client -connect localhost:15989 -showcerts
The key is updated if the previous command returns Server public key is 2048 bit.
Note: To close the previous command, enter Ctrl+C.
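If you prefer not to close the interactive session manually, the same check can be run non-interactively by piping an empty line into s_client and filtering for the key size. This is a convenience sketch that relies on standard openssl behavior, not a documented variant:
# Prints only the "Server public key is 2048 bit" line and then exits.
echo | openssl s_client -connect localhost:15989 -showcerts 2>/dev/null | grep "Server public key"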
5. Install the update on the standby management node by repeating steps 3 and 4.
6. Reset the CAS configuration and rediscover the standby management node.
a. Reset the CAS configuration on the standby management node by running the following commands on the standby management node:
rm -rf /opt/ibm/director/agent/data/collectioncache.txt
rm -rf /opt/ibm/director/agent/data/providercache.txt
rm -rf /opt/ibm/director/agent/conf/.settings/instancecache/*
rm -rf /opt/ibm/director/agent/runtime/agent/subagents/eclipse/plugins/com.ibm.usmi.agent.coreagent.usmiimpl*/data/coreagent.dat
/opt/ibm/director/agent/runtime/agent/toolkit/bin/configure.sh -unmanaged -force
/opt/ibm/platform/bin/genuid
LANG=C;/opt/tivoli/guid/tivguid -Write -New
b. Run the configure.sh script with the following command on the standby management node, where <mgmt_IP> is the management node IP address and <root_pwd> is the password for the root user:
/opt/ibm/director/agent/runtime/agent/toolkit/bin/configure.sh -amhost <mgmt_IP> -passwd <root_pwd> -force
c. Discover the standby management node and its IMM.
i.) Run the following command on the management node to discover the standby management node by specifying its IP address:
/opt/ibm/director/bin/smcli discover -i <standby_mgmt_node_IP>
where <standby_mgmt_node_IP> represents the IP address of the standby management node.
ii.) Run the same command again on the management node, this time to discover the IMM of the standby management node:
/opt/ibm/director/bin/smcli discover -i <standby_mgmt_node_IP>
where <standby_mgmt_node_IP> represents the IP address of the standby management node. Do not use the IMM IP address.
iii.) Check if the standby management node and its IMM are locked by running the following command on the management node:
smcli lssys -w AccessState=Locked
The following example output is a result of running the previous command and indicates if the standby management node or its IMM are locked:
host0102
host0102imm
iv.) If the standby management node is locked, run the following command on the management node to unlock it:
smcli lssys -w AccessState=Locked -t OperatingSystem | smcli accesssys -u root -p <root_pwd> -f -
The following example output is a result of running the previous command and successfully unlocking the standby management node:
DNZCLI0727I : Waiting for request access to complete on... host0106
Result Value: DNZCLI0734I : Request access was successful on : host0106
The following example output is a result of running the previous command on the standby management node that was already unlocked:
Usage error: The following file - was not found. Verify if the file exists at the specified path.
Note: You can ignore this error message.
v.) Verify that the standby management node was successfully unlocked by running the following command on the management node:
smcli lssys -w AccessState=Locked -t OperatingSystem
vi.) If the IMM is locked, run the following command on the management node to unlock it:
Note: The 0 specified in the command is the number zero.
smcli lssys -w AccessState=Locked -t Server | smcli accesssys -u USERID -p PASSW0RD -f -
The following example output is a result of running the previous command and successfully unlocking the IMM:
DNZCLI0727I : Waiting for request access to complete on... host0106imm
Result Value: DNZCLI0734I : Request access was successful on : host0106imm
The following example output is a result of running the previous command on an IMM that was already unlocked:
Usage error: The following file - was not found. Verify if the file exists at the specified path.
Note: You can ignore this error message.
vii.) Verify that the IMM was successfully unlocked by running the following command on the management node:
smcli lssys -w AccessState=Locked -t Server
7. Verify that the IBM Systems Director software can collect inventory of the management node and the standby management node by running the following commands on the management node:
/opt/ibm/director/bin/smcli collectinv -p "All Inventory" -n <mgmt_host_name>
/opt/ibm/director/bin/smcli collectinv -p "All Inventory" -n <standby_mgmt_host_name>
The following example output is a result of running one of the previous two commands:
Inventory collection percentage 0%
Inventory collection percentage 58%
Inventory collection percentage 86%
Inventory collection percentage 96%
Inventory collection completed: 100%
C. Installing the updated bcucore image for core warehouse nodes
Use the Extreme Cloud Administration Toolkit (xCAT) to install the updated bcucore image on the IBM Smart Analytics System 5600 V3.
Procedure
Run all of the steps of this procedure as the root user.
1. Import the updated bcucore image by using the xCAT imgimport tool. Run the appropriate command as root on the management node.
- For DB2 v9.7 systems, run the following command:
imgimport /install/coresw/bcucore_D97R1B19.tgz
- For DB2 v10.1 systems, run the following command:
imgimport /install/coresw/bcucore_D101R1B19.tgz
2. List all of the images imported with xCAT by running the following command on the management node:
tabdump osimage
The following example output is a result of running the previous command:
#imagename,profile,imagetype,provmethod,rootfstype,osname,osvers,osdistro,osarch,synclists,postscripts,postbootscripts,comments,disable
"sles11.2-x86_64-netboot-bcucore_D97R1B18","bcucore_D97R1B18","linux","netboot",,"Linux","sles11.2",,"x86_64",,,,,
"sles11.2-x86_64-netboot-bcucore_D97R1B19","bcucore_D97R1B19","linux","netboot",,"Linux","sles11.2",,"x86_64",,,,,
3. Assign the appropriate bcucore image to all core warehouse nodes.
- For DB2 v9.7 systems, run the following command on the management node:
nodeset bcucore osimage=sles11.2-x86_64-netboot-bcucore_D97R1B19
- For DB2 v10.1 systems, run the following command on the management node:
nodeset bcucore osimage=sles11.2-x86_64-netboot-bcucore_D101R1B19
The following example output is a result of running the previous command:
host0101: netboot sles11.2-x86_64-netboot-bcucore_D101R1B19
host0102: netboot sles11.2-x86_64-netboot-bcucore_D101R1B19
host0103: netboot sles11.2-x86_64-netboot-bcucore_D101R1B19
host0106: netboot sles11.2-x86_64-netboot-bcucore_D101R1B19
4. Validate that the bcucore image was successfully installed on each core warehouse node. Run the following command on the management node:
lsdef -l |grep "Object\|profile"
The following example output is a result of running the previous command:
Object name: host0101
profile=bcucore_D101R1B19
Object name: host0102
profile=bcucore_D101R1B19
Object name: host0103
profile=bcucore_D101R1B19
Object name: host0106
profile=bcucore_D101R1B19
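As a quick cross-check of this step, the assigned profiles can be summarized so that a node left on an older image stands out. This one-liner is a convenience sketch built from the same lsdef output, not a documented step:
# Count how many node definitions carry each profile; all core warehouse nodes
# should report the same updated bcucore profile.
lsdef -l | grep "profile=" | sort | uniq -c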
5. Update the xCAT configuration.
a. Create a network boot root image for node discovery by running the following command on the management node as root:
mknb x86_64
The following example output is a result of running the previous command:
Creating genesis.fs.x86_64.gz in /tftpboot/xcat
b. Define all of the nodes to the DHCP server by running the following command on the management node as root:
makedhcp -a
c. Create a new DHCP configuration file by running the following command on the management node as root:
makedhcp -n
The following example output is a result of running the previous command:
Renamed existing dhcp configuration file to /etc/dhcpd.conf.xcatbak
6. Reboot all of the core warehouse nodes.
a. Stop IBM InfoSphere Optim Performance Manager, IBM InfoSphere Warehouse, and the core warehouse resources. Run steps 1 to 4 in the procedure "Stopping the IBM Smart Analytics System" of the IBM Smart Analytics System 5600 V3 User Guide version -03.
b. Stop GPFS and unmount the GPFS file systems. Run step 6 in the procedure "Stopping the IBM Smart Analytics System" of the IBM Smart Analytics System 5600 V3 User Guide version -03.
c. Reboot all of the core warehouse nodes by running the following command on the management node as root:
xdsh bcucore shutdown -r now
7. Verify that the OpenSSL version is 1.0.1g by running the following command on the management node:
xdsh bcucore "/opt/ibm/icc/bin/openssl version"
The following example output is a result of running the previous command:
host0102: OpenSSL 1.0.1g 7 Apr 2014
host0106: OpenSSL 1.0.1g 7 Apr 2014
host0103: OpenSSL 1.0.1g 7 Apr 2014
host0101: OpenSSL 1.0.1g 7 Apr 2014
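To flag any core warehouse node that is not at the expected level, the same command can be filtered; no output means that every node reports OpenSSL 1.0.1g. This is a convenience sketch, not a documented step:
# List only the nodes whose reported OpenSSL version is not 1.0.1g.
xdsh bcucore "/opt/ibm/icc/bin/openssl version" | grep -v "1.0.1g"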
8. Update the OpenSSL certificates and keys on each core warehouse node and rediscover each core warehouse node. Run steps 8.a to 8.j on each core warehouse node.
a. Set the OpenSSL environment variables. Run the following commands on the core warehouse node:
KEYSTORE_PATH=/etc/opt/ibm/icc/keystore
ICC_PATH=/opt/ibm/icc
HOSTNAME=`hostname`
b. Remove the existing keystore files and restart the CIMOM services by running the following commands on the core warehouse node:
rm -f /etc/opt/ibm/icc/keystore/*
service cimlistenerd restart
service cimserverd restart
c. Generate a new SSL certificate and key by running the following command on the core warehouse node:
Note: Enter the command on a single line. It might display on multiple lines in this document.
echo -e "US\nNORTH CAROLINA\nRTP\nIBM\nSTG\n$HOSTNAME\n.\n.\n" | $ICC_PATH/bin/openssl req -x509 -nodes -sha256 -days 3650 -newkey 2048 -keyout $KEYSTORE_PATH/server.key -out $KEYSTORE_PATH/server.cert
The following example output is a result of running the previous command:
Generating a 2048 bit RSA private key
.....................................+++
....................................+++
writing new private key to '/etc/opt/ibm/icc/keystore/server.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:State or Province Name (full name) []:Locality Name (eg, city) []:Organization Name (eg, company) []:Organizational Unit Name (eg, section) []:Common Name (eg, your websites domain name) []:Email Address []:
d. Restart the CIMOM server so that it uses the new key and certificate by running the following commands on the core warehouse node:
service cimlistenerd restart
service cimserverd restart
e. Verify that the SSL certificate was updated by running the following command on the core warehouse node:
/opt/ibm/icc/bin/openssl x509 -in $KEYSTORE_PATH/server.cert -noout -text
The SSL certificate is updated if the date and time returned as the Not Before values match the date and time that the new certificate was generated in step 8.c.
f. Verify that the SSL key is updated by running the following commands on the core warehouse node:
sleep 120
openssl s_client -connect localhost:15989 -showcerts
The key is updated if the previous command returns Server public key is 2048 bit.
g. Reset the CAS configuration by running the following commands on the core warehouse node:
rm -rf /opt/ibm/director/agent/data/collectioncache.txt
rm -rf /opt/ibm/director/agent/data/providercache.txt
rm -rf /opt/ibm/director/agent/conf/.settings/instancecache/*
rm -rf /opt/ibm/director/agent/runtime/agent/subagents/eclipse/plugins/com.ibm.usmi.agent.coreagent.usmiimpl*/data/coreagent.dat
/opt/ibm/director/agent/runtime/agent/toolkit/bin/configure.sh -unmanaged -force
/opt/ibm/platform/bin/genuid
LANG=C;/opt/tivoli/guid/tivguid -Write -New
h. Run the configure.sh script with the following command on the core warehouse node, where <mgmt_IP> is the management node IP address and <root_pwd> is the password for the root user:
/opt/ibm/director/agent/runtime/agent/toolkit/bin/configure.sh -amhost <mgmt_IP> -passwd <root_pwd> -force
i. Discover a core warehouse node and its IMM.
i.) Run the following command on the management node to discover a core warehouse node by specifying its IP address:
/opt/ibm/director/bin/smcli discover -i <core_node_IP>
where <core_node_IP> represents the IP address of the core warehouse node.
ii.) Run the following command again on the management node to discover the IMM of the core warehouse node:
/opt/ibm/director/bin/smcli discover -i <core_node_IP>
where <core_node_IP> represents the IP address of the core warehouse node.
iii.) Check if the core warehouse node and its IMM are locked by running the following command on the management node:
smcli lssys -w AccessState=Locked
The following example output is a result of running the previous command and indicates if the core warehouse node or its IMM are locked:
host0106
host0106imm
iv.) If the core warehouse node is locked, run the following command on the management node to unlock it:
smcli lssys -w AccessState=Locked -t OperatingSystem | smcli accesssys -u root -p <root_pwd> -f -
The following example output is a result of running the previous command and successfully unlocking the core warehouse node:
DNZCLI0727I : Waiting for request access to complete on... host0106
Result Value: DNZCLI0734I : Request access was successful on : host0106
The following example output is a result of running the previous command on a core warehouse node that was already unlocked:
Usage error: The following file - was not found. Verify if the file exists at the specified path.
Note: You can ignore this error message.
v.) Verify that the core warehouse node was successfully unlocked by running the following command on the management node:
smcli lssys -w AccessState=Locked -t OperatingSystem
vi.) If the IMM is locked, run the following command on the management node to unlock it:
Note: The 0 specified in the command is the number zero.
smcli lssys -w AccessState=Locked -t Server | smcli accesssys -u USERID -p PASSW0RD -f -
The following example output is a result of running the previous command and successfully unlocking the IMM:
DNZCLI0727I : Waiting for request access to complete on... host0106imm
Result Value: DNZCLI0734I : Request access was successful on : host0106imm
The following example output is a result of running the previous command on an IMM that was already unlocked:
Usage error: The following file - was not found. Verify if the file exists at the specified path.
Note: You can ignore this error message.
vii.) Verify that the IMM was successfully unlocked by running the following command on the management node:
smcli lssys -w AccessState=Locked -t Server
j. Log in to the discovered core warehouse node and store the following node-specific files by running these commands:
Note: Enter each command on a single line. It might display on multiple lines in this document.
/usr/IBM/analytics/diskless/bcuspecific push /etc/ibm/director/twgagent/twgagent.uid
/usr/IBM/analytics/diskless/bcuspecific push /opt/ibm/director/agent/runtime/agent/config/endpoint.properties
/usr/IBM/analytics/diskless/bcuspecific push /opt/ibm/director/agent/runtime/agent/subagents/eclipse/plugins/com.ibm.usmi.agent.coreagent.usmiimpl*/data/coreagent.dat
/usr/IBM/analytics/diskless/bcuspecific push /etc/profile.local
9. Reboot all of the nodes in the system.
a. Reboot all of the core warehouse nodes by running the following command on the management node as root:
xdsh bcucore shutdown -r now
b. Reboot the management node by running the following command on the management node as root:
shutdown -r now
Important: Wait for the core warehouse nodes to complete the reboot before you continue to the next step.
c. Reboot the standby management node. Log in to the standby management node as root and run the following command:
shutdown -r now
10. Mount the GPFS file systems, start GPFS, and start IBM Systems Director.
a. Verify that the Linux operating system has started on each node.
b. Verify that the networks have started correctly and that the storage is accessible.
i.) Verify that you can connect to all of the core warehouse nodes (administration, data, and standby) by running the following command on the management node as root:
xdsh bcucore 'echo "Mountcheck:";mount | egrep "dev |shm |tmp |persist |crash " | sed "s|.*on /||"' 2>&1 | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
Mountcheck:
dev type devtmpfs (rw,mode=0755)
dev/shm type tmpfs (rw)
tmp type ext3 (rw,acl,user_xattr)
persist type ext3 (rw,acl,user_xattr)
var/crash type ext3 (rw,acl,user_xattr)
- Verify that each of the core warehouse nodes has mounted the following file systems:
/
/dev
/dev/shm
/tmp
/persist
/var/crash
c. Start the core warehouse GPFS resources.
i.) Verify that the core warehouse GPFS resources are active (designated by a GPFScheck value of '1' in the returned output) by running the following command on the management node as root:
xdsh bcucore 'echo "GPFScheck:";/usr/lpp/mmfs/bin/mmgetstate | grep -c active' 2>&1 | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
GPFScheck:
1
xdsh bcucore 'echo "GPFSStart:";/usr/lpp/mmfs/bin/mmstartup' 2>&1 | xdshbak -c
Run the following command to verify that the GPFS resources are active:
xdsh bcucore 'echo "GPFScheck:";/usr/lpp/mmfs/bin/mmgetstate | grep -c active' 2>&1 | xdshbak -c
ii.) Verify that the GPFS resources of the management nodes are active by running the following command on the management node as root:
/usr/lpp/mmfs/bin/mmgetstate -a
The following example output is a result of running the previous command:
Node number Node name GPFS state
------------------------------------------
1 host0101 active
2 host0106 active
3 host01 active
4 host02 active
5 host0102 active
6 host0103 active
iii.) Start the core warehouse bcudomain resource domain by running the following command:
startrpdomain bcudomain
iv.) Verify that the bcudomain resource domain is online by running the following command:
lsrpdomain bcudomain
v.) Verify that all of the core warehouse nodes are online by running the following command on the management node as root:
lsrpnode
vi.) Verify that the GPFS resources are online by running the following command:
lssam
vii.) Start the DB2 database partition resources with the following command:
hastartdb2
The hastartdb2 command also runs the hals command to verify that resources are online.
viii.) Verify that resources are not in the "Failed Offline" state. If any resources are in a "Failed Offline" state, verify that the associated hardware is available and functioning correctly, and then reset the resource by running the following command:
hareset
ix.) Verify that all of the GPFS file systems are mounted on all of the nodes. The commands that are used here give a count of the mounted file systems versus the total count of file systems. Verify that the two rows are the same for each node and note that the counts are different for the nodes in different HA groups.
- From the management node, run the following command to obtain the counts of the file systems across the core warehouse nodes:
xdsh bcucore "echo 'MOUNTCHECK:';/usr/lpp/mmfs/bin/mmlsconfig | grep -i clustername;mount | grep -c gpfs;grep -c gpfs /etc/fstab" 2>&1 | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
MOUNTCHECK:
clusterName hagroup01.host0101
54
54
- From the management node, run the following command to obtain the counts of the file systems across the management node:
mount | grep -c gpfs;grep -c gpfs /etc/fstab
- From the standby management node, run the following command to obtain the counts of the file systems across the standby management node:
mount | grep -c gpfs;grep -c gpfs /etc/fstab
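For a single node (the management node or the standby management node), the two counts can also be compared directly. The following is a small sketch around the commands above, not a documented step:
# Compare the number of mounted GPFS file systems with the number of GPFS entries in /etc/fstab.
mounted=$(mount | grep -c gpfs)
expected=$(grep -c gpfs /etc/fstab)
if [ "$mounted" -eq "$expected" ]; then
    echo "GPFS mounts OK: $mounted of $expected mounted"
else
    echo "GPFS mount mismatch: $mounted mounted, $expected expected in /etc/fstab"
fi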
d. Start the IBM Systems Director.
i.) Log in to the management node as root.
ii.) Verify that the IBM Systems Director is in an Active state by running the following command:
/opt/ibm/director/bin/smstatus
If the IBM Systems Director is not in an Active state, start the IBM Systems Director by running the following command:
/opt/ibm/director/bin/smstart
11. Verify that the IBM Systems Director software can collect inventory from all of the nodes by running the following commands on the management node:
/opt/ibm/director/bin/smcli collectinv -p "All Inventory" -t Server
/opt/ibm/director/bin/smcli collectinv -p "All Inventory" -t OperatingSystem
The following example output is a result of running either of the previous commands:
Inventory collection percentage 0%
Inventory collection percentage 25%
Inventory collection percentage 35%
Inventory collection percentage 37%
Inventory collection percentage 40%
Inventory collection percentage 60%
Inventory collection percentage 81%
Inventory collection percentage 94%
Inventory collection percentage 95%
Inventory collection percentage 97%
Inventory collection percentage 99%
Inventory collection completed: 100%
D. Updating the DB2 software on all nodes
Update the DB2 software on all of the nodes.
Procedure
1. Verify that all of the nodes are active.
a. Check that the GPFS state of the management nodes, represented here as host01 and host02, are active by running the following command on the management node as root:
mmgetstate -n host01,host02
The following example output is a result of running the previous command:
Node number Node name GPFS state
------------------------------------------
3 host01 active
4 host02 active
b. Check the GPFS state on the rest of the nodes. This command checks all of the servers in the cluster even if there are multiple GPFS clusters. If any node returns a ‘0’, then log in to that node and start problem determination. Run the following command on the management node as root:
xdsh bcucore '/usr/lpp/mmfs/bin/mmgetstate | grep -c active' | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
1
2. Verify that all of the GPFS file systems are mounted.
a. Check the GPFS file systems on the management nodes. Each should have three GPFS file systems mounted.
i.) Run the following command on the management node as root:
mount | grep -c gpfs
The following example output is a result of running the previous command:
3
ii.) Run the following command through an ssh session to the standby management node represented by host02:
ssh host02 mount | grep -c gpfs
The following example output is a result of running the previous command:
3
b. Check the GPFS file systems on the core warehouse nodes by running the following command on the management node as root:
xdsh bcucore 'mount | grep -c gpfs ' | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
54
The returned entries as a result of the previous command are based on HA groups. All nodes within the same HAGROUP have the same number of mounted disks. The number of mounted disks in an HAGROUP varies depending on the number and type of nodes in that HAGROUP. The example output shows 1 HA group with 54 mounts. That is 3 mounts for cross cluster file systems and 51 mounts for the database file systems.
The 51 mounts can be calculated as follows:
3 mounts per database partition.
3 mounts contributed per administration node in the HA group (1 database partition).
24 mounts contributed per data node in the HA group (8 database partitions).
0 mounts contributed per standby node in the HA group.
The first HA group in this example includes 1 administration node, 2 data nodes, and 1 standby node:
(1 x 3) + (2 x 24) + (1 x 0) = 51
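As a quick sanity check of this example, the expected total of 54 mounts (the 51 database mounts plus the 3 cross-cluster mounts) can be reproduced with shell arithmetic; the node counts below apply to this example configuration only:
# 3 cross-cluster mounts + (1 administration node x 3) + (2 data nodes x 24) + (1 standby node x 0)
echo $(( 3 + 1*3 + 2*24 + 1*0 ))
# Prints 54, matching the example output in step 2.b.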
To determine the HA group assignments, run the following command on the management node as root:
xdsh bcucore 'mmlsconfig | grep clusterName' | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
clusterName hagroup01.host0101
Note: The first HA group also contains the management nodes, which are not listed in the output of the xdsh command.
- If the returned output is not consistent with expectations, you must mount the GPFS file systems on all of the nodes.
i. Mount the GPFS file systems on all of the core warehouse nodes by running the following command on the management node as root:
xdsh bcucore "/usr/lpp/mmfs/bin/mmmount all"
ii. Mount the GPFS file systems on the management node by running the following command on the management node as root:
/usr/lpp/mmfs/bin/mmmount all
iii. Mount the GPFS file systems on the standby management node by running the following command on the standby management node as root:
/usr/lpp/mmfs/bin/mmmount all
iv. Rerun the checks in step 2 to verify that all of the file system counts are as expected. If the file system counts are not as expected, then start problem determination procedures.
3. Verify that all of the core warehouse nodes are running the correct fix pack version of the DB2 software.
- For DB2 v9.7 systems, verify that the DB2 level is 9.7.0.9 and the fix pack is 9a by running the following command on the management node as root:
xdsh bcucore "/usr/local/bin/db2ls -q -p -b /opt/IBM/dwe/db2/V9.7/ 2>&1" | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
Install Path : /opt/IBM/dwe/db2/V9.7/
Product Response File ID Level Fix Pack Product Description
---------------------------------------------------------------------------------------------------------------------
ENTERPRISE_SERVER_EDITION 9.7.0.9 9a DB2 Enterprise Server Edition
II_RELATIONAL_WRAPPERS 9.7.0.9 9a InfoSphere Federation Server Relational Wrappers
- For DB2 v10.1 systems, verify that the DB2 level is 10.1.0.4 and the fix pack is 4 by running the following command on the management node as root:
xdsh bcucore "/usr/local/bin/db2ls -q -p -b /opt/ibm/db2/V10.1 2>&1" | xdshbak -c
The following example output is a result of running the previous command:
HOSTS:
-------------------------------------------------------------------------
host0101, host0102, host0103, host0106
-------------------------------------------------------------------------------
Install Path : /opt/ibm/db2/V10.1
Product Response File ID Level Fix Pack Product Description
---------------------------------------------------------------------------------------------------------------------
ENTERPRISE_SERVER_EDITION 10.1.0.4 4 DB2 Enterprise Server Edition
II_RELATIONAL_WRAPPERS 10.1.0.4 4 InfoSphere Federation Server Relational Wrappers
4. Update the DB2 instance on the administration node as root.
- For DB2 v9.7 systems, run the following command:
/opt/IBM/dwe/db2/V9.7/instance/db2iupdt -u bcufenc bculinux
- For DB2 v10.1 systems, run the following command:
/opt/ibm/db2/V10.1/instance/db2iupdt -u bcufenc bculinux
The following example output is a result of running the db2iupdt command:
DBI1446I The db2iupdt command is running.
DB2 installation is being initialized.
Total number of tasks to be performed: 4
Total estimated time for all tasks to be performed: 309 second(s)
Task #1 start
Description: Setting default global profile registry variables
Estimated time 1 second(s)
Task #1 end
Task #2 start
Description: Initializing instance list
Estimated time 5 second(s)
Task #2 end
Task #3 start
Description: Configuring DB2 instances
Estimated time 300 second(s)
Task #3 end
Task #4 start
Description: Updating global profile registry
Estimated time 3 second(s)
Task #4 end
The execution completed successfully.
For more information see the DB2 installation log at "/tmp/db2iupdt.log.10937".
DBI1070I Program db2iupdt completed successfully.
Verify that there are no errors or warnings in the db2iupdt.log file under /tmp.
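The log file name carries a numeric suffix that is generated per run (for example, db2iupdt.log.10937 in the output above), so a wildcard can be used to scan it. This is a convenience sketch, not a documented step:
# Flag any error or warning messages in the db2iupdt logs under /tmp.
grep -iE "error|warning" /tmp/db2iupdt.log.*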
5. Verify that the bcudomain HA domain is Online by running the following command on the administration node as root:
lsrpdomain
The following example output is a result of running the previous command:
Name OpState RSCTActiveVersion MixedVersions TSPort GSPort
bcudomain Online 3.1.4.3 No 12347 12348
- If the bcudomain is Offline, start the HA domain by running the following command as root:
startrpdomain bcudomain
6. Start the HA DB2 resources by running the following command on the administration node as root:
hastartdb2
The following example output is a result of running the previous command:
Starting DB2..........................DB2 resources online
Activating DB BCUDB
CORE DOMAIN
+============+============+============+=================+=================+=============+
| PARTITIONS | CURRENT | STANDBY | OPSTATE | HA STATUS | RG REQUESTS |
+============+============+============+=================+=================+=============+
| 0 | host0101 | host0106 | Online | Normal | - |
| 1-8 | host0102 | host0106 | Online | Normal | - |
| 9-16 | host0103 | host0106 | Online | Normal | - |
+============+============+============+=================+=================+=============+
7. Verify that the DB2 core warehouse instance update was successfully installed on the administration node.
a. To run the following steps, switch to the bculinux DB2 instance owner by running the following command:
su - bculinux
b. To verify that the DB2 instance update was successful, run the following command and confirm that the DB2 level and fix pack are correct for your system:
db2level
The following example output is a result of running the previous command on a DB2 v9.7 system:
DB21085I This instance or install (instance name, where applicable: "bculinux") uses "64" bits and DB2 code release "SQL09079" with level identifier "080A0107".
Informational tokens are "DB2 v9.7.0.9", "s140512", "AIP23561", and Fix Pack "9a".
Product is installed at "/opt/IBM/dwe/db2/V9.7".
The following example output is a result of running the previous command on a DB2 v10.1 system:
DB21085I This instance or install (instance name, where applicable: "bculinux") uses "64" bits and DB2 code release "SQL10014" with level identifier "0205010E".
Informational tokens are "DB2 v10.1.0.4", "s140509", "IP23584", and Fix Pack "4".
Product is installed at "/opt/ibm/db2/V10.1".
c. Verify that you can connect to the bcudb core warehouse database by running the following command:
db2 connect to bcudb
The following example output is a result of running the previous command on a DB2 v10.1 system:
Database Connection Information
Database server = DB2/LINUXX8664 10.1.4
SQL authorization ID = BCULINUX
Local database alias = BCUDB
d. Verify that you can list the DB2 syscat schema tables by running the following command:
db2 list tables for schema syscat
The following example output is a result of running the previous command on a DB2 v10.1 system:
Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
ATTRIBUTES SYSCAT V 2014-02-27-10.37.19.600978
…
151 record(s) selected.
8. Update the DB2 instances (db2opm and dweadmin) on the management node and the standby management node.
a. Change the working directory on the management node and the standby management node.
- For the management node, run the following command as root:
cd /install/mgmtsw/
- For the standby management node, run the following command as root:
cd /tmp/mgmtsw/
b. Decompress the fix pack file on the management node and on the standby management node.
- For DB2 v9.7 systems, run the following command as root:
tar xzvf v9.7fp9a_linuxx64_ese.tar.gz
- For DB2 v10.1 systems, run the following command as root:
tar xzvf v10.1fp4_linuxx64_ese.tar.gz
c. Stop the db2opm instance by running the following command on the management node as the db2opm user:
db2stop
d. Install the DB2 fix pack on the management node and on the standby management node.
i.) Change the working directory by running the following command as root:
cd ese/
ii.) Install the fix pack.
- For DB2 v9.7 systems, run the following command as root:
./installFixPack -n -b /opt/IBM/dwe/mgmt_db2/V9.7/
- For DB2 v10.1 systems, run the following command as root:
./installFixPack -n -b /opt/IBM/dwe/mgmt_db2/V10.1/
e. Update the DB2 instances. The DB2 instances are updated on both the management node and the standby management node by running the DB2 version-appropriate commands on only the management node as root.
- For DB2 v9.7 systems, run the following commands:
/opt/IBM/dwe/mgmt_db2/V9.7/instance/db2iupdt -u db2fopm db2opm
/opt/IBM/dwe/mgmt_db2/V9.7/instance/db2iupdt dweadmin
- For DB2 v10.1 systems, run the following commands:
/opt/IBM/dwe/mgmt_db2/V10.1/instance/db2iupdt -u db2fopm db2opm
/opt/IBM/dwe/mgmt_db2/V10.1/instance/db2iupdt dweadmin
f. Start the IBM InfoSphere Warehouse and IBM InfoSphere Optim Performance Manager software, and the warehouse tools resources. Run steps 6 and 7 in the procedure "Starting the IBM Smart Analytics System" of the IBM Smart Analytics System 5600 V3 User Guide version -03.
9. Verify that the DB2 instances updates were successfully installed on the management node.
For the db2opm DB2 instance, complete the following steps:
a. To run the following steps, switch to the db2opm DB2 instance owner by running the following command:
su - db2opm
b. To verify that the DB2 instance update was successful, run the following command and confirm that the DB2 level and fix pack are correct for your system:
db2level
The following example output is a result of running the previous command on a DB2 v9.7 system:
DB21085I This instance or install (instance name, where applicable: "db2opm") uses "64" bits and DB2 code release "SQL09079" with level identifier "080A0107".
Informational tokens are "DB2 v9.7.0.9", "s140512", "AIP23561", and Fix Pack "9a".
Product is installed at "/opt/IBM/dwe/mgmt_db2/V9.7".
The following example output is a result of running the previous command on a DB2 v10.1 system:
DB21085I This instance or install (instance name, where applicable: "db2opm") uses "64" bits and DB2 code release "SQL10014" with level identifier "0205010E".
Informational tokens are "DB2 v10.1.0.4", "s140509", "IP23584", and Fix Pack "4".
Product is installed at "/opt/ibm/mgmt_db2/V10.1".
c. Verify that you can connect to the opmdb database by running the following command:
db2 connect to opmdb
The following example output is a result of running the previous command on a DB2 v10.1 system:
Database Connection Information
Database server = DB2/LINUXX8664 10.1.4
SQL authorization ID = DB2OPM
Local database alias = OPMDB
d. Verify that you can list the DB2 syscat schema tables by running the following command:
db2 list tables for schema syscat
The following example output is a result of running the previous command on a DB2 v9.7 system:
Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
ATTRIBUTES SYSCAT V 2014-05-14-15.24.48.365965
AUDITPOLICIES SYSCAT V 2014-05-14-15.24.48.373387
AUDITUSE SYSCAT V 2014-05-14-15.24.48.376827
BUFFERPOOLDBPARTITIONS SYSCAT V 2014-05-14-15.24.48.380641
For the dweadmin DB2 instance, complete the following steps:
e. To run the following steps, switch to the dweadmin DB2 instance owner by running the following command:
su - dweadmin
f. To verify that the DB2 instance update was successful, run the following command and confirm that the DB2 level and fix pack are correct for your system:
db2level
The following example output is a result of running the previous command on a DB2 v9.7 system:
DB21085I This instance or install (instance name, where applicable: "dweadmin") uses "64" bits and DB2 code release "SQL09079" with level identifier "080A0107".
Informational tokens are "DB2 v9.7.0.9", "s140512", "AIP23561", and Fix Pack "9a".
Product is installed at "/opt/IBM/dwe/mgmt_db2/V9.7".
The following example output is a result of running the previous command on a DB2 v10.1 system:
DB21085I This instance or install (instance name, where applicable: "dweadmin") uses "64" bits and DB2 code release "SQL10014" with level identifier "0205010E".
Informational tokens are "DB2 v10.1.0.4", "s140509", "IP23584", and Fix Pack "4".
Product is installed at "/opt/ibm/mgmt_db2/V10.1".
g. Verify that you can connect to the iswmeta database by running the following command:
db2 connect to iswmeta
The following example output is a result of running the previous command on a DB2 v10.1 system:
Database Connection Information
Database server = DB2/LINUXX8664 10.1.4
SQL authorization ID = DWEADMIN
Local database alias = ISWMETA
h. Verify that you can list the DB2 syscat schema tables by running the following command:
db2 list tables for schema syscat
The following example output is a result of running the previous command on a DB2 v9.7 system:
Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
ATTRIBUTES SYSCAT V 2014-05-14-15.40.43.949088
AUDITPOLICIES SYSCAT V 2014-05-14-15.40.43.957066
AUDITUSE SYSCAT V 2014-05-14-15.40.43.960923
BUFFERPOOLDBPARTITIONS SYSCAT V 2014-05-14-15.40.43.965245
BUFFERPOOLNODES SYSCAT V 2014-05-14-15.40.43.968030
10. Fail over the IBM InfoSphere Warehouse resources to the standby management node by running the following command on the management node as root:
hafailover <mgmt_host_name> APP
where <mgmt_host_name> represents the name of the management node.
11. After the failover completes successfully, verify that the dweadmin DB2 instance update was successfully installed on the standby management node.
a. To run the following steps, switch to the dweadmin DB2 instance owner by running the following command on the standby management node:
su - dweadmin
b. To verify that the DB2 instance update was successful, run the following command and confirm that the DB2 level and fix pack are correct for your system:
db2level
The following example output is a result of running the previous command on a DB2 v9.7 system:
DB21085I This instance or install (instance name, where applicable: "dweadmin") uses "64" bits and DB2 code release "SQL09079" with level identifier "080A0107".
Informational tokens are "DB2 v9.7.0.9", "s140512", "AIP23561", and Fix Pack "9a".
Product is installed at "/opt/IBM/dwe/mgmt_db2/V9.7".
The following example output is a result of running the previous command on a DB2 v10.1 system:
DB21085I This instance or install (instance name, where applicable: "dweadmin") uses "64" bits and DB2 code release "SQL10014" with level identifier "0205010E".
Informational tokens are "DB2 v10.1.0.4", "s140509", "IP23584", and Fix Pack "4".
Product is installed at "/opt/ibm/mgmt_db2/V10.1".
c. Verify that you can connect to the iswmeta database by running the following command:
db2 connect to iswmeta
The following example output is a result of running the previous command on a DB2 v10.1 system:
Database Connection Information
Database server = DB2/LINUXX8664 10.1.4
SQL authorization ID = DWEADMIN
Local database alias = ISWMETA
d. Verify that you can list the DB2 syscat schema tables by running the following command:
db2 list tables for schema syscat
The following example output is a result of running the previous command on a DB2 v9.7 system:
Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
ATTRIBUTES SYSCAT V 2014-05-14-15.40.43.949088
AUDITPOLICIES SYSCAT V 2014-05-14-15.40.43.957066
AUDITUSE SYSCAT V 2014-05-14-15.40.43.960923
BUFFERPOOLDBPARTITIONS SYSCAT V 2014-05-14-15.40.43.965245
BUFFERPOOLNODES SYSCAT V 2014-05-14-15.40.43.968030
12. Fail back the IBM InfoSphere Warehouse resources to the management node by running the following command on the standby management node as root:
hafailover <standby_mgmt_host_name> APP
where <standby_mgmt_host_name> represents the name of the standby management node.
Document Information
Modified date:
16 June 2018
UID
swg21673439