IBM Support

How should storage call-home be set up in an IBM PureData System for Operational Analytics environment?

Troubleshooting


Problem

How do I know if storage (Flash900 and V7000) call-home is set up correctly in an IBM PureData System for Operational Analytics environment?

Symptom

Call home does not appear to be set up on my Flash900 or V7000 storage enclosures.

Diagnosing The Problem




      • Introduction:

        The IBM PureData System for Operational Analytics appliance is configured to call home in the event of an issue with the included V7000 and Flash900 (V1.1 only) storage enclosures. This document describes how to verify the call home setup for the storage and, if necessary, how to fix it.


        Call Home Configuration (How it is shipped):

        The call home configuration as shipped is configured as follows:

        Steps Performed At The CSC:
        --------------------------------------------
        Prior to deployment an XML file is created for each rack shipped under the same Solution MTM. This XML file includes the solution serial number, the solution group name, the solution MTM, and a list of the storage enclosure MTMs and serial numbers within that solution MTM. There can be multiple MTMs within the same environment.

        For each ordered MTM there is one XML file. This XML file is then pre-loaded into each storage enclosure within the same MTM.

        The CSC then sets up the call home e-mails to send mail to a local server in order to verify that the call home details are configured correctly and that the solution variables are populated.


        Steps Performed At The Customer Site:
        ----------------------------------------------------------

        Sendmail is set up to relay e-mail from the management and management standby hosts to the customer's SMTP server.

        The Storage Enclosures are set up to send mail to root on the management and management standby hosts.

        The root user's .forward file is set up to forward mail to the storage call home servers.


        Summary of the original design:
        -----------------------------------------------------------
        Load a solution.xml file into the storage enclosures describing the solution. This allows the IBM Call Home processing to identify these storage enclosures as part of a solution.

        Storage Enclosures are then set up to send mail to the root account on the management hosts.

        The management and management standby hosts are set up to relay mail to a corporate e-mail server.

        Storage alerts sent to root are forwarded to the Storage Call Home processing.
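
        For reference, the forwarding mechanism amounts to a single line in root's home directory on each management host. A minimal sketch, assuming a V7000-only system using the North/Latin America call home address (the actual address depends on storage type and geography, as shown in the tables later in this document):

        $ cat ~root/.forward
        callhome1@de.ibm.com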



        Known Issues In The Call Home Setup:
        ------------------------------------------------------------
        1. In some cases the XML file was not loaded, or was not loaded correctly, on a storage enclosure. The challenge is that the only way to modify this is to re-initialize the storage enclosure.

        2. Using root's .forward mechanism can bypass sendmail rules and send unnecessary e-mail to IBM.

        3. The default sendmail configuration did not allow mail to be relayed directly from the storage enclosures through the management and management standby hosts.




        Updated Call Home Configuration: [ From early 2017 ]:

        In the updated design released in this technote, the following changes were made to improve the call home setup and address some of the known issues.

        1. The sendmail configuration was modified to allow the storage to use the management hosts as relay servers.

        2. Each storage enclosure was set up to send mail to the appropriate call home e-mail address for that storage type instead of sending mail directly to root on the management hosts. In this way customers can follow the documented e-mail configuration instructions for each of the storage enclosure types.

        3. It was no longer necessary to use root's .forward file to bypass sendmail rules.

        What was not addressed were cases where the solution XML files were either missing or incorrectly configured such that solution entitlement failed during call home processing.


        Updated Call Home Configuration: [ As of February 2018 ]:

        In order to address the cases of missing or incorrectly configured call home solution XML files, we added the following method of adding solution information to the storage configuration, and updated the call home processing for V7000 and Flash900 enclosures so that solution entitlement works even when the solution XML file is not present or not correct.

        This updated configuration is only necessary if the solution XML file is missing or corrupt on an enclosure; however, it can be applied in any case and will override any solution XML file that was previously loaded.

        In this model, we update the 'LOCATION' field in the cluster definition of a storage enclosure to include the important solution information in the following format depending upon the type of storage.

        @@@SMT:<SOLUTIONMTM>@SSN:<SOLUTION SERIAL NUMBER>@SG:<SOLUTION GROUP NAME>@@@

        The Solution MTM represents the Machine Type [ 8279 or 8280 ] and Model of the orderable solution.
        The Solution Serial number is the unique identifier for the ordered solution.
        The Solution Group Name is ISASV7K or ISASTMS depending on the storage enclosure type.
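
        As an illustration only, the solution string for a single V7000 enclosure could be composed and applied as follows. The MTM, serial number, and IP address shown are placeholders, and the chemail/lssystem commands match those used in SECTION 3 below.

        # Compose the solution string (placeholder values).
        SOLUTION_MTM="8280A03"
        SOLUTION_SERIAL="Rack1SolutionSerial"
        SOLUTION_GROUP="ISASV7K"     # ISASV7K for V7000, ISASTMS for Flash900
        LOCATION="@@@SMT:${SOLUTION_MTM}@SSN:${SOLUTION_SERIAL}@SG:${SOLUTION_GROUP}@@@"

        # Apply it to one enclosure and confirm it is stored.
        ssh -n superuser@172.23.1.181 "chemail -location='${LOCATION}'"
        ssh -n superuser@172.23.1.181 lssystem | grep -i email_contact_location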



        Checking The Environment

        1. Verify that the solution parameters are correctly set up in the enclosures. See SECTION 4 in 'Modifying The Environment.'

        2. Verify that the management and management standby hosts are set up for sendmail to relay e-mail from the enclosures. See SECTION 1 in 'Modifying The Environment.'

        3. Verify that the enclosures are set up to send e-mail to IBM. See SECTION 2 in 'Modifying The Environment.'



        Modifying The Environment

         

        SECTION 1: Updating the sendmail setup on the appliance.

        1. Sendmail reference documentation.

        Reference                            URL
        aixsample.mc (AIX 6.1 Readme)        http://www-01.ibm.com/support/docview.wss?uid=isg1610readme037d0227p_sendmail
        sendmail.cf (AIX Knowledge Center)   https://www.ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.files/sendmail.cf.htm


        2. Find the file aixsample.mc:



        $ ls -l /usr/samples/tcpip/sendmail/cf/aixsample.mc
        -rw-r--r--    1 root     system          997 Sep 22 2009  /usr/samples/tcpip/sendmail/cf/aixsample.mc


        3. Create a location on the management host to hold the working files.

        mkdir /BCU_share/sendmail


         

        4. Copy the file aixsample.mc to the working directory.

        cp  /usr/samples/tcpip/sendmail/cf/aixsample.mc /BCU_share/sendmail

        5. Create a new file pdoa_sendmail.mc in the working directory.


        cd /BCU_share/sendmail
        cp aixsample.mc pdoa_sendmail.mc

        6. Edit the file pdoa_sendmail.mc with the vi editor. Perform the following 2 operations.


        a. Remove the line 'FEATURE(promiscuous_relay)dnl'.
        b. Add the line 'FEATURE(access_db)' after the line 'DOMAIN(generic)dnl'.
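
        If you prefer a non-interactive edit, the same two changes can be made with a short awk one-liner. This is a sketch only, assuming the stock aixsample.mc layout; compare the result against the diff in the next step either way.

        cd /BCU_share/sendmail
        # Drop the promiscuous_relay feature and add access_db right after DOMAIN(generic)dnl.
        awk '/FEATURE\(promiscuous_relay\)dnl/ {next} {print} /DOMAIN\(generic\)dnl/ {print "FEATURE(access_db)"}' aixsample.mc > pdoa_sendmail.mc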

        7. Verify the edit by comparing it against the sample:


        $ diff pdoa_sendmail.mc aixsample.mc
        25a26
        > FEATURE(promiscuous_relay)dnl
        30d30
        < FEATURE(access_db)

        8. Check the order of the components in the file.


        $ grep -v "^#" pdoa_sendmail.mc
        divert(0)dnl
        OSTYPE(aixsample)dnl
        FEATURE(genericstable)dnl
        FEATURE(mailertable)dnl
        FEATURE(virtusertable)dnl
        FEATURE(domaintable)dnl
        FEATURE(allmasquerade)dnl
        FEATURE(accept_unresolvable_domains)dnl
        FEATURE(accept_unqualified_senders)dnl
        FEATURE(no_default_msa)
        DOMAIN(generic)dnl
        FEATURE(access_db)
        define(`confSMTP_LOGIN_MSG', `$j Sendmail $b')
        MAILER(local)dnl
        MAILER(smtp)dnl
        MAILER(uucp)

        9. Use the M4 tool to generate a new pdoa_sendmail.cf file.


        (cd /usr/samples/tcpip/sendmail/m4;m4 cf.m4 /BCU_share/sendmail/pdoa_sendmail.mc > /BCU_share/sendmail/pdoa_sendmail.cf)

        10. Verify that the pdoa_sendmail.cf file was generated. It should be substantially bigger than the pdoa_sendmail.mc file.


        $ ls -la /BCU_share/sendmail
        total 152
        drwxr-xr-x    2 root     system          256 Mar 10 04:33 .
        drwxr-xr-x   43 root     system         4096 Mar 10 04:20 ..
        lrwxrwxrwx    1 root     system           43 Mar 10 04:21 aixsample.mc -> /usr/samples/tcpip/sendmail/cf/aixsample.mc
        -rw-r--r--    1 root     system        65792 Mar 10 05:00 pdoa_sendmail.cf
        -rw-r--r--    1 root     system          986 Mar 10 04:59 pdoa_sendmail.mc

        11. Update the 'DS' line in the pdoa_sendmail.cf file using sed and create a new pdoa_sendmail.cf file in /etc/mail. The example below replaces DS with "DSna.relay.ibm.com" where "na.relay.ibm.com" is the valid value to use inside of IBM. Your organization will have a different SMTP relay site. Consult your networking team on the proper value to use in the command below.


        cd /BCU_share/sendmail
        sed "s|^DS.*|DSna.relay.ibm.com|" pdoa_sendmail.cf  > /etc/mail/pdoa_sendmail.cf
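
        A quick optional check that the relay host was substituted before switching over to the new file:

        # Should print the DS line with your relay host, for example DSna.relay.ibm.com
        grep "^DS" /etc/mail/pdoa_sendmail.cf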

        12. Verify that /etc/sendmail.cf is a link to /etc/mail/sendmail.cf.


        ls -la /etc/sendmail.cf
        lrwxrwxrwx    1 root     system           21 Nov 24 2015  /etc/sendmail.cf -> /etc/mail/sendmail.cf

        13. Copy the current /etc/mail/sendmail.cf to a backup file.


        cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.$(date +%s)

        14. Remove the old sendmail.cf file and create a new link to the newly generated pdoa_sendmail.cf file.



        rm /etc/mail/sendmail.cf
        ln -s /etc/mail/pdoa_sendmail.cf /etc/mail/sendmail.cf

        15. Create the dummy database input files and database files.


        for f in local-host-names relay-domains genericstable mailertable virtusertable domaintable;do echo "" > /etc/mail/${f};makemap hash /etc/mail/${f}.db < /etc/mail/${f};done


         

        16. Create the access file and database file. The pattern 172.23.1 limits sendmail relay access to systems with IP addresses that match it. This is the default for the internal network (IAN) shipped in the appliance.


        echo "Connect:172.23.1        RELAY" > /etc/mail/access
        makemap hash /etc/mail/access.db < /etc/mail/access
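
        Optionally, dump the generated map to confirm the relay rule was stored. This is an illustrative check using makemap's -u (unmap) option, if your makemap build supports it:

        makemap -u hash /etc/mail/access.db
        # Expected output (approximately):
        # Connect:172.23.1        RELAY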

        17. Refresh the sendmail daemon.

        refresh -s sendmail

        18. Check /var/log/syslog.out for a line similar to the following. There should be no other errors in the log; however, if mail was already set up, you may see messages indicating actual sendmail activity.


        Mar 10 05:42:59 flashdancehostname01 mail:info sendmail[5964530]: starting daemon (AIX7.1/8.14.4): SMTP+queueing@00:30:00

        19. Copy the /etc/mail/pdoa_sendmail.cf from the management host to the standby management host and repeat steps 13 through 18.
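
        For example, assuming the standby management host is reachable as flashgordonhostname03 (the host name used in examples later in this document):

        scp /etc/mail/pdoa_sendmail.cf flashgordonhostname03:/etc/mail/pdoa_sendmail.cf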

        20. Proceed to the next section to configure the Storage Enclosure e-mail notifications.

        SECTION 2: Updating the e-mail notification settings for call home on the appliance storage enclosures.

        There are two models for the storage call home that are in use in the field today.

        a. Send mail to root, and use root's .forward file to send to the call home e-mail address.

        b. Send mail to non-root users, one for each storage type in the appliance, and use their .forward files to send to the call home e-mail address.

        These models are a consequence of a sendmail configuration file that did not allow relaying. Using the .forward file in a specific way allowed sendmail on the management hosts to forward the mail. With the sendmail configuration changes above, the appliance can use the call home features of the storage as documented.

        The following steps describe how to set up the e-mail notification features on the enclosures to take advantage of the updated sendmail configuration. The commands use the CLI interface. It is possible, but tedious, to do the same operations through the browser-based interfaces available from the appliance console service level access page. Configuring call home through that method is beyond the scope of this document.

        1. It is important to note the call home e-mail address for the storage types in each of the appliance versions. The following table describes this relationship and will be used later on in the instructions.





         
        Appliance Version | Enclosure Type | For systems in North America, Latin America, South America, or the Caribbean Islands | For systems elsewhere
        V1.0 / V1.1       | V7000          | callhome1@de.ibm.com   | callhome0@de.ibm.com
        V1.1              | Flash900       | flash-sc1@vnet.ibm.com | flash-sc2@vnet.ibm.com

        2. The storage enclosures must have proper e-mail attributes. These are included with the e-mail to call home. Some fields are mandatory and must be filled in for e-mail notifications to be allowed.

        a. Verify that the e-mail settings are set up in the enclosures. This command is run as the root user on the management host.

        Command:

        -----------------


        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} lssystem | grep -i email;done


        Sample output: This will vary based on the size of the appliance.

        -----------------



        **** 172.23.1.181****
        email_reply adminname@my.company.com
        email_contact
        email_contact_primary 5554443333
        email_contact_alternate
        email_contact_location PDOA LAB
        email_contact2
        email_contact2_primary
        email_contact2_alternate
        email_state running
        email_organization IBM
        email_machine_address 11501 Burnet Road
        email_machine_city Austin
        email_machine_state TX
        email_machine_zip 78758
        email_machine_country US
        **** 172.23.1.182****
        email_reply adminname@my.company.com
        email_contact
        email_contact_primary 5554443333
        email_contact_alternate
        email_contact_location PDOA LAB
        email_contact2
        email_contact2_primary
        email_contact2_alternate
        email_state running
        email_organization IBM
        email_machine_address 11501 Burnet Road
        email_machine_city Austin
        email_machine_state TX
        email_machine_zip 78758
        email_machine_country US

        Notes:

        -----------------

        Field                   | Notes
        email_reply             | Must be in the format of a valid e-mail address. Will show up as the return e-mail address.
        email_contact_primary   | Contact phone. May be used by call-home support.
        email_contact_alternate |
        email_contact_location  | Descriptive text about the location of the appliance. For multiple appliances this can include additional text such as dev, test, or prod, which may be important for the customer to understand which appliance actually called home.
        email_state             | The current state of the e-mail service on this storage enclosure: 'running' or 'stopped'.
        email_organization      | Organization where the appliance is kept.
        email_machine_address, email_machine_city, email_machine_state, email_machine_zip, email_machine_country | Address where the appliance is kept.

        b. If the e-mail settings are incorrect, use the following command to modify them for all of the enclosures in the appliance. It is a one-line command run as root on the management host. Replace the field values with values appropriate to your organization.

        Command:

        ------------------


        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask chemail -contact "Admin Name" -location "PDOA LAB" -primary "5554443333" -reply "adminname@my.company.com" -state "TX" -address "11501 Burnet Road" -city "Austin" -zip 78758 -organization IBM -country US';done

         

        Output: Blank from each of the devices unless there are errors. Rerun the command in step 2a to check the values are properly assigned.

        ------------------


        **** 172.23.1.181****
        **** 172.23.1.182****
        **** 172.23.1.183****
        **** 172.23.1.184****
        **** 172.23.1.185****
        **** 172.23.1.186****

        Notes:

        ------------------

        3. Setup the e-mail servers on the enclosures.

        a. To check the e-mail server settings on the storage enclosures run the following.

        Command:

        ------------------


        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailserver;done

        Sample Output:

        ------------------


        **** 172.23.1.181****
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25
        **** 172.23.1.182****
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25
        **** 172.23.1.183****
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25
        **** 172.23.1.184****
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25
        **** 172.23.1.185****
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25
        **** 172.23.1.186****
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25

        Notes:

        ------------------

        The storage enclosures in the appliance only have network access to the appliance internal network (IAN). The settings above show that each enclosure will send alerts to the management (172.23.1.1) and management standby (172.23.1.3) hosts. While these are the default internal IP addresses for the appliance, it is possible for these IP addresses to be different from what is expected.

        b. If there are no e-mail servers set up, add them by using the following commands. These commands assume the IP addresses are 172.23.1.1 and 172.23.1.3. Note that these commands will stop the e-mail service on the enclosures if it was enabled.

        Commands:

        ------------------


        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailserver -ip 172.23.1.1 -port 25';done

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailserver -ip 172.23.1.3 -port 25';done


        Sample Output:

        ------------------


        $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailserver -ip 172.23.1.3 -port 25';done
        **** 172.23.1.181****
        Email Server id [1] successfully created
        **** 172.23.1.182****
        Email Server id [1] successfully created
        **** 172.23.1.183****
        Email Server id [1] successfully created
        **** 172.23.1.184****
        Email Server id [1] successfully created
        **** 172.23.1.185****
        Email Server id [1] successfully created
        **** 172.23.1.186****
        Email Server id [1] successfully created


        Notes:

        ------------------


        Verify the settings by running the command in 3a.

        c. If there are e-mail servers set up but they are incorrect.

        For each enclosure with an invalid e-mail server do the following. The following assumes that there is an e-mail server with id=1 on enclosure 172.23.1.181 and it needs to be removed. It may be necessary to run this scenario multiple times on multiple enclosures, replacing the IP address and the e-mail server id each time.

        Commands:
        ------------------
        ssh -n superuser@172.23.1.181 svcinfo lsemailserver
        ssh -n superuser@172.23.1.181 rmemailserver 1
        ssh -n superuser@172.23.1.181 svcinfo lsemailserver

        Sample Output:
        ------------------
        $ ssh -n superuser@172.23.1.181 svcinfo lsemailserver
        id name         IP_address port
        0  emailserver0 172.23.1.1 25
        1  emailserver1 172.23.1.3 25

        $ ssh -n superuser@172.23.1.181 rmemailserver 1

        $ ssh -n superuser@172.23.1.181 svcinfo lsemailserver
        id name         IP_address port
        0  emailserver0 172.23.1.1 25


        Notes:

        ------------------

        Verify the settings using the command in step 3a. It is possible to adapt the larger grep loop to remove every e-mail server id from every enclosure until none are left on any of the devices.

        Running rmemailserver will stop the e-mail service on any enclosure it is run against.
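
        Along those lines, a sketch that strips every e-mail server from every enclosure; it assumes the -nohdr option of lsemailserver to produce bare ids. Verify with the command in 3a, re-add the correct servers with 3b, and restart the e-mail service (step 5) afterwards.

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do
          echo "**** ${ip}****"
          # List the e-mail server ids on this enclosure and remove each one.
          ssh -n superuser@${ip} svcinfo lsemailserver -nohdr | while read id rest2;do
            ssh -n superuser@${ip} svctask rmemailserver ${id}
          done
        done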

        4. Set up the e-mail users on the enclosures. The following steps only add the support users to the enclosures. Additional external users, such as system administrators, or internal users on the management hosts can also be added. First we want to clean up any existing users.

        a. First check the current e-mail users on the enclosures in the appliance. The e-mail users may be similar to the following, where the target is a user on the management or management standby hosts. The most common target is root, but other options have been used.



        Commands:
        ------------------

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done




        Sample Output:
        ------------------
        **** 172.23.1.181****
        id name       address           user_type error warning info inventory
        0  emailuser0 v7ksupp@127.0.0.1 local     on    on      on   off
        **** 172.23.1.182****
        id name       address           user_type error warning info inventory
        0  emailuser0 f9csupp@127.0.0.1 local     on    on      on   off
        **** 172.23.1.183****
        id name       address           user_type error warning info inventory
        0  emailuser0 v7ksupp@127.0.0.1 local     on    on      on   off
        **** 172.23.1.184****
        id name       address           user_type error warning info inventory
        0  emailuser0 f9csupp@127.0.0.1 local     on    on      on   off
        **** 172.23.1.185****
        id name       address           user_type error warning info inventory
        0  emailuser0 v7ksupp@127.0.0.1 local     on    on      on   off
        **** 172.23.1.186****
        id name       address           user_type error warning info inventory
        0  emailuser0 f9csupp@127.0.0.1 local     on    on      on   off


        Notes:

        ------------------

        V1.0.

        On a version 1.0 environment all enclosures should have the same output where the target is the root user on the management and management standby hosts. The above output shows an alternative model that can be used on a V1.1 environment.

        V1.1.

        On a version 1.1 environment the default method was to use the root user. In some environments two new users were used, one for each type of storage. This slightly improved the call home model, but still relies on the .forward model to bypass sendmail policies.

        All:

        The goal of this new approach is to follow the call home model for each of the storage enclosure types more closely. So it will be important to remove any existing e-mail users in the storage if they don't match the model described in step c.

        Also, if additional users are added to send call home alerts to local addresses, there are some rules that should be followed if sending to users on the management host.

        Using an IP address, using localhost, or omitting the @<host> portion is invalid and will be rejected either by the storage enclosure or by the sendmail.cf file included with the appliance. What does work is the internal host name as listed in /etc/hosts.

        In this example, the management host is flashgordonhostname01 and the management standby host is flashgordonhostname03. For each user there will be two e-mail user entries in the enclosure:

        user@flashgordonhostname01

        user@flashgordonhostname03
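
        For illustration only, creating these two entries on a single enclosure for a hypothetical local user 'dweadmin' could look like the following. Note the 'local' user type; changing e-mail users stops the e-mail service, so restart it afterwards as in step 5.

        ssh -n superuser@172.23.1.181 'svctask mkemailuser -address "dweadmin@flashgordonhostname01" -usertype local -error on -warning on -info off -inventory off'
        ssh -n superuser@172.23.1.181 'svctask mkemailuser -address "dweadmin@flashgordonhostname03" -usertype local -error on -warning on -info off -inventory off'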

        Proceed to step b to learn how to remove email user entries.

        Proceed to step c to learn how to create email user entries.


        b. If there are e-mail users set up but they are incorrect.

        For each enclosure with an invalid e-mail user do the following. The following assumes that there is an e-mail user with id=0 on enclosure 172.23.1.181 and it needs to be removed. It may be necessary to run this scenario multiple times on multiple enclosures, replacing the IP address and the e-mail user id each time.

        Commands:
        ------------------
        ssh -n superuser@172.23.1.181 svcinfo lsemailuser
        ssh -n superuser@172.23.1.181 svctask rmemailuser 0
        ssh -n superuser@172.23.1.181 svcinfo lsemailuser

        Sample Output:
        ------------------
        $ ssh -n superuser@172.23.1.181 svcinfo lsemailuser
        id name       address                          user_type error warning info inventory
        0  emailuser0 dweadmin@flashgordonemailsupport local     on    on      on   off


        $ ssh -n superuser@172.23.1.181 svctask rmemailuser 0

        $ ssh -n superuser@172.23.1.181 svcinfo lsemailuser

        $

        Notes:

        ------------------

        Repeat the steps for each IP address and e-mail user as needed. Changing the e-mail users will stop the e-mail service. Proceed to step 5 to check and restart the e-mail service on the environment.

        c. Creating support e-mail users.

        Consult the table below and find the row that matches the version of the appliance.




         
        Appliance Version | Enclosure Type | For systems in North America, Latin America, South America, or the Caribbean Islands | For systems elsewhere
        V1.0 / V1.1       | V7000          | callhome1@de.ibm.com   | callhome0@de.ibm.com
        V1.1              | Flash900       | flash-sc1@vnet.ibm.com | flash-sc2@vnet.ibm.com

        i. For V1.0 customers there are only V7000 type enclosures. Find the column that matches the appliance location and use the e-mail address in that column.

        To configure the support e-mail user, use the following commands, updating the -address field as appropriate. In the example 'callhome1@de.ibm.com' is used.


        Commands:
        ------------------


        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "callhome1@de.ibm.com" -usertype support -error on -warning off -info off -inventory on';done

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done

         


        Sample Output:
        ------------------

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "callhome1@de.ibm.com" -usertype support -error on -warning off -info off -inventory on';done

        **** 172.23.1.181****
        User, id [0], successfully created
        **** 172.23.1.182****
        User, id [0], successfully created
        **** 172.23.1.183****
        User, id [0], successfully created
        **** 172.23.1.184****
        User, id [0], successfully created
        **** 172.23.1.185****
        User, id [0], successfully created
        **** 172.23.1.186****
        User, id [0], successfully created



        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done
        **** 172.23.1.181****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.182****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.183****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.184****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.185****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.186****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on


        Notes:

        ------------------

        This command can be used to add other users as well. Simply replace the -address and it will create new user records.

        For local users use 'user@<shorthostname>'. Do not use 'user@localhost', 'user@127.0.0.1', or just 'user' as they will not work with this setup.

        ii. For V1.1 customers there are V7000 type enclosures and Flash900 type enclosures. Find the column that matches the appliance location and use the e-mail address in that column.

        1. To configure e-mail, use the following commands, updating the -address field as appropriate. In the example 'callhome1@de.ibm.com' is used for V7000 and 'flash-sc1@vnet.ibm.com' for Flash900.


        Commands:
        ------------------
        # Setup V7000
        grep "SAN_FRAME[0-9]*[13579]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "callhome1@de.ibm.com" -usertype support -error on -warning off -info off -inventory on';done

        # Setup Flash900
        grep "SAN_FRAME[0-9]*[24680]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "flash-sc1@vnet.ibm.com" -usertype support -error on -warning off -info off -inventory on';done


        # Check
        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done


        Sample Output:
        ------------------

        # grep "SAN_FRAME[0-9]*[13579]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "callhome1@de.ibm.com" -usertype support -error on -warning off -info off -inventory on';done
        **** 172.23.1.181****
        User, id [0], successfully created
        **** 172.23.1.183****
        User, id [0], successfully created
        **** 172.23.1.185****
        User, id [0], successfully created



        # grep "SAN_FRAME[0-9]*[24680]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "flash-sc1@vnet.ibm.com" -usertype support -error on -warning off -info off -inventory on';done
        **** 172.23.1.182****
        User, id [0], successfully created
        **** 172.23.1.184****
        User, id [0], successfully created
        **** 172.23.1.186****
        User, id [0], successfully created



        # grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done
        **** 172.23.1.181****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.182****
        id name       address                user_type error warning info inventory
        0  emailuser0 flash-sc1@vnet.ibm.com support   on    off     off  on
        **** 172.23.1.183****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.184****
        id name       address                user_type error warning info inventory
        0  emailuser0 flash-sc1@vnet.ibm.com support   on    off     off  on
        **** 172.23.1.185****
        id name       address              user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
        **** 172.23.1.186****
        id name       address                user_type error warning info inventory
        0  emailuser0 flash-sc1@vnet.ibm.com support   on    off     off  on






        Notes:

        ------------------

        d. Creating local test e-mail users.

        This will create an e-mail user for the root user that is local to the environment. This will be used for testing purposes. These instructions can also be used to set up notifications in general.


        Commands:
        ------------------


        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "root@flashgordonhostname01" -usertype support -error on -warning off -info off -inventory on';done

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done

         


        Sample Output:
        ------------------

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'svctask mkemailuser -address "root@flashgordonhostname01" -usertype support -error on -warning off -info off -inventory on';done

        **** 172.23.1.181****
        User, id [1], successfully created
        **** 172.23.1.182****
        User, id [1], successfully created
        **** 172.23.1.183****
        User, id [1], successfully created
        **** 172.23.1.184****
        User, id [1], successfully created
        **** 172.23.1.185****
        User, id [1], successfully created
        **** 172.23.1.186****
        User, id [1], successfully created



        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done
        **** 172.23.1.181****
        id name       address                    user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com       support   on    off     off  on
        1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
        **** 172.23.1.182****
        id name       address                    user_type error warning info inventory
        0  emailuser0 flash-sc1@vnet.ibm.com     support   on    off     off  on
        1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
        **** 172.23.1.183****
        id name       address                    user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com       support   on    off     off  on
        1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
        **** 172.23.1.184****
        id name       address                    user_type error warning info inventory
        0  emailuser0 flash-sc1@vnet.ibm.com     support   on    off     off  on
        1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
        **** 172.23.1.185****
        id name       address                    user_type error warning info inventory
        0  emailuser0 callhome1@de.ibm.com       support   on    off     off  on
        1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
        **** 172.23.1.186****
        id name       address                    user_type error warning info inventory
        0  emailuser0 flash-sc1@vnet.ibm.com     support   on    off     off  on
        1  emailuser1 root@flashgordonhostname01 support   on    off     off  on

        Notes:

        ------------------

        The sample output shows a setup for a V1.1 customer with a mixture of V7000 and Flash900 storage. The important thing is that the root@<managementhost> user is listed as id #1 which should be the case for all scenarios.

        Proceed to starting the e-mail service.

        5. Starting the e-mail service. This step checks the service status and starts the service if it is stopped. Changes to the e-mail users will always disable the service and require it to be started again.


        Commands:
        ------------------
        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} lssystem | grep email_state;done

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svctask startemail;done

        grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} lssystem | grep email_state;done



        Sample Output:
        ------------------
        $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} lssystem | grep email_state;done
        **** 172.23.1.181****
        email_state stopped
        **** 172.23.1.182****
        email_state stopped
        **** 172.23.1.183****
        email_state stopped
        **** 172.23.1.184****
        email_state stopped
        **** 172.23.1.185****
        email_state stopped
        **** 172.23.1.186****
        email_state stopped

        $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svctask startemail;done
        **** 172.23.1.181****
        **** 172.23.1.182****
        **** 172.23.1.183****
        **** 172.23.1.184****
        **** 172.23.1.185****
        **** 172.23.1.186****

        $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} lssystem | grep email_state;done
        **** 172.23.1.181****
        email_state running
        **** 172.23.1.182****
        email_state running
        **** 172.23.1.183****
        email_state running
        **** 172.23.1.184****
        email_state running
        **** 172.23.1.185****
        email_state running
        **** 172.23.1.186****
        email_state running


        Notes:

        ------------------

        SECTION 3: If necessary, update the LOCATION field in the storage enclosure to include the solution variables.

        1. Determine if your enclosure includes the solution parameters. See Section 4.

        2. If not, determine the solution information. The following script queries the console database, which should contain the solution serial numbers, and calculates the appropriate MTM for each enclosure. NOTE: This script will not work on PDOA environments updated to V1.0.0.6 or V1.1.0.2, as the console database is removed by those fix packs. For those systems it is necessary to generate the storagecallhome.txt file by hand. The solution serial numbers are located in the /pschome/config/xcluster.cfg file.

        a. Retrieve the file StorageCallHome.db2 from Developer Works. You will need to agree to the Developer Works terms of use but will not need an IBM ID to download this file. Copy this file to the management host in the directory ~db2psc/. Modify the file permissions so that the user db2psc owns the file.

        b. Login to the management host as the 'db2psc' user.

        c. db2 connect to pscdb

        d. db2 -xtf StorageCallHome.db2 | cut -d"|" -f 11,12 > storagecallhome.txt

        EXAMPLES:

        ============

        PDOA V1.1:


          $ db2 -xtf StorageCallHome.db2 | cut -d"|" -f 11,12
          ip=172.23.1.181|location=@@@SMT:8280A03@SSN:Rack1SolutionSerial@SG:ISASV7K@@@
          ip=172.23.1.182|location=@@@SMT:8280A03@SSN:Rack1SolutionSerial@SG:ISASTMS@@@
          ip=172.23.1.183|location=@@@SMT:8280A03@SSN:Rack2SolutionSerial@SG:ISASV7K@@@
          ip=172.23.1.184|location=@@@SMT:8280A03@SSN:Rack1SolutionSerial@SG:ISASTMS@@@
          ip=172.23.1.185|location=@@@SMT:8280A03@SSN:Rack2SolutionSerial@SG:ISASV7K@@@
          ip=172.23.1.186|location=@@@SMT:8280A03@SSN:Rack2SolutionSerial@SG:ISASTMS@@@

          PDOA V1.0:


          $ db2 -xtf StorageCallHome.db2 | cut -d"|" -f 11,12
          ip=172.23.1.181|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.182|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.183|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.184|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.185|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@

          3. For each IP address, update the location field with the text from the location= portion of the output.

          # Login as root on the management host. Verify file exists.

          $ cat ~db2psc/storagecallhome.txt
          ip=172.23.1.181|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.182|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.183|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.184|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          ip=172.23.1.185|location=@@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@

          # Loop the text and change the location field.
          cat ~db2psc/storagecallhome.txt | grep "ip" | while read line;do ip=$(echo "$line" | cut -d"|" -f1 | cut -d"=" -f2);location=$(echo "$line" | cut -d"|" -f2 | cut -d"=" -f2);echo "${ip} ... ${location}";ssh -n superuser@${ip} "chemail -location='${location}'";done

          # Verify the setting has been updated.


          $ cat ~db2psc/storagecallhome.txt | grep "ip" | while read line;do ip=$(echo "$line" | cut -d"|" -f1 | cut -d"=" -f2);echo "${ip}";ssh -n superuser@${ip} "lssystem" | grep -i location;done
          172.23.1.181
          location local
          total_overallocation 108
          email_contact_location @@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          172.23.1.182
          location local
          total_overallocation 97
          email_contact_location @@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          172.23.1.183
          location local
          total_overallocation 97
          email_contact_location @@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          172.23.1.184
          location local
          total_overallocation 97
          email_contact_location @@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@
          172.23.1.185
          location local
          total_overallocation 97
          email_contact_location @@@SMT:8279A03@SSN:PARADISE@SG:ISASV7K@@@

          SECTION 4: Testing the call home e-mail.

          1. At this point the e-mail servers, e-mail users, contact information, and e-mail service are all set up and complete.

          2. Verify that you can su to the users that own the local e-mail addresses if local users are e-mail targets.

          3. Optional: To track sendmail messages, log in to each host in a separate session and run the following command. It will allow you to watch sendmail process the transactions from the storage enclosures.


          tail -f /var/log/syslog.out | grep sendmail

          4. Issue a test e-mail from the storage enclosures. This command will send a test e-mail to the user with id 1. This was the test user that we created after creating the support user. Before running this command, check whether the root user has a .forward file; if it exists, move it to .forward.back for this test.


          Commands:
          ------------------

          grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'testemail 1';done

          Sample Output:
          ------------------
          **** 172.23.1.181****
          **** 172.23.1.182****
          **** 172.23.1.183****
          **** 172.23.1.184****
          **** 172.23.1.185****
          **** 172.23.1.186****


          Notes:

          ------------------

          If you see errors they can originate from the storage enclosure or they can be messages from the sendmail daemon.

          You may see the following message.


          CMMVC6272E Sendmail error EX_NOUSER. The sendmail command could not recognize a specified user.

          5. Verify that the e-mails were sent to the local address, and verify that the contents of the e-mail include the solution information.

          As root:


          Commands:
          ------------------
          mail


          Sample Output:
          ------------------
          mail
          ...
          >N 45 adminname@my.com  Thu Mar  9 20:38  36/1330 "2076 Email Test (flashgordon"
           N 46 adminname@my.com  Thu Mar  9 20:38  36/1342 "9840 Email Test (flashgordon"
           N 47 adminname@my.com  Thu Mar  9 20:38  36/1330 "2076 Email Test (flashgordon"
           N 48 adminname@my.com  Thu Mar  9 20:39  36/1342 "9840 Email Test (flashgordon"
           N 49 adminname@my.com  Thu Mar  9 20:39  36/1330 "2076 Email Test (flashgordon"
           N 50 adminname@my.com  Thu Mar  9 20:39  36/1342 "9840 Email Test (flashgordon"


          ? 45
          Message 45:
          From adminname@my.company.com Thu Mar  9 20:38:41 2017
          Date: Thu, 9 Mar 2017 20:38:36 -0500
          From: adminname@my.company.com
          To: root@flashgordonhostname01.torolab.ibm.com
          Subject: 2076 Email Test (flashgordonhostnameV7_00)

          # Organization = IBM
          # Machine Address = 11510 Burnet Road
          # Machine City = Austin
          # Machine State = TX
          # Machine Zip = 78758
          # Machine Country = US
          # Contact Name = Admin Name
          # Alternate Contact Name = N/A
          # Contact Phone Number = 5554443333
          # Alternate Contact Phone Number = N/A
          # Offshift Phone Number = N/A
          # Alternate Offshift Phone Number = N/A
          # Contact Email = adminname@my.company.com
          # Machine Location = PDOA LAB
          # Record Type = 4
          # Machine Type = 2076524
          # Serial Number = 78207FC
          # Machine Part Number =
          # System Name = flashgordonhostnameV7_00

          This is a test email. Please notify the contact, listed above, that you have received this email.


          Notes:

          ------------------

          a. Verify that each enclosure successfully sent e-mail.

          b. For each enclosure, verify that one of the following is true.

          i. Machine Type and Serial Number match the Solution MTM and Solution Serial number.

          ii. The Machine Location includes the pattern: @@@SMT:<SOLUTIONMTM>@SSN:<SOLUTION SERIAL NUMBER>@SG:<ISASV7K|ISASTMS>@@@

          c. If 'i' is true, then the XML file was correctly installed during deployment. If 'i' is not true, then check 'ii'. If both are not true, then follow Section 3 to update the Location field in the enclosure.

          6. Issue a test email for the support users. These should go through the relay and can be verified by looking at the syslog messages.


          Commands:
          ------------------

          grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} 'testemail 0';done

          Sample Output:
          ------------------
          **** 172.23.1.181****
          **** 172.23.1.182****
          **** 172.23.1.183****
          **** 172.23.1.184****
          **** 172.23.1.185****
          **** 172.23.1.186****


          Notes:

          ------------------

          If you see errors they can originate from the storage enclosure or they can be messages from the sendmail daemon.

          You may see the following message.


          CMMVC6272E Sendmail error EX_NOUSER. The sendmail command could not recognize a specified user.

          In the syslog you may see the following. This indicates that sendmail is able to relay, but the relay server is not accepting the connection. Verify that the relay server specified by the DS variable in the pdoa_sendmail.cf file is a valid SMTP relay server and that the server is online. Note that each pair of messages corresponds to an attempt from one of the enclosures.


          Mar  9 20:42:46 flashgordonhostname01 mail:info sendmail[5636838]: v2A1gkUQ5636838: from=<adminname@my.company.com>, size=1013, class=0, nrcpts=1, msgid=4294967295.20170309204241@flashgordonhostnameV7_00, proto=ESMTP, daemon=MTA, relay=flashgordonhostnameV7_00 [172.23.1.181]
          Mar  9 20:42:46 flashgordonhostname01 mail:info sendmail[4653278]: v2A1gkUQ5636838: to=<callhome1@de.ibm.com>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=121013, relay=b03ledav005.gho.boulder.ibm.com. [9.17.130.236], dsn=4.0.0, stat=Deferred: Connection refused by b03ledav005.gho.boulder.ibm.com.

          Mar  9 20:42:53 flashgordonhostname01 mail:info sendmail[9371968]: v2A1gqrb9371968: from=<adminname@my.company.com>, size=1026, class=0, nrcpts=1, msgid=4294967295.20170309204247@flashgordonhostnameFlash_00, proto=ESMTP, daemon=MTA, relay=flashgordonhostnameFlash_00 [172.23.1.182]
          Mar  9 20:42:53 flashgordonhostname01 mail:info sendmail[7602396]: v2A1gqrb9371968: to=<flash-sc1@vnet.ibm.com>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=121026, relay=b01ledav004.gho.pok.ibm.com. [9.57.199.109], dsn=4.0.0, stat=Deferred: Connection refused by b01ledav004.gho.pok.ibm.com.

          Mar  9 20:42:59 flashgordonhostname01 mail:info sendmail[3146528]: v2A1gxwI3146528: from=<adminname@my.company.com>, size=1013, class=0, nrcpts=1, msgid=4294967295.20170309204254@flashgordonhostnameV7_01, proto=ESMTP, daemon=MTA, relay=flashgordonhostnameV7_01 [172.23.1.183]
          Mar  9 20:42:59 flashgordonhostname01 mail:info sendmail[5309214]: v2A1gxwI3146528: to=<callhome1@de.ibm.com>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=121013, relay=b01ledav004.gho.pok.ibm.com. [9.57.199.109], dsn=4.0.0, stat=Deferred: Connection refused by b01ledav004.gho.pok.ibm.com.

          Mar  9 20:43:06 flashgordonhostname01 mail:info sendmail[5309216]: v2A1h5NO5309216: from=<adminname@my.company.com>, size=1026, class=0, nrcpts=1, msgid=4294967295.20170309204300@flashgordonhostnameFlash_01, proto=ESMTP, daemon=MTA, relay=flashgordonhostnameFlash_01 [172.23.1.184]
          Mar  9 20:43:06 flashgordonhostname01 mail:info sendmail[8520264]: v2A1h5NO5309216: to=<flash-sc1@vnet.ibm.com>, delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=121026, relay=b03ledav004.gho.boulder.ibm.com. [9.17.130.235], dsn=4.0.0, stat=Deferred: Connection refused by b03ledav004.gho.boulder.ibm.com.

          Mar  9 20:43:12 flashgordonhostname01 mail:info sendmail[4194524]: v2A1hC2R4194524: from=<adminname@my.company.com>, size=1013, class=0, nrcpts=1, msgid=4294967295.20170309204307@flashgordonhostnameV7_02, proto=ESMTP, daemon=MTA, relay=flashgordonhostnameV7_02 [172.23.1.185]
          Mar  9 20:43:12 flashgordonhostname01 mail:info sendmail[7733700]: v2A1hC2R4194524: to=<callhome1@de.ibm.com>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=121013, relay=b01ledav006.gho.pok.ibm.com. [9.57.199.111], dsn=4.0.0, stat=Deferred: Connection refused by b01ledav006.gho.pok.ibm.com.

          Mar  9 20:43:18 flashgordonhostname01 mail:info sendmail[5767852]: v2A1hISG5767852: from=<adminname@my.company.com>, size=1026, class=0, nrcpts=1, msgid=4294967295.20170309204313@flashgordonhostnameFlash_02, proto=ESMTP, daemon=MTA, relay=flashgordonhostnameFlash_02 [172.23.1.186]
          Mar  9 20:43:18 flashgordonhostname01 mail:info sendmail[6751174]: v2A1hISG5767852: to=<flash-sc1@vnet.ibm.com>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=121026, relay=b01ledav004.gho.pok.ibm.com. [9.57.199.109], dsn=4.0.0, stat=Deferred: Connection refused by b01ledav004.gho.pok.ibm.com.
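
          Once the relay server problem is corrected, the deferred messages can be inspected and retried from the management host; a quick, illustrative check:

          # List messages still waiting in the sendmail queue.
          mailq
          # Attempt immediate delivery of the queued messages, verbosely.
          sendmail -q -v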

           

          7. Verifying that root is cleaned up.

          As part of the original call home setup, a .forward file was placed in the root home directory. Unless it is used for other purposes, this file should be removed on the management and management standby hosts.
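
          A minimal sketch of that cleanup, run as root on each of the two hosts and assuming the file serves no other purpose:

          # Keep a timestamped copy rather than deleting the file outright.
          [ -f ~root/.forward ] && mv ~root/.forward ~root/.forward.$(date +%s)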

          8. Cleaning up.

          If desired, remove the test e-mail user.


          Commands:
          ------------------
          # Check the current users.
          grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done

          # Remove the user with id 1. This was the test user we created.
           grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svctask rmemailuser 1;done

          # Verify the user is removed.
          grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done

          # Restart the mail services on the enclosures as the change will stop them.
           grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svctask startemail;done


          Sample Output:
          ------------------
          $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done
          **** 172.23.1.181****
          id name       address                    user_type error warning info inventory
          0  emailuser0 callhome1@de.ibm.com       support   on    off     off  on
          1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
          **** 172.23.1.182****
          id name       address                    user_type error warning info inventory
          0  emailuser0 flash-sc1@vnet.ibm.com     support   on    off     off  on
          1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
          **** 172.23.1.183****
          id name       address                    user_type error warning info inventory
          0  emailuser0 callhome1@de.ibm.com       support   on    off     off  on
          1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
          **** 172.23.1.184****
          id name       address                    user_type error warning info inventory
          0  emailuser0 flash-sc1@vnet.ibm.com     support   on    off     off  on
          1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
          **** 172.23.1.185****
          id name       address                    user_type error warning info inventory
          0  emailuser0 callhome1@de.ibm.com       support   on    off     off  on
          1  emailuser1 root@flashgordonhostname01 support   on    off     off  on
          **** 172.23.1.186****
          id name       address                    user_type error warning info inventory
          0  emailuser0 flash-sc1@vnet.ibm.com     support   on    off     off  on
          1  emailuser1 root@flashgordonhostname01 support   on    off     off  on



          $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svctask rmemailuser 1;done
          **** 172.23.1.181****
          **** 172.23.1.182****
          **** 172.23.1.183****
          **** 172.23.1.184****
          **** 172.23.1.185****
          **** 172.23.1.186****

          (0) root @ flashgordonhostname01: 7.1.0.0: /etc/mail
          $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svcinfo lsemailuser;done
          **** 172.23.1.181****
          id name       address              user_type error warning info inventory
          0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
          **** 172.23.1.182****
          id name       address                user_type error warning info inventory
          0  emailuser0 flash-sc1@vnet.ibm.com support   on    off     off  on
          **** 172.23.1.183****
          id name       address              user_type error warning info inventory
          0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
          **** 172.23.1.184****
          id name       address                user_type error warning info inventory
          0  emailuser0 flash-sc1@vnet.ibm.com support   on    off     off  on
          **** 172.23.1.185****
          id name       address              user_type error warning info inventory
          0  emailuser0 callhome1@de.ibm.com support   on    off     off  on
          **** 172.23.1.186****
          id name       address                user_type error warning info inventory
          0  emailuser0 flash-sc1@vnet.ibm.com support   on    off     off  on

          $ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo "**** ${ip}****";ssh -n superuser@${ip} svctask startemail;done
          **** 172.23.1.181****
          **** 172.23.1.182****
          **** 172.23.1.183****
          **** 172.23.1.184****
          **** 172.23.1.185****
          **** 172.23.1.186****






Related Information

Rename this file to StorageCallHome.db2.


Document Information

Modified date:
13 December 2019

UID

swg22000050