Upgrading the HA Environment

To upgrade an IBM Aspera Shares HA deployment, you must upgrade each Shares node individually and then reconfigure them to run in an HA environment.

Warning:

Review the following prior to performing any upgrade:

  1. Perform a full environment backup and verify that the backup is successful. If the upgrade fails, the only reliable short-term fix is to roll back the environment using this backup.
  2. Test the upgrade in a test environment comparable to the production environment.
  3. If the test environment upgrade is successful, upgrade the production environment, but do not bring the production environment back online yet.
  4. Prior to bringing the production environment back online, the customer must test the application to determine whether an immediate rollback is needed. Otherwise, customers risk losing all data generated between the upgrade and the rollback.
Warning:

The standard, non-HA upgrade does not account for the many variations in customer network configurations needed for HA installation. The non-HA upgrade may alter configuration settings required for ACM-based Aspera HA.

Every upgrade of an ACM-based HA installation should include, after the upgrade, a manual re-check of the entire HA configuration by a deployment engineer who is knowledgeable about Aspera ACM-based HA and about IP networking. The deployment engineer must understand and be able to configure the customer's particular network configuration, including load balancers, firewalls, and so on.

Stopping the Cronjobs for Upgrade

Before performing the upgrade, you must stop the Shares and MySQL services.
On both nodes, stop the cronjobs by commenting them out:
# crontab -e
# * * * * * /opt/aspera/acm/bin/acm 10.0.71.21 20 > /dev/null 2>&1
# 30 3 * * * /opt/aspera/acm/bin/acmtl -b > /dev/null 2>&1
# 45 3 * * 7 echo -n "" > /opt/aspera/common/asctl/log/asctl.log > /dev/null 2>&1
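If you prefer not to edit the crontab interactively, the same change can be made by filtering the output of `crontab -l` back into `crontab -`. This is a sketch only: the `/opt/aspera` pattern used to identify the ACM entries is an assumption and should be adjusted to match your actual crontab.

```shell
# Sketch: comment out the ACM cronjobs without opening an editor.
# comment_jobs reads crontab lines on stdin and prefixes uncommented
# /opt/aspera lines with '#'. The pattern is an assumption; adjust it
# to match your entries.
comment_jobs() {
    sed '\%/opt/aspera% s/^[^#]/#&/'
}

# On both nodes (commented out here because it modifies the live crontab):
# crontab -l | comment_jobs | crontab -
```

Lines that are already commented, and lines that do not match the pattern, pass through unchanged, so the filter is safe to run more than once.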

Back Up Shares on Both Nodes

  1. Back up Shares files on both nodes:
    # /opt/aspera/shares/u/setup/bin/backup /backup_dir 
    Note: The rake task runs as an unprivileged user. Ensure the destination directory is writable by all users, such as /tmp.
    For example:
    # /opt/aspera/shares/u/setup/bin/backup /tmp
    
    Creating backup directory /tmp/20130627025459 ...
    Checking status of aspera-shares ...
    Status is running
    mysqld is alive
    Backing up the Shares database and config files ...
    Backing up the SSL certificates ...
    Done
  2. Make a note of the ID of the created backup directory for future use. In the above example: 20130627025459.
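Rather than noting the backup ID by hand, it can be captured from the backup tool's output. This is a sketch only, based on the `Creating backup directory /tmp/<ID> ...` line shown in the sample output above; the `backup_id` helper name is hypothetical.

```shell
# Sketch: extract the backup directory ID from the backup tool's output.
# backup_id reads the tool's output on stdin, finds the
# "Creating backup directory /tmp/<ID> ..." line, and prints the final
# path component (the ID).
backup_id() {
    awk '/Creating backup directory/ { n = split($4, p, "/"); print p[n] }'
}

# Usage (commented out here because it runs the real backup):
# BACKUP_ID=$(/opt/aspera/shares/u/setup/bin/backup /tmp | backup_id)
# echo "$BACKUP_ID"
```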

Upgrade Shares on the Active Node

  1. On the active node, disable ACM.
    # /opt/aspera/acm/bin/acmctl -D
    
    ACM is disabled globally
  2. Check the node status.
    # /opt/aspera/acm/bin/acmctl -i
    Checking current ACM status...
    
    Aspera Cluster Manager status
    -----------------------------
    Local hostname:         hashares2
    Active node:            hashares2 (me)
    Status of this node:    active
    Status file:            current
    Disabled globally:      yes
    Disabled on this node:  no
    
    Database configuration file
    ---------------------------
    Database host:        10.0.115.102
    
    Shares active/active services status
    ------------------------------------
    nginx:      running
    crond:      running
    
    
    Shares active/passive services status
    -------------------------------------
    mysqld:                      running
    shares-background-default-0: running
    shares-background-nodes-0:   running
    shares-background-users-0:   running
    shares-background-users-1:   running
    shares-background-users-2:   running
  3. Install the new Shares package.
    Run the following command as root, where version is the package version:
    # rpm -Uvh aspera-shares-version.rpm
    The following is an example of the output generated:
      Preparing...                ########################################### [100%]
    
      Switching to the down runlevel ...
      runsvchdir: down: now current.
      Switched runlevel
      
      Checking status of aspera-shares ...
      Status is running
      Stopping aspera-shares ...
      Stopped
      
         1:aspera-shares          ########################################### [100%]
      
      To complete the upgrade, please run this script as the root user:
      
          [root]$ /opt/aspera/shares/u/setup/bin/upgrade
  4. Run the upgrade script.
    # /opt/aspera/shares/u/setup/bin/upgrade 
    The following is an example of the output generated during the upgrade:
      Starting aspera-shares ...
      Started
      Waiting for MySQL server to answer
      mysqld is alive
      Migrating the Shares database ...
      Initializing the Shares database ...
      Clearing background jobs ...
      Migrating the stats collector database ...
      Done
  5. Stop all Shares services.
    # service shares stop
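When scripting these checks, the role reported by acmctl -i can be extracted rather than read by eye. This is a sketch only, assuming the output format shown in step 2 (`Status of this node:    active`); the `node_role` helper name is hypothetical.

```shell
# Sketch: extract a node's role ("active" or "passive") from acmctl -i
# output, so a script can assert which node is active before proceeding.
# node_role reads the status output on stdin, finds the
# "Status of this node:" line, and prints the value with spaces stripped.
node_role() {
    awk -F: '/Status of this node/ { gsub(/ /, "", $2); print $2 }'
}

# Usage (commented out here because it queries the live cluster):
# role=$(/opt/aspera/acm/bin/acmctl -i | node_role)
# [ "$role" = "active" ] || echo "this node is not active"
```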

Manually Fail Over to the Passive Node and Upgrade Shares

  1. On the passive node, enable ACM locally.
    # /opt/aspera/acm/bin/acmctl -e
    
    ACM is enabled locally
  2. Check the node status.
    # /opt/aspera/acm/bin/acmctl -i
    Checking current ACM status...
    
    Aspera Cluster Manager status
    -----------------------------
    Local hostname:         hashares1
    Active node:            hashares1 (me)
    Status of this node:    active
    Status file:            current
    Disabled globally:      no
    Disabled on this node:  no
    
    Database configuration file
    ---------------------------
    Database host:        10.0.115.102

    Shares active/active services status
    ------------------------------------
    nginx:      running
    crond:      running
    
    
    Shares active/passive services status
    -------------------------------------
    mysqld:                      running
    shares-background-default-0: running
    shares-background-nodes-0:   running
    shares-background-users-0:   running
    shares-background-users-1:   running
    shares-background-users-2:   running
    

Stop the services and perform the Shares upgrade on this node.

  1. Disable ACM locally.
    # /opt/aspera/acm/bin/acmctl -d
     
    ACM is disabled locally
  2. Stop all Shares services.
    # service shares stop
  3. Install the new Shares package.
    Run the following command as root, where version is the package version:
    # rpm -Uvh aspera-shares-version.rpm
    The following is an example of the output generated:
      Preparing...                ########################################### [100%]
    
      Switching to the down runlevel ...
      runsvchdir: down: now current.
      Switched runlevel
      
      Checking status of aspera-shares ...
      Status is running
      Stopping aspera-shares ...
      Stopped
      
         1:aspera-shares          ########################################### [100%]
      
      To complete the upgrade, please run this script as the root user:
      
          [root]$ /opt/aspera/shares/u/setup/bin/upgrade
  4. Run the upgrade script.
    # /opt/aspera/shares/u/setup/bin/upgrade 
    The following is an example of the output generated during the upgrade:
      Starting aspera-shares ...
      Started
      Waiting for MySQL server to answer
      mysqld is alive
      Migrating the Shares database ...
      Initializing the Shares database ...
      Clearing background jobs ...
      Migrating the stats collector database ...
      Done
  5. Enable ACM locally.
    # /opt/aspera/acm/bin/acmctl -e
    
    ACM is enabled locally
  6. Check the status of both nodes to make sure one is active and one is passive.
    # /opt/aspera/acm/bin/acmctl -i
    Aspera Cluster Manager status
    -----------------------------
    Local hostname:         hashares1
    Active node:            hashares1 (me)
    Status of this node:    active
    Status file:            current
    Disabled globally:      no
    Disabled on this node:  no
    
    Database configuration file
    ---------------------------
    Database host:        10.0.115.102
    
    Shares active/active services status
    ------------------------------------
    nginx:      running
    crond:      running
    
    
    Shares active/passive services status
    -------------------------------------
    mysqld:                      running
    shares-background-default-0: running
    shares-background-nodes-0:   running
    shares-background-users-0:   running
    shares-background-users-1:   running
    shares-background-users-2:   running
    
    # /opt/aspera/acm/bin/acmctl -i
    Checking current ACM status...
    
    Aspera Cluster Manager status
    -----------------------------
    Local hostname:         hashares2
    Active node:            hashares2 (me)
    Status of this node:    passive
    Status file:            current
    Disabled globally:      no
    Disabled on this node:  no
    
    Database configuration file
    ---------------------------
    Database host:        10.0.115.102
    
    Shares active/active services status
    ------------------------------------
    nginx:      running
    crond:      running
    
    
    Shares active/passive services status
    -------------------------------------
    mysqld:                      not running
    shares-background-default-0: not running
    shares-background-nodes-0:   not running
    shares-background-users-0:   not running
    shares-background-users-1:   not running
    shares-background-users-2:   not running
  7. Restart the cronjobs on both nodes by uncommenting them.
    # crontab -e
    * * * * * /opt/aspera/acm/bin/acm 10.0.71.21 20 > /dev/null 2>&1
    30 3 * * * /opt/aspera/acm/bin/acmtl -b > /dev/null 2>&1
    45 3 * * 7 echo -n "" > /opt/aspera/common/asctl/log/asctl.log > /dev/null 2>&1
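As with the commenting step, the cronjobs can be re-enabled without an editor by filtering `crontab -l` back into `crontab -`. Again a sketch only; the `/opt/aspera` pattern is an assumption and should be adjusted to match your actual entries.

```shell
# Sketch: re-enable the ACM cronjobs by stripping the leading '#' from
# the commented /opt/aspera entries. Mirrors the earlier commenting step;
# the pattern is an assumption, adjust it to match your crontab.
uncomment_jobs() {
    sed '\%/opt/aspera% s/^#//'
}

# On both nodes (commented out here because it modifies the live crontab):
# crontab -l | uncomment_jobs | crontab -
```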