Patch 11

This topic describes the enhancements and fixes in patch 11 and provides the procedures for installing it.

Patch details for wsl-v1231-x86-patch-11

This patch includes the following enhancements and fixes:
Setting time period for inactive sessions
Admins can set the time period for which a session can be inactive before a user is logged out.
Notebooks and Environments (Jupyter 3.5 and GPU only)
Data scientists can reconnect to their running Jupyter notebook and view the progress and cell output after navigating away from the running notebook or after being signed out due to inactivity.

Prerequisites

IBM Watson Studio Local 1.2.3.1 or any 1.2.3.1 patch on x86. To download patch 11, go to Fix Central and select wsl_app_patch_v1231_11_v1.0.0.

Patch files

The patch contains the following files:
Note: The platform patch was available as of Patch 10. Install the platform patch only if you didn't install patch 10.
  • Platform patch: patch_x86_64_CVE_2019_1002100_v1.0.0.tar
  • Application patch: wsl_app_patch_v1231_11_v1.0.0

Pre-installation

The installation requires a quiet period to patch the Watson Studio Local cluster.

  1. Confirm that all jobs are stopped and that jobs aren't scheduled during the patch installation.
  2. Stop all running environments by using the following commands:
    kubectl delete deploy -n dsx -l type=jupyter-gpu-py35
    kubectl delete deploy -n dsx -l type=jupyter-py35
    kubectl delete svc -n dsx -l type=jupyter-gpu-py35
    kubectl delete svc -n dsx -l type=jupyter-py35
  3. Delete your customized images from Admin Console > Image Management (recommended). After you apply the patch, the Image Management page shows only the new base images. If you do not delete existing customized images that are based on old base images, you can still use them from the environment page after installing the patch, but you cannot delete them from the Image Management page. To manually delete the images after installing the patch, follow the steps in Manually delete environment definitions under Post-installation.
  4. Ensure that all the nodes of the cluster are running before you install this patch, and that the kubelet and docker services are running on all nodes (a combined verification sketch follows this list).
  5. If the cluster is using Gluster, ensure that the gluster file system is clean before installing this patch by running the following command:
    gluster volume status sysibm-adm-ibmdp-assistant-pv detail | grep Online | grep ": N" | wc -l

    If the resulting count is larger than 0, then one or more bricks for the volume are not healthy and must be fixed before continuing to install the patch.

  6. Set up NetworkManager. Run systemctl status NetworkManager to check whether the NetworkManager service is running. If the service is not running, you can skip this step. If the service is running, perform the following steps on each node of the cluster.
    1. Create a file named /etc/NetworkManager/conf.d/calico.conf with the following content:
      [keyfile]
      unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:flannel.*
    2. Run systemctl restart NetworkManager to restart the NetworkManager service.
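
Before continuing, you can run a combined check of the pre-installation state. The following is a minimal sketch, not part of the official patch procedure: it verifies that no Jupyter environment deployments or services remain (step 2), that all nodes are Ready and the kubelet and docker services are active (step 4), and that no Gluster bricks are offline (step 5). The label selectors and volume name are taken from the steps above; adapt the check to your cluster.

# Jupyter environments: both commands should report that no resources are found
kubectl get deploy,svc -n dsx -l type=jupyter-py35
kubectl get deploy,svc -n dsx -l type=jupyter-gpu-py35
# Nodes: every node should show the Ready status
kubectl get nodes
# kubelet and docker: run on each node; both services should report active
systemctl is-active kubelet docker
# Gluster only: the resulting count must be 0
gluster volume status sysibm-adm-ibmdp-assistant-pv detail | grep Online | grep ": N" | wc -l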

Installing the patch for OpenShift clusters

To install the patch for OpenShift clusters, contact your IBM representative.

Installing the patch for non-OpenShift clusters

There are two installers for this patch: platform and application.

To install the platform patch
Note:
  • This patch cannot be rolled back.
  • Install the platform patch only if you didn't install patch 10.
  1. Download the patch tar file patch_x86_64_CVE_2019_1002100_v1.0.0.tar. The preferred location is the install path name from /wdp/config, such as /ibm.
  2. Log in as the root or system administrator who has read/write permissions in the install directory. This script runs the remote scripts by using SSH.
  3. Use tar to extract the patch scripts. The extraction creates a new directory in the install directory and places the patch files there.
    tar xvf patch_x86_64_CVE_2019_1002100_v1.0.0.tar 
  4. Change to the patch directory and run the patch_master.sh script by using the following command:
    cd <install_dir>/patch_CVE_2019_1002100
    ./patch_master.sh
    If you have sudo privileges for installing the patch, ensure that the sudo user is created on all nodes. Log in as <sudo_user>, and run the patch_master.sh script by using the following command:
    cd <install_dir>/patch_CVE_2019_1002100
    sudo ./patch_master.sh

    Optionally, you can create an SSH key pair for this user in the ~/.ssh directory to use instead of a user password (see the key setup sketch after these steps).

    To get a list of all available options and examples of usage, run
    cd <install_dir>/patch_CVE_2019_1002100
    ./patch_master.sh --help
  5. Monitor the progress of the installation. If any issues are encountered, check the log files. The remote nodes keep log files in the <install_dir>/patch_CVE_2019_1002100 directory.
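
If you use key-based SSH instead of a password, the following is a minimal sketch that uses standard OpenSSH tooling. It is not part of the patch scripts, and the user and node placeholders are examples to adapt to your cluster.

# Generate a key pair for the patch user (run as that user on the node where you start the install)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# Copy the public key to every node so that patch_master.sh can connect over SSH
ssh-copy-id <sudo_user>@<node-ip>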

To install the application patch

  1. Download the patch tar file wsl_app_patch_v1231_11_v1.0.0 to the Watson Studio node. The preferred location is the install path name from /wdp/config, such as /ibm.
  2. Log in as the root or system administrator who has read/write permissions in the install directory. This script runs the remote scripts by using SSH.
  3. Use tar to extract the patch scripts. The extraction creates a new directory in the install directory and places the patch files there.
    tar xvf wsl_app_patch_v1231_11_v1.0.0.tar
  4. Change to the patch directory and run the patch_master.sh script by using the following command:
    cd <install_dir>/wsl_app_patch_v1231
    ./patch_master.sh
    If you have sudo privileges for installing the patch, ensure that the sudo user is created on all nodes. Log in as <sudo_user>, and run the patch_master.sh script by using the following command:
    cd <install_dir>/wsl_app_patch_v1231
    sudo ./patch_master.sh

    Optionally, you can create an SSH key pair for this user in the ~/.ssh directory to use instead of a user password, as described for the platform patch.

    To get a list of all available options and examples of usage, run
    cd <install_dir>/wsl_app_patch_v1231
    ./patch_master.sh --help
  5. Monitor the progress of the installation. If any issues are encountered, check the log files. The remote nodes keep log files in the <install_dir>/wsl_app_patch_v1231/logs directory.
  6. The patch installation script displays the following message at the end: Patch installation has been successfully completed! However, the script leaves docker pull jobs running in the background on compute nodes. These jobs might take up to 1 hour to complete. You can check the status of the pull jobs with the following command:
    ps -ef | grep pull
    Do not restart docker or reboot compute nodes until all of the pull jobs are complete; a polling sketch follows these steps.
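
If you prefer to wait for the pull jobs to finish rather than checking manually, the following polling loop is a minimal sketch (an assumption, not part of the patch scripts). Run it on each compute node; it exits once no pull jobs remain.

# Poll until no pull jobs remain on this node; the [p] keeps grep from matching itself
while ps -ef | grep '[p]ull' > /dev/null; do
  echo "pull jobs still running; waiting 60 seconds..."
  sleep 60
done
echo "No pull jobs remain on this node."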

Rolling back the application patch

Note: Only the application patch can be rolled back. The platform patch cannot be rolled back.
Roll back the patch
  • From the application patch directory, run the following command:
    cd <install_dir>/wsl_app_patch_v1231
    ./patch_master.sh --rollback
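
After the rollback completes, you can re-check the patch level. This assumes that the rollback restores the previous patch-level marker; if the file does not show the expected level, review the log files in the patch directory.

cat /wdp/patch/current_patch_level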

Post-installation

Verifying the installation and cluster

To verify that the install is successful, run:
cat /wdp/patch/current_patch_level 
A successful install should display:
patch_number=11
patch_version=1.0.0
To verify that your cluster is healthy, check that all of the nodes are in the Ready state and that all of the pods are running, by using the following commands:
kubectl get node
kubectl get po --all-namespaces
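
To surface only problems instead of scanning the full output, the following filters are a minimal sketch that assumes the default kubectl output columns. No output means that all nodes are Ready and all pods are Running or Completed.

# Nodes whose STATUS column is not exactly Ready
kubectl get nodes --no-headers | awk '$2 != "Ready"'
# Pods that are neither Running nor Completed
kubectl get po --all-namespaces --no-headers | grep -vE 'Running|Completed'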

Manually delete environment definitions

The patch installation does not delete environment definitions that were saved before the patch was installed. You can delete these definitions manually.

To delete the environment definitions:

  1. Run the following command to get the image management pods:
    kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
  2. Run the following command to open a shell in the pod:
    kubectl exec -it -n dsx <podname> sh
    
  3. Delete the runtime definition files for your old environments by using the following commands:
    cd /user-home/_global_/config/.runtime-definitions/custom
    rm <files_no_longer_needed>
  4. Go to each node and delete the docker image from that node by using the following commands:
    ssh <node-ip>
    docker images | grep your-image-name (or docker images | grep customimages)
    docker rmi <image-hex>
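
If you are not sure which images to remove in step 4, the following one-liner is a minimal sketch that lists likely custom images on a node together with their image IDs. It assumes that the custom image repositories contain customimages, as in the example above.

# List repository:tag and image ID for candidate custom images on this node
docker images | grep customimages | awk '{print $1":"$2, $3}'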