Patch 11
This topic describes the enhancements and fixes in patch 11 and provides the installation procedures.
Patch details for wsl-v1231-x86-patch-11
- Setting the time period for inactive sessions: Admins can set the time period for which a session can be inactive before a user is logged out.
- Notebooks and environments (Jupyter 3.5 and GPU only): Data scientists can reconnect to their running Jupyter notebook and view the progress and cell output after navigating away from the running notebook or after being signed out due to inactivity.
Prerequisites
IBM Watson Studio Local 1.2.3.1 or any 1.2.3.1 patch on x86. To download patch 11, go to Fix Central and select wsl_app_patch_v1231_11_v1.0.0.
Patch files
- Platform patch: patch_x86_64_CVE_2019_1002100_v1.0.0.tar
- Application patch: wsl_app_patch_v1231_11_v1.0.0
Pre-installation
The installation requires a quiet period to patch the Watson Studio Local cluster.
- Confirm that all jobs are stopped and that jobs aren't scheduled during the patch installation.
- Stop all running environments by using the following commands:
  kubectl delete deploy -n dsx -l type=jupyter-gpu-py35
  kubectl delete deploy -n dsx -l type=jupyter-py35
  kubectl delete svc -n dsx -l type=jupyter-gpu-py35
  kubectl delete svc -n dsx -l type=jupyter-py35
- It is recommended that you delete customized images from the Image Management page. After you apply the patch, the Image Management page shows only the new base images. If you do not delete existing customized images that are based on old base images, you can still use them from the environment page after you install the patch; however, you cannot delete them from the Image Management page. To delete the images manually after you install the patch, follow the steps in Manually delete environment definitions.
- Ensure that all of the nodes of the cluster are running before you install this patch. The kubelet and docker services must also be running on all nodes (a quick check is sketched after this list).
- If the cluster uses Gluster, ensure that the Gluster file system is clean before you install this patch by running the following command:
  gluster volume status sysibm-adm-ibmdp-assistant-pv detail | grep Online | grep ": N" | wc -l
  If the resulting count is larger than 0, one or more bricks for the volume are not healthy and must be fixed before you continue to install the patch.
- Set up the network manager. Run systemctl status NetworkManager to check whether the NetworkManager service is running. If the output does not show that the service is running, you can skip this step. If the service is running, perform the following steps on each node of the cluster:
  - Create a file named /etc/NetworkManager/conf.d/calico.conf with the following content:
    [keyfile]
    unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:flannel.*
  - Run systemctl restart NetworkManager to restart the NetworkManager service.
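Before you start the installation, the following commands are one way to confirm that every node is ready and that the kubelet and docker services are active on each node. This is only a sketch: it assumes passwordless SSH from the node where you run the check and a nodes.txt file that lists one node IP address or host name per line (neither is part of the patch package).

  # Every node should report a STATUS of "Ready"
  kubectl get nodes

  # On each node, "systemctl is-active" should print "active" for both services
  while read -r node; do
    echo "--- ${node} ---"
    ssh "${node}" "systemctl is-active kubelet docker"
  done < nodes.txt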
Installing the patch for OpenShift clusters
To install the patch for OpenShift clusters, contact your IBM representative.
Installing the patch for non-OpenShift clusters
There are two installers for this patch: platform and application.
- The platform patch cannot be rolled back.
- Install the platform patch only if you didn't install patch 10.
To install the platform patch
- Download the patch tar file patch_x86_64_CVE_2019_1002100_v1.0.0.tar. The preferred location is the install path name from /wdp/config, such as /ibm.
- Log in as the root user or as a system administrator who has read/write permissions in the install directory. This script runs the remote scripts by using SSH.
- Use tar to extract the patch scripts. The command creates a new directory in the install directory and places the patch files there:
  tar xvf patch_x86_64_CVE_2019_1002100_v1.0.0.tar
- Change to the patch directory and run the patch_master.sh script by using the following commands:
  cd <install_dir>/patch_CVE_2019_1002100
  ./patch_master.sh
  If you have sudo privileges for installing the patch, ensure that the sudo user is created on all nodes. Log in as <sudo_user> and run the patch_master.sh script by using the following commands:
  cd <install_dir>/patch_CVE_2019_1002100
  sudo ./patch_master.sh
  Optionally, you can create a private key for this user in the ~/.ssh directory to use instead of a user password (see the sketch after these steps).
  To get a list of all available options and examples of usage, run:
  cd <install_dir>/patch_CVE_2019_1002100
  ./patch_master.sh --help
- Monitor the progress of the installation. If any issues are encountered, check the log files. The remote nodes keep log files in the <install_dir>/patch_CVE_2019_1002100 directory.
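If you choose the optional private key approach for <sudo_user>, the following is one possible way to set it up with standard OpenSSH tooling. The node addresses and user name are placeholders, and your security policies might require different key types or options, so treat this as a sketch rather than a required procedure.

  # Generate a key pair for <sudo_user> (run as that user on the install node)
  ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

  # Copy the public key to each node so patch_master.sh can connect without a password
  for node in <node1_ip> <node2_ip> <node3_ip>; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "<sudo_user>@${node}"
  done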
To install the application patch
- Download the patch tar file wsl_app_patch_v1231_11_v1.0.0 to the Watson Studio node. The preferred location is the install path name from /wdp/config, such as /ibm.
- Log in as the root user or as a system administrator who has read/write permissions in the install directory. This script runs the remote scripts by using SSH.
- Use tar to extract the patch scripts. The command creates a new directory in the install directory and places the patch files there:
  tar xvf wsl_app_patch_v1231_11_v1.0.0.tar
- Change to the patch directory and run the patch_master.sh script by using the following commands:
  cd <install_dir>/wsl_app_patch_v1231
  ./patch_master.sh
  If you have sudo privileges for installing the patch, ensure that the sudo user is created on all nodes. Log in as <sudo_user> and run the patch_master.sh script by using the following commands:
  cd <install_dir>/wsl_app_patch_v1231
  sudo ./patch_master.sh
  Optionally, you can create a private key for this user in the ~/.ssh directory to use instead of a user password.
  To get a list of all available options and examples of usage, run:
  cd <install_dir>/wsl_app_patch_v1231
  ./patch_master.sh --help
- Monitor the progress of the installation. If any issues are encountered, check the log files. The remote nodes keep log files in the <install_dir>/wsl_app_patch_v1231/logs directory.
- The patch installation script displays the following message at the end: "Patch installation has been successfully completed!" However, the script leaves docker pull jobs running in the background on the compute nodes. These jobs might take up to 1 hour to complete. You can check the status of the pull jobs with the following command:
  ps -ef | grep pull
  Do not restart docker or reboot the compute nodes until all of the pull jobs are complete (a wait loop is sketched after this step).
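If you prefer not to re-run the ps check by hand, a small loop like the following is one way to wait on a compute node until the background docker pull jobs finish. It only wraps the check shown above and is a sketch, not part of the patch package; the 60-second interval is arbitrary.

  # Poll until no "docker pull" processes remain on this compute node.
  # The [d] in the pattern keeps grep from matching its own process.
  while ps -ef | grep "[d]ocker pull" > /dev/null; do
    echo "docker pull jobs are still running; waiting..."
    sleep 60
  done
  echo "All docker pull jobs have completed."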
Rolling back the application patch
- From the <install_dir>/wsl_app_patch_v1231 directory, run:
  ./patch_master.sh --rollback
Post-installation
Verifying the installation and cluster
Run the following command to check the patch level:
cat /wdp/patch/current_patch_level
A successful install should display:
patch_number=11
patch_version=1.0.0
Then verify the cluster nodes and pods:
kubectl get node
kubectl get po --all-namespaces
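As an optional extra check that is not part of the official verification steps, you can filter the pod listing to anything that is not Running or Completed; an empty result means every pod came back healthy after the patch. This uses only standard kubectl and grep.

  # List pods that are neither Running nor Completed (the NAMESPACE header line is also filtered out)
  kubectl get po --all-namespaces | grep -Ev "Running|Completed|NAMESPACE"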
Manually delete environment definitions
The patch installation does not delete environment definitions that were saved before the patch was installed. You can delete the definitions manually.
To delete the environment definitions:
- Run the following command to get the image management pods:
  kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
- Run the following command to execute into the pod:
  kubectl exec -it -n dsx <podname> sh
- Delete the runtime definition files for your old environments by using the following commands:
  cd /user-home/_global_/config/.runtime-definitions/custom
  rm <files_no_longer_needed>
- Go to each node and delete the docker image from that node by using the following commands:
  ssh <node-ip>
  docker images | grep your-image-name (or docker images | grep customimages)
  docker rmi <image-hex>
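If your cluster has many nodes, a loop like the following is one way to do the last step in one pass. It is only a sketch: it assumes passwordless SSH from the node where you run it and a nodes.txt file that lists one node IP address per line, and the customimages filter is an assumption that you should adjust to match your old custom image names.

  # Review old custom images on every node before removing them
  IMAGE_FILTER="customimages"
  while read -r node; do
    echo "--- ${node} ---"
    ssh "${node}" "docker images | grep ${IMAGE_FILTER}"
    # After you confirm which images to drop, remove each one by its image ID:
    # ssh "${node}" "docker rmi <image-hex>"
  done < nodes.txt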