Patch 06
This topic provides installation procedures for patch 06 and describes the enhancements and fixes that the patch includes.
Patch details for wsl-v1231-x86-patch-06
This patch includes the following enhancements and fixes:
Enhancements
- The Github/Bitbucket integration now lets data scientists create projects from a specific branch in a repository.
- An admin can now control whether a Spark context is created by default in Jupyter 2.7 and 3.5 environments.
- The pyarrow package and a compatible pandas package are included in the Jupyter 2.7 and 3.5 environments.
- The sparkmagics package included with the Jupyter 2.7 and 3.5 environments is upgraded.
- TS001906969 - All collaborators in a project can view the jobs that are scheduled by the project admins or editors.
Fixed defects
- TS002051550/TS002578020 – Error reported for certain Github/Bitbucket tokens
  - An "Invalid Access Token" error is returned when a Github/Bitbucket access token includes a "/" or other special characters.
- Issue with updating the tag for a project release in Watson Machine Learning
  - Updating the tag for a project release that is created from a Github/Bitbucket repository causes your browser to hang.
- TS002329694 – Graphviz package fix
  - The following error occurs when you use the graphviz package within a Jupyter notebook:
    FileNotFoundError: [Errno 2] No such file or directory: 'dot' ExecutableNotFound: failed to execute ['dot', '-Tpng'], make sure the Graphviz executables are on your systems' PATH
- Issue with user-sensitive information logged for certain failed operations
  - When certain operations within a project fail, user-sensitive information is logged in the error message.
- Credentials found in scripts within a docker image
  - One of the docker images includes scripts that have hardcoded credentials in the script files.
- TS002491483 – Certain copy operations within a user's pod are performed as root
  - The startup scripts of certain user pods run copy operations as root, which can lead to a security vulnerability.
- TS002561905 – Error returned for certain user name formats when creating a token for Github/Bitbucket access
  - An authentication error is returned when the user name that is provided when creating a token for Github/Bitbucket access contains ".", "@", or "\".
Prerequisites
WSL 1.2.3.1 x86 patch01, patch02, patch03, and patch05 must all be installed. To download patch 06, go to Fix Central and select wsl-x86-v1231-patch06. Previous patches are also available in Fix Central.
Patch files
The patch contains the following files:
- wsl-x86-v1231-patch06-part01.tar.gz
- wsl-x86-v1231-patch06-part02.tar.gz
- wsl-x86-v1231-patch06-part03.tar.gz
- wsl-x86-v1231-patch06-part04.tar.gz
After you extract the files, the following files are available under a new directory named wsl-x86-v1231-patch06:
- dsx-core.v3.13.1319-x86_64-20190814.220044.tgz
- dsx-scripted-ml.v0.01.232-x86_64-20190618.013350.tgz
- jupyter-d8a2rls2x-shell.v1.0.347-x86_64-20190807.211456.tgz
- jupyter-d8a3rls2x-shell.v1.0.338-x86_64-20190807.211604.tgz
- jupyter-gpu-py36.v1.0.4-x86_64-20190814.020206.tgz
- spawner-go-api.v3.13.1039-x86_64-20190722.204345.tgz
- usermgmt.v3.13.1598-x86_64-20190808.193735.tgz
- wdp-dashboard-frontend.1.3.4-x86_64-20190807.230148.tgz
- dsx-scripted-ml-job-v1231-patch06.yaml
- dsx-scripted-ml-job-v1231-patch06-preexistingk8s.yaml
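The four parts listed above can be extracted in one pass. A minimal sketch, assuming the parts were downloaded into the current directory; the tar commands are only printed here so the loop can be reviewed before running (drop the echo to execute them):

```shell
#!/bin/sh
# Sketch: extract all four patch parts in one pass.
# The tar commands are printed rather than executed.
for p in part01 part02 part03 part04; do
  echo tar -xzvf "wsl-x86-v1231-patch06-$p.tar.gz"
done
```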
Pre-installation
If you're applying the patch on Watson Studio Local that is running on a pre-existing Kubernetes cluster, such as OpenShift, you must first perform these tasks:
- Identify the docker registry that is used by the cluster.
- Authenticate to kubectl.
- Authenticate to docker.
The docker commands in this topic use the built-in registry `idp-registry.sysibm-adm.svc.cluster.local:31006`. You must change these commands to use the docker registry that is used by your cluster.
The following are general pre-installation tasks, applicable in all cases:
1. Back up the config map by running `kubectl get configmap -n dsx runtimes-def-configmap -o yaml > configmap.backup.patch06.yaml`, and then note the current Jupyter image values:
   a. Run `kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a2rls2x-shell` and note the value of the image key.
   b. Run `kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a3rls2x-shell` and note the value of the image key.
2. Note the current image key values on your system by running the following commands. You need these values if you decide to roll back the patch.
   a. Run `kubectl get deploy -n dsx dsx-core -o yaml | grep image:` and note the value of the image key.
   b. Run `kubectl get deploy -n dsx spawner-api -o yaml | grep image:` and note the value of the image key.
   c. Run `kubectl get deploy -n dsx usermgmt -o yaml | grep image:` and note the value of the image key.
   d. Run `kubectl get deploy -n dsx dash-front-deploy -o yaml | grep image:` and note the value of the image key.
   e. Run `kubectl get deploy -n dsx dsx-scripted-ml-python2 -o yaml | grep image:` and note the value of the image key.
   f. Run `kubectl get deploy -n dsx dsx-scripted-ml-python3 -o yaml | grep image:` and note the value of the image key.
   g. Run `kubectl get deploy -n dsx zen-scripted-data-python2 -o yaml | grep image:` and note the value of the image key.
   h. Run `kubectl get deploy -n dsx jupyter-notebooks-nbviewer -o yaml | grep image:` and note the value of the image key.
   i. Run `kubectl get deploy -n dsx jupyter-notebooks-nbviewer-dev -o yaml | grep image:` and note the value of the image key.
3. Back up the files for image management. This step applies to standalone installations only:
   a. Run `kubectl get pods -n dsx | grep imagemgmt | grep -v Completed` to get the image management pods.
   b. Run `kubectl exec -it -n dsx <podname> sh` to exec into the pod. You can pick any of the three pods that will be running.
   c. Run `cd /user-home/_global_/.custom-images/`.
   d. Back up the builtin-metadata directory by running `cp -ar builtin-metadata builtin-metadata-patch06-backup`.
   e. Back up the metadata directory by running `cp -ar metadata metadata-patch06-backup`.
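The per-deployment image checks above differ only in the deployment name, so the commands can be generated in one loop. A minimal sketch; the kubectl commands are printed rather than executed, so no cluster access is needed to review it (pipe the output to a shell with kubectl access to actually record the values):

```shell
#!/bin/sh
# Sketch: print the "note the image" command for every deployment this
# patch touches, so the pre-patch values can be captured in one pass.
image_cmd() {
  printf 'kubectl get deploy -n dsx %s -o yaml | grep image:\n' "$1"
}
for d in dsx-core spawner-api usermgmt dash-front-deploy \
         dsx-scripted-ml-python2 dsx-scripted-ml-python3 \
         zen-scripted-data-python2 jupyter-notebooks-nbviewer \
         jupyter-notebooks-nbviewer-dev; do
  image_cmd "$d"
done
```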
Installing patches
To install the dsx-core patch
- Run `tar -xzvf dsx-core.v3.13.1319-x86_64-20190814.220044.tgz` to extract the dsx-core image. A directory named dsx-core-artifact is created and contains the image files.
- Run `cd dsx-core-artifact`.
- Run `docker load < dsx-core_v3.13.1319-x86_64.tar.gz` to load the image to docker.
- Run `docker tag 88d127bc7643 idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06` to push the image to the docker registry.
- Run `kubectl -n dsx edit deploy dsx-core`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06`.
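The dsx-core steps above follow a load/tag/push pattern that repeats for every image in this patch. A minimal sketch with the registry and tag factored into variables; REGISTRY is the default built-in registry and must be changed on a pre-existing Kubernetes cluster, and the docker commands are only printed here so the sequence can be reviewed without docker access (remove the echo to run them):

```shell
#!/bin/sh
# Sketch of the load/tag/push pattern used for each image in this patch.
REGISTRY=idp-registry.sysibm-adm.svc.cluster.local:31006
NAME=dsx-core
VERSION=v3.13.1319-x86_64
IMAGE_ID=88d127bc7643   # image ID reported by "docker images" after the load
TARGET="$REGISTRY/$NAME:${VERSION}_v1231-patch06"
echo docker load "<" "${NAME}_${VERSION}.tar.gz"
echo docker tag "$IMAGE_ID" "$TARGET"
echo docker push "$TARGET"
```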
To install the dsx-scripted-ml image patch
- Run `tar -xzvf dsx-scripted-ml.v0.01.232-x86_64-20190618.013350.tgz` to extract the dsx-scripted-ml image. A directory named dsx-scripted-ml-artifact is created and contains the image files.
- Run `cd dsx-scripted-ml-artifact`.
- Run `docker load < privatecloud-dsx-scripted-ml_v0.01.232-x86_64.tar.gz` to load the image.
- Run `docker tag 214e98095b79 idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-dsx-scripted-ml:v0.01.232-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-dsx-scripted-ml:v0.01.232-x86_64_v1231-patch06` to push the image to the docker registry.
- Run `cd ..` to go to the parent directory.
- Create a Kubernetes job:
  - If you are running a standalone installation of Watson Studio, create the job by running `kubectl apply -f dsx-scripted-ml-job-v1231-patch06.yaml`.
  - If you are running Watson Studio on a pre-existing Kubernetes platform like OpenShift, edit dsx-scripted-ml-job-v1231-patch06-preexistingk8s.yaml and do these tasks:
    - Change the value of the namespace attribute to match the namespace in which Watson Studio is deployed.
    - Update the image value with the image that you pushed earlier in this procedure, and then create the job by running `kubectl apply -f dsx-scripted-ml-job-v1231-patch06-preexistingk8s.yaml`.
  Note: If you run into a "field is immutable" error while running `kubectl apply -f dsx-scripted-ml-job-v1231-patch06.yaml`, run `kubectl delete job -n dsx dsx-scripted-ml-patch-06` to delete the existing job, and then rerun `kubectl apply -f dsx-scripted-ml-job-v1231-patch06.yaml`.
- Wait until the job pod reaches the Completed state by using the query `kubectl get pods -n dsx | grep dsx-scripted-ml-patch-06`.
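Waiting for the Completed state can be scripted as a simple poll. A sketch only: get_status is a hypothetical stub standing in for the kubectl query above, so the loop itself can be read (and run) without cluster access:

```shell
#!/bin/sh
# Sketch: poll until the patch job pod reports Completed.
# get_status is a stub standing in for:
#   kubectl get pods -n dsx | grep dsx-scripted-ml-patch-06
get_status() {
  echo "dsx-scripted-ml-patch-06-x1y2z   0/1   Completed   0   2m"
}
until get_status | grep -q Completed; do
  sleep 10
done
echo "patch job finished"
```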
To install the Python 2.7 image patch
- Run `tar -xzvf jupyter-d8a2rls2x-shell.v1.0.347-x86_64-20190807.211456.tgz` to extract the Python 2.7 image. A directory named jupyter-d8a2rls2x-shell-artifact is created and contains the image files.
- Run `cd jupyter-d8a2rls2x-shell-artifact`.
- Run `docker load < jupyter-d8a2rls2x-shell_v1.0.347-x86_64.tar.gz` to load the image.
- Run `docker tag 3a0319ecdde6 idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06` to push the image to the docker registry.
- ssh to each compute node in the cluster and run `docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06` to download the image to the node. This step can take a long time.
- Run `kubectl -n dsx edit configmap runtimes-def-configmap` and look for the following sections:
  - jupyter-server.json
  - dsx-scripted-ml-python2-server.json
  - python27-script-as-a-service-server.json
  In each of these sections:
  - Change the image key value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06`.
  - Add the following env section after the resources section. Note the `,` that is required after the closing brace of the resources section:

        "resources": {
          . . .
          "duration": {
            "value": -1,
            "units": "unix"
          }
        },
        "env": [
          {
            "name": "AUTOSTART_JUPYTER_SC",
            "value": "autoStartJupyterSC",
            "source": "GlobalConfig"
          }
        ]
        }

- Run `kubectl edit deploy -n dsx dsx-scripted-ml-python2`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06`.
- Run `kubectl edit deploy -n dsx zen-scripted-data-python2`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06`.
To install the Python 3.5 image patch
- Run `tar -xzvf jupyter-d8a3rls2x-shell.v1.0.338-x86_64-20190807.211604.tgz` to extract the Python 3.5 image. A directory named jupyter-d8a3rls2x-shell-artifact is created and contains the image files.
- Run `cd jupyter-d8a3rls2x-shell-artifact`.
- Run `docker load < jupyter-d8a3rls2x-shell_v1.0.338-x86_64.tar.gz` to load the image.
- Run `docker tag 61bc2e8259e7 idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06` to push the image to the docker registry.
- ssh to each compute node in the cluster and run `docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06` to download the image to the node. This step can take a long time.
- Run `kubectl -n dsx edit configmap runtimes-def-configmap` and look for the following sections:
  - jupyter-py35-server.json
  - dsx-scripted-ml-python3-server.json
  - python35-script-as-a-service-server.json
  - sshd-server.json
  In each of these sections:
  - Change the image key value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06`.
  - Add the following env section after the resources section. Note the `,` that is required after the closing brace of the resources section:

        "resources": {
          . . .
          "duration": {
            "value": -1,
            "units": "unix"
          }
        },
        "env": [
          {
            "name": "AUTOSTART_JUPYTER_SC",
            "value": "autoStartJupyterSC",
            "source": "GlobalConfig"
          }
        ]
        }

- Run `kubectl edit deploy -n dsx dsx-scripted-ml-python3`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06`.
- Run `kubectl edit deploy -n dsx jupyter-notebooks-nbviewer`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06`.
- Run `kubectl edit deploy -n dsx jupyter-notebooks-nbviewer-dev`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06`.
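The per-node pull in the Python image steps can be driven by one loop over the compute nodes. A sketch only: NODES is a placeholder list and the ssh invocation is an assumption about your cluster layout, so the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: pull the patched Python 3.5 image on every compute node.
# NODES is a placeholder; replace with your compute node host names.
IMG=idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
NODES="node1.example.com node2.example.com"
for n in $NODES; do
  echo ssh "$n" docker pull "$IMG"
done
```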
To install the spawner image patch
- Run `tar -xzvf spawner-go-api.v3.13.1039-x86_64-20190722.204345.tgz` to extract the spawner image. A directory named spawner-go-api-artifact is created and contains the image files.
- Run `cd spawner-go-api-artifact`.
- Run `docker load < privatecloud-spawner-api-k8s_v3.13.1039-x86_64.tar.gz` to load the image to docker.
- Run `docker images` to get the unique ID of the image that was loaded to docker in the previous step.
- Run `docker tag ae37b8429267 idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-spawner-api-k8s:v3.13.1039-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-spawner-api-k8s:v3.13.1039-x86_64_v1231-patch06` to push the image to the docker registry.
- Run `kubectl -n dsx edit deploy spawner-api`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-spawner-api-k8s:v3.13.1039-x86_64_v1231-patch06`.
To install the usermgmt patch
- Run `tar -xzvf usermgmt.v3.13.1598-x86_64-20190808.193735.tgz` to extract the usermgmt image. A directory named usermgmt-artifact is created and contains the image files.
- Run `cd usermgmt-artifact`.
- Run `docker load < privatecloud-usermgmt_v3.13.1598-x86_64.tar.gz` to load the image.
- Run `docker tag 5028f420c9e9 idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1598-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1598-x86_64_v1231-patch06` to push the image to the docker registry.
- Run `kubectl -n dsx edit deploy usermgmt`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1598-x86_64_v1231-patch06`.
To install the wdp-dashboard-frontend patch
- Run `tar -xzvf wdp-dashboard-frontend.1.3.4-x86_64-20190807.230148.tgz` to extract the dashboard frontend image. A directory named wdp-dashboard-frontend-artifact is created and contains the image files.
- Run `cd wdp-dashboard-frontend-artifact`.
- Run `docker load < dashboard-frontend_1.8.7-x86_64.tar.gz` to load the image. You can ignore the following error message: "The image localhost:5000/dashboard-frontend:1.8.7-x86_64 exists, renaming the old one with ID."
- Run `docker tag f644ea5cba36 idp-registry.sysibm-adm.svc.cluster.local:31006/dashboard-frontend:1.8.7-x86_64_v1231-patch06` to tag the image.
- Run `docker push idp-registry.sysibm-adm.svc.cluster.local:31006/dashboard-frontend:1.8.7-x86_64_v1231-patch06` to push the image to the docker registry.
- Run `kubectl -n dsx edit deploy dash-front-deploy`, look for the image key, and change its value to `idp-registry.sysibm-adm.svc.cluster.local:31006/dashboard-frontend:1.8.7-x86_64_v1231-patch06`.
Post-installation
Update image management
This section applies to standalone installations only. Ensure that you followed the pre-installation steps to back up the files for image management before running the following commands.
- Run `kubectl get pods -n dsx | grep imagemgmt | grep -v Completed` to get the image management pods.
- Run `kubectl exec -it -n dsx <podname> sh` to exec into the pod. You can pick any of the three pods that will be running.
- Run `cd /user-home/_global_/.custom-images/builtin-metadata`.
- Run `rm *`.
- Run `cd /user-home/_global_/.custom-images/metadata`.
- Run `rm *`.
- Run `cd /scripts`.
- Run `retag_images.sh`.
- Run `node ./builtin-image-info.js`.
- (Optional) To restore custom images, run the following commands. Note: Old custom images could be based on vulnerable images. It is recommended that you create new custom images based on the latest environment images.
  - Run `cp -ar /user-home/_global_/.custom-images/metadata-patch06-backup/* /user-home/_global_/.custom-images/metadata/`
  - Run `cp -ar /user-home/_global_/.custom-images/builtin-metadata-patch06-backup/* /user-home/_global_/.custom-images/builtin-metadata/`
- Run
Restart user environments
- To restart all Jupyter 2.7 user environments:
  - Run `kubectl get deployment -n dsx -l type=jupyter` to view any deployments that are running with the old Jupyter 2.7 image.
  - Run `kubectl delete deployment -n dsx -l type=jupyter` to delete any deployments that are running with the old Jupyter 2.7 image.
  - Rebuild all custom images that were built with the Jupyter 2.7 image.
- To restart all Jupyter 3.5 user environments:
  - Run `kubectl get deployment -n dsx -l type=jupyter-py35` to view any deployments that are running with the old Jupyter 3.5 image.
  - Run `kubectl delete deployment -n dsx -l type=jupyter-py35` to delete any deployments that are running with the old Jupyter 3.5 image.
  - Rebuild all custom images that were built with the Jupyter 3.5 image.
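The two restart procedures differ only in the label selector, so both runtime families can be handled by one loop. A sketch; the kubectl commands are printed for review rather than executed (remove the echo to run them):

```shell
#!/bin/sh
# Sketch: restart both Jupyter runtime families by label selector.
for label in jupyter jupyter-py35; do
  echo "kubectl get deployment -n dsx -l type=$label"
  echo "kubectl delete deployment -n dsx -l type=$label"
done
```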
Rolling back the patch
Roll back the patch
- Run `kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml` and look for the image keys that contain jupyter-d8a2rls2x-shell. Change those keys to the value that you noted in the Pre-installation section, step 1.a.
- Run `kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml` and look for the image keys that contain jupyter-d8a3rls2x-shell. Change those keys to the value that you noted in the Pre-installation section, step 1.b.
- Run `kubectl -n dsx edit deploy dsx-core`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.a.
- Run `kubectl edit deploy -n dsx spawner-api`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.b.
- Run `kubectl edit deploy -n dsx usermgmt`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.c.
- Run `kubectl edit deploy -n dsx dash-front-deploy`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.d.
- Run `kubectl edit deploy -n dsx dsx-scripted-ml-python2`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.e.
- Run `kubectl edit deploy -n dsx dsx-scripted-ml-python3`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.f.
- Run `kubectl edit deploy -n dsx zen-scripted-data-python2`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.g.
- Run `kubectl edit deploy -n dsx jupyter-notebooks-nbviewer`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.h.
- Run `kubectl edit deploy -n dsx jupyter-notebooks-nbviewer-dev`, look for the image key, and change it to the value that you noted in the Pre-installation section, step 2.i.
Roll back the image management files
This section applies to standalone installations only.
- Run `kubectl get pods -n dsx | grep imagemgmt | grep -v Completed` to get the image management pods.
- Run `kubectl exec -it -n dsx <podname> sh` to exec into the pod. You can pick any of the three pods that will be running.
- Run `cd /user-home/_global_/.custom-images/`.
- Restore the builtin-metadata directory from the backup that you created in step 3.d of the pre-installation steps.
- Restore the metadata directory from the backup that you created in step 3.e of the pre-installation steps.
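The two restore steps can be expressed in terms of the patch06 backup directories. A sketch only: it assumes the backups were made with the `cp -ar` commands from the pre-installation steps, and the cp commands are printed rather than executed because they must run inside the image management pod:

```shell
#!/bin/sh
# Sketch: restore the image management directories from the patch06
# backups. The cp commands are printed here for review.
BASE=/user-home/_global_/.custom-images
echo cp -ar "$BASE/builtin-metadata-patch06-backup/." "$BASE/builtin-metadata/"
echo cp -ar "$BASE/metadata-patch06-backup/." "$BASE/metadata/"
```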