Post-installation notes
Instructions and pointers to additional tasks you can do after you have installed Red Hat OpenShift Container Platform (OpenShift).
Create HTPasswd authentication provider
It is not recommended to perform all administration tasks with the system:admin account. You will likely want to create additional users and manage their access. One of the simplest authentication providers is HTPasswd.
- On the bastion guest, run the following command to create an HTPasswd file with the user admin and the password "pass4you":

  htpasswd -c -B -b users.htpasswd admin pass4you
- Add this file as a secret to OpenShift:

  oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
- Create the HTPasswd authentication provider:

  oc apply -f <(echo '{ "apiVersion": "config.openshift.io/v1", "kind": "OAuth", "metadata": { "name": "cluster" }, "spec": { "identityProviders": [ { "name": "my_htpasswd_provider", "mappingMethod": "claim", "type": "HTPasswd", "htpasswd": { "fileData": { "name": "htpass-secret" } } } ] } }')

  Note: Ignore the warning about a missing annotation; it is patched automatically.
- Now give the new user admin the cluster-admin role, which allows it to do everything:

  oc adm policy add-cluster-role-to-user cluster-admin admin

  This creates a ClusterRoleBinding between the ClusterRole cluster-admin and the subject User admin. You should be able to find this binding in the list of all ClusterRoleBindings:

  oc describe clusterrolebinding.rbac | less
- Log in with the new user admin:

  oc login -u admin

  As soon as you log in with the new user, the file /openshift-installation/auth/kubeconfig is updated to make the new user the current user. Even when you create a new terminal session and run export KUBECONFIG=/openshift-installation/auth/kubeconfig again, it sticks to the new user context.
- To change the user back to system:admin or any other user, first view all available contexts and users:

  oc config view

  Then switch to the user context you want, for example, back to system:admin with the following command:

  oc config use-context openshift-image-registry/api-ocp0-sa-boe:6443/system:admin
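To add more users later, you can extend the same HTPasswd file and replace the secret; the OAuth operator picks up the change automatically. A minimal sketch, assuming the file and secret names from above (the user developer and its password are placeholders):

# add a second user to the existing file (no -c this time, which would overwrite it)
htpasswd -B -b users.htpasswd developer pass4dev

# replace the existing secret with the updated file
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd \
  --dry-run=client -o yaml -n openshift-config | oc replace -f -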
NFS dynamic storage provisioning
OpenShift allows you to provision storage in the following ways:

- PersistentVolume/PersistentVolumeClaim (the Kubernetes persistent volume (PV) framework): Define external storage as a PersistentVolume that can then be claimed, for example, by a developer with a PersistentVolumeClaim.

  - Statically provisioned: The administrator creates a PersistentVolume manually and a developer creates a PersistentVolumeClaim manually (a minimal sketch follows this list).

  - Dynamically provisioned: A developer creates a PersistentVolumeClaim, and if no fitting PersistentVolume is found, one is created by an automatic provisioner. A configuration object of type StorageClass defines the rules for this process.

- Direct usage: As long as a container can reach the storage server, you can attach the storage directly to the container without creating a PersistentVolume.
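To make the static variant concrete, here is a minimal sketch that pairs a manually created NFS-backed PersistentVolume with a matching claim. The NFS server and export path are taken from the dynamic setup below; the object names, capacity, and namespace are illustrative:

# administrator side: a PersistentVolume pointing at the NFS export
oc create -f <(echo '{ "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "static-nfs-pv" }, "spec": { "capacity": { "storage": "1Gi" }, "accessModes": [ "ReadWriteMany" ], "nfs": { "server": "nfskvm.example.com", "path": "/nfs_share" } } }')

# developer side: a claim with an empty storageClassName so that no
# default StorageClass provisions it dynamically
oc create -f <(echo '{ "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "static-nfs-pvc", "namespace": "default" }, "spec": { "storageClassName": "", "accessModes": [ "ReadWriteMany" ], "resources": { "requests": { "storage": "1Gi" } } } }')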
A table of all storage types that are currently supported for use with persistent volumes can be found in the official OpenShift documentation.
To set up NFS dynamic provisioning, check out the following reference link or follow the instructions after it:

- The nfs-subdir-external-provisioner project on GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
- Get and create the required RBAC permission file with a terminal session on the bastion guest:

  curl https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml \
    > nfs-subdir-external-provisioner-rbac.yml

  # system:admin or any other cluster-admin
  export KUBECONFIG=/openshift-installation/auth/kubeconfig
  oc create -f nfs-subdir-external-provisioner-rbac.yml
- Add the required policy that allows the provisioner's service account to use the hostmount-anyuid SCC:

  oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner
- Create the nfs-subdir-external-provisioner deployment:

  oc create -f <(echo '{ "apiVersion": "apps/v1", "kind": "Deployment", "metadata": { "name": "nfs-client-provisioner", "labels": { "app": "nfs-client-provisioner" }, "namespace": "default" }, "spec": { "replicas": 1, "strategy": { "type": "Recreate" }, "selector": { "matchLabels": { "app": "nfs-client-provisioner" } }, "template": { "metadata": { "labels": { "app": "nfs-client-provisioner" } }, "spec": { "serviceAccountName": "nfs-client-provisioner", "containers": [ { "name": "nfs-client-provisioner", "image": "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2", "volumeMounts": [ { "name": "nfs-client-root", "mountPath": "/persistentvolumes" } ], "env": [ { "name": "PROVISIONER_NAME", "value": "nfs-subdir-external-provisioner-1" }, { "name": "NFS_SERVER", "value": "nfskvm.example.com" }, { "name": "NFS_PATH", "value": "/nfs_share" } ] } ], "volumes": [ { "name": "nfs-client-root", "nfs": { "server": "nfskvm.example.com", "path": "/nfs_share" } } ] } } } }')
- Create the required StorageClass object:

  oc create -f <(echo '{ "apiVersion": "storage.k8s.io/v1", "kind": "StorageClass", "metadata": { "name": "managed-nfs-storage", "annotations": { "storageclass.kubernetes.io/is-default-class": "true" } }, "provisioner": "nfs-subdir-external-provisioner-1", "parameters": { "pathPattern": "${.PVC.namespace}-${.PVC.name}", "archiveOnDelete": "false" } }')
- You might have to change the securityContext of the Deployment to use the group that is required to access the NFS shared folder. For example:

  securityContext:
    fsGroup: 1000340000

  For supplemental groups, check out the next chapter.
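To verify the setup end to end, a short sketch using the names defined above (the claim name test-claim is a placeholder):

# the provisioner pod should be running in the default namespace
oc get pods -n default -l app=nfs-client-provisioner

# create a small test claim against the new default StorageClass
oc create -f <(echo '{ "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "test-claim", "namespace": "default" }, "spec": { "storageClassName": "managed-nfs-storage", "accessModes": [ "ReadWriteMany" ], "resources": { "requests": { "storage": "1Mi" } } } }')

# the claim should become Bound; with the pathPattern above, a directory
# named default-test-claim should appear on the NFS share
oc get pvc test-claim -n default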
Use supplemental groups for NFS
Previously, we adjusted the rights on the NFS server to fit the rights of the container. This might not work out, however, as soon as multiple different containers mount the NFS share. In that case you might have to adjust the rights of the user inside the container so that it can access the NFS folder.
For example, if an arbitrary group ID owns a second NFS share /nfs_share_2:

chown root:5555 /nfs_share_2
chmod 2770 /nfs_share_2
Then we have to add this group to the supplementalGroups in the securityContext of the pod spec. For example, for the image registry pod (replace XXXXX with the value of your image registry pod):
oc edit pods/image-registry-XXXXX -n openshift-image-registry
# add the following:
spec:
securityContext:
supplementalGroups: [5555]
There are, however, rules in place (Security Context Constraints) that do not allow pods to run with arbitrary groups. To permit this, you might have to set the supplementalGroups strategy to RunAsAny.
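As a quick check of what is currently allowed, a small sketch (assuming the pod runs under the default restricted SCC):

# show the supplementalGroups strategy of the restricted SCC
oc get scc restricted -o jsonpath='{.supplementalGroups}{"\n"}'

# show which SCC was applied to the pod (check the openshift.io/scc annotation)
oc get pod image-registry-XXXXX -n openshift-image-registry -o yaml | grep 'openshift.io/scc'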
To learn more about Security Context Constraints (SCCs), check out the official OpenShift documentation.
When you cannot edit the pod, you might want to check the corresponding Red Hat solution article. Note that changing the pod YAML changes the securityContext only temporarily; to make the change permanent, make it in the Deployment.
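As a sketch of making such a change permanent, the supplemental group could, for example, be patched into the pod template of the nfs-client-provisioner Deployment from the previous chapter (the group ID 5555 is the example from above):

# patching the Deployment keeps the securityContext across pod restarts
oc patch deployment/nfs-client-provisioner -n default --type merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"supplementalGroups":[5555]}}}}}'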
Push custom images to the internal image registry
Previously, we set up the internal image registry with all the rights required to push images into it. The following example steps build a custom image on the bastion and push it to the internal image registry. All steps are performed on the bastion.
- Install podman:

  dnf install podman -y
- Add a route to the internal image registry to make it available outside the cluster (in this case, to the bastion):

  oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
- To be able to push images, the bastion has to trust the TLS certificate of the registry. The following commands get the TLS certificate from the newly created route, save it on the bastion, and update the bastion's trust store:

  HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
  oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data "tls.crt"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/${HOST}.crt > /dev/null
  update-ca-trust enable
- Log in to the image registry. If you do not want to use kubeadmin, you might have to add the registry-viewer and registry-editor roles to the user, as described in the OpenShift documentation.

  # use any user here with the required permissions
  oc login -u admin
  HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
  podman login -u admin -p $(oc whoami -t) ${HOST}
- To quickly create a custom container image, perform the following steps:

  podman run -it fedora:latest bash
  # make any changes
  exit
  # figure out the container ID that you want to create an image from
  podman container ls --all
  # create the image
  podman container commit 05ab0f4405cb
- Name and tag the newly created image:

  # locate your newly created image and get its IMAGE ID
  podman image ls
  # name and tag the image
  podman tag 22eafc5328e6 custom-fedora-image:latest
- Push the image to the image registry (under the openshift namespace):

  HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
  NAMESPACE="openshift"
  IMAGE_NAME="custom-fedora-image:latest"
  LOCAL_IMAGE_NAME="localhost/${IMAGE_NAME}"
  REMOTE_IMAGE_NAME="${IMAGE_NAME}"
  podman push ${LOCAL_IMAGE_NAME} ${HOST}/${NAMESPACE}/${REMOTE_IMAGE_NAME}
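To confirm that the push worked, a small verification sketch: a push into a project of the internal registry creates a matching ImageStream that you can inspect.

# the openshift namespace should now contain a matching ImageStream
oc get imagestream custom-fedora-image -n openshift

# show the tags and the internal pull specification
oc describe imagestream custom-fedora-image -n openshift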