Increase docker storage size
If your Red Hat operating system uses device mapper as the Docker storage driver, the base size limits the size of each image and container to 10G. This topic describes how to increase the Docker storage size of a specific container. You do not have to increase the size for the overlay or overlay2 drivers, which have a default base size of 500GB.
As an example, you can enter the following command to see the size of a Spark worker pod:
kubectl exec -it spark-worker-7b447945d4-6gdhn -n dsx -- bash -c "df -h"
In this example, the size of the Spark worker is 25G by default.
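You can also check the base size that is currently configured on a node; the Base Device Size field appears in the docker info output when the devicemapper driver is in use:
docker info | grep "Base Device Size"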
Complete the following steps on each node:
- Stop the kubelet service:
  systemctl stop kubelet.service
- Stop the Docker service:
  systemctl stop docker.service
- Change the Docker base size to 110G in the configuration files. You can set the value either in DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage or in dm.basesize in /etc/docker/daemon.json, but keep the setting in only one file and remove it from the other, depending on which file you normally use. For example, enter vi /etc/sysconfig/docker-storage and remove DOCKER_STORAGE_OPTIONS, then enter vi /etc/docker/daemon.json and change dm.basesize to 110G.
- Restart the Docker service:
  systemctl start docker.service
  Verify that all services are up.
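For illustration, a minimal /etc/docker/daemon.json after this change might look like the following sketch; the storage-driver and storage-opts keys are standard Docker daemon options, and any other entries that already exist in your file should be preserved:
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=110G"
  ]
}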
If you do not remove the current Docker image, the container size does not change even after you increase the base size. You also need to remove all deployments that use that image. In this example, checking the Docker images with docker images | grep spark shows that spark-worker uses the image idp-registry.sysibm-adm.svc.cluster.local:31006/spark:1.5.528-x86_64, with image ID 88886464062c. So you must remove all deployments (spark-master, spark-worker, spark-history) that use that image:
- Run docker ps -a | grep <spark-image-id> and remove all containers, running or stopped, that use the Spark image (see the sketch after this list).
- Remove the image by running docker rmi <spark-image-id>.
- Pull the Spark image again:
  docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/spark:1.5.528-x86_64
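As referenced in the first step, the following sketch shows how you might find and remove a container that uses the Spark image; the image ID 88886464062c comes from this example, and <container-id> stands for whatever ID docker ps reports in its first column:
docker ps -a | grep 88886464062c
docker rm <container-id>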
Sometimes different images share the same layers, so make sure the image is pulled entirely fresh. If some layers show Already exists, you must also remove the other related images. In this example, you must delete the images for wdp-dashboard-back and wdp-dashboard-calculator.
On all of the nodes:
docker rmi <dash-back-image>
docker rmi <dash-calc-image>
After you remove those images, run docker pull again; this time you should see the image being pulled in its entirety. Alternatively, you can run the docker system prune -a command to remove all stopped containers and unused images, then pull the image again and check the size.
Now start the kubelet service:
systemctl start kubelet.service
Then check the new container. You should see that the Spark worker pod size is increased to 110G.
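For example, you can repeat the check from the beginning of this topic against the new Spark worker pod; the pod name will differ after redeployment, so substitute the name that kubectl get pods -n dsx reports:
kubectl exec -it <spark-worker-pod> -n dsx -- bash -c "df -h"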