IBM Support

Deploying a Platform Symphony cluster with Kubernetes and Docker containers

Technical Blog Post


Abstract

Deploying a Platform Symphony cluster with Kubernetes and Docker containers

Body

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. Kubernetes is a good platform for deploying applications that follow a master-slave model. Because Platform Symphony supports running in Docker containers, you can deploy a Platform Symphony cluster with Kubernetes and run it in Docker containers.

 

Kubernetes can manage and schedule Docker containers across hosts without impacting performance, and it takes minimal time to create and start up a new Platform Symphony cluster.
 

Kubernetes offers many features for creating and managing applications; two worth highlighting for this blog are as follows:

  • Replication controller: A replication controller ensures that a specified number of pod “replicas” are running at any one time. In other words, a replication controller ensures that a pod or a homogeneous set of pods is always up and available. If there are too many pods, it kills some; if there are too few, it starts more. We can define the Platform Symphony compute nodes as a replication controller so that Kubernetes guarantees the number of compute nodes that are alive and working (see the sketch after this list).
  • Service: A Kubernetes service is an abstraction that defines a logical set of pods and a policy by which to access them (this is sometimes called a micro-service). We can expose the Platform Symphony master services as Kubernetes services so that they can be accessed from any Docker container or even from outside the Kubernetes hosts. For example, we can expose the Platform Symphony GUI service (called WEBGUI) as a Kubernetes service to access it from outside the Kubernetes hosts.
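
As a quick illustration of the replication controller behavior, the following sketch assumes the sym-compute-rc controller and the component=sym-compute label that are defined later in this article; the pod name is a placeholder:

$ kubectl get rc sym-compute-rc                 # desired versus current replica count
$ kubectl delete pod <one_sym_compute_pod>      # delete one compute pod
$ kubectl get pods -l component=sym-compute     # the controller starts a replacement to restore the count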

 

Kubernetes and container networking

Kubernetes does not manage container networking itself; it depends on third-party network applications such as Weave, Flannel, Open vSwitch, or Calico. Weave supports both TCP and UDP communications, so this blog article focuses on Weave for container networking.
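
After Weave is set up (as described in the following sections), you can use its built-in status commands to confirm that the virtual network is healthy, for example:

$ weave status    # overall router, IPAM, and proxy status
$ weave ps        # containers attached to the Weave network and their Weave IP addresses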

 

Building a Platform Symphony Docker image

Use a Dockerfile to build a new Platform Symphony Docker image, called sym711:v1, with Platform Symphony 7.1.1 installed. Run the following Docker command:
$ docker build -t sym711:v1 -f Dockerfile .

 

This Dockerfile contains the following information:

 

FROM rhel
MAINTAINER Jin Ming Lv <lvjinm@cn.ibm.com>
# Add user 'egoadmin' & install ssh-server and other basic tools
RUN useradd -m egoadmin \
    && echo "egoadmin:egoadmin" | chpasswd \
    && echo "egoadmin   ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers \
    && echo -e "[base] \nname=CentOS-7 - Base - centos.com\nbaseurl=http://mirror.centos.org/centos/7/os/\$basearch/\ngpgcheck=1\ngpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7" > /etc/yum.repos.d/CentOS7-Base.repo \
    && yum clean all \
    && yum install -y openssh-server which net-tools sudo wget hostname tar openssh-clients gettext iputils \
    && sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config \
    && ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key \
    && ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key \
    && mkdir /var/run/sshd
ADD platform_sym_adv_entitlement.dat /opt/platform_sym_adv_entitlement.dat
# Download & Install Symphony package
RUN wget --no-check-certificate -O /opt/symsetup7.1.1_linux-x86_64.bin https://lweb.eng.platformlab.ibm.com/engr/pcc/release_eng/work/sym/sym_mainline/last/symsetup7.1.1_linux-x86_64.bin \
    && export CLUSTERADMIN=egoadmin \
    && export DERBY_DB_HOST=localhost \
    && export SIMPLIFIEDWEM=Y \
    && chmod +x /opt/* \
    && /opt/symsetup7.1.1_linux-x86_64.bin --quiet \
    && rm -f /opt/*.bin
USER egoadmin
CMD ["/usr/sbin/sshd", "-D"]

 

Note that this Dockerfile uses the wget command to download the Platform Symphony installation package into the Docker image. Avoid using the ADD or COPY instructions to copy large packages, because ADD and COPY each create a new layer in the Docker image, increasing the image size. Even if you delete the package in a later instruction, the final image size is not reduced.
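
To see the effect of this approach, inspect the size of each layer of the finished image; the single RUN instruction that downloads, installs, and then deletes the package produces only one layer:

$ docker history sym711:v1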

 

Setting up the Kubernetes and Weave environment

  1. Install Kubernetes, Docker, and etcd on all hosts:

$ yum install -y kubernetes etcd

 

Note: Kubernetes depends on Docker, so Docker is installed automatically as a dependency when you install Kubernetes.
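
For example, you can confirm that all three packages are present after the installation:

$ rpm -q kubernetes docker etcd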

 

  2. Install the Weave virtual network on all hosts:

$ sudo curl -L git.io/weave -o /usr/local/bin/weave

$ sudo chmod +x /usr/local/bin/weave

$ systemctl start docker

$ weave setup


Note: Weave pulls its Docker images from the public Docker Hub, so ensure that all Docker hosts have Internet access. Weave starts its services in Docker containers, so do not stop the containers that Weave has already started.
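
For example, you can verify that the Weave images were pulled successfully:

$ docker images | grep -i weave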

 

  3. Start Weave on all hosts. Choose one host as the Weave master node and the other hosts as Weave slave nodes, then join all the hosts to one network:

Master node:
$ weave launch-proxy --rewrite-inspect --without-dns
$ weave launch-router
$ weave expose

 

Slave node:
$ weave launch-proxy --rewrite-inspect --without-dns
$ weave launch-router $Weave_master_host_name
$ weave expose
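
After the router is launched on all hosts, you can confirm from any host that the routers have connected to each other (the exact subcommands depend on your Weave version):

$ weave status connections    # established connections to peer routers
$ weave status peers          # all peers known to this router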

 

 

  4. Configure and start the Kubernetes master (in this case, we start the etcd server on the same host as the Kubernetes master host):

Configure the etcd server. Edit the /etc/etcd/etcd.conf file as follows:
ETCD_LISTEN_CLIENT_URLS=http://Kubernetes_master_host:4001
ETCD_ADVERTISE_CLIENT_URLS=http://Kubernetes_master_host:4001

Configure the Kubernetes controller manager. Edit the /etc/kubernetes/config file as follows:
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://Kubernetes_master_host:8080"

 

Configure the Kubernetes api server. Edit /etc/kubernetes/apiserver as follows:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://Kubernetes_master_host:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Remove ServiceAccount from this line to run without API Tokens
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

 

Start the Kubernetes server:
$ systemctl restart etcd kube-apiserver kube-controller-manager kube-scheduler
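
Optionally, enable the services so that they start on boot, and verify that the control plane components report a healthy status:

$ systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
$ kubectl get componentstatuses    # scheduler, controller-manager, and etcd should be Healthy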

 

  5. Configure and start the Kubernetes minion server:

Edit the /etc/kubernetes/config file as follows:
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://Kubernetes_master_host:8080"

Edit the /etc/kubernetes/kubelet file as follows:

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME=""
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://Kubernetes_master_host:8080"
# Add your own!
KUBELET_ARGS="--docker-endpoint=unix:///var/run/weave/weave.sock"

Start the Kubernetes minion server:
$ systemctl restart kube-proxy kubelet

 

On the master host, check that all minion nodes have started:
$ kubectl get nodes
NAME        LABELS                             STATUS    AGE
a.ibm.com   kubernetes.io/hostname=a.ibm.com   Ready     4s
b.ibm.com   kubernetes.io/hostname=b.ibm.com   Ready     4s
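
Optionally, enable the services on each minion host so that they restart automatically after a reboot:

$ systemctl enable docker kube-proxy kubelet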

 

Creating the Platform Symphony cluster using YAML files

Kubernetes uses a YAML file to deploy pods. We will use two YAML files (sym_master_cluster.yaml and sym_compute_cluster.yaml) to create the Platform Symphony master and compute nodes, respectively, and use the Platform Symphony Docker image sym711:v1.
 

  1. Create the Platform Symphony master node using the master node YAML file:
    $ kubectl create -f /tmp/sym_master_cluster.yaml

    The sym_master_cluster.yaml file contains the following information:

kind: ReplicationController
apiVersion: v1
metadata:
  name: sym-master-rc
spec:
  replicas: 1
  selector:
    component: sym-master
  template:
    metadata:
      labels:
        component: sym-master
    spec:
      containers:
        - name: sym-master
          image: sym711:v1
          command: ["/bin/sh", "-c", "source /opt/ibm/platformsymphony/profile.platform; egoconfig join `hostname` -f; egoconfig setentitlement /opt/platform_sym_adv_entitlement611.dat; egosh ego start; sudo /usr/sbin/sshd -D"]
          resources:
            requests:
              cpu: 100m
              memory: 4096M
            limits:
              memory: 8192M

 

When you create a Platform Symphony node with a YAML file, Kubernetes creates a Docker container from the image specified in the YAML file and runs the container start command defined in the YAML file.
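
Before you create the compute nodes, confirm that the master pod is running and note its generated name and Weave IP address; both values are referenced in the compute node YAML file in the next step (the pod name suffix is random, so your values will differ from the sym-master-rc-04yws and 10.32.0.1 examples shown below):

$ kubectl get pods -l component=sym-master
$ kubectl describe pod sym-master-rc-04yws | grep IP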

 

  2. After the master pod has been created and is running, create at least one Platform Symphony compute node using the compute node YAML file:
    $ kubectl create -f /tmp/sym_compute_cluster.yaml

 

The sym_compute_cluster.yaml file contains the following information:

 

kind: ReplicationController
apiVersion: v1
metadata:
  name: sym-compute-rc
spec:
  replicas: 3
  selector:
    component: sym-compute
  template:
    metadata:
      labels:
        component: sym-compute
    spec:
      containers:
        - name: sym-compute
          image: sym711:v1
          command: ["/bin/sh", "-c", "source /opt/ibm/platformsymphony/profile.platform; sudo chmod 777 /etc/hosts; echo '10.32.0.1 sym-master-rc-04yws' >> /etc/hosts; egoconfig join sym-master-rc-04yws -f; egosh ego start; sudo /usr/sbin/sshd -D"]
          resources:
            requests:
              cpu: 100m
              memory: 2048M

 

  3. If necessary, scale the number of compute nodes after you have created the compute replication controller:

$ kubectl scale --replicas=10 replicationcontrollers sym-compute-rc
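
You can then check that the additional compute pods start and, from inside the master container, that the new hosts join the Platform Symphony cluster (the egosh command is shown as an illustration; you might need to log on with egosh user logon first):

$ kubectl get rc sym-compute-rc                # desired replica count is now 10
$ kubectl get pods -l component=sym-compute    # one pod per compute node
$ kubectl exec sym-master-rc-04yws -- bash -c "source /opt/ibm/platformsymphony/profile.platform; egosh resource list"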

 

Accessing the management console outside of Docker containers

To access the Platform Symphony GUI (management console) from outside of the Docker containers, use a Kubernetes service: create a Platform Symphony GUI (WEBGUI) service using the WEBGUI YAML file.

 

  1. Create the WEBGUI service:
    $ kubectl create -f webgui.yaml


The webgui.yaml file contains the following information:
 

kind: Service
apiVersion: v1
metadata:
  name: sym-webgui
spec:
  ports:
    - port: 8443
      targetPort: 8443
      protocol: TCP
      name: https
  selector:
    component: sym-master
  type: NodePort

 

  2. Kubernetes allocates a random available port on all Kubernetes hosts and maps it to the Platform Symphony WEBGUI service. Determine the port number:
    $ kubectl describe service sym-webgui

 

The following example output shows that the WEBGUI service is mapped to NodePort 31439:

[root@a ~]# kubectl describe service sym-webgui
Name:                   sym-webgui
Namespace:              default
Labels:                 <none>
Selector:               component=sym-master
Type:                   NodePort
Port:                   https   8443/TCP
NodePort:               https   31439/TCP
Endpoints:              10.32.0.1:8443
Session Affinity:       None
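
For example, before opening a browser you can confirm from outside the cluster that the NodePort is reachable on any Kubernetes host (31439 is the example port from the output above; yours will differ):

$ curl -k -I https://<any_Kubernetes_host>:31439/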

 

  3. Access the management console from one of the Kubernetes hosts using the port determined in the previous step. The following screenshot shows logging in to the management console using port 31439:

[Screenshot: logging in to the Platform Symphony management console on port 31439]

 

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSZUMP","label":"IBM Spectrum Symphony"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

UID

ibm16164127