Managing docker containers with orchestration

Docker orchestration covers container scheduling, cluster management, and the provisioning of additional hosts in a docker environment.

Kubernetes

Kubernetes is an open source system for automating the deployment, operation, and scaling of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery, and it works on a master-slave model. The following are the major components of a Kubernetes cluster:

  • Master: Cluster manager that oversees one or more nodes (minions).
  • Node or Minion or Slave: Cluster members that are responsible for starting containers.
  • Pod: Basic unit of operation in Kubernetes. It represents a group of one or more containers constituting an application (or part) that runs on a slave (minion).
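As a sketch of how a pod is described, the following manifest defines a single-container pod; the pod name, container name, and image are illustrative, not taken from the cluster set up below:

```shell
# Write a minimal single-container pod manifest (names are illustrative).
cat > nginx-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
```

Submitting this file with kubectl create -f nginx-pod.yaml on the master schedules the pod on one of the minions.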

Availability

The following table lists the location of the relevant packages for PowerPC LE (ppc64le) platforms:

Linux Distribution                     Package Location
Fedora 24                              Distro repository
Red Hat Enterprise Linux (RHEL) 7.X    Unicamp[1]

[1] Unicamp Repo - http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/misc_ppc64el/

Setting up a Kubernetes cluster on RHEL 7.1 LE

Installation and setup of Kubernetes

Ensure that the following Unicamp repositories are added to all the systems that are going to be part of the Kubernetes cluster:

# cat > /etc/yum.repos.d/unicamp-docker.repo <<EOF
[unicamp-docker]
name=Unicamp Repo for Docker Packages
baseurl=http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/
enabled=1
gpgcheck=0
EOF
 
# cat > /etc/yum.repos.d/unicamp-misc.repo <<EOF
[unicamp-misc]
name=Unicamp Repo for Misc Packages
baseurl=http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/misc_ppc64el/
enabled=1
gpgcheck=0
EOF

Installation and setup of Kubernetes master

  1. Install the required packages.
    # yum install kubernetes-client kubernetes-master etcd
  2. Open network ports. By default, the Kubernetes apiserver listens on port 8080 for kubelets. Ensure that the local firewall does not block this port. If you are using firewalld, run the following commands to open a TCP port for the public zone.
    # firewall-cmd --zone=public --add-port=8080/tcp --permanent 
    # firewall-cmd --reload

    Additionally, the etcd server listens on port 2379 by default. Use the following instructions to open the respective port:
    # firewall-cmd --zone=public --add-port=2379/tcp --permanent 
    # firewall-cmd --reload
  3. Configure Kubernetes Master. For the remaining steps of the configuration, assume that the Kubernetes master has the IP 192.168.122.76, and the Kubernetes node has the IP address 192.168.122.236.
    Modify the /etc/kubernetes/config file according to the environment. Based on the above information, the modified file has the following content:
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://192.168.122.76:8080"

    Modify the /etc/kubernetes/apiserver file according to the environment. Based on the above information, the modified file has the following content:
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    # The port on the  local server to listen on.
    # KUBE_API_PORT="--port=8080"
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
    #  Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.122.76:2379"
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    # Add your own!
    KUBE_API_ARGS=""
  4. Configure Etcd. Modify the following two parameters in the /etc/etcd/etcd.conf file as described:
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" 
    ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
  5. Start the services.
    # for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES
    done

Installation and setup of Kubernetes node (Minion)

  1. Install the required packages.
     # yum install docker-io kubernetes-client kubernetes-node
  2. Configure the Kubernetes node. Modify the /etc/kubernetes/kubelet file according to the environment. Based on the above information, the modified file has the following content:
    # kubernetes kubelet (minion) config
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME=""
    # location of the api-server
    KUBELET_API_SERVER="--api-servers=http://192.168.122.76:8080"
    # Add your own!
    KUBELET_ARGS="--pod-infra-container-image=gcr.io/google_containers/pause-ppc64le:2.0"
  3. Start the services.
    # for SERVICES in kube-proxy kubelet docker; do
      systemctl restart $SERVICES
      systemctl enable $SERVICES
      systemctl status $SERVICES
    done
  4. Verify the setup. Log in to the master and run kubectl get nodes to check the available nodes.
    [root@localhost ~]# kubectl get nodes
    NAME     LABELS                          STATUS AGE
    fed-node kubernetes.io/hostname=fed-node Ready  1h
  5. From any node in the cluster, log in to the private registry to get the registry authentication config file.
    # docker login https://registry-rhel71.kube.com:5000
    # cat /root/.docker/config.json 
    {
        "auths": {
            "https://registry-rhel71.kube.com:5000": {
                "auth": "cHJhZGlwdGE6cHJhZGlwdGE=",
                "email": "test@test.com"
            }
        }
    }

Copy this config file (config.json) to all the nodes in the Kubernetes cluster to the path /root/.docker/config.json. The cluster is now set up to use the private registry server.
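With the authentication config distributed, a pod spec can reference an image by its registry-qualified name. A hypothetical manifest (the image name and tag are illustrative; only the registry host matches the setup above):

```shell
# Pod manifest pulling from the private registry (image name is illustrative).
cat > registry-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
  - name: app
    image: registry-rhel71.kube.com:5000/myapp:latest
EOF
```

Creating this pod with kubectl create -f registry-pod.yaml causes the target node to pull the image using the credentials in /root/.docker/config.json.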

Docker swarm: native clustering for docker hosts

Docker Swarm is native clustering for docker. Swarm turns a pool of docker hosts into a single, virtual docker host. Because Docker Swarm serves the standard docker API, any tool that already communicates with a docker daemon can use Swarm to transparently scale to multiple hosts.
Conceptually, a swarm cluster looks like the following:

[Figure: a Swarm manager exposing a pool of docker hosts, each running the docker daemon, as a single virtual docker host]

Availability

Currently, for Power platforms you must build Docker Swarm from the source available at https://github.com/docker/swarm.

Setup on Power servers

Complete the following procedure to work with swarm on Power servers that are running Ubuntu LE:
Run the following commands.

$ mkdir ~/go.prj
$ export GOPATH=~/go.prj
$ export PATH=$PATH:~/go.prj/bin
$ go get github.com/tools/godep
$ go get github.com/docker/swarm
The swarm binary will be available under $GOPATH/bin/.
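The exports above only last for the current session; to keep the binary on the PATH across logins, they can be persisted (a sketch, assuming the ~/go.prj workspace used above):

```shell
# Persist the Go workspace settings so the swarm binary stays on PATH.
cat >> ~/.bashrc <<'EOF'
export GOPATH=~/go.prj
export PATH=$PATH:$GOPATH/bin
EOF
```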

Getting started

  1. For each docker host that must be a part of the Swarm cluster, configure the docker daemon to expose the docker API over TCP.
     docker -H tcp://0.0.0.0:2375 daemon

    It's advisable to set up TLS to secure the communication between docker, Swarm, and the client.
  2. Run Swarm and register the host.
    • If you are using the Docker Swarm image, run the following command:
       docker run -H tcp://0.0.0.0:2375 -d swarm join --addr=<node_ip:2375> <discovery-option>
    • If you are using the Swarm that is built from source, run the following command:
       swarm join --addr=<node_ip:2375> <discovery-option>
  3. Start the swarm manage service on the designated machine.
    • If you are using the Docker Swarm image, run the following command:
       docker run -d -p <swarm_cluster_mgr_port>:2375 swarm manage <discovery-option>
    • If you are using the Swarm that is built from source, run the following command:
       swarm manage <discovery-option>
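Once the manage service is up, the whole cluster can be driven like a single docker host. A minimal sketch, assuming the manager listens on 192.168.122.76:4000 (the address and port are illustrative):

```shell
# Point the standard docker client at the Swarm manager (hypothetical endpoint).
export DOCKER_HOST=tcp://192.168.122.76:4000
# docker info            # now reports every node in the cluster
# docker run -d nginx    # the manager schedules this on one of the nodes
```

Because Swarm serves the standard docker API, no change to the client tooling is needed beyond the endpoint.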

Discovery and cluster membership

Swarm supports multiple mechanisms for discovering Swarm nodes and creating a cluster.

  • Hosted discovery
    1. Use token://cluster_id as the discovery-option
    2. Hosted at https://discovery-stage.hub.docker.com
  • File
    1. Add docker hosts in <ip>:<port> format to a file
    2. Use file://path_to_file as the discovery option
  • Comma-separated list of node details (<ip>:<port>) passed directly on the command line
  • Etcd
    Use etcd://<etcd_ip>/<path> as the discovery option
  • Consul
    Use consul://<consul_ip>/<path> as the discovery option
  • Zookeeper
    Use zk://<zookeeper_addr1>,<zookeeper_addr2>/<path> as the discovery option
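For example, file-based discovery only needs a plain-text list of the nodes, one per line (the addresses below are illustrative):

```shell
# Build a cluster file for file-based discovery (addresses are illustrative).
cat > /tmp/swarm-cluster <<EOF
192.168.122.236:2375
192.168.122.237:2375
EOF
```

The manager is then started with swarm manage file:///tmp/swarm-cluster.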

Setting up a Mesos/Marathon cluster on RHEL 7.1 little endian

Refer to the Setting up a Mesos/Marathon cluster on RHEL 7.1 little endian article for the steps to set up a Mesos/Marathon cluster on OpenPOWER servers.

Mesos and Kubernetes on a hybrid (IBM Power and x86) architecture scenario

Refer to the Mesos and Kubernetes on a hybrid (IBM Power and x86) architecture scenario article for reference solutions about applying Mesos and Kubernetes into Linux on a hybrid architecture (including IBM Power and x86) environment.

Connect

The IBM Linux Technology Center (LTC) is a team of IBM open source software developers who work in cooperation with the Linux open source development community. The LTC serves as a center of technical competency for Linux. Connect with us.



