Setting up a Kubernetes runtime environment
You can use Kubernetes containers to run the APIs and applications that are managed by API Connect.
Note: This article refers to third-party software that IBM does not control. As such, the software may change and this information may become outdated. For the latest information, refer to the Kubernetes documentation at https://kubernetes.io.
Before you begin
System and software requirements:
- Ubuntu version 16.04
- Kubernetes version 1.5.3
- etcd version 3.1.2
- Kubernetes CNI version 0.4.0
- CloudFlare PKI/TLS toolkit (CFSSL) version 1.2
Networking requirements: All nodes must be interconnected and reachable on the same network. You must also define two additional network CIDRs, one for the pod IP addresses and one for the service IP addresses, that do not overlap with each other or with the node networks.
- Public network CIDR: 10.0.2.0/24
- Public facing NIC: enp0s2
- Private network CIDR: 172.28.127.0/24
- Private NIC: enp0s5
- Pod network: 10.2.0.1/24
- Service network CIDR: 10.3.0.0/24
| Hostname | FQDN | IP on private network | IP on public network |
|---|---|---|---|
| master1 | master1.k8s.myorg.com | 172.28.127.2 | 10.0.2.15 |
| master2 | master2.k8s.myorg.com | 172.28.127.3 | 10.0.2.16 |
| master3 | master3.k8s.myorg.com | 172.28.127.4 | 10.0.2.17 |
| worker1 | worker1.k8s.myorg.com | 172.28.127.5 | 10.0.2.18 |
| worker2 | worker2.k8s.myorg.com | 172.28.127.6 | 10.0.2.19 |
| worker3 | worker3.k8s.myorg.com | 172.28.127.7 | 10.0.2.20 |
- API Server: 10.3.0.1
- DNS Server: 10.3.0.10
- Load balancer: apiserver.k8s.myorg.com (10.0.2.45)
About this task
Kubernetes is a platform for automated deployment, scaling, and operation of application containers across clusters of hosts, providing container-centric infrastructure. For more information, see https://kubernetes.io.
See https://github.com/ibm-apiconnect/kubernetes-setup for the scripts used in this article.
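If you want a local copy of those scripts, you can clone the repository (this assumes git is installed on your workstation):
git clone https://github.com/ibm-apiconnect/kubernetes-setup.git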
Procedure
- Prepare machines by installing Python and requisite Python packages:
apt-get update
apt-get install -y python python-simplejson python-pip
- Prepare certificates
This procedure assumes self-signed certificates created using CloudFlare's PKI toolkit (CFSSL). For more information on CFSSL, see https://cfssl.org/. You can also use openssl to generate these certificates. Contact your company's security/compliance team to get the appropriate certificates. In general, you create certificates on your local machine and copy them to the remote machines later.
Note: Copy the certificates you create on one host to the other hosts, except for the peer certificates.
- Download cfssl and cfssljson. Download version 1.2 of cfssl and cfssljson from https://pkg.cfssl.org/ by entering one of the following commands:
On Linux:
curl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
On MacOS:
curl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 -o /usr/local/bin/cfssl
curl https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64 -o /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
- Add /usr/local/bin/cfssl and /usr/local/bin/cfssljson to your PATH environment variable.
- Create CA certificate.
In a new directory, create your CA certificates and signing profiles.
Create ca-config.json and add the following to it:
{ "signing": { "default": { "expiry": "168h" }, "profiles": { "server": { "expiry": "43800h", "usages": [ "signing", "key encipherment", "server auth" ] }, "client": { "expiry": "43800h", "usages": [ "signing", "key encipherment", "client auth" ] }, "peer": { "expiry": "43800h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] } } } }
Create ca-csr.json and add the following to it:
{ "CN": "Dev CA", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Enter the following command to create the CA certificate and key files:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
- Create Kubernetes API server certificate.
The Kubernetes API server makes outgoing calls to the Controller, Scheduler, and Kubelets and accepts incoming API calls from many clients. Thus, it uses both
server auth and client auth capabilities. Customize the file to include host names and IP addresses for your master servers, load balancer, and cluster-internal API server IP.
Create apiserver.json and add the following to it:
{ "CN": "apiserver", "hosts": [ "master1.k8s.myorg.com", "master1", "172.28.127.2", "master2.k8s.myorg.com", "master2", "172.28.127.3", "master3.k8s.myorg.com", "master3", "172.28.127.4", "apiserver.k8s.myorg.com", "10.0.2.45", "10.3.0.1", "localhost", "127.0.0.1" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Create the API server certificate and key files by entering the following command:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=peer apiserver.json | cfssljson \
  -bare apiserver
- Create Kubernetes Scheduler certificate.
The Kubernetes Scheduler is a client to the API server and requires only
client auth capabilities. Create scheduler.json and add the following to it:
{ "CN": "scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the scheduler certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=client scheduler.json | cfssljson \
  -bare scheduler
- Create Kubernetes proxy certificate.
The Kubernetes proxy is a client to the API server and requires only
client auth capabilities. Create proxy.json and add the following to it:
{ "CN": "proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the proxy certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=client proxy.json | cfssljson \
  -bare proxy
- Create Kubernetes Controller certificate.
The Kubernetes Controller is a client to the API server and requires only
client auth capabilities. Create controller.json and add the following to it:
{ "CN": "controller", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the controller certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=client controller.json | cfssljson \
  -bare controller
- Create Kubelet certificate.
The Kubelet service is both a client and server to the Kubernetes API server and thus uses both
server auth and client auth capabilities. You will create a single certificate for all nodes. However, you can create individual certificates if needed. Create a file named kubelet.json and customize the host names and IP addresses for your master and worker nodes (shown in bold in the example below):
{ "CN": "kubelet", "hosts": [ "master1.k8s.myorg.com", "master1", "172.28.127.2", "master2.k8s.myorg.com", "master2", "172.28.127.3", "master3.k8s.myorg.com", "master3", "172.28.127.4", "worker1.k8s.myorg.com", "worker1", "172.28.127.5", "worker2.k8s.myorg.com", "worker2", "172.28.127.6", "worker3.k8s.myorg.com", "worker3", "172.28.127.7" ], "key": { "algo": "ecdsa", "size": 256 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the Kubelet certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=peer kubelet.json | cfssljson \
  -bare kubelet
- Create Kubernetes admin user certificate, admin.json:
{ "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "O": "system:masters", "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the admin user certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=client admin.json | cfssljson \
  -bare admin
- Create etcd server certificates.
You will create a single certificate for all nodes. However, you can create individual certificates if needed. Create etcd.json and customize the hostnames and IP addresses for your etcd nodes (shown in bold in the example below):
{ "CN": "etcd", "hosts": [ "etcd1.k8s.myorg.com", "etcd1", "172.28.127.8", "etcd2.k8s.myorg.com", "etcd2", "172.28.127.9", "etcd3.k8s.myorg.com", "etcd3", "172.28.127.10" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the etcd certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=server etcd.json | cfssljson \
  -bare etcd
- Create etcd peer certificates.
Create peer certificates for each etcd server. Etcd uses these certificates to secure communication between nodes.
Create a file for each of your etcd nodes with the name etcd-peer-hostname.json, where hostname is the name of each host. Customize the host name and IP address in each file (shown in bold in the example below):
{ "CN": "etcd", "hosts": [ "etcd1.k8s.myorg.com", "etcd1", "172.28.127.8" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the etcd peer certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=peer etcd-peer-HOSTNAME.json | cfssljson \
  -bare etcd-peer-HOSTNAME
- Create etcd client certificate.
Create a client certificate, which the API server will use to communicate with etcd, in etcd-client.json:
{ "CN": "etcd-client", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "CA", "ST": "San Francisco" } ] }
Run the following command to create the etcd client certificate and key files:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=client etcd-client.json | cfssljson \
  -bare etcd-client
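Before copying the certificates to the remote machines, you can optionally confirm that each one was signed by your CA. This is a minimal sketch that assumes you run it in the directory containing the generated .pem files; extend the list to include your etcd-peer-hostname certificates:
# Check that every generated certificate chains back to ca.pem
for cert in apiserver scheduler proxy controller kubelet admin etcd etcd-client; do
  openssl verify -CAfile ca.pem "${cert}.pem"
done
# Inspect the SANs on the API server certificate
openssl x509 -in apiserver.pem -noout -text | grep -A 1 "Subject Alternative Name"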
- Create etcd cluster
- Create SSL certificate directory.
Enter this command on each etcd server:
useradd -r -s /sbin/nologin etcd
mkdir -p /etc/etcd/ssl
- Copy certificates.
Copy the following certificates onto each etcd server in directory
/etc/etcd/ssl:
- ca.pem
- etcd-peer-hostname.pem
- etcd-peer-hostname-key.pem
- etcd.pem
- etcd-key.pem
- etcd-client.pem
- etcd-client-key.pem
Set hostname above to be the name of each of the etcd nodes.
Enter this command to give the etcd user the required directory and file permissions:
chown root:etcd -R /etc/etcd/ssl
- Install etcd binaries.
Download etcd binaries from https://github.com/coreos/etcd/releases/download/v3.1.2/etcd-v3.1.2-linux-amd64.tar.gz and extract etcd and etcdctl to /usr/bin on each etcd server.
curl -L https://github.com/coreos/etcd/releases/download/v3.1.2/etcd-v3.1.2-linux-amd64.tar.gz -o \
  /tmp/etcd-v3.1.2-linux-amd64.tar.gz
cd /tmp
tar zxf etcd-v3.1.2-linux-amd64.tar.gz
cp /tmp/etcd-v3.1.2-linux-amd64/etcd /usr/bin/
cp /tmp/etcd-v3.1.2-linux-amd64/etcdctl /usr/bin/
- Create etcd data directory.
Create the data directory by entering this command on each etcd server:
mkdir -p /var/lib/etcd/
chown etcd:etcd /var/lib/etcd/
- Create and start etcd service.
Customize the following template for each etcd node and copy it to /etc/systemd/system/etcd.service.
The initial cluster is a comma-separated list of HOSTNAME=PEER_URL entries, one for each of the etcd nodes. Customize the rest of the file with the IP address of each etcd node. Values you need to set are shown in bold in the example below:
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
EnvironmentFile=-/etc/default/etcd
User=etcd
ExecStart=/usr/bin/etcd \
  --name="HOSTNAME" \
  --initial-cluster="HOSTNAME1=https://HOST1_IP:2380,HOSTNAME2=https://HOST2_IP:2380,HOSTNAME3=https://HOST3_IP:2380" \
  --listen-peer-urls="https://HOST_IP:2380" \
  --initial-advertise-peer-urls="https://HOST_IP:2380" \
  --advertise-client-urls="https://HOST_IP:2379" \
  --listen-client-urls="https://HOST_IP:2379,https://127.0.0.1:2379" \
  --data-dir=/var/lib/etcd/ \
  --trusted-ca-file="/etc/etcd/ssl/ca.pem" \
  --cert-file="/etc/etcd/ssl/etcd.pem" \
  --key-file="/etc/etcd/ssl/etcd-key.pem" \
  --peer-cert-file="/etc/etcd/ssl/etcd-peer-hostname.pem" \
  --peer-key-file="/etc/etcd/ssl/etcd-peer-hostname-key.pem" \
  --peer-trusted-ca-file="/etc/etcd/ssl/ca.pem" \
  --client-cert-auth \
  --peer-client-cert-auth \
  --initial-cluster-state=new
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Note: You created the files etcd-peer-hostname.pem and etcd-peer-hostname-key.pem in step 2.
Start etcd on each node:
systemctl enable etcd.service
systemctl start etcd.service
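To confirm that the cluster formed correctly, you can query it with etcdctl from any etcd node. This sketch assumes the example etcd address used above (172.28.127.8) and the etcd-client certificate copied to /etc/etcd/ssl:
etcdctl --ca-file /etc/etcd/ssl/ca.pem \
  --cert-file /etc/etcd/ssl/etcd-client.pem \
  --key-file /etc/etcd/ssl/etcd-client-key.pem \
  --endpoints "https://172.28.127.8:2379" \
  cluster-health
All members should be reported as healthy before you continue.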
- Set up nodes
This is the common setup for both master and worker Kubernetes nodes.
- Create the SSL certificate directory.
On each master and worker node enter this command:
mkdir -p /etc/kubernetes/ssl
- Copy certificates.
Copy the following certificates onto each master and worker node in directory
/etc/kubernetes/ssl/:
- ca.pem
- etcd-client.pem
- etcd-client-key.pem
- kubelet.pem
- kubelet-key.pem
- proxy.pem
- proxy-key.pem
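If you created the certificates on your local machine, a loop such as the following can distribute them. This is only a sketch; it assumes root SSH access and the example host names from the table above:
for host in master1 master2 master3 worker1 worker2 worker3; do
  ssh root@${host}.k8s.myorg.com "mkdir -p /etc/kubernetes/ssl"
  scp ca.pem etcd-client.pem etcd-client-key.pem \
      kubelet.pem kubelet-key.pem proxy.pem proxy-key.pem \
      root@${host}.k8s.myorg.com:/etc/kubernetes/ssl/
done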
- Install flannel, a virtual network that gives a subnet to each host for use with Kubernetes.
Enter the following command to download the flannel binary from https://github.com/coreos/flannel/releases/download/v0.7.0/flanneld-amd64 and install it to /usr/bin on each Kubernetes server.
curl -L -o /usr/bin/flanneld https://github.com/coreos/flannel/releases/download/v0.7.0/flanneld-amd64
chmod +x /usr/bin/flanneld
- Install flannel CNI binaries.
Download https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz and extract to /opt/cni/bin on each Kubernetes node.
curl -L https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz -o /tmp/cni-amd64-v0.4.0.tgz
cd /tmp
mkdir cni-amd64-v0.4.0
tar zxf cni-amd64-v0.4.0.tgz --directory cni-amd64-v0.4.0
mkdir -p /opt/cni/bin
cp -ar /tmp/cni-amd64-v0.4.0/* /opt/cni/bin/
- Create the flannel service.
Copy the following to
/etc/systemd/system/flannel.service:
[Unit]
Description=flannel is an etcd backed overlay network for containers
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=-/etc/default/flanneld
ExecStart=/usr/bin/flanneld $FLANNEL_OPTIONS -logtostderr
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Customize the following template with information about your etcd nodes (shown in bold below) and copy it to /etc/default/flanneld:
FLANNELD_ETCD_ENDPOINTS="https://HOST_IP:2379,https://HOST_IP:2379"
FLANNELD_ETCD_PREFIX="coreos.com/network/"
FLANNELD_ETCD_CERTFILE="/etc/kubernetes/ssl/etcd-client.pem"
FLANNELD_ETCD_KEYFILE="/etc/kubernetes/ssl/etcd-client-key.pem"
FLANNELD_ETCD_CAFILE="/etc/kubernetes/ssl/ca.pem"
FLANNELD_IFACE="INTERNAL_NETWORK_NIC"
FLANNEL_OPTIONS="-ip-masq"
Change the value for FLANNELD_IFACE to be the NIC for your internal traffic.
- Initialize flannel configuration in etcd.
Customize the following command with the host names and IP addresses of your etcd nodes and the pod network CIDR.
Run the following command on one of the etcd servers.
etcdctl \
  --cert-file /etc/kubernetes/ssl/etcd-client.pem \
  --key-file=/etc/kubernetes/ssl/etcd-client-key.pem \
  --ca-file /etc/kubernetes/ssl/ca.pem \
  --endpoints "https://HOST_IP:2379,https://HOST_IP:2379" \
  set \
  -- coreos.com/network/config '{"Network":"POD_NETWORK","Backend":{"Type":"vxlan"}}'
- Enable and start the flannel service.
On each of the master and worker nodes, run the following commands to enable and run the flannel overlay network.
systemctl enable flannel.service
systemctl start flannel.service
- Install Docker.
On each of the master and worker nodes, run the following commands to install Docker and stop the service.
apt-get install -y docker.io
systemctl stop docker
- Configure Docker to use flannel
Add the following lines to the /etc/default/docker file.
DOCKER_NOFILE=1000000
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
Run the following command:
mkdir -p /etc/cni/net.d/
Copy the following lines to /etc/cni/net.d/10-flannel.conf:
{ "name": "kubenet", "type": "flannel", "delegate": { "isDefaultGateway": true, "ipMasq": true } }
Bring down the docker0 bridge network.
ip link set dev docker0 down
brctl delbr docker0
iptables -t nat -F
- Install the Kubernetes binaries
On each of the master and worker nodes, enter these commands to download
kubectl and kubelet to /usr/bin/:
curl -o /usr/bin/kubelet http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubelet
curl -o /usr/bin/kubectl http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubectl
chmod +x /usr/bin/kubelet /usr/bin/kubectl
- Create the Kubernetes service.
On each of the master and worker nodes, customize the template below and copy to
/lib/systemd/system/kubelet.service.
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=/usr/bin/kubelet \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --tls-cert-file=/etc/kubernetes/ssl/kubelet.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubelet-key.pem \
  --config=/etc/kubernetes/manifests \
  --register-node=true \
  --api-servers="https://HOST:PORT" \
  --cluster_dns=DNS_SERVICE_IP \
  --cluster_domain=cluster.local \
  --allow-privileged=true \
  --enable-debugging-handlers=true \
  --port=10250 \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --hostname-override=PUBLIC_IP
Restart=always
RestartSec=2s
StartLimitInterval=0
KillMode=process

[Install]
WantedBy=multi-user.target
Make the following changes:
- Set api-servers to either the load-balanced host/port or a comma-separated list of all the API servers.
- Set cluster_dns to the IP address of the Kubernetes DNS service.
- For hostname-override, specify the public IP address.
- Set server: https://HOST:PORT to the host name and port number of the load balancer.
- Set up kubeconfig for kubelet.
On each of the master and worker nodes, customize the template below and copy to
/var/lib/kubelet/kubeconfig.Set
server: https://HOST:PORTto be the host name and port number of the load-balancer.apiVersion: v1 clusters: - cluster: certificate-authority: /etc/kubernetes/ssl/ca.pem server: https://HOST:PORT name: mycluster contexts: - context: cluster: mycluster user: kubelet name: mycontext current-context: mycontext kind: Config preferences: {} users: - name: kubelet user: client-certificate: /etc/kubernetes/ssl/kubelet.pem client-key: /etc/kubernetes/ssl/kubelet-key.pem -
Set up kubeconfig for kube-proxy.
On each of the master and worker nodes, customize the template below and copy to
/var/lib/kube-proxy/kubeconfig.Set
server: https://HOST:PORTto the host name and port number of the load-balancer.apiVersion: v1 clusters: - cluster: certificate-authority: /etc/kubernetes/ssl/ca.pem server: https://HOST:PORT name: mycluster contexts: - context: cluster: mycluster user: proxy name: mycontext current-context: mycontext kind: Config preferences: {} users: - name: proxy user: client-certificate: /etc/kubernetes/ssl/proxy.pem client-key: /etc/kubernetes/ssl/proxy-key.pem -
Set up manifest for kube-proxy.
On each of the master and worker nodes, customize the template below and copy to
/etc/kubernetes/manifests/proxy.manifest.Set
server: https://HOST:PORTto the host name and port number of the load-balancer.apiVersion: v1 kind: Pod metadata: name: kube-proxy namespace: kube-system # This annotation ensures that kube-proxy does not get evicted if the node # supports critical pod annotation based priority scheme. # Note that kube-proxy runs as a static pod so this annotation does NOT have # any effect on rescheduler (default scheduler and rescheduler are not # involved in scheduling kube-proxy). annotations: scheduler.alpha.kubernetes.io/critical-pod: '' labels: tier: node component: kube-proxy spec: hostNetwork: true containers: - name: kube-proxy image: "gcr.io/google_containers/hyperkube:v1.5.3" command: - /hyperkube - proxy - "--master=https://HOST:PORT" - "--kubeconfig=/var/lib/kube-proxy/kubeconfig" - "--cluster-cidr=POD_NETWORK_CIDR" - --proxy-mode=iptables - --masquerade-all securityContext: privileged: true volumeMounts: - mountPath: /etc/ssl/certs name: etc-ssl-certs readOnly: true - mountPath: /etc/kubernetes/ssl name: kubecerts readOnly: true - mountPath: /var/lib/kube-proxy/kubeconfig name: kubeconfig readOnly: false volumes: - hostPath: path: /etc/kubernetes/ssl name: kubecerts - hostPath: path: /etc/ssl/certs name: etc-ssl-certs - hostPath: path: /var/lib/kube-proxy/kubeconfig name: kubeconfig
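As an optional sanity check on each node, you can confirm that the kubeconfig files parse and point at the expected certificates (this assumes kubectl was installed in the earlier step):
kubectl config view --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config view --kubeconfig=/var/lib/kube-proxy/kubeconfig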
- Set up master nodes
Run the following steps only on master nodes.
- Copy certificates.
Copy the following certificates onto each master server under
/etc/kubernetes/ssl/:
- apiserver.pem
- apiserver-key.pem
- scheduler.pem
- scheduler-key.pem
- controller.pem
- controller-key.pem
- proxy.pem
- proxy-key.pem
- ca-key.pem
Copy the following certificates onto each master server under
/root/.kube/:
- ca.pem
- admin.pem
- admin-key.pem
- Set up the admin user kubeconfig.
On each of the master nodes, customize the template below and copy to
/root/.kube/config.Set
server: https://HOST:PORTto the host name and port number of the load-balancer.apiVersion: v1 clusters: - cluster: certificate-authority: ca.pem server: https://HOST:PORT name: mycluster contexts: - context: cluster: mycluster user: admin name: mycontext current-context: mycontext kind: Config preferences: {} users: - name: admin user: client-certificate: admin.pem client-key: admin-key.pem - Set up API Server ABAC permissions.
On each of the master nodes, copy the ABAC file to
/etc/kubernetes/abac-auth.jsonl:{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:authenticated", "nonResourcePath": "*", "readonly": true } } {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:unauthenticated", "nonResourcePath": "*", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"controller","namespace": "*", "resource": "*", "apiGroup": "*" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:kube-system:default", "namespace":"*", "resource":"*", "apiGroup":"*"}} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:kube-system:readonly-addon", "namespace":"*", "resource":"*", "apiGroup":"*", "readonly": true}} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "services", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "endpoints", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "secrets", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "healthz", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "configmaps", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "persistentvolumes", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "persistentvolumeclaims", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "events" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "nodes" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "pods" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "nodes", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "pods", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "persistentvolumeclaims", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "persistentvolumes", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "replicationcontrollers", 
"readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "services", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "replicasets", "apiGroup": "*", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "endpoints" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "bindings" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "events" }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"proxy", "namespace": "*", "resource": "services", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"proxy", "namespace": "*", "resource": "endpoints", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"proxy", "namespace": "*", "resource": "nodes", "readonly": true }} {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"proxy", "namespace": "*", "resource": "events" }} - Create API Server kubelet.
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/manifests/apiserver.manifest.Set
service-cluster-ip-rangeto the service CIDR. Foretcd-servers, provide a comma-separated list ofHOST:PORTof the etcd servers.--- kind: Pod apiVersion: v1 metadata: name: kube-apiserver namespace: kube-system labels: tier: control-plane component: kube-apiserver spec: hostNetwork: true containers: - name: apiserver image: "gcr.io/google_containers/hyperkube:v1.5.3" resources: requests: cpu: 250m command: - /hyperkube - apiserver - "--advertise-address=INTERNAL_IP" - "--secure-port=6443" - "--insecure-port=0" - "--service-cluster-ip-range=SERVICE_CIDR" - "--etcd-servers=https://HOST_IP:2379,https://HOST_IP:2379" - "--etcd-quorum-read" - "--cert-dir=/etc/kubernetes/ssl" - "--allow-privileged=true" - "--anonymous-auth=false" - "--tls-ca-file=/etc/kubernetes/ssl/ca.pem" - "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem" - "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem" - "--etcd-cafile=/etc/kubernetes/ssl/ca.pem" - "--etcd-certfile=/etc/kubernetes/ssl/etcd-client.pem" - "--etcd-keyfile=/etc/kubernetes/ssl/etcd-client-key.pem" - "--client-ca-file=/etc/kubernetes/ssl/ca.pem" - "--kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem" - "--kubelet-client-certificate=/etc/kubernetes/ssl/apiserver.pem" - "--kubelet-client-key=/etc/kubernetes/ssl/apiserver-key.pem" - "--kubelet-https" - "--service-account-key-file=/etc/kubernetes/ssl/apiserver.pem" - "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota" - "--runtime-config=extensions/v1beta1=true,batch/v1=true,rbac.authorization.k8s.io/v1alpha1=true" - --authorization-mode=ABAC,RBAC - --authorization-policy-file=/etc/kubernetes/abac-auth.jsonl - -v=6 ports: - name: https hostPort: 6443 containerPort: 6443 volumeMounts: - name: etckubernetes mountPath: /etc/kubernetes readOnly: true volumes: - name: etckubernetes hostPath: path: /etc/kubernetes -
Set up kubeconfig for scheduler.
On each of the master and worker nodes, customize the template below and copy to
/var/lib/kube-scheduler/kubeconfig.Set
server: https://HOST:PORTto the host name and port number of the load-balancer.apiVersion: v1 clusters: - cluster: certificate-authority: /etc/kubernetes/ssl/ca.pem server: https://HOST:PORT name: mycluster contexts: - context: cluster: mycluster user: scheduler name: mycontext current-context: mycontext kind: Config preferences: {} users: - name: scheduler user: client-certificate: /etc/kubernetes/ssl/scheduler.pem client-key: /etc/kubernetes/ssl/scheduler-key.pem -
Set up manifest for scheduler.
On each of the master and worker nodes, customize the template below and copy to
/etc/kubernetes/manifests/scheduler.manifest.Set
master=https://HOST:PORTto the host name and port number of the load-balancer.--- kind: Pod apiVersion: v1 metadata: name: kube-scheduler namespace: kube-system labels: tier: control-plane component: kube-scheduler spec: hostNetwork: true containers: - name: kube-scheduler image: "gcr.io/google_containers/hyperkube:v1.5.3" command: - /hyperkube - scheduler - "--algorithm-provider=ClusterAutoscalerProvider" - "--kubeconfig=/var/lib/kube-scheduler/kubeconfig" - --master=https://HOST:PORT - "--leader-elect=true" livenessProbe: httpGet: scheme: HTTP host: 127.0.0.1 port: 10251 path: /healthz initialDelaySeconds: 15 timeoutSeconds: 15 volumeMounts: - name: kubeconfig mountPath: /var/lib/kube-scheduler/kubeconfig readOnly: true - name: etckubernetesssl mountPath: /etc/kubernetes/ssl readOnly: true volumes: - name: kubeconfig hostPath: path: /var/lib/kube-scheduler/kubeconfig - name: etckubernetesssl hostPath: path: /etc/kubernetes/ssl - Set up kubeconfig for controller.
On each of the master and worker nodes, customize the template below and copy to
/var/lib/kube-controller/kubeconfig.Set
server: https://HOST:PORTto the host name and port number of the load balancer.apiVersion: v1 clusters: - cluster: certificate-authority: /etc/kubernetes/ssl/ca.pem server: https://HOST:PORT name: mycluster contexts: - context: cluster: mycluster user: controller name: mycontext current-context: mycontext kind: Config preferences: {} users: - name: controller user: client-certificate: /etc/kubernetes/ssl/controller.pem client-key: /etc/kubernetes/ssl/controller-key.pem -
Set up manifest for controller.
On each of the master and worker nodes, customize the template below and copy to
/etc/kubernetes/manifests/controller.manifest.--- kind: Pod apiVersion: v1 metadata: name: kube-controller-manager namespace: kube-system labels: tier: control-plane component: kube-controller-manager spec: hostNetwork: true containers: - name: kube-controller-manager image: "gcr.io/google_containers/hyperkube:v1.5.3" resources: requests: cpu: 200m command: - /hyperkube - controller-manager - "--cluster-name=CLUSTER_NAME" - "--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem" - "--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem" # to add ca.crt to service accounts - "--root-ca-file=/etc/kubernetes/ssl/ca.pem" # to sign service account token - "--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem" - "--kubeconfig=/var/lib/kube-controller/kubeconfig" - --leader-elect=true - "--cluster-cidr=POD_NETWORK_CIDR" - "--node-cidr-mask-size=24" volumeMounts: - name: etckubernetes mountPath: /etc/kubernetes readOnly: true - name: kubeconfig mountPath: /var/lib/kube-controller/kubeconfig readOnly: true livenessProbe: httpGet: host: 127.0.0.1 port: 10252 path: /healthz initialDelaySeconds: 15 timeoutSeconds: 15 volumes: - name: kubeconfig hostPath: path: /var/lib/kube-controller/kubeconfig - name: etckubernetes hostPath: path: /etc/kubernetes
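Before starting the kubelet, you can quickly confirm that each master has the expected static pod manifests and kubeconfig files in place. A minimal check, assuming the paths used in this section:
ls -l /etc/kubernetes/manifests/
# Expect apiserver.manifest, scheduler.manifest, controller.manifest, and proxy.manifest
ls -l /var/lib/kube-scheduler/kubeconfig /var/lib/kube-controller/kubeconfig /root/.kube/config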
- Start kubelet on all nodes
- Enter these commands to start kubelet:
systemctl start docker
systemctl enable kubelet.service
systemctl start kubelet.service
- Check cluster status.
On one of the master nodes, enter the following command:
kubectl get nodes
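The nodes can take a minute or two to register. You can also check the health of the control-plane components and the static pods; for example:
kubectl get nodes -o wide
kubectl get componentstatuses
kubectl get pods --namespace=kube-system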
- Add critical addons
- On each master node, create the addons directory.
mkdir -p /etc/kubernetes/addons
- Create Addon-manager manifest.
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/addons/addon-manager.yaml:--- apiVersion: v1 kind: ServiceAccount metadata: namespace: kube-system name: readonly-addon --- apiVersion: v1 kind: Pod metadata: name: kube-addon-manager namespace: kube-system labels: component: kube-addon-manager spec: hostNetwork: true containers: - name: kube-addon-manager image: gcr.io/google-containers/kube-addon-manager:v6.4-beta.1 command: - /bin/bash - -c - /opt/kube-addons.sh 1>>/var/log/kube-addon-manager.log 2>&1 resources: requests: cpu: 5m memory: 50Mi volumeMounts: - mountPath: /etc/kubernetes/ name: addons readOnly: true - mountPath: /var/log name: varlog readOnly: false volumes: - hostPath: path: /etc/kubernetes/ name: addons - hostPath: path: /var/log name: varlog - Create DNS addon manifest.
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/addons/dns-addon.yaml.Set
DNS_SERVICE_IPto the IP address of the DNS service. This should match thecluster_dnssetting on the Kubelet services.--- apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system labels: addonmanager.kubernetes.io/mode: EnsureExists --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: DNS_SERVICE_IP ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: # replicas: not specified here: # 1. In order to make Addon Manager do not reconcile this replicas parameter. # 2. Default is 1. # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on. strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns annotations: scheduler.alpha.kubernetes.io/critical-pod: '' scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' spec: volumes: - name: kube-dns-config configMap: name: kube-dns containers: - name: kubedns image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: memory: 170Mi requests: cpu: 100m memory: 70Mi livenessProbe: httpGet: path: /healthcheck/kubedns port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 3 timeoutSeconds: 5 args: - --domain=cluster.local. 
- --dns-port=10053 - --config-dir=/kube-dns-config - --v=2 env: - name: PROMETHEUS_PORT value: "10055" ports: - containerPort: 10053 name: dns-local protocol: UDP - containerPort: 10053 name: dns-tcp-local protocol: TCP - containerPort: 10055 name: metrics protocol: TCP volumeMounts: - name: kube-dns-config mountPath: /kube-dns-config - name: dnsmasq image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1 livenessProbe: httpGet: path: /healthcheck/dnsmasq port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 args: - -v=2 - -logtostderr - -configDir=/etc/k8s/dns/dnsmasq-nanny - -restartDnsmasq=true - -- - -k - --cache-size=1000 - --log-facility=- - --server=/cluster.local/127.0.0.1#10053 - --server=/in-addr.arpa/127.0.0.1#10053 - --server=/ip6.arpa/127.0.0.1#10053 ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP # see: https://github.com/kubernetes/kubernetes/issues/29055 for details resources: requests: cpu: 150m memory: 20Mi volumeMounts: - name: kube-dns-config mountPath: /etc/k8s/dns/dnsmasq-nanny - name: sidecar image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 livenessProbe: httpGet: path: /metrics port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 args: - --v=2 - --logtostderr - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A ports: - containerPort: 10054 name: metrics protocol: TCP resources: requests: memory: 20Mi cpu: 10m dnsPolicy: Default # Don't use cluster DNS. serviceAccountName: readonly-addon - Create Dashboard addon manifest.
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/addons/dashboard.yaml:--- apiVersion: v1 kind: Service metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: selector: k8s-app: kubernetes-dashboard ports: - port: 80 targetPort: 9090 --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard annotations: scheduler.alpha.kubernetes.io/critical-pod: '' scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' spec: containers: - name: kubernetes-dashboard image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi ports: - containerPort: 9090 livenessProbe: httpGet: path: / port: 9090 initialDelaySeconds: 30 timeoutSeconds: 30 - Add Grafana monitoring addon manifest.
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/addons/grafana-influx-monitoring.yaml.--- apiVersion: v1 kind: Service metadata: name: monitoring-grafana namespace: kube-system labels: kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "Grafana" spec: # On production clusters, consider setting up auth for grafana, and # exposing Grafana either using a LoadBalancer or a public IP. # type: LoadBalancer ports: - port: 80 targetPort: 3000 selector: k8s-app: influxGrafana --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: heapster-v1.3.0-beta.1 namespace: kube-system labels: k8s-app: heapster kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile version: v1.3.0-beta.1 spec: replicas: 1 selector: matchLabels: k8s-app: heapster version: v1.3.0-beta.1 template: metadata: labels: k8s-app: heapster version: v1.3.0-beta.1 annotations: scheduler.alpha.kubernetes.io/critical-pod: '' scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' spec: containers: - image: gcr.io/google_containers/heapster-amd64:v1.3.0-beta.1 name: heapster livenessProbe: httpGet: path: /healthz port: 8082 scheme: HTTP initialDelaySeconds: 180 timeoutSeconds: 5 command: - /heapster - --source=kubernetes.summary_api:'' - --sink=influxdb:http://monitoring-influxdb:8086 - image: gcr.io/google_containers/heapster-amd64:v1.3.0-beta.1 name: eventer command: - /eventer - --source=kubernetes:'' - --sink=influxdb:http://monitoring-influxdb:8086 - image: gcr.io/google_containers/addon-resizer:1.7 name: heapster-nanny resources: limits: cpu: 50m memory: 90Mi requests: cpu: 50m memory: 90Mi env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace command: - /pod_nanny - --cpu=80m - --extra-cpu=0.5m - --memory=140Mi - --extra-memory=4Mi - --threshold=5 - --deployment=heapster-v1.3.0-beta.1 - --container=heapster - --poll-period=300000 - --estimator=exponential - image: gcr.io/google_containers/addon-resizer:1.7 name: eventer-nanny resources: limits: cpu: 50m memory: 90Mi requests: cpu: 50m memory: 90Mi env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace command: - /pod_nanny - --cpu=100m - --extra-cpu=0m - --memory=190Mi - --extra-memory=500Ki - --threshold=5 - --deployment=heapster-v1.3.0-beta.1 - --container=eventer - --poll-period=300000 - --estimator=exponential --- kind: Service apiVersion: v1 metadata: name: heapster namespace: kube-system labels: kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "Heapster" spec: ports: - port: 80 targetPort: 8082 selector: k8s-app: heapster --- apiVersion: v1 kind: ReplicationController metadata: name: monitoring-influxdb-grafana-v4 namespace: kube-system labels: k8s-app: influxGrafana version: v4 kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: replicas: 1 selector: k8s-app: influxGrafana version: v4 template: metadata: labels: k8s-app: influxGrafana version: v4 kubernetes.io/cluster-service: "true" spec: containers: - image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 name: influxdb resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 500Mi requests: cpu: 100m memory: 500Mi ports: - containerPort: 8083 - containerPort: 8086 volumeMounts: - name: 
influxdb-persistent-storage mountPath: /data - image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 name: grafana env: resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 100Mi requests: cpu: 100m memory: 100Mi env: # This variable is required to setup templates in Grafana. - name: INFLUXDB_SERVICE_URL value: http://monitoring-influxdb:8086 # The following env variables are required to make Grafana accessible via # the kubernetes api-server proxy. On production clusters, we recommend # removing these env variables, setup auth for grafana, and expose the grafana # service using a LoadBalancer or a public IP. - name: GF_AUTH_BASIC_ENABLED value: "false" - name: GF_AUTH_ANONYMOUS_ENABLED value: "true" - name: GF_AUTH_ANONYMOUS_ORG_ROLE value: Admin - name: GF_SERVER_ROOT_URL value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ volumeMounts: - name: grafana-persistent-storage mountPath: /var volumes: - name: influxdb-persistent-storage emptyDir: {} - name: grafana-persistent-storage emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: monitoring-influxdb namespace: kube-system labels: kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "InfluxDB" spec: ports: - name: http port: 8083 targetPort: 8083 - name: api port: 8086 targetPort: 8086 selector: k8s-app: influxGrafana - Add Docker registry addon manifest.
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/addons/registry.yaml. This creates a Docker registry where you can push images.Note: If you already have a Docker registry, you can skip this step.This registry is set up with an
empty-dirmount which means that data will be lost if the pod is restarted or moved. To persist data, you will need to back theimage-storevolume with a clustered filesystem.apiVersion: extensions/v1beta1 kind: ReplicaSet metadata: name: kube-registry-v0 namespace: kube-system spec: replicas: 1 selector: matchLabels: k8s-app: kube-registry version: v0 template: metadata: labels: k8s-app: kube-registry version: v0 kubernetes.io/cluster-service: "true" spec: containers: - name: registry image: registry:2 resources: limits: cpu: 100m memory: 100Mi env: - name: REGISTRY_HTTP_ADDR value: :5000 - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY value: /var/lib/registry volumeMounts: - name: image-store mountPath: /var/lib/registry ports: - containerPort: 5000 name: registry protocol: TCP volumes: - name: image-store # Update to volume claim emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: kube-registry namespace: kube-system labels: k8s-app: kube-registry kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeRegistry" spec: selector: k8s-app: kube-registry type: NodePort ports: - name: registry port: 5000 nodePort: 30000 protocol: TCP - Add an ingress controller manifest
On each of the master nodes, customize the template below and copy to
/etc/kubernetes/addons/nginx-ingress.yaml:apiVersion: extensions/v1beta1 kind: Deployment metadata: name: default-http-backend labels: k8s-app: default-http-backend namespace: kube-system spec: replicas: 1 template: metadata: labels: k8s-app: default-http-backend spec: terminationGracePeriodSeconds: 60 containers: - name: default-http-backend # Any image is permissable as long as: # 1. It serves a 404 page at / # 2. It serves 200 on a /healthz endpoint image: gcr.io/google_containers/defaultbackend:1.0 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: Service metadata: name: default-http-backend namespace: kube-system labels: k8s-app: default-http-backend spec: ports: - port: 80 targetPort: 8080 selector: k8s-app: default-http-backend --- apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: nginx-ingress-controller labels: k8s-app: nginx-ingress-controller namespace: kube-system spec: template: metadata: labels: k8s-app: nginx-ingress-controller spec: # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used # like with kubeadm hostNetwork: true terminationGracePeriodSeconds: 60 containers: - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2 name: nginx-ingress-controller readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 1 ports: - containerPort: 80 hostPort: 80 - containerPort: 443 hostPort: 443 env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - Create the addons.
On one of the master nodes, use kubectl to create the addons by entering the following commands:
/usr/bin/kubectl apply -f /etc/kubernetes/addons/addon-manager.yaml
/usr/bin/kubectl apply -f /etc/kubernetes/addons/dns-addon.yaml
/usr/bin/kubectl apply -f /etc/kubernetes/addons/dashboard.yaml
/usr/bin/kubectl apply -f /etc/kubernetes/addons/grafana-influx-monitoring.yaml
/usr/bin/kubectl apply -f /etc/kubernetes/addons/registry.yaml
/usr/bin/kubectl apply -f /etc/kubernetes/addons/nginx-ingress.yaml
- Get cluster status.
Run the following command on a master node to see that all the addons have been created:
kubectl cluster-info
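You can also list the pods and services in the kube-system namespace to confirm that the DNS, dashboard, monitoring, registry, and ingress addons are running; for example:
kubectl get pods --namespace=kube-system -o wide
kubectl get services --namespace=kube-system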