Installing and configuring a 3-node Kubernetes (master) cluster on CentOS 7.5 – Adding Kubernetes worker nodes to the cluster
Below I am continuing with the Kubernetes cluster setup. In this post we are going to add and configure the worker nodes. Please check out the full series to see how to configure a 3-node Kubernetes master cluster; the links are below.
- Part 1: Initial setup – bare-metal installation and configuration
- Part 2: Installing the Kubernetes VMs
- Part 3: Installing and configuring Flanneld, CNI plugin and Docker
- Part 4: Installing and configuring the Kubernetes manifests and kubelet service
- Part 5: Adding CoreDNS as part of the Kubernetes cluster
- Part 6: Adding / Configuring Kubernetes worker nodes
- Part 7: Enabling / Configuring RBAC, TLS Node bootstrapping
- Part 8: Installing / Configuring Helm, Prometheus, Alertmanager, Grafana and Elasticsearch
This is Part 6 – Adding / Configuring Kubernetes worker nodes.
Preparing Kubernetes worker nodes
Below is a quick recap of the hostnames / IP addresses I am using for the master and worker nodes:
- Masters: kmaster1 (172.20.0.11), kmaster2 (172.20.0.12), kmaster3 (172.20.0.13)
- Workers: knode1 (172.20.0.51), knode2 (172.20.0.52)
First we need to prepare each worker node (or VM), similar to the master nodes but with a few exceptions. Configure each VM with the resources below.
- 1 virtual CPU is fine.
- At least 2 GB of RAM.
- At least 12 GB of disk (HDD).
- Assign a network switch to the network port.
Set the following on each Kubernetes VM (masters and workers). Disable SELinux by running the below.
setenforce 0
# Make the change persistent across reboots
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Disable swap
swapoff -a
# Make the change persistent across reboots by commenting out the swap entry in /etc/fstab:
#/dev/mapper/centos-swap swap swap defaults 0 0
If you are behind a firewall or corporate proxy, add your proxy to /etc/yum.conf and /etc/environment (for an example, check out Part 1). Next, allow bridged traffic to pass through iptables and enable IP forwarding:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo '1' > /proc/sys/net/ipv4/ip_forward
# Run sysctl to verify
sysctl -p
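These echo commands only apply until the next reboot. If you want the settings to survive a reboot as well, one option (not part of the original steps; the file name below is my choice) is to drop them into a file under /etc/sysctl.d:

# Persist the bridge and forwarding settings used above
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Load every persistent sysctl file
sysctl --system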
Install Docker packages
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
Add the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Kubernetes and other related packages
yum install -y kubelet kubeadm kubectl flannel epel-release conntrack jq vim bash-completion lsof screen git net-tools
yum update -y
Since we are not going to use kubeadm for our configuration, comment out all entries in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as in the example below.
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
#[Service]
#Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
#Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
#EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
#ExecStart=
#ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
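If you prefer not to edit the drop-in by hand, here is a quick sketch that comments out every active line; review the result against the example above before relying on it.

# Prefix any line that does not already start with '#' with a comment marker
sed -i 's/^\([^#]\)/#\1/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload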
Disable the system firewall by running the below; the firewall rules will be partially managed by Kubernetes.
systemctl disable firewalld.service
systemctl stop firewalld
Note: There are other options for dealing with firewall rules, for example only opening the ports required by Kubernetes. Lastly, before continuing, reboot each VM instance. Note: etcd should NOT be configured on the worker nodes; however, /etc/kubernetes/ssl should be copied to every worker node.
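If you have root SSH access from a master to the workers, a minimal sketch for copying the certificates is below (knode1 and knode2 are the worker hostnames used in this series; adjust to yours).

# Run from one of the master nodes
for node in knode1 knode2; do
  ssh root@"${node}" "mkdir -p /etc/kubernetes/ssl"
  scp /etc/kubernetes/ssl/*.pem root@"${node}":/etc/kubernetes/ssl/
done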
Worker Node Flannel configuration
Tip: Similar to the master nodes, you need to configure flannel; however, on a worker the flanneld service does not depend on a local etcd unit, since no etcd service runs on the worker nodes (flanneld talks to the masters' etcd instead). The first thing we are going to do is grab the latest flanneld binary; you do so by running something like the below.
curl -L -o flannel-v0.10.0-linux-amd64.tar.gz https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar zxf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld /usr/bin/flanneld

/usr/bin/flanneld -version
v0.10.0
Note: For a list of the latest flanneld versions, check the flannel releases page on GitHub. Make sure VXLAN is enabled on your system by running the below. Note: Flannel uses VXLAN as the encapsulation protocol.
cat /boot/config-`uname -r` | grep CONFIG_VXLAN
CONFIG_VXLAN=m
Next, let's create the flanneld service unit file.
cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
Next, modify /etc/sysconfig/flanneld to something like the below (change the IP address, i.e. the -public-ip value, for each worker node).
cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
#FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
FLANNEL_ETCD_ENDPOINTS="https://172.20.0.11:2379,https://172.20.0.12:2379,https://172.20.0.13:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
#FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd-node.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-node-key.pem -iface=enp0s3 -public-ip=172.20.0.51 -ip-masq=true"
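Before starting flanneld, it can help to confirm the worker can actually reach the masters' etcd over TLS. A hedged check using the same certificates and one of the endpoints from the options above (adjust if your layout differs):

curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/etcd-node.pem \
     --key /etc/kubernetes/ssl/etcd-node-key.pem \
     https://172.20.0.11:2379/health
# A healthy member should return something like {"health":"true"}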
Flannel CNI configuration
We are now going to add the CNI configuration. First, let's download the latest CNI plugins; you do so by running the below.
mkdir -p /opt/cni/bin && cd /opt/cni/bin
curl -L -o cni-amd64-v0.6.0.tgz https://github.com/containernetworking/cni/releases/download/v0.6.0/cni-amd64-v0.6.0.tgz
tar zxf cni-amd64-v0.6.0.tgz
Note: You can find the latest CNI releases on the containernetworking GitHub releases page. Next, let's create the CNI configuration directories.
mkdir -p /etc/kubernetes/cni/net.d /etc/cni /etc/docker
/usr/bin/ln -sf /etc/kubernetes/cni/net.d /etc/cni/net.d
Create the CNI network configuration file.
cat /etc/kubernetes/cni/net.d/10-containernet.conf
{
  "name": "podnet",
  "type": "flannel",
  "delegate": {
    "forceAddress": true,
    "isDefaultGateway": true,
    "hairpinMode": true
  }
}
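Since jq was installed earlier with the other utility packages, you can optionally use it to confirm the file is valid JSON:

# Pretty-print (and implicitly syntax-check) the CNI configuration
jq . /etc/kubernetes/cni/net.d/10-containernet.conf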
We are now ready to start flannel; you do so by running the below.
# Show flanneld log/output
journalctl -u flanneld -f &
# Re-load systemd
systemctl daemon-reload
# Enable and start the flanneld service
systemctl enable flanneld && systemctl start flanneld
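Once flanneld is up, a few optional sanity checks (the file locations follow the unit file above; the exact output will vary per node):

# Subnet leased to this node by flannel
cat /run/flannel/subnet.env
# Docker options generated by mk-docker-opts.sh
cat /run/flannel/docker
# The VXLAN interface created by flanneld
ip -d link show flannel.1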
Configuring the docker service(s).
Modify the /usr/lib/systemd/system/docker.service unit file as shown below.
# Change this line
# from:
After=network-online.target firewalld.service
# to:
After=network-online.target flanneld.service

# Then, under the [Service] section, add the flannel environment file:
[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
...
Create a Docker socket unit file.
cat /etc/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
Create the Docker daemon configuration file /etc/docker/daemon.json with the content below (replace the IP with the one assigned to each node). Note: The example below is taken from the first worker node (knode1).
cat /etc/docker/daemon.json
{
  "bip": "172.30.0.51/20"
}
Now let's start the Docker service.
systemctl daemon-reload
# Pre docker service start
systemctl enable docker.socket && systemctl start docker.socket
journalctl -u docker -f &
systemctl enable docker && systemctl start docker
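A couple of optional checks that Docker came up with the expected networking (the bridge address should match the bip set in /etc/docker/daemon.json):

docker info | grep -i version
ip addr show docker0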
Creating the Kubernetes worker manifests
Unlike the masters, a Kubernetes worker node runs only one control-plane component (as a static pod), defined by a proxy YAML manifest like the one below.
- Kube Proxy
Now, let's create the Kubernetes proxy manifest file; just create the file below in the /etc/kubernetes/manifests/ directory. Note: Replace --hostname-override with each worker node's IP address.
cat manifests/kube-proxy-work.yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    k8s-app: kube-proxy
    tier: node
  name: kube-proxy
  namespace: kube-system
spec:
  containers:
  - command:
    - ./hyperkube
    - proxy
    - "--master=https://172.20.0.11:443"
    - "--kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml"
    - "--logtostderr=true"
    - "--proxy-mode=iptables"
    - "--hostname-override=172.20.0.51"
    - "--cluster-cidr=10.20.0.0/20"
    - "--v=3"
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    image: "quay.io/coreos/hyperkube:v1.8.0_coreos.0"
    name: kube-proxy
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "kube-ssl"
      readOnly: true
  hostNetwork: true
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: "/etc/kubernetes/ssl"
    name: "kube-ssl"
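If you roll this manifest out to several workers, you can stamp each node's own address into it instead of editing by hand. This is just a convenience sketch, not part of the original write-up; it assumes enp0s3 is the node's primary interface (the same one used in the flannel options) and that the manifest lives in /etc/kubernetes/manifests/:

# Derive this node's IPv4 address and substitute it into the kube-proxy manifest
NODE_IP=$(ip -4 addr show enp0s3 | awk '/inet /{sub(/\/.*/,"",$2); print $2}')
sed -i "s#--hostname-override=[0-9.]*#--hostname-override=${NODE_IP}#" /etc/kubernetes/manifests/kube-proxy-work.yaml
grep hostname-override /etc/kubernetes/manifests/kube-proxy-work.yaml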
Create your kubeconfig.yaml (this defines how the node authenticates to the API server).
cat /etc/kubernetes/ssl/kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://172.20.0.11:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/etcd-node.pem
    client-key: /etc/kubernetes/ssl/etcd-node-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
Create the config.yaml file; it contains additional kubelet configuration.
cat /etc/kubernetes/config.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.3.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
tlsCertFile: "/etc/kubernetes/ssl/etcd-node.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/etcd-node-key.pem"
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Finally, create your kubelet service file. Note: Set --hostname-override to each worker node's name; the example below uses knode1.
cat /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet \
  --register-node=true \
  --allow-privileged \
  --hostname-override=knode1 \
  --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml \
  --config=/etc/kubernetes/config.yaml \
  --network-plugin=cni \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --logtostderr=true \
  --v=2
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
Create the kubelet directory
mkdir /var/lib/kubelet
We are finally ready to start the kubelet service.
systemctl daemon-reload
journalctl -u kubelet -f &
systemctl enable kubelet && systemctl start kubelet
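Once the kubelet has registered, the new worker should show up in the node list. Run this from a master node (or anywhere with a working admin kubeconfig):

kubectl get nodes -o wide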
To verify your pods are running and working, run the command below. If all is working properly, you should see something like the following output.
kubectl get all --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE
kube-system   pod/coredns-58c8c868b7-84d5q               1/1     Running   0          4d    10.20.2.31    kmaster2
kube-system   pod/coredns-58c8c868b7-jkg4h               1/1     Running   0          4d    10.20.3.41    kmaster1
kube-system   pod/kube-apiserver-kmaster1                1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-apiserver-kmaster2                1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-apiserver-kmaster3                1/1     Running   11         11d   172.20.0.13   kmaster3
kube-system   pod/kube-controller-manager-kmaster1       1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-controller-manager-kmaster2       1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-controller-manager-kmaster3       1/1     Running   11         11d   172.20.0.13   kmaster3
kube-system   pod/kube-proxy-kmaster1                    1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-proxy-kmaster2                    1/1     Running   5          7d    172.20.0.12   kmaster2
kube-system   pod/kube-proxy-kmaster3                    1/1     Running   6          8d    172.20.0.13   kmaster3
kube-system   pod/kube-proxy-knode1                      1/1     Running   0          12m   172.20.0.51   knode1
kube-system   pod/kube-proxy-knode2                      1/1     Running   0          24s   172.20.0.52   knode2
kube-system   pod/kube-scheduler-kmaster1                1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-scheduler-kmaster2                1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-scheduler-kmaster3                1/1     Running   11         11d   172.20.0.13   kmaster3

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.3.0.1     <none>        443/TCP         12d   <none>
kube-system   service/kube-dns     ClusterIP   10.3.0.10    <none>        53/UDP,53/TCP   4d    k8s-app=kube-dns

NAMESPACE     NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
kube-system   deployment.apps/coredns   2         2         2            2           4d    coredns      coredns/coredns:1.2.0   k8s-app=kube-dns

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                  SELECTOR
kube-system   replicaset.apps/coredns-58c8c868b7   2         2         2       4d    coredns      coredns/coredns:1.2.0
To test the worker nodes, deploy the Nginx application using the nginx.yaml file below.
cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment2
spec:
  selector:
    matchLabels:
      app: nginx-dev
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-dev
    spec:
      containers:
      - name: nginx-dev
        image: nginx:latest
        ports:
        - containerPort: 80
Just run the below to create the nginx pods.
kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment2 created
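If you only want to watch the new pods land on the worker nodes (rather than the full listing below), a narrower optional check is:

kubectl get pods -l app=nginx-dev -o wide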
Verify the nginx deployment; the output should be similar to the below.
kubectl get all --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE
default       pod/alpine                                 1/1     Running   100        4d    10.20.9.22    kmaster3
default       pod/nginx-deployment-67594d6bf6-wlmvh      1/1     Running   1          5d    10.20.9.20    kmaster3
default       pod/nginx-deployment-67594d6bf6-zh6px      1/1     Running   1          5d    10.20.2.26    kmaster2
default       pod/nginx-deployment2-7ccf4bdf58-glrsr     1/1     Running   0          10s   10.20.14.2    knode2
default       pod/nginx-deployment2-7ccf4bdf58-zz6qz     1/1     Running   0          10s   10.20.10.2    knode1
kube-system   pod/coredns-58c8c868b7-84d5q               1/1     Running   0          4d    10.20.2.31    kmaster2
kube-system   pod/coredns-58c8c868b7-jkg4h               1/1     Running   0          4d    10.20.3.41    kmaster1
kube-system   pod/kube-apiserver-kmaster1                1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-apiserver-kmaster2                1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-apiserver-kmaster3                1/1     Running   11         11d   172.20.0.13   kmaster3
kube-system   pod/kube-controller-manager-kmaster1       1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-controller-manager-kmaster2       1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-controller-manager-kmaster3       1/1     Running   11         11d   172.20.0.13   kmaster3
kube-system   pod/kube-proxy-kmaster1                    1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-proxy-kmaster2                    1/1     Running   5          7d    172.20.0.12   kmaster2
kube-system   pod/kube-proxy-kmaster3                    1/1     Running   6          8d    172.20.0.13   kmaster3
kube-system   pod/kube-proxy-knode1                      1/1     Running   0          49m   172.20.0.51   knode1
kube-system   pod/kube-proxy-knode2                      1/1     Running   0          37m   172.20.0.52   knode2
kube-system   pod/kube-scheduler-kmaster1                1/1     Running   6          5d    172.20.0.11   kmaster1
kube-system   pod/kube-scheduler-kmaster2                1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-scheduler-kmaster3                1/1     Running   11         11d   172.20.0.13   kmaster3

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.3.0.1     <none>        443/TCP         12d   <none>
kube-system   service/kube-dns     ClusterIP   10.3.0.10    <none>        53/UDP,53/TCP   4d    k8s-app=kube-dns

NAMESPACE     NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
default       deployment.apps/nginx-deployment    2         2         2            2           5d    nginx        nginx:1.7.9             app=nginx
default       deployment.apps/nginx-deployment2   2         2         2            2           10s   nginx-dev    nginx:latest            app=nginx-dev
kube-system   deployment.apps/coredns             2         2         2            2           4d    coredns      coredns/coredns:1.2.0   k8s-app=kube-dns

NAMESPACE     NAME                                           DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                  SELECTOR
default       replicaset.apps/nginx-deployment-67594d6bf6    2         2         2       5d    nginx        nginx:1.7.9             app=nginx,pod-template-hash=2315082692
default       replicaset.apps/nginx-deployment2-7ccf4bdf58   2         2         2       10s   nginx-dev    nginx:latest            app=nginx-dev,pod-template-hash=3779068914
kube-system   replicaset.apps/coredns-58c8c868b7             2         2         2       4d    coredns      coredns/coredns:1.2.0   k8s-app=kube-dns,pod-template-hash=1474742463
Test the Nginx application by running curl against one of the pod IP addresses; you should get the default nginx welcome page back.
curl 10.20.14.2
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
This concludes the CentOS Kubernetes master and worker node setup portion of the series. In the next post, Part 7, I am going to show you how to enable RBAC in your Kubernetes cluster as well as TLS node bootstrapping. You might also like other related articles on Docker, Kubernetes, and microservices. Like what you're reading? Please provide feedback; any feedback is appreciated.