Installing and configuring a 3-node Kubernetes (master) cluster on CentOS 7.5 – configuring the Kubernetes VMs
In Part 1 I described how to install and configure the bare-metal OS hosting all the Kubernetes VMs; below I continue with the installation and configuration of all the Kubernetes master VMs.
- Part 1: Initial setup – bare-metal installation and configuration
- Part 2: Installing the Kubernetes VMs
- Part 3: Installing and configuring Flanneld, CNI plugin and Docker
- Part 4: Installing and configuring kubernetes manifest and kubelet service
- Part 5: Adding CoreDNS as part of the Kubernetes cluster
- Part 6: Adding / Configuring Kubernetes worker nodes
- Part 7: Enabling / Configuring RBAC, TLS Node bootstrapping
- Part 8: Installing / Configuring Helm, Prometheus, Alertmanager, Grafana and Elasticsearch
Installing the Kubernetes VMs
- Install CentOS 7.5 (1804) (CentOS-7-x86_64-Minimal-1804.iso) on 5 VirtualBox VMs.
- 3 VMs will be configured as masters, and 2 will be configured as worker nodes.
- 1 virtual CPU is fine.
- At least 2 GB of RAM.
- At least 12 GB of disk.
- Attach each VM's network adapter to the virtual network switch.
- The host names and IP addresses used in my configuration are below (feel free to come up with your own naming scheme).
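For reference, here is the host name / IP address mapping implied by the etcd and certificate configuration used later in this guide (the master addresses appear in the etcd unit files; the worker addresses are taken from the certificate SANs, so the exact pairing is an assumption; adjust to your own network):
kmaster1 - 172.20.0.11 (master)
kmaster2 - 172.20.0.12 (master)
kmaster3 - 172.20.0.13 (master)
knode1 - 172.20.0.51 (worker)
knode2 - 172.20.0.52 (worker)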
Configure each VM as follows. First, disable SELinux:
setenforce 0
# Save the config after a reboot
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
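To confirm the change for the current session, getenforce should now report Permissive (after the next reboot it will report Disabled):
getenforce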
Disable swap:
swapoff -a
# Persist across reboots by commenting out the swap entry in /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
If you are behind a firewall or corporate proxy, add your proxy to /etc/yum.conf and /etc/environment (for an example, check out Part 1).
Enable bridge netfilter and IP forwarding:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo '1' > /proc/sys/net/ipv4/ip_forward
# Run sysctl to verify
sysctl -p
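Optionally, confirm the module actually loaded:
lsmod | grep br_netfilter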
Install the Docker packages:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
Add the Kubernetes repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Kubernetes and other related packages:
yum install -y kubelet kubeadm kubectl flannel epel-release conntrack jq vim bash-completion lsof screen git net-tools && yum update -y
Since we are not going to use kubeadm for our configuration, comment out all entries in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as in the example below.
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
#[Service]
#Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
#Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
#EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
#ExecStart=
#ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
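If you prefer not to edit the drop-in by hand, a one-liner along these lines should comment out every active line (a convenience sketch, not part of the original write-up):
sed -i 's/^[^#]/#&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf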
Disable the system firewall by running the below; it will be partially managed by Kubernetes.
systemctl disable firewalld.service
systemctl stop firewalld
Note: There are other options for dealing with firewall rules, for example only opening the ports required by Kubernetes, as sketched below.
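A rough firewalld equivalent for the master nodes could look like the below; the port numbers are the usual upstream defaults (etcd, kube-apiserver, kubelet/controller-manager/scheduler, flannel VXLAN) and are an assumption, so verify them against your final configuration:
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer traffic
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, controller-manager, scheduler
firewall-cmd --permanent --add-port=8472/udp        # flannel VXLAN overlay
firewall-cmd --reload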
Lastly, before continuing, reboot each VM instance.
Configuring etcd – Kubernetes SSL certificates
For all Kubernetes communication to be secure, we need to create certificates. Note: To keep things as simple as possible, I will be using the same certificate/key pair for all components; in production you might consider using separate certificates per component.
Creating Kubernetes SSL certificates
Create a cert.conf file with the content below. Add other host names / IP addresses as needed, and make sure to replace domain.com with your own domain name.
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = v3_req
x509_extensions = v3_ca

[ dn ]
C = US
ST = NY
L = New York
O = Company1
OU = Ops
CN = etcd-node

[ v3_ca ]
keyUsage = critical, keyCertSign, cRLSign
basicConstraints = critical, CA:TRUE
subjectKeyIdentifier = hash

[ v3_req ]
keyUsage = critical, digitalSignature, keyEncipherment, nonRepudiation
extendedKeyUsage = clientAuth, serverAuth
basicConstraints = critical, CA:FALSE
subjectKeyIdentifier = hash
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = kube-apiserver
DNS.6 = kube-admin
DNS.7 = localhost
DNS.8 = domain.com
DNS.9 = kmaster1
DNS.10 = kmaster2
DNS.11 = kmaster3
DNS.12 = kmaster1.local
DNS.13 = kmaster2.local
DNS.14 = kmaster3.local
DNS.15 = kmaster1.domain.com
DNS.16 = kmaster2.domain.com
DNS.17 = kmaster3.domain.com
DNS.18 = knode1
DNS.19 = knode2
DNS.20 = knode3
DNS.21 = knode1.domain.com
DNS.22 = knode2.domain.com
DNS.23 = knode3.domain.com
IP.1 = 127.0.0.1
IP.2 = 0.0.0.0
IP.3 = 10.3.0.1
IP.4 = 10.3.0.10
IP.5 = 10.3.0.50
IP.6 = 172.20.0.1
IP.7 = 172.20.0.2
IP.8 = 172.20.0.11
IP.9 = 172.20.0.12
IP.10 = 172.20.0.13
IP.11 = 172.20.0.51
IP.12 = 172.20.0.52
IP.13 = 172.20.0.53
email = admin@example.com
Next, copy the below into a file called run.sh in the same directory as the cert.conf file.
# Generate the CA private key
openssl genrsa -out ca-key.pem 2048
sed -i 's/^CN.*/CN = Etcd/g' cert.conf
# Generate the CA certificate
openssl req -x509 -new -extensions v3_ca -key ca-key.pem -days 3650 \
  -out ca.pem \
  -subj '/C=US/ST=New York/L=New York/O=example.com/CN=Etcd' \
  -config cert.conf
# Generate the server/client private key
openssl genrsa -out etcd-node-key.pem 2048
sed -i 's/^CN.*/CN = etcd-node/g' cert.conf
# Generate the server/client certificate request
openssl req -new -key etcd-node-key.pem \
  -newkey rsa:2048 -nodes -config cert.conf \
  -subj '/C=US/ST=New York/L=New York/O=example.com/CN=etcd-node' \
  -outform pem -out etcd-node-req.pem -keyout etcd-node-req.key
# Sign the server/client certificate request
openssl x509 -req -in etcd-node-req.pem \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out etcd-node.pem -days 3650 -extensions v3_req -extfile cert.conf
Next, make the file executable and run it to create the certificates. The output should look something like the below.
chmod +x run.sh
./run.sh
Generating RSA private key, 2048 bit long modulus
......................+++
................................................+++
e is 65537 (0x10001)
Generating RSA private key, 2048 bit long modulus
..................................+++
........................................................................+++
e is 65537 (0x10001)
Signature ok
subject=/C=US/ST=NY/L=New York/O=Company1/OU=Ops/CN=etcd-node
Getting CA Private Key
To verify the certificate, run the below.
openssl x509 -in etcd-node.pem -text -noout
Once completed you should be left with the below list of files.
ca-key.pem ca.pem ca.srl cert.conf etcd-node-key.pem etcd-node-req.pem etcd-node.pem
From the list of files above we only need ca.pem, ca-key.pem, etcd-node.pem and etcd-node-key.pem. Create the directory below if it does not exist, and copy the certificate files into it.
mkdir -p /etc/kubernetes/ssl
ln -s /etc/kubernetes/ssl /etc/kubernetes/pki
cp ca.pem ca-key.pem etcd-node.pem etcd-node-key.pem /etc/kubernetes/ssl
Note: I named the most-used certificate etcd-node(-key).pem since it is the first SSL certificate being used; feel free to rename it (just remember to update all references). The /etc/kubernetes/pki symlink is created because pki is the default directory most Kubernetes documentation references, while this guide keeps the files under ssl; both paths point to the same certificates.
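Optionally, sanity-check that the signed certificate chains back to the CA (paths assume the ssl directory created above); it should print "/etc/kubernetes/ssl/etcd-node.pem: OK":
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/etcd-node.pem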
Etcd installation and configuration
Add the below to /etc/environment:
ETCDCTL_SSL_DIR=/etc/kubernetes/ssl
ETCD_SSL_DIR=/etc/kubernetes/ssl
ETCDCTL_CA_FILE=/etc/kubernetes/ssl/ca.pem
ETCDCTL_CERT_FILE=/etc/kubernetes/ssl/etcd-node.pem
ETCDCTL_KEY_FILE=/etc/kubernetes/ssl/etcd-node-key.pem
ETCDCTL_ENDPOINTS="https://172.20.0.11:2379,https://172.20.0.12:2379,https://172.20.0.13:2379"
ETCD_ENDPOINTS="https://172.20.0.11:2379,https://172.20.0.12:2379,https://172.20.0.13:2379"
FLANNELD_ETCD_ENDPOINTS="https://172.20.0.11:2379,https://172.20.0.12:2379,https://172.20.0.13:2379"
FLANNELD_IFACE="enp0s3"
FLANNELD_ETCD_PREFIX="/coreos.com/network"
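Note that /etc/environment is only read at login, so the ETCDCTL_* variables will not be visible in an already-open shell. One way to pick them up in the current session (these simple KEY=value lines happen to be valid shell syntax, so sourcing works; this is a convenience, not part of the original steps):
set -a; source /etc/environment; set +a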
Environment="HTTP_PROXY=http://your_proxy_ip_:1234/" Environment="HTTPS_PROXY=http://your_proxy_ip_:1234/" Environment="NO_PROXY=.domain.com,127.0.0.1,localhost,kmaster1,kmaster1.domain.com,kmaster2,kmaster2.domain.com,kmaster3,kmaster3.domain.com,172.20.0.12,172.20.0.11,172.20.0.13"
Install etcd on the 3 master servers
Download the latest etcd release and install the binaries:
mkdir -p /var/lib/etcd
curl -# -LO https://github.com/coreos/etcd/releases/download/v3.3.8/etcd-v3.3.8-linux-amd64.tar.gz
tar xf etcd-v3.3.8-linux-amd64.tar.gz
chown -Rh root:root etcd-v3.3.8-linux-amd64
find etcd-v3.3.8-linux-amd64 -xdev -type f -exec chmod 0755 '{}' \;
cp etcd-v3.3.8-linux-amd64/etcd* /usr/bin/
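A quick check that the binaries landed in /usr/bin and report the expected release:
etcd --version
etcdctl --version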
Next, create the etcd service, /etc/systemd/system/etcd.service, like the below. Note: On each master, replace the below values with that host's proper name/IP (a substitution sketch follows the unit file):
- --name=kmaster1
- --listen-peer-urls=https://172.20.0.11:2380
- --advertise-client-urls=https://172.20.0.11:2379
- --initial-advertise-peer-urls=https://172.20.0.11:2380
- --listen-client-urls=https://172.20.0.11:2379,https://127.0.0.1:2379,https://127.0.0.1:4001
[Unit]
Description=Etcd Server
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/etcd \
  --name=kmaster1 \
  --cert-file=/etc/kubernetes/pki/etcd-node.pem \
  --key-file=/etc/kubernetes/pki/etcd-node-key.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd-node.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd-node-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --initial-advertise-peer-urls=https://172.20.0.11:2380 \
  --listen-peer-urls=https://172.20.0.11:2380 \
  --listen-client-urls=https://172.20.0.11:2379,https://127.0.0.1:2379,https://127.0.0.1:4001 \
  --advertise-client-urls=https://172.20.0.11:2379 \
  --initial-cluster-token=etcd-token \
  --initial-cluster=kmaster1=https://172.20.0.11:2380,kmaster2=https://172.20.0.12:2380,kmaster3=https://172.20.0.13:2380 \
  --data-dir=/var/lib/etcd \
  --initial-cluster-state=new
Restart=always
RestartSec=5s
User=etcd

[Install]
WantedBy=multi-user.target
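The unit above is written for kmaster1. On kmaster2 and kmaster3 only the --name value and the local IP in the listen/advertise URLs change; the --initial-cluster line must keep all three members on every node. As a convenience, a sketch like the below (shown for kmaster2, deliberately skipping the --initial-cluster line) should do the substitution; treat it as an assumption and double-check the result:
sed -i \
  -e 's/--name=kmaster1/--name=kmaster2/' \
  -e '/--initial-cluster=/!s/172\.20\.0\.11/172.20.0.12/g' \
  /etc/systemd/system/etcd.service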
Create an etcd user:
useradd etcd
chown -R etcd:etcd /var/lib/etcd
Add the network (bridge/sysctl) configuration on all 3 masters by running the below.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
# Reload the sysctl stack
sysctl --system
Start and enable etcd on all 3 masters:
systemctl enable etcd && systemctl start etcd
# Verify that etcd started
systemctl status etcd
# or
journalctl -u etcd
Verify that etcd works across all nodes.
etcdctl member list
7c8d40e4de52c1b9: name=kmaster2 peerURLs=https://172.20.0.12:2380 clientURLs=https://172.20.0.12:2379 isLeader=false
96e236888999c3ca: name=kmaster3 peerURLs=https://172.20.0.13:2380 clientURLs=https://172.20.0.13:2379 isLeader=true
9e191dbdf076e744: name=kmaster1 peerURLs=https://172.20.0.11:2380 clientURLs=https://172.20.0.11:2379 isLeader=false
Note: isLeader=true is assigned automatically based on the most recent etcd leader election (in the output above it is kmaster3).
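For a per-member health summary you can also run the below; it relies on the ETCDCTL_* certificate and endpoint variables set in /etc/environment earlier, and each member should report as healthy:
etcdctl cluster-health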
Create the flannel network key in etcd by running the below.
/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.20.0.0/20", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }'
Verify that the key got created.
etcdctl get /coreos.com/network/config
{ "Network": "10.20.0.0/20", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }
In Part 3 I will continue with configuring Flannel and Docker. You might also like the other related articles on Docker, Kubernetes and micro-services. Like what you're reading? Please provide feedback; any feedback is appreciated.
Hello Eli, I have installed the latest version of etcd (v3.3.12) and configured kmaster1 as you described, but when starting it via systemctl I get "etcd.service: main process exited, code=exited, status=1/FAILURE". However, I can start etcd successfully from the command line. Output of journalctl -u etcd.service:
Apr 04 06:45:10 localhost.localdomain systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Apr 04 06:45:10 localhost.localdomain systemd[1]: Unit etcd.service entered failed state.
Apr 04 06:45:10 localhost.localdomain systemd[1]: etcd.service failed.
Apr 04 06:45:16 localhost.localdomain systemd[1]: etcd.service holdoff time over, scheduling restart.
Apr 04 06:45:16 localhost.localdomain systemd[1]: Stopped Etcd Server.
Apr 04 06:45:16 localhost.localdomain systemd[1]: Started Etcd Server.… Read more »
Hi, I am not sure what the issue might be, but in many cases you get an error like this if the "/var/lib/etcd" directory doesn't exist or has the wrong permissions. Please make sure you create (or have) the user/group etcd, and set ownership to etcd:etcd on /var/lib/etcd. If you first ran etcd as root, then depending on the options you specified this can later cause issues when running as the etcd user. For example, if you used /var/lib/etcd as the data-dir as the root user, the files will now be owned by root, and the next time you start with… Read more »
I do appreciate your fast reply and details 🙂
I have already checked the file owner and it's "etcd".
However, I fixed the problem on "kmaster1" by changing the path for "--cert*", "--key*" and "--peer*" in the "/etc/systemd/system/etcd.service" file from "pki" to "ssl".
I don't know the reason behind using the "pki" path, because we added the SSL certificate files under "/etc/kubernetes/ssl/*". Maybe it's a typo, or there is a reason you set the path to "/etc/kubernetes/pki/*".
However, I have a new problem with kmaster2 and kmaster3:
listen tcp 172.20.0.11:2379: bind: cannot assign requested address
Please let me know if I am doing something wrong.
Thanks
Hi, The below error means that etcd cannot bind port "2379" to the IP address 172.20.0.11. >> listen tcp 172.20.0.11:2379: bind: cannot assign requested address On kmaster2 and kmaster3 the IP address should be 172.20.0.12 and 172.20.0.13 (not 172.20.0.11). Now, besides just configuring kmaster2 and kmaster3 to listen on .12 and .13, you have to update/modify the systemd configuration to match the name/IP address. In other words, on kmaster2 all systemd references to the local IP address have to be modified to that host's local IP, i.e. on kmaster2 that would be .12, and on kmaster3 .13. Note: The configuration… Read more »
Hi Eli, I appreciate your help. Actually I noticed the part where you wrote "Note: Replace on each master the below values with the proper name/ip. --name=kmaster1 --listen-peer-urls=https://172.20.0.11:2380 --advertise-client-urls=https://172.20.0.11:2379" and I changed those following your instruction. Once I changed the other IP addresses as you said as well, it works now 🙂 As a result, the "isLeader=true" field in your log belongs to kmaster3, but in my log:
7c8d40e4de52c1b9: name=kmaster2 peerURLs=https://172.20.0.12:2380 clientURLs=https://172.20.0.12:2379 isLeader=false
96e236888999c3ca: name=kmaster3 peerURLs=https://172.20.0.13:2380 clientURLs=https://172.20.0.13:2379 isLeader=false
9e191dbdf076e744: name=kmaster1 peerURLs=https://172.20.0.11:2380 clientURLs=https://172.20.0.11:2379 isLeader=true
Thanks for the suggestion; I wish I could use CoreOS in my case, but I have to use CentOS only.… Read more »
Hi,
I am really glad it worked out for you; I hope the rest of the configuration will be a smooth ride.
If you don't mind sharing which other fields the IP address should be updated in, I will add those to my post so others can benefit.
The etcd leader gets elected automatically using the Raft protocol, initially on whichever node comes up first as master; leadership then gets passed around as nodes reboot.
The current election is reflected in the "isLeader=true" field.
I found the CoreOS write-up below to explain well how this works.
https://coreos.com/etcd/docs/latest/faq.html
Thanks,
Eli
Hello Eli,
Thanks for the information. I will continue to the end; I hope I run into less trouble with the rest ;).
Here are 3 files for etcd.service on each master:
https://gist.github.com/Qolzam/2af074e703403cc982b5e8e8477da681
https://gist.github.com/Qolzam/783b6dbdfe325921189a4c82339eda47
https://gist.github.com/Qolzam/bb975e17ab792ff8d793710d7d16f644
Thanks again!
Hi,
I updated the post with all your suggestions, an absolute help for myself as well as others following this post.
Please do let me know of any issues; I will definitely try to help.
Thanks,
Eli
Nothing wrong with using the ssl directory. The reason I use pki is that pki is the default directory Google references in all its documentation. I usually create a link between these two directories. But I am glad you were able to get it to work. Thanks, Eli