Adding Ingress, DNS and/or Dashboard to Kubernetes
In the previous post I went through how to finalize the kubelet configuration to use RKT and Flannel/CNI. Below are examples of how to configure and use Træfik as an Ingress controller, as well as the Kube-dns configuration and the Kube-dashboard configuration (coming soon). I divided the configuration into the parts outlined below (still a work in progress).
- Part 1: Initial setup – getting CoreOS, preparing SSL certificates, etc.
- Part 2: Configure Etcd key value store, Configure Flannel.
- Part 3: Configure Kubernetes manifests for controller, api, scheduler and proxy.
- Part 4: Finalize the kubelet configuration to use RKT and Flannel+CNI.
- Part 5: Optional – configure Ingress, kube-dns and kube-dashboard.
- Part 6: Automate the Kubernetes deployment, etcd, kubelet, rkt, flannel, cni and ssl.
This is Part 5: Optional – configure Ingress, Kube-dns and Kube-dashboard.
The Kube-dashboard example is still in the works and is therefore missing from the configuration below; I hope to update this once I get a chance.
Simple Kubernetes kube-dns Configuration
To create your DNS pod, just run the below using the kube-dns.yaml (below).

kubectl create -f kube-dns.yaml
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created

# Verify status
kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE       IP           NODE
kube-system   kube-apiserver-coreos1             1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-apiserver-coreos2             1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-apiserver-coreos3             1/1       Running   0          2h        172.0.2.13   coreos3
kube-system   kube-controller-manager-coreos1    1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-controller-manager-coreos2    1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-controller-manager-coreos3    1/1       Running   0          2h        172.0.2.13   coreos3
kube-system   kube-dns-5d9f945775-qcqx9          3/3       Running   0          12s       10.20.1.4    worker1
kube-system   kube-proxy-coreos1                 1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-proxy-coreos2                 1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-proxy-coreos3                 1/1       Running   0          2h        172.0.2.13   coreos3
kube-system   kube-proxy-worker1                 1/1       Running   0          1h        172.0.2.51   worker1
kube-system   kube-scheduler-coreos1             1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-scheduler-coreos2             1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-scheduler-coreos3             1/1       Running   0          2h        172.0.2.13   coreos3

To scale to additional nodes, run the below.
kubectl scale deployment/kube-dns --replicas=2 -n kube-system

To remove the DNS pod, run the below.
kubectl delete -f kube-dns.yaml

# or
kubectl delete deployment kube-dns -n kube-system
kubectl delete service kube-dns -n kube-system
kubectl delete configmap kube-dns -n kube-system
kubectl delete serviceaccount kube-dns -n kube-system

To troubleshoot the kube-dns pod logs, just run something like the below.
kubectl logs `kubectl get po --all-namespaces -o wide|grep kube-dns|grep worker1|awk '{print $2}'` -n kube-system kubedns -f

I struggled with the below error for a while.
waiting for services and endpoints to be initialized from apiserver...

It turned out that the PodCIDR was being created incorrectly, causing bad iptables rules on the worker node; redoing the cluster cleared the bad PodCIDR. An example entry is below (whether it is good or bad depends on your network), so watch for the right Pod CIDR.
journalctl -u kubelet -f
...
Dec 13 20:14:37 coreos3 kubelet-wrapper[9895]: I1213 20:14:37.950577    9895 kubelet_network.go:276] Setting Pod CIDR:  -> 10.0.0.0/24
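Two quick checks that help here are sketched below. They are not from the original write-up, but they use only standard kubectl functionality: the first prints the PodCIDR actually assigned to each node, and the second runs a throw-away busybox pod (the pod name dns-test is arbitrary) to confirm the cluster DNS resolves the kubernetes service.

# Print each node name with its assigned PodCIDR
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Verify DNS resolution from inside the cluster (requires the busybox image to be pullable)
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup kubernetes.default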
Kube DNS configuration files
cat /etc/kubernetes/ssl/worker-kubeconfig.yaml

apiVersion: v1
kind: Config
clusters:
  - name: local
    cluster:
      server: https://10.3.0.1:443
      certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
  - name: kubelet
    user:
      client-certificate: /etc/kubernetes/ssl/worker.pem
      client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
  - context:
      cluster: local
      user: kubelet
    name: kubelet-context
current-context: kubelet-context

Note: Make sure to append the ca.pem at the end of the worker.pem; an example is below.
-----BEGIN CERTIFICATE-----
MIIFQTCCBCmgAwIBAgIJAIeb0H3YfptEMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
...[..] snip
-----END CERTIFICATE-----
# And below add your CA cert. (remove this line)
-----BEGIN CERTIFICATE-----
MIIDfTCCAmWgAwIBAgIJAKboSpp9s2ZLMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
...[..] snip
-----END CERTIFICATE-----

cat kube-dns.yaml

Note: Replace example.com with your domain and IP address.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"example.com": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: "10.3.0.10"
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
  name: kube-dns
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: ~
      labels:
        k8s-app: kube-dns
    spec:
      containers:
        - args:
            - "--domain=cluster.local."
            - "--dns-port=10053"
            - "--kube-master-url=https://10.3.0.1:443"
            - "--config-dir=/kube-dns-config"
            - "--kubecfg-file=/etc/kubernetes/ssl/worker-kubeconfig.yaml"
            - "--v=9"
          env: ~
          image: "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthcheck/kubedns
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: kubedns
          ports:
            - containerPort: 10053
              name: dns-local
              protocol: UDP
            - containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
            - containerPort: 10055
              name: metrics
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readiness
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          volumeMounts:
            - mountPath: /kube-dns-config
              name: kube-dns-config
            - mountPath: /etc/ssl/certs
              name: ssl-certs-host
              readOnly: true
            - mountPath: /etc/kubernetes/ssl
              name: kube-ssl
              readOnly: true
            - mountPath: /etc/kubernetes/ssl/worker-kubeconfig.yaml
              name: kubeconfig
              readOnly: true
        - args:
            - "-v=2"
            - "-logtostderr"
            - "-configDir=/etc/k8s/dns/dnsmasq-nanny"
            - "-restartDnsmasq=true"
            - "--"
            - "-k"
            - "--cache-size=1000"
            - "--log-facility=-"
            - "--no-resolv"
            - "--server=/cluster.local/127.0.0.1#10053"
            - "--server=/in-addr.arpa/127.0.0.1#10053"
            - "--server=/ip6.arpa/127.0.0.1#10053"
            - "--log-queries"
          image: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthcheck/dnsmasq
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 30
          name: dnsmasq
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          resources:
            requests:
              cpu: 150m
              memory: 20Mi
          volumeMounts:
            - mountPath: /etc/k8s/dns/dnsmasq-nanny
              name: kube-dns-config
        - args:
            - "--v=2"
            - "--logtostderr"
            - "--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A"
            - "--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
          image: "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /metrics
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 30
          name: sidecar
          ports:
            - containerPort: 10054
              name: metrics
              protocol: TCP
          resources:
            requests:
              cpu: 10m
              memory: 20Mi
      dnsPolicy: Default
      restartPolicy: Always
      serviceAccount: kube-dns
      serviceAccountName: kube-dns
      terminationGracePeriodSeconds: 30
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
      volumes:
        - configMap:
            defaultMode: 420
            name: kube-dns
            optional: true
          name: kube-dns-config
        - hostPath:
            path: /usr/share/ca-certificates
          name: ssl-certs-host
        - hostPath:
            path: /etc/kubernetes/ssl
          name: kube-ssl
        - hostPath:
            path: /etc/kubernetes/ssl/worker-kubeconfig.yaml
          name: kubeconfig

Below is a quick and simple example of how to use Træfik / Nginx as an Ingress controller. Once I get around to it, I hope to update the configuration below with a more complex/useful example.
Simple Kubernetes Træfik Ingress Configuration
You will need to create the below pod configuration files. This creates two Nginx instances (replicas).

cat nginx-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

cat nginx-ingres.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
spec:
  rules:
    - host: coreos1.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginxsvc
              servicePort: 80

cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxsvc
  name: nginxsvc
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: ClusterIP

A very simple Træfik example is below; another example, using Docker, is available here.

cat traefik.toml
[web]
address = ":8181"
ReadOnly = true

[kubernetes]
# Kubernetes server endpoint
endpoint = "http://localhost:8080"
namespaces = ["default","kube-system"]

Download Træfik; version 1.4.2 was the latest at the time of this writing.
wget -O traefik https://github.com/containous/traefik/releases/download/v1.4.2/traefik_linux-amd64
chmod u+x traefik

To create the configuration, run the below.
kubectl create -f nginx-deployment.yaml
kubectl create -f nginx-svc.yaml
kubectl create -f nginx-ingres.yaml

Then start Træfik:

./traefik -c traefik.toml &

Note: Ideally, Træfik itself should run in a pod; a rough sketch of that is included at the end of this section, and I hope to expand it into a full example soon.

Tip: To scale from 2 to 3 pods.
kubectl scale --replicas=3 deployment nginx-deployment

For a quick test:
curl http://coreos1.domain.com

# Verify the new pods
kubectl get po --all-namespaces -o wide

To access the Træfik dashboard:
http://coreos1.domain.com:8181

To verify the service:
kubectl describe svc nginxsvc

To remove / destroy the configuration:
kubectl delete -f nginx-ingres.yaml
kubectl delete -f nginx-svc.yaml
kubectl delete -f nginx-deployment.yaml
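As noted earlier, Træfik would ideally run inside the cluster rather than as a binary on the host. Below is a minimal, untested sketch of such a Deployment; it is not part of the original setup, and the image tag, labels and hostPort choices are assumptions to adjust (for example, pin the image to the 1.4.2 release used above, and add a ServiceAccount once RBAC is enabled).

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-controller
    spec:
      containers:
        - name: traefik
          image: traefik:v1.4          # assumed tag; use whichever 1.4.x image you actually run
          args:
            - "--web"                  # enable the dashboard/API
            - "--web.address=:8181"
            - "--kubernetes"           # watch Ingress resources via the in-cluster API
          ports:
            - containerPort: 80
              hostPort: 80             # serve HTTP directly on the node, mirroring the host-binary setup
            - containerPort: 8181
              hostPort: 8181           # dashboard

With this in place, the earlier curl and dashboard tests should behave the same once the host-binary Træfik is stopped.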
Adding Kube-Dashboard
How-to coming soon. Check out the next part – Part 6: Automate the Kubernetes deployment (coming soon). You might also like other articles related to Docker and Kubernetes / micro-services.
Like what you’re reading? Please provide feedback; any feedback is appreciated.
Thanks for the guide. However, Traefik is not suited as an ingress controller in Kubernetes due to it not having proper TLS cert handling for the ingress resources. See
https://github.com/containous/traefik/issues/378
https://github.com/containous/traefik/issues/2438
for more info
Really sorry for the late response; I had a serious issue and many comments were marked as SPAM.
As I am sure you know, Traefik now fully supports TLS.
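For reference, a TLS-terminating Ingress for the nginx example above would generally look like the sketch below; the secret name is a placeholder (created separately, for example with kubectl create secret tls), not something set up earlier in this post.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress-tls
spec:
  tls:
    - hosts:
        - coreos1.domain.com
      secretName: coreos1-tls      # placeholder secret holding the certificate and key
  rules:
    - host: coreos1.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginxsvc
              servicePort: 80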
Please provide a kube-dashboard example.
Hi and welcome to my Blog.
This post is a bit outdated by now; CoreOS is owned by Red Hat and many things have changed. However, you can access the official kube-dashboard repository at https://github.com/kubernetes/dashboard, or apply it directly like so: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml.
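Once applied, the dashboard is typically reached through kubectl proxy rather than being exposed directly; a minimal example is below (the service path matches the recommended manifest linked above, but may differ for other dashboard versions).

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/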
Alternatively, you can use Helm to install the dashboard or the full Prometheus stack (as well as many other packages); I have a full how-to here: http://devtech101.com/2018/09/04/deploying-helm-tiller-prometheus-alertmanager-grafana-elasticsearch-on-your-kubernetes-cluster/
I hope to update the Red Hat/CoreOS installation as well as the GitHub kube-auto-generator repository in the future.
I hope this helps.
Eli