With all the prerequisites met, including SSL, flannel is fairly simple to install and configure. Where it goes wrong is when some of those prerequisites have not been met or are misconfigured. You will start to find that out in this step.
We will be running flannel in a Docker container, even on the master, rather than as a standalone daemon, which is much easier to manage.
Why Do We Need Flannel Or An Overlay?
Without flannel, every node uses the same IP range for Docker. We could change this and manage it ourselves, but we would then need to set up firewall rules and routing table entries to handle it, and we would also need to keep up with IP allocations.
Flannel does all of this for us. It does so with a minimal amount of effort.
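Once the controller manager is allocating CIDRs (configured below), you can see the per-node subnets flannel will manage. This is a quick check you can run after everything is up; the jsonpath expression is just one way to format the output:

```shell
# Show each node's allocated pod subnet (carved out of 10.244.0.0/16)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```

Each node should report a distinct /24, which is exactly the bookkeeping we would otherwise be doing by hand.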
Staging for Flannel
We need to update /etc/kubernetes/controller-manager again and add --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16

KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/secret/ca.crt --service-account-private-key-file=/secret/server.key --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16"
And then restart kube-controller-manager
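If your controller manager runs as a systemd unit (the unit name below is an assumption; adjust it to match your setup), the restart is simply:

```shell
# Restart the controller manager so it picks up the new CIDR flags
sudo systemctl restart kube-controller-manager
# Confirm it came back up cleanly
sudo systemctl status kube-controller-manager --no-pager
```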
I always prefer to download my yaml files so I can review and replay them as necessary. Per their documentation, I am just going to curl the URL and then apply it.
On each node we need to add the following to the /etc/kubernetes/kubelet config and then restart kubelet.
Since flannel is an overlay, it runs on top of the existing network, and per their documentation we need to open UDP/8285 for the udp backend (or UDP/8472 for the vxlan backend). Therefore we need to put this in iptables on each host:
# This line for VXLAN
-A INPUT -p udp -m udp --dport 8472 -j ACCEPT
# This line for UDP
-A INPUT -p udp -m udp --dport 8285 -j ACCEPT
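The lines above are in iptables rules-file format. To apply them immediately without a reboot, the equivalent live commands are below; the save-file path is the Photon OS default and is an assumption, so adjust it for your distro:

```shell
# Open the flannel ports on the running firewall
iptables -A INPUT -p udp -m udp --dport 8472 -j ACCEPT   # vxlan backend
iptables -A INPUT -p udp -m udp --dport 8285 -j ACCEPT   # udp backend
# Persist the rules across reboots (path is distro-specific)
iptables-save > /etc/systemd/scripts/ip4save
```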
Fire it up!
Now we are ready to apply and let it all spin up!
root@kube-master [ ~/kube ]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@kube-master [ ~/kube ]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
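Since we reviewed the yaml before applying it, one thing worth verifying is that the Network value inside the flannel ConfigMap matches the --cluster-cidr we gave the controller manager. A quick sketch, assuming the default ConfigMap name from the manifest:

```shell
# The "Network" in net-conf.json must match --cluster-cidr (10.244.0.0/16)
kubectl -n kube-system get configmap kube-flannel-cfg -o yaml | grep '"Network"'
```

If these two values disagree, flannel will hand out subnets the controller manager knows nothing about.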
If all is well at this point, it should be chewing through CPU and disk and in a minute or two the pods are deployed!
root@kube-master [ ~/kube ]# kubectl get pods --namespace=kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-7dqd4   1/1     Running   17         138m
kube-flannel-ds-amd64-hs6c7   1/1     Running   1          138m
kube-flannel-ds-amd64-txz9g   1/1     Running   18         139m
On each node you should now see a “flannel” interface, too.
root@kube-master [ ~/kube ]# ifconfig -a | grep flannel
flannel.1 Link encap:Ethernet HWaddr 1a:f8:1a:65:2f:75
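Alongside the interface, flannel writes its lease for the node to a small environment file. Checking it is a handy sanity test; the values shown in the comments are examples, yours will differ:

```shell
# Inspect the subnet flannel leased to this node
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.1.1/24
```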
From the “RESTARTS” column you can see some of them had issues. What kind of blog would this be if I didn’t walk you through some troubleshooting steps?
I knew that the stable one was on the master, so a connectivity issue was likely. Testing “curl -v https://10.254.0.1” passed on the master but failed on the nodes. By “passed” I mean it made a connection but complained about the TLS certificate, which is fine for this test. On the nodes, the same command indicated some sort of connectivity or firewall issue. So I tried the back-end service member directly at https://192.168.116.174:6443 and saw the same symptoms. I would have expected Kubernetes to open up this port, but it didn’t, so I added it to iptables and updated my own documentation.
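For reference, the fix was another iptables entry in the same rules-file format we used for the flannel ports, opening the apiserver’s secure port on the master:

```shell
# Allow nodes to reach the apiserver's secure port
-A INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
```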
Another good command is “kubectl logs <resource>”, such as:
root@kube-master [ ~/kube ]# kubectl logs pod/kube-flannel-ds-amd64-txz9g --namespace=kube-system
I1031 18:47:14.419895       1 main.go:514] Determining IP address of default interface
I1031 18:47:14.420829       1 main.go:527] Using interface with name eth0 and address 192.168.116.175
I1031 18:47:14.421008       1 main.go:544] Defaulting external address to interface address (192.168.116.175)
I1031 18:47:14.612398       1 kube.go:126] Waiting 10m0s for node controller to sync
I1031 18:47:14.612648       1 kube.go:309] Starting kube subnet manager
....
You will notice the “namespace” flag. Kubernetes can segment resources into namespaces. If you’re unsure of which namespace something exists in, you can use “--all-namespaces”.
Now we have a robust network topology where each node gets its own pod IP range and pods can communicate with pods on other nodes.
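A quick smoke test of that claim is to run a throwaway pod and ping a pod on another node. The pod name and image here are arbitrary, and <other-pod-ip> is a placeholder for an IP you read from the second command:

```shell
# Launch a throwaway pod to test cross-node pod networking
kubectl run pingtest --image=busybox --restart=Never -- sleep 3600
# Find pod IPs and which node each pod landed on
kubectl get pods -o wide --all-namespaces
# Ping a pod that is running on a different node
kubectl exec pingtest -- ping -c 3 <other-pod-ip>
# Clean up when done
kubectl delete pod pingtest
```

If the ping succeeds across nodes, the overlay is doing its job.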
Next we will be talking about the Kubernetes Dashboard and how to load it. The CLI is not for everyone, and the dashboard helps put things into perspective.