Summary
In my previous article Intro To Kubernetes, we walked through installing dependencies and setting the stage for initializing Kubernetes. At this point you should have a master and one or two nodes with the required software installed.
A Little More Configuration
Master Config Prep
We have just a little more configuration to do. On kube-master we need to edit “/etc/kubernetes/apiserver” as follows so other hosts can connect to the API server. If you don’t want to bind to 0.0.0.0 you could bind to a specific IP, but you would lose the localhost binding.
# From this
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
# To this
KUBE_API_ADDRESS="--address=0.0.0.0"
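If you prefer to script the change rather than edit the file by hand, a quick sed one-liner does the same thing. This is just a convenience sketch and assumes the line looks exactly as shown above, so grep the file afterwards to be sure.
# assumes the stock Photon config file and the default value shown above
sed -i 's|^KUBE_API_ADDRESS=.*|KUBE_API_ADDRESS="--address=0.0.0.0"|' /etc/kubernetes/apiserver
grep KUBE_API_ADDRESS /etc/kubernetes/apiserver   # confirm the new value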
Create the Cluster Member Metadata
Save the following as a file; we’ll call it create_nodes.json. When standing up a cluster I like to do this work on the master, so I create /root/kube and keep my files in there for reference.
{
  "apiVersion": "v1",
  "kind": "Node",
  "metadata": {
    "name": "kube-master",
    "labels": { "name": "kube-master-label" }
  },
  "spec": {
    "externalID": "kube-master"
  }
}
{
  "apiVersion": "v1",
  "kind": "Node",
  "metadata": {
    "name": "kube-node1",
    "labels": { "name": "kube-node-label" }
  },
  "spec": {
    "externalID": "kube-node1"
  }
}
{
  "apiVersion": "v1",
  "kind": "Node",
  "metadata": {
    "name": "kube-node2",
    "labels": { "name": "kube-node-label" }
  },
  "spec": {
    "externalID": "kube-node2"
  }
}
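If you have more than a handful of nodes, you could generate that file instead of typing it out. A minimal sketch, assuming the same host names and labels used above; adjust the node list to match your machines.
# hypothetical generator for create_nodes.json; edit the NODE list for your hosts
rm -f /root/kube/create_nodes.json
for NODE in kube-master kube-node1 kube-node2; do
  [ "$NODE" = "kube-master" ] && LABEL="kube-master-label" || LABEL="kube-node-label"
  cat >> /root/kube/create_nodes.json <<EOF
{
  "apiVersion": "v1",
  "kind": "Node",
  "metadata": {
    "name": "$NODE",
    "labels": { "name": "$LABEL" }
  },
  "spec": { "externalID": "$NODE" }
}
EOF
done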
We can then run kubectl to create the nodes based on that JSON. Keep in mind this is just creating metadata.
root@kube-master [ ~/kube ]# kubectl create -f /root/kube/create_nodes.json
node/kube-master created
node/kube-node1 created
node/kube-node2 created
# We also want to "taint" the master so no app workloads get scheduled.
kubectl taint nodes kube-master key=value:NoSchedule
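To double check that the taint took, or to remove it later if you decide the master should run workloads after all, the same kubectl verb handles both. The key=value pair here is just the placeholder used above.
kubectl describe node kube-master | grep -i taint
# later, if you want to allow scheduling on the master again (note the trailing dash):
kubectl taint nodes kube-master key:NoSchedule-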
root@kube-master [ ~/kube ]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
kube-master   NotReady   <none>   88s
kube-node1    NotReady   <none>   88s
kube-node2    NotReady   <none>   88s
You can see they’re “NotReady” because the services have not been started. This is expected at this point.
All Machine Config Prep
This will be run on all machines, master and nodes alike. We need to edit “/etc/kubernetes/kubelet”:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME=""
Also edit /etc/kubernetes/kubeconfig
server: http://127.0.0.1:8080
# Should be
server: http://kube-master:8080
In /etc/kubernetes/config
KUBE_MASTER="--master=http://kube-master:8080"
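Since the same edits happen on every machine, a short script saves some typing. It is only a sketch; it assumes the files and default values are exactly as shown above, so spot check each file afterwards.
# run on the master and every node; paths assume the stock Photon config files
sed -i 's|^KUBELET_ADDRESS=.*|KUBELET_ADDRESS="--address=0.0.0.0"|' /etc/kubernetes/kubelet
sed -i 's|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME=""|' /etc/kubernetes/kubelet
sed -i 's|http://127.0.0.1:8080|http://kube-master:8080|' /etc/kubernetes/kubeconfig
sed -i 's|^KUBE_MASTER=.*|KUBE_MASTER="--master=http://kube-master:8080"|' /etc/kubernetes/config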
Starting Services
Master
The VMware Photon Kubernetes guide we have been following has the following snippet, which I want to credit. Please run this on the master.
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
You can then run “netstat -an | grep 8080” to confirm the API server is listening, particularly on 0.0.0.0 or whatever bind address you expected.
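If something refuses to listen, ss and journalctl narrow it down quickly. Both should be available on Photon, but that is an assumption worth verifying on your install.
ss -lntp | grep 8080              # same check as netstat, plus the owning process
journalctl -u kube-apiserver -e   # recent log output if the API server did not start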
Nodes
On the nodes we are only starting kube-proxy, kubelet and docker
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
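A quick sanity check worth running on each node before moving on; is-active prints one line per unit, and you want to see three “active” lines.
systemctl is-active kube-proxy kubelet docker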
Health Check
At this point we’ll run kubectl get nodes and see the status
root@kube-master [ ~/kube ]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
127.0.0.1     Ready      <none>   23s     v1.14.6
kube-master   NotReady   <none>   3m13s
kube-node1    NotReady   <none>   3m13s
kube-node2    NotReady   <none>   3m13s
Oops, we never added a 127.0.0.1 node. It showed up because I forgot to clear the hostname override in /etc/kubernetes/kubelet. I fixed that, restarted kubelet, and then ran “kubectl delete nodes 127.0.0.1”.
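For completeness, the fix boiled down to roughly this on the offending machine (the master, in my case):
sed -i 's|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME=""|' /etc/kubernetes/kubelet
systemctl restart kubelet
kubectl delete node 127.0.0.1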
It does take a while for these to start showing up. The provisioning and orchestration processes are not fast, but you should slowly see the version appear and then the status change to Ready, and here we are.
root@kube-master [ ~/kube ]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    <none>   9m42s   v1.14.6
kube-node1    Ready    <none>   9m42s   v1.14.6
kube-node2    Ready    <none>   9m42s   v1.14.6
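If you would rather not keep re-running the command while you wait, kubectl can watch for changes instead:
kubectl get nodes -w   # streams node status changes until you hit Ctrl-C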
Final Words
At this point we could start some pods if we wanted, but there are a few other things that should be configured for a proper bare metal (or virtual) install. Many pods now depend on auto discovery, which uses TLS. Service accounts are also needed, and service accounts rely on secrets.
For the networking we will go over flannel which will provide our networking overlay using VXLAN. This is needed so that pods running on each node have a unique and routable address space that each node can see. Right now each node has a docker interface with the same address and pods on different nodes cannot communicate with each other.
Flannel uses TLS-based auto discovery against the ClusterIP. Rather than hacking around that, it is best to simply enable SSL/TLS certificates, which is also a security best practice.
root@kube-master [ ~/kube ]# kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   49m
root@kube-master [ ~/kube ]# kubectl describe services/kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.254.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.116.174:6443
Session Affinity:  None
Events:            <none>
Next: SSL Configuration