Installing a Kubernetes cluster the hard way

I installed a first Kubernetes cluster consisting of 1 master node and 1 worker node on CentOS 7, running Kubernetes version 1.16.2.

Kubernetes does not work with CentOS 8 yet, so I used the latest CentOS 7. Everything is installed on Oracle VM VirtualBox on a Windows 10 host.

Install 2 servers with a minimal installation of CentOS 7 and a regular user; user student in my case.
Make sure the servers have at least 2 vCPUs. I configured 8 GB of memory and a maximum of 40 GB of disk space.

# Become root
sudo -i

# Update to the latest versions 
yum update -y

# Reboot the server
reboot

# Disable the firewall
systemctl disable firewalld 
systemctl stop firewalld

# Set SELinux in permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
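
Confirm the change:

# Should now report Permissive
getenforce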

# Disable swap (the kubelet requires swap to be off)
cat /proc/swaps
swapoff -a

# Comment out the swap entry in /etc/fstab so swap stays off after a reboot
vi /etc/fstab
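
Alternatively, comment out the entry non-interactively; a one-liner sketch, assuming the default fstab layout where the swap line contains the word "swap":

# Comment out the swap entry and confirm swap is off
sed -i '/ swap / s/^/#/' /etc/fstab
free -h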

# Install Docker (the version packaged with CentOS 7)
yum install docker -y
systemctl enable docker.service
systemctl start docker.service 
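
A quick check that the Docker daemon is up:

# Verify Docker is active and show its version
systemctl is-active docker
docker version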

# Add the Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
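
Check that the repo was added correctly:

# The kubernetes repo should show up in the list
yum repolist enabled | grep -i kubernetes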

# Install the latest kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes 

# Enable kubelet
systemctl enable --now kubelet 
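
The kubelet will now restart every few seconds in a crash loop; that is expected, as it is waiting for kubeadm to tell it what to do. You can see this with:

systemctl status kubelet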

# Load br_netfilter
modprobe br_netfilter 
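
modprobe only loads the module for the current boot. To load it automatically at every boot as well:

# Load br_netfilter at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF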

# Make sure bridged traffic passes through iptables
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
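
Verify that both flags are set:

# Both values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables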

# Add the ip-addresses of both hosts to /etc/hosts on both servers
cat <<EOF >> /etc/hosts
10.0.2.7    k8smaster kube01.petersplanet.local
10.0.2.8    node01.petersplanet.local
EOF
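
Verify that the names resolve on both servers:

# Both names should resolve via /etc/hosts
getent hosts k8smaster
ping -c 1 node01.petersplanet.local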

Log in to the Kubernetes master node and create the cluster with the Calico network. Make sure none of your host networks overlap with the Calico pod network, which is 192.168.0.0/16 by default.
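
To check for an overlap, list the host routes. If 192.168.0.0/16 is already in use, pick another range and use that same value both for --pod-network-cidr and in the Calico manifest (the CALICO_IPV4POOL_CIDR setting).

# Show the host networks to compare against the pod CIDR
ip route show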

# Master node
# Initialize the cluster
kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=k8smaster:6443

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8smaster:6443 --token yv8otv.act8e9865fcgg2mt \
    --discovery-token-ca-cert-hash sha256:3f945a0e0c88f76a2df172f2e133e7c8956c8e9859da530e82c1891be503cdd7 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8smaster:6443 --token yv8otv.act8e9865fcgg2mt \
    --discovery-token-ca-cert-hash sha256:3f945a0e0c88f76a2df172f2e133e7c8956c8e9859da530e82c1891be503cdd7

# Apply the Calico network
# As root, point kubectl at the admin config first
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
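
It can take a few minutes for the Calico and CoreDNS pods to start; watch until everything reports Running:

# Watch the system pods come up (Ctrl-C to stop)
kubectl get pods -n kube-system -w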


# Become user student again
exit

# As user student
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify the result
kubectl get pods --all-namespaces
kubectl get nodes -o wide 
 

Now add the worker node to the cluster
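
Note that the join token from kubeadm init expires after 24 hours. If it has expired, generate a fresh join command on the master first:

# Run on the master to print a new join command with a fresh token
kubeadm token create --print-join-command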

# Log into the worker node as user student
# Use the info from the kubeadm init command
sudo kubeadm join k8smaster:6443 --token yv8otv.act8e9865fcgg2mt --discovery-token-ca-cert-hash sha256:3f945a0e0c88f76a2df172f2e133e7c8956c8e9859da530e82c1891be503cdd7

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Verify the result on the master node

[student@kube01 ~]$ kubectl get pods --all-namespaces
  NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
  kube-system   calico-kube-controllers-55754f75c-mt62c             1/1     Running   1          13h
  kube-system   calico-node-fdtqk                                   1/1     Running   0          48m
  kube-system   calico-node-mdbt9                                   1/1     Running   1          13h
  kube-system   coredns-5644d7b6d9-bqqk6                            1/1     Running   1          13h
  kube-system   coredns-5644d7b6d9-jw5wv                            1/1     Running   1          13h
  kube-system   etcd-kube01.petersplanet.local                      1/1     Running   1          13h
  kube-system   kube-apiserver-kube01.petersplanet.local            1/1     Running   1          13h
  kube-system   kube-controller-manager-kube01.petersplanet.local   1/1     Running   1          13h
  kube-system   kube-proxy-d6bnc                                    1/1     Running   1          13h
  kube-system   kube-proxy-rlxs6                                    1/1     Running   0          48m
  kube-system   kube-scheduler-kube01.petersplanet.local            1/1     Running   1          13h

# Check that both nodes are Ready
[student@kube01 ~]$ kubectl get nodes
 NAME                        STATUS   ROLES    AGE   VERSION
 kube01.petersplanet.local   Ready    master   14h   v1.16.2
 node01.petersplanet.local   Ready    <none>   84m   v1.16.2

Troubleshooting

  • If there is a problem with the overlay network, make sure the network interfaces are running in promiscuous mode; see the sketch below.
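
A minimal sketch; the interface name enp0s3 is an assumption (the VirtualBox default for the first adapter). Alternatively, set Promiscuous Mode to "Allow All" on the VM's network adapter in the VirtualBox settings.

# Enable promiscuous mode (the interface name is an assumption)
ip link set enp0s3 promisc on
# The PROMISC flag should now appear in the interface flags
ip link show enp0s3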
