This post will teach you how to deploy a multi-node Kubernetes cluster based on Docker.
Preparations
Three minimal-installation CentOS servers:
- one for the K8S master node
- two for K8S worker nodes
By the way, my CentOS version is CentOS-7 x86_64 1810.
Turn off firewalld && selinux
# selinux
> setenforce 0
> sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
# firewalld
> systemctl disable firewalld && systemctl stop firewalld
Configure static IP addresses && set hosts
Setting hosts is optional; it just makes the servers easier to recognize here.
hostname  | ip address
----------|---------------
k8smaster | 192.168.122.10
k8snode0  | 192.168.122.11
k8snode1  | 192.168.122.12
- Master
> echo "127.0.0.1 k8smaster" >> /etc/hosts
> echo "192.168.122.11 k8snode0" >> /etc/hosts
> echo "192.168.122.12 k8snode1" >> /etc/hosts
- Node0
> echo "192.168.122.10 k8smaster" >> /etc/hosts
> echo "127.0.0.1 k8snode0" >> /etc/hosts
> echo "192.168.122.12 k8snode1" >> /etc/hosts
- Node1
> echo "192.168.122.10 k8smaster" >> /etc/hosts
> echo "192.168.122.11 k8snode0" >> /etc/hosts
> echo "127.0.0.1 k8snode1" >> /etc/hosts
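To verify the entries, a quick sanity check from the master (same idea on the nodes):
> ping -c 1 k8snode0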
Synchronize your system datetime
> yum install ntpdate -y && ntpdate -s 0.us.pool.ntp.org
Disable SWAP
I didn't create a SWAP partition during CentOS installation; if you already have one, you should disable it: remove the SWAP entry in /etc/fstab, then reboot your system.
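If you'd rather not wait for a reboot, here's a minimal sketch for disabling it on a live system (assuming your swap line in /etc/fstab contains the word "swap"):
> swapoff -a
> sed -i '/ swap / s/^/#/' /etc/fstab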
Install docker
# install
> yum install -y yum-utils device-mapper-persistent-data lvm2
> yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
> yum install -y docker-ce-18.06.1.ce
> mkdir /etc/docker
# set the cgroup driver to `systemd` for better stability on systemd hosts
> cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
> systemctl daemon-reload
> systemctl enable docker && systemctl start docker
# check
> docker version
Client:
 Version:           18.06.1-ce
 API version:       1.39
 Go version:        go1.10.4
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.06.1-ce
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  OS/Arch:          linux/amd64
  Experimental:     false
> docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
.....
Install kubelet && kubeadm
# add k8s repo
> cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
> yum makecache
# install
> yum install -y kubelet kubeadm
# check
> kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitTreeState:"clean", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
> systemctl enable kubelet
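On CentOS you may also need to make sure bridged traffic passes through iptables, otherwise kubeadm's preflight checks will warn about it:
> cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
> sysctl --system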
Pulling Kubernetes Docker images
If you run into network problems while pulling the images, you can set a proxy for the Docker service:
> mkdir -p /etc/systemd/system/docker.service.d
> cat <<EOF > /etc/systemd/system/docker.service.d/https-proxy.conf
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
EOF
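Reload systemd and restart Docker so the proxy drop-in takes effect:
> systemctl daemon-reload && systemctl restart docker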
Now move on:
> kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.13.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.6
> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 xxxxxxxxxxxx 2 weeks ago 80.2MB
k8s.gcr.io/kube-apiserver v1.13.1 xxxxxxxxxxxx 2 weeks ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.1 xxxxxxxxxxxx 2 weeks ago 146MB
k8s.gcr.io/kube-scheduler v1.13.1 xxxxxxxxxxxx 2 weeks ago 79.6MB
k8s.gcr.io/coredns 1.2.6 xxxxxxxxxxxx 8 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 xxxxxxxxxxxx 3 months ago 220MB
k8s.gcr.io/pause 3.1 xxxxxxxxxxxx 12 months ago 742kB
Deploy Kubernetes – Master
Kubeadm init
> kubeadm init --pod-network-cidr=10.244.0.0/20 # you can use another pod network CIDR; it's up to you
............
............
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
............
kubeadm join 192.168.122.10:6443 --token xxx.xxxx --discovery-token-ca-cert-hash sha256:xxxxxx
>
Keep that join command; your node servers will use it to join the master K8S server.
Now check your master server. If you get `The connection to the server localhost:8080 was refused`, that probably means you forgot to copy admin.conf to `$HOME/.kube/config`:
> kubectl get cs
The connection to the server localhost:8080 was refused - did you specify the right host or port?
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
> kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster NotReady master 5m55s v1.13.1
Pod Network – Flannel
The master shows NotReady because no pod network is installed yet. Download the Flannel config file, change the pod network in `net-conf.json` to the --pod-network-cidr you set above, then execute kubectl apply (see the snippet after the download command below):
> cd ~ && curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml > kube-flannel.yml
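The part to edit is the `net-conf.json` entry in the ConfigMap inside kube-flannel.yml; it should look roughly like this (the default Network is 10.244.0.0/16; set it to match the 10.244.0.0/20 used at kubeadm init):
net-conf.json: |
  {
    "Network": "10.244.0.0/20",
    "Backend": {
      "Type": "vxlan"
    }
  }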
# set flannel
> kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
# check
> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-hdk75 1/1 Running 2 5m
kube-system coredns-86c58d9df4-vshjv 1/1 Running 2 5m
kube-system etcd-k8smaster 1/1 Running 2 5m
kube-system kube-apiserver-k8smaster 1/1 Running 2 5m
kube-system kube-controller-manager-k8smaster 1/1 Running 2 5m
kube-system kube-flannel-ds-amd64-65jm4 1/1 Running 0 5m
kube-system kube-flannel-ds-amd64-mjbql 1/1 Running 2 5m
kube-system kube-flannel-ds-amd64-qt2xk 1/1 Running 0 5m
kube-system kube-proxy-p925w 1/1 Running 0 5m
kube-system kube-proxy-xl954 1/1 Running 0 5m
kube-system kube-proxy-zk8z4 1/1 Running 2 5m
kube-system kube-scheduler-k8smaster 1/1 Running 2 5m
Deploy Kubernetes – Nodes
Execute the join command on every node:
> kubeadm join 192.168.122.10:6443 --token xxx.xxxx --discovery-token-ca-cert-hash sha256:xxxxxx
............
............
............
Run 'kubectl get nodes' on the master to see this node join the cluster.
Now check on the master server:
> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8smaster Ready master 18m v1.13.1 192.168.122.10 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.6.1
k8snode0 Ready <none> 3m9s v1.13.1 192.168.122.11 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.6.1
k8snode1 Ready <none> 3m9s v1.13.1 192.168.122.12 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.6.1
Good, it works now.
Testing your Kubernetes cluster
Create `hostnames` pods.
- hostnames.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
- hostnames-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 9376
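Here port 80 is what the service exposes on its cluster IP, while targetPort 9376 is where traffic gets forwarded, matching the containerPort in the deployment above.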
create the pods && the service
> kubectl apply -f hostnames.yaml
deployment.apps/hostnames created
> kubectl apply -f hostnames-svc.yaml
service/hostnames created
see what you got
> kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hostnames-85bc9c579-2fcr8 1/1 Running 0 51s 10.244.2.3 k8snode1 <none> <none>
hostnames-85bc9c579-5flpd 1/1 Running 0 51s 10.244.0.9 k8smaster <none> <none>
hostnames-85bc9c579-jxnq7 1/1 Running 0 51s 10.244.1.3 k8snode0 <none> <none>
> kubectl get svc -o wide --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default hostnames ClusterIP 10.103.8.147 <none> 80/TCP 83s app=hostnames
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 57m <none>
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 57m k8s-app=kube-dns
As you can see above, three `hostnames` instances are running on different servers, and `hostnames` got a service IP address of 10.103.8.147. Three instances but only one service IP address, which means it's load balanced:
> curl 10.103.8.147
hostnames-85bc9c579-2fcr8
> curl 10.103.8.147
hostnames-85bc9c579-2fcr8
> curl 10.103.8.147
hostnames-85bc9c579-jxnq7
> curl 10.103.8.147
hostnames-85bc9c579-5flpd
By the way, Kubernetes has a DNS server built in (kube-dns at 10.96.0.10 above), so you can also use domain names to access your pods. Note that the command below overwrites /etc/resolv.conf, so only do this for a quick test and restore your original nameserver afterwards.
> echo "nameserver 10.96.0.10" > /etc/resolv.conf
> curl hostnames.default.svc.cluster.local
hostnames-85bc9c579-2fcr8
> curl hostnames.default.svc.cluster.local
hostnames-85bc9c579-jxnq7
> curl hostnames.default.svc.cluster.local
hostnames-85bc9c579-2fcr8
> curl hostnames.default.svc.cluster.local
hostnames-85bc9c579-5flpd
Some things you have to know
Autocomplete does not work
Make sure the `bash-completion` package is installed on your system, then execute `echo "source <(kubectl completion bash)" >> /etc/profile` to add autocompletion, then log out and back in (or reboot).
Why doesn't the master server schedule pods?
Generally the Kubernetes master will not schedule pods, but you can enable that by removing the master taint, as shown below:
> kubectl taint nodes --all node-role.kubernetes.io/master-
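If you change your mind later, the trailing `-` above removes the taint; this puts it back:
> kubectl taint nodes k8smaster node-role.kubernetes.io/master=:NoSchedule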
I lost my kubeadm token, how do I create a new one?
> kubeadm token generate
xxxxx.xxxxxxxxxxxx
> kubeadm token create xxxxx.xxxxxxxxxxxx --print-join-command
kubeadm join 192.168.122.10:6443 --token xxxxx.xxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxx
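You can also do it in one step; without a token argument, `kubeadm token create` generates one for you:
> kubeadm token create --print-join-command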
X509 error while joining the cluster
If you get a certificate error while trying to join the cluster, like below:
[discovery] Failed to request cluster info, will try again: [Get https://x.x.x.x:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp x.x.x.x:6443: getsockopt: connection refused]
then reset your master and run init again with an explicit advertise address, like below:
> kubeadm reset
> iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
> kubeadm init --apiserver-advertise-address x.x.x.x --pod-network-cidr x.x.x.x/xx
Then copy the new token string and try again.