Installing a Highly Available Kubernetes Cluster with kubeadm

This article walks through installing a highly available Kubernetes cluster with kubeadm.
1. Preparation: CentOS 7.6 hosts

Hostname      IP         Spec   Role
k8s-master1   10.0.0.2   4c4g   k8s-master
k8s-master2   10.0.0.3   4c4g   k8s-master, node
k8s-master3   10.0.0.4   4c4g   k8s-master, node
k8s-vip       10.0.0.5   1c1g   k8s-vip (haproxy)
1.1 Name resolution. DNS is used here; refer to the dedicated DNS setup post for details.
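If a DNS server is not available, a minimal alternative (my sketch, using the hostnames and IPs from the table above) is to add static entries to /etc/hosts on every host:

# static name resolution for the lab hosts (alternative to a DNS server)
cat << EOF >> /etc/hosts
10.0.0.2 k8s-master1
10.0.0.3 k8s-master2
10.0.0.4 k8s-master3
10.0.0.5 k8s-vip
EOF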
1.2 Set hostnames. Do not skip this step; otherwise the node names you end up with in Kubernetes will be hard to look at.
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-master3
hostnamectl set-hostname k8s-vip

1.3 Adjust system parameters
1.3.1 Allow forwarded traffic (set the iptables FORWARD chain policy to ACCEPT)
iptables -P FORWARD ACCEPT

1.3.2 Disable swap
swapoff -a
# prevent the swap partition from being mounted again on boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

1.3.3 Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld

1.3.4 Adjust kernel parameters
# vm.max_map_count raises the per-process limit on memory map areas
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
# load the bridge netfilter module first, otherwise the net.bridge.* keys do not exist yet
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
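modprobe only loads the module for the current boot. A small addition of my own (assuming a standard systemd-based CentOS 7 host) to have it loaded automatically on every boot:

# systemd-modules-load reads /etc/modules-load.d/*.conf at boot
cat << EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF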

1.3.5 Configure yum repositories
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache

2. Install Docker
## list all available versions
yum list docker-ce --showduplicates | sort -r
## install a specific version from the repo (20.10.6 here)
yum install docker-ce-20.10.6 -y
## configure registry mirrors for Docker
mkdir -p /etc/docker
# harbor.hs.com is the private Docker registry used later on
vi /etc/docker/daemon.json
{
  "insecure-registries": [
    "harbor.hs.com"
  ],
  "registry-mirrors": [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ]
}
## start docker
systemctl enable docker && systemctl start docker
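One optional addition that is not part of the original setup: the kubeadm documentation recommends running Docker with the systemd cgroup driver so that kubelet and Docker agree on cgroup management. That would mean merging the following key into the daemon.json above and restarting Docker (systemctl restart docker):

  "exec-opts": ["native.cgroupdriver=systemd"]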

3. Install the Kubernetes packages (kubelet, kubeadm, kubectl)
yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8
systemctl enable kubelet   # no need to start it now; kubeadm starts it when the node is initialized



4. Configure the load balancer. Following the common recommendation, haproxy is used here in TCP mode: it does not terminate TLS and simply passes requests straight through to the backend apiservers (the components installed by kubeadm communicate over HTTPS).
1. Install haproxy
yum install -y haproxy

2. Adjust the configuration
The frontend kubernetes-apiserver and backend kubernetes-apiserver sections are the two parts to pay attention to.
cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode            tcp
    bind            *:7443
    option          tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server  k8s-master  10.0.0.2:6443 check
    server  k8s-slave1  10.0.0.3:6443 check
    server  k8s-slave2  10.0.0.4:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind          *:1080
    stats auth    admin:admin
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats

3. Start haproxy
systemctl start haproxy
systemctl enable haproxy
# ss -tlnp | grep 7443
LISTEN   0   128   *:7443   *:*   users:(("haproxy",pid=7265,fd=5))
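Optionally (my addition), the configuration can be syntax-checked before starting or reloading haproxy:

haproxy -c -f /etc/haproxy/haproxy.cfg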

5. Initialize the cluster. Pick any one of the master nodes and run:
kubeadm init --control-plane-endpoint "10.0.0.5:7443" --pod-network-cidr 172.16.0.0/16 --service-cidr 10.96.0.0/16 --image-repository registry.aliyuncs.com/google_containers --upload-certs

--upload-certs uploads the certificates shared by all control-plane instances to the cluster, so they are distributed automatically when additional control-plane nodes join. Without it, you have to copy the certificates to the other servers by hand.
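The join command printed by kubeadm init expires (the bootstrap token after 24 hours, the uploaded certificates after 2 hours). If it has expired by the time you add a node, fresh credentials can be generated on an existing control-plane node; these are standard kubeadm commands, not part of the original text:

# print a fresh worker join command (creates a new token)
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new --certificate-key
kubeadm init phase upload-certs --upload-certs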
1. Set up kubectl access (copy the admin kubeconfig)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
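Alternatively, for the root user, kubeadm's own output suggests simply pointing KUBECONFIG at the admin config:

export KUBECONFIG=/etc/kubernetes/admin.conf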

2. Install the flannel CNI plugin
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make the following adjustment: find the matching container block in the file above, around line 190.
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.15.0-rc1
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33   # if the machine has multiple NICs, name the internal one here; otherwise the first NIC is used

Pull the image in advance:
docker pull quay.io/coreos/flannel:v0.15.0-rc1   # check the image referenced in the manifest above; it may differ between downloads

Apply flannel:
kubectl apply -f kube-flannel.yml
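To confirm the CNI is up, one can check the flannel pods and watch the nodes turn Ready (a quick check of mine; it assumes the manifest's default app=flannel pod label):

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes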

3. Join the other nodes to the cluster
kubeadm join 10.0.0.5:7443 --token jx2l6h.94ow9imamqxen1cl --discovery-token-ca-cert-hash sha256:613608d88c74330358422c1e5badf84f214b8fa40f2911a9f40b996bf7a6830d --control-plane --certificate-key 0fb5cae296a77fd3a41eb08a3d21014c4c832b553857b16e9b8e1378978a8e4d

4. Cluster adjustments
4.1 Adjust part of the kube-scheduler and kube-controller-manager configuration. The --port=0 flag disables the insecure HTTP endpoint (and with it the insecure --bind-address); comment it out as shown below so the health checks used by kubectl get cs succeed.
vim /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    # - --port=0
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.8
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
    ...

vim /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=172.16.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    # - --port=0

After the manifests are edited, kubelet recreates these two static pods automatically; no manual restart is needed.
If --port=0 is left in place, you will see errors like the following:
# kubectl get cs   # the controller-manager and scheduler show as Unhealthy
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

4.2 Enable ipvs. Service traffic can be scheduled in three ways: userspace (deprecated), iptables (poor performance, complex rules), and ipvs (good performance). ipvs still relies on some iptables rules under the hood, but its performance degrades very little as the number of Services grows.
Load the ipvs kernel modules:
vi /root/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &> /dev/null
  if [ $? -eq 0 ]; then
    /sbin/modprobe $i
  fi
done
chmod +x ipvs.sh
sh ipvs.sh
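A quick way to confirm the modules were loaded (my addition):

lsmod | grep ip_vs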

Change the kube-proxy configuration:
kubectl edit cm -n kube-system kube-proxy
# around line 44
mode: "ipvs"   # change the mode field to ipvs
Restart kube-proxy by deleting its pods (the DaemonSet recreates them with the new configuration):
kubectl -n kube-system get daemonset
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
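To confirm kube-proxy actually switched to ipvs, one option (my addition; it assumes installing the ipvsadm tool is acceptable) is to list the virtual servers it has programmed:

yum install -y ipvsadm
ipvsadm -Ln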

Add the ipvs loading script to rc.local so the modules are reloaded on boot.
If it does not run at boot, make sure rc.local is enabled (on CentOS 7, /etc/rc.d/rc.local must be made executable).
# tail -1 /etc/rc.local
bash /root/ipvs.sh

4.3 Verify the cluster
kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   14m     v1.19.8
k8s-master2   Ready    master   10m     v1.19.8
k8s-master3   Ready    master   6m27s   v1.19.8
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

4.4 Check the core Kubernetes components
kubectl get pod -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-jbhtc              1/1     Running   0          16m
coredns-6d56c8448f-pzzbq              1/1     Running   0          16m
etcd-k8s-master1                      1/1     Running   0          16m
etcd-k8s-master2                      1/1     Running   0          12m
etcd-k8s-master3                      1/1     Running   0          8m25s
kube-apiserver-k8s-master1            1/1     Running   0          16m
kube-apiserver-k8s-master2            1/1     Running   0          12m
kube-apiserver-k8s-master3            1/1     Running   0          8m27s
kube-controller-manager-k8s-master1   1/1     Running   0          6m32s
kube-controller-manager-k8s-master2   1/1     Running   1          12m
kube-controller-manager-k8s-master3   1/1     Running   0          8m27s
kube-flannel-ds-c4j5g                 1/1     Running   0          15m
kube-flannel-ds-gc727                 1/1     Running   0          3m57s
kube-flannel-ds-vfcpd                 1/1     Running   0          3m48s
kube-proxy-7bwg2                      1/1     Running   0          8m27s
kube-proxy-c9qdf                      1/1     Running   0          16m
kube-proxy-t6mz7                      1/1     Running   0          12m
kube-scheduler-k8s-master1            1/1     Running   0          6m52s
kube-scheduler-k8s-master2            1/1     Running   1          12m
kube-scheduler-k8s-master3            1/1     Running   0          8m27s

4.5 Remove the master taint on the nodes that should also schedule workloads (k8s-master2 and k8s-master3 per the table above)
kubectl taint node k8s-master2 node-role.kubernetes.io/master:NoSchedule-
kubectl taint node k8s-master3 node-role.kubernetes.io/master:NoSchedule-
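To verify the taint is gone (a quick check I have added):

kubectl describe node k8s-master2 | grep -i taint
kubectl describe node k8s-master3 | grep -i taint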

6. Deploy the dashboard
6.1 Fetch the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
This fetches the release current at the time of writing; a newer version may look slightly different on the home page, but the functionality is largely the same.

6.2 Adjust a few parameters
vi recommended.yaml
# change the Service to NodePort type, around line 45 of the file
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort   # add type: NodePort to turn it into a NodePort service
......

6.3 Deploy the service
kubectl apply -f recommended.yaml

6.4 Check the service status
$ kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.16.62.124   <none>        8000/TCP        31m
kubernetes-dashboard        NodePort    172.16.74.46    <none>        443:30133/TCP   31m

6.5 Create login credentials (an admin ServiceAccount bound to cluster-admin)
vi dashboard-admin.conf
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

kubectl apply -f dashboard-admin.conf   # create the resources

6.6 Retrieve the token
kubectl -n kubernetes-dashboard get secret | grep admin-token
admin-token-fqdpf   kubernetes.io/service-account-token   3   7m17s
kubectl -n kubernetes-dashboard get secret admin-token-fqdpf -o jsonpath={.data.token} | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1rb2xHWHMwbWFPMjJaRzhleGRqaExnVi1BLVNRc2txaEhETmVpRzlDeDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mcWRwZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjYyNWMxNjJlLTQ1ZG...

6.7 Log in
The dashboard is reachable at https://10.0.0.2:30133 (any node IP plus the NodePort shown above).

7. Set up kubectl auto-completion

Run this on the k8s-master nodes.


yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
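Optionally (taken from the upstream kubectl completion instructions, not the original post), a short alias can share the same completion:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc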

8. Note: at this point the cluster is still missing a number of key add-ons; follow-up articles will cover them.

