
k8s three-master high-availability cluster setup (kubeadm)

凌云归来


k8s HA cluster setup: system preparation and parameter tuning

Unless otherwise noted, perform the following steps on all machines.

OS: CentOS 7.9

1. Set the hostname

hostnamectl set-hostname master01

(repeat on the other machines with master02, master03, and node01)

2. Add hosts entries

cat >> /etc/hosts <<EOF

10.0.0.200 vip

10.0.0.111 master01

10.0.0.112 master02

10.0.0.113 master03

10.0.0.120 node01

EOF

3. Disable the firewall and SELinux

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

4. Disable swap

swapoff -a

vi /etc/fstab

Comment out the swap line.
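Equivalently, a non-interactive one-liner (a sketch using GNU sed; review /etc/fstab afterwards to confirm only the swap entry was commented):

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab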

5. Enable IP forwarding

cat >> /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

EOF

sysctl --system
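One caveat: the two net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about them, load the module first and persist it via systemd's modules-load.d:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf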

6. Switch to the Aliyun yum repo

Configure DNS:

cat >> /etc/resolv.conf << EOF

nameserver 114.114.114.114

EOF

yum -y install wget

Back up the existing repo file:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

Download the Aliyun repo file:

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

Rebuild the cache:

yum clean all && yum makecache

7. Enable IPVS forwarding

yum -y install ipvsadm ipset

Takes effect immediately:

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

Persist across reboots:

cat >> /etc/sysconfig/modules/ipvs.modules << EOF

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF
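The modules file is only run at boot if it is executable, so a small follow-up (a commonly used pattern on CentOS 7) makes it runnable and verifies the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4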

8. Install nginx

On the three masters; nginx is used here to load-balance Services.

Add the repo:

cat > /etc/yum.repos.d/nginx.repo <<EOF

[nginx]

name=nginx repo

baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/

gpgcheck=0

enabled=1

EOF

Rebuild the cache:

yum clean all && yum makecache

Install:

yum install nginx -y

9. Install common packages

yum install vim bash-completion net-tools gcc lrzsz -y

10. Passwordless SSH between the masters

On all three masters:

1. ssh-keygen

Press Enter three times to accept the defaults.

2. View the local public key:

cat ~/.ssh/id_rsa.pub

3. Add to the trusted list

Put the other two machines' public keys into this machine's authorized_keys; do this on all three.

vi ~/.ssh/authorized_keys

<public key of master01>

<public key of master02>

...

4. Test mutual logins

On first setup, log in to every server once and type yes to accept the host key.
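As an alternative to editing authorized_keys by hand, ssh-copy-id does the same thing per host (a sketch, assuming root logins are allowed and the hostnames from /etc/hosts):

for h in master01 master02 master03; do ssh-copy-id root@$h; done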

Install docker

1. Add the repo:

cat > /etc/yum.repos.d/docker-ce.repo << EOF

[docker-ce-stable]

name=Docker CE Stable - \$basearch

baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/\$basearch/stable

enabled=1

gpgcheck=1

gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

EOF

2. Check the available versions:

yum list docker-ce.x86_64 --showduplicates |sort -r

3. Install a specific version:

yum -y install docker-ce-19.03.3-3.el7

If you get an error like:

Problem: package docker-ce-3:19.03.4-3.el7.x86_64 requires containerd.io >= 1.2.2-3

install a newer containerd.io first (the docker-ce repo added above provides it):

yum -y install containerd.io

4. Start and enable:

systemctl start docker

systemctl enable docker

5. Configure the Aliyun Docker registry accelerator

cat > /etc/docker/daemon.json << EOF

{

"bip" : "192.168.166.1/24",

"registry-mirrors": [";],

"log-driver":"json-file",

"insecure-registries": ["192.168.56.59"],

"log-opts": {"max-size":"500m", "max-file":"3"}

}

EOF

Notes:

# bip: sets the default address range Docker assigns to containers; omit if not needed.

# registry-mirrors: your personal accelerator address from the Aliyun container-registry console (the <your-id> placeholder above).

# "log-driver":"json-file": sets the default container log driver; omit if not needed.

# "log-opts": {"max-size":"500m", "max-file":"3"}

Caps each container at three log files of 500 MB each; together with the previous setting this keeps long-running containers from filling the disk with logs. Omit if not needed.

systemctl daemon-reload

systemctl restart docker
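A quick sanity check: docker info lists the active registry mirrors:

docker info | grep -A 1 "Registry Mirrors"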

Deploy k8s

1. Add the Aliyun Kubernetes repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

2. Install kubectl, kubelet, and kubeadm at a pinned version:

yum -y install kubectl-1.20.4 kubelet-1.20.4 kubeadm-1.20.4

Set kubelet to start at boot:

systemctl enable kubelet

Deploy haproxy and keepalived

This section is performed on all three masters.

1. Install

yum install keepalived haproxy -y

2. Configure keepalived

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER          ### cluster role
    interface ens33       ### NIC name -- match your interface
    virtual_router_id 51
    priority 100          ### election priority
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.200        ### virtual IP
    }
    track_script {
        check_apiserver
    }
}
EOF

Note: four settings in keepalived.conf differ per machine.

On one server:

state MASTER
priority 100

On the other two servers:

state BACKUP
priority 50

All three servers use the same high-availability virtual IP:

virtual_ipaddress {
    10.0.0.200    ### virtual IP
}

All three servers set the NIC name to match their actual interface:

interface ens33    ### NIC name

Set up the health-check script, identical on all three machines (note the quoted 'EOF', which keeps $* from being expanded while the file is written):

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:7443/ -o /dev/null || errorExit "Error GET https://localhost:7443/"
if ip addr | grep -q 10.0.0.200; then
    curl --silent --max-time 2 --insecure https://10.0.0.200:7443/ -o /dev/null || errorExit "Error GET https://10.0.0.200:7443/"
fi
EOF
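keepalived runs this script via vrrp_script, so it should be executable (an easy step to miss):

chmod +x /etc/keepalived/check_apiserver.sh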

3. Configure haproxy

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

frontend apiserver
    bind *:7443
    mode tcp
    option tcplog
    default_backend apiserver    ### default backend group: apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver    ### backend server group apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server master01 10.0.0.111:6443 check
    server master02 10.0.0.112:6443 check
    server master03 10.0.0.113:6443 check
    ### the machines in the backend group
EOF

Note: the configuration is identical on all three servers.

Key setting:

frontend apiserver
    bind *:7443

The VIP listens on 7443; any port other than the apiserver's default 6443 (already in use on these hosts) will do. This address and port are used when initializing the k8s cluster.

4. Start

systemctl start keepalived

systemctl start haproxy

haproxy will log errors at startup about 6443 being unreachable; this is expected, because kube-apiserver has not started yet.

5. Enable at boot

systemctl enable keepalived

systemctl enable haproxy

V. Initialize the cluster

1. Generate the default init file

Run this on the server configured as MASTER in keepalived.

kubeadm config print init-defaults > init-k8s.yaml

2. Edit the init file

Change four settings (a combined sketch of the resulting file follows step 2.4).

2.1 Set the local IP

localAPIEndpoint:
    advertiseAddress: 10.0.0.111

2.2 Add a line with the keepalived virtual IP

clusterName: kubernetes
controlPlaneEndpoint: 10.0.0.200:7443

2.3 Pull images from the Aliyun registry

imageRepository: registry.aliyuncs.com/google_containers

2.4 Set the service subnet and add the pod subnet

serviceSubnet: 10.110.0.0/16
podSubnet: 10.120.0.0/16
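Putting the four edits together, the relevant parts of init-k8s.yaml look roughly like this (kubeadm's v1beta2 config format for 1.20; fields not shown keep their generated defaults):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.111
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: 10.0.0.200:7443
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.20.4
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.110.0.0/16
  podSubnet: 10.120.0.0/16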

3. Initialize the cluster

kubeadm init --config init-k8s.yaml --upload-certs

Note the --upload-certs flag: it uploads the control-plane certificates so new masters fetch them automatically when they join.

On success, the output shows how to set up kubectl.

Every master must run the following steps before kubectl can be used:

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

Enable kubectl command auto-completion:

echo "source <(kubectl completion bash)" >> ~/.bashrc

4. Join the remaining masters to the cluster

On master02 and master03, run the control-plane join command printed by kubeadm init.

5. Join the node to the cluster

On node01, run the worker join command printed by kubeadm init. A sketch of both commands follows.
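The actual token, CA cert hash, and certificate key come from your own kubeadm init output; the angle-bracket values below are placeholders:

On each new master:

kubeadm join 10.0.0.200:7443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>

On the worker node:

kubeadm join 10.0.0.200:7443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

If the token or certificate key has expired, kubeadm token create --print-join-command and kubeadm init phase upload-certs --upload-certs will regenerate them.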

6. Allow masters to schedule pods (optional)

kubectl taint nodes master01 node-role.kubernetes.io/master-

Run the same command for master02 and master03 if they should also schedule pods.

VI. Install the network plugin

Perform this section on master01.

1. Calico

1.1 Download the yaml file

wget https://docs.projectcalico.org/manifests/calico.yaml

1.2 Edit the file

Set the pod network to match the podSubnet used at initialization (--pod-network-cidr=10.120.0.0/16); in calico.yaml this is the CALICO_IPV4POOL_CIDR variable.

vi calico.yaml

Adjust the interface auto-detection rule:

- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"

### matches NICs whose names start with ens

1.3 Apply calico

kubectl apply -f calico.yaml
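A quick way to confirm the network is up: the calico pods should reach Running and the nodes should report Ready:

kubectl get pods -n kube-system | grep calico
kubectl get nodes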

1.4 Switch kube-proxy to IPVS

Edit kube-proxy's ConfigMap (on a master) and set the proxy mode, as shown below:

kubectl edit cm kube-proxy -n kube-system
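In the editor, the change is a single field inside the ConfigMap's config.conf data (an empty value means the default, iptables):

mode: "ipvs"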

Restart kube-proxy by deleting its pods (the DaemonSet recreates them):

kubectl get pod -n kube-system|grep proxy

kubectl delete -n kube-system pod kube-proxy-tr5gn
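Deleting by label restarts every kube-proxy pod at once instead of one by one (the DaemonSet's pods carry the k8s-app=kube-proxy label):

kubectl delete pod -n kube-system -l k8s-app=kube-proxy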

Verify:

kubectl logs kube-proxy-cvzb4 -n kube-system

### with IPVS enabled, the logs show the ipvs proxier in use, and pods can ping Service IPs (e.g. the cluster DNS address)

Troubleshooting

kubectl commands fail with:

Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

With more than one master, at least two masters must be up at the same time (the three-member etcd cluster needs a quorum); otherwise kubectl calls fail with this error.

Errors while a master joins the cluster:

Message from syslogd@master01 at Sep 1 16:40:36 ...

haproxy[17095]:backend apiserver has no server available!

Broadcast message from systemd-journald@master01 (Wed 2021-09-01 16:40:36 CST):

During the join, the apiserver is restarted, so port 6443 briefly opens and closes; haproxy detects the flap and reports that the backend has no server available. This can be ignored.
