
Deploying a Highly Available Kubernetes 1.18 Cluster on CentOS 7 with kubeadm

折戟杀


Environment (all machines run CentOS 7.6)

IP              Role
172.16.76.201   master1 + keepalived + haproxy (VIP: 172.16.76.200)
172.16.76.202   master2 + keepalived + haproxy (VIP: 172.16.76.200)
172.16.76.203   master3 + keepalived + haproxy (VIP: 172.16.76.200)
172.16.76.204   node1

Kubernetes is made up of the following core components:

- etcd stores the state of the entire cluster;
- the apiserver is the single entry point for operating on resources, providing authentication, authorization, access control, API registration, and discovery;
- the controller manager maintains cluster state, handling fault detection, automatic scaling, rolling updates, and so on;
- the scheduler handles resource scheduling, placing Pods onto appropriate machines according to the configured scheduling policy;
- the kubelet maintains the container lifecycle and manages volumes (CSI) and networking (CNI);
- the container runtime manages images and actually runs Pods and containers (CRI);
- kube-proxy provides in-cluster service discovery and load balancing for Services.

Besides the core components, there are also some recommended add-ons:

- kube-dns provides DNS service for the whole cluster
- Ingress Controller provides external access for services
- Heapster provides resource monitoring
- Dashboard provides a GUI
- Federation provides clusters spanning availability zones
- Fluentd-elasticsearch provides cluster log collection, storage, and querying
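For orientation, once the cluster built below is running, most of these components appear as pods in the kube-system namespace (the control-plane ones as static pods managed by the kubelet, kube-proxy as a DaemonSet), so you can inspect them directly from a master:

# list the core components (run on a master after the cluster is up)
kubectl get pods -n kube-system -o wide | grep -E "etcd|kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy"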

Environment notes

This article builds the Kubernetes cluster from three master nodes and one worker node, with haproxy + keepalived running on the masters to load-balance the apiserver. That keeps the masters, and therefore the whole cluster, highly available. kubeadm officially requires every machine to have at least 2 CPUs and 2 GB of RAM and to run a supported OS (CentOS 7.6 here).
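A quick sanity check that each machine meets that floor (kubeadm's preflight checks will refuse to proceed otherwise):

nproc     # should print 2 or more
free -h   # total memory should be at least 2G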

Basic setup

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
Set the hostname
hostnamectl set-hostname <hostname>   # run on each node with its own name: master1, master2, master3, node1
Edit the hosts file (all nodes)

vim /etc/hosts

172.16.76.201 master1
172.16.76.202 master2
172.16.76.203 master3
172.16.76.204 node1
172.16.76.200 kubernetes.haproxy.com
Enable forwarding
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Configure time synchronization
yum install chrony -y
# point chrony at an NTP server by adding this line to /etc/chrony.conf:
server cn.ntp.org.cn iburst
systemctl restart chronyd; systemctl enable chronyd
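To confirm time is actually syncing (clock skew between masters can break TLS certificate validation and etcd), query chrony:

chronyc sources -v   # the currently selected server is marked with ^*
chronyc tracking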
Install keepalived

Deployed on all master nodes. keepalived's job is to provide the VIP for haproxy and to arbitrate master/backup among the three haproxy instances, limiting the impact on the service when one haproxy fails. The VIP address fronts master1, master2, and master3.

yum install keepalived -y
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy" # probe by process name: exits 0 if haproxy is alive
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER        # change to BACKUP on the standby servers
    interface ens33     # change to your own network interface
    virtual_router_id 51
    priority 250        # lower on the standbys; note that with weight -2 a failed
                        # check only subtracts 2, so priorities must sit within 2
                        # of each other (e.g. 250/249/248) for a haproxy failure
                        # to actually move the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        172.16.76.200   # the VIP; set to your environment's address
    }
    track_script {
        check_haproxy
    }
}
EOF
Start and verify
# start
systemctl start keepalived.service && systemctl enable keepalived.service
# check status
systemctl status keepalived.service
# check the VIP
ip address show ens33
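Once keepalived runs on all three masters, a minimal failover test: stopping keepalived on the VIP holder forces a VRRP re-election, so the VIP should reappear on another master within a few seconds.

# on the node currently holding the VIP
systemctl stop keepalived
# on the other masters, watch the VIP arrive
ip address show ens33 | grep 172.16.76.200
# restore the stopped instance afterwards
systemctl start keepalived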

Set up haproxy

Deployed on all master nodes. haproxy reverse-proxies the apiserver, forwarding all requests round-robin to every master node. Compared with a keepalived-only active/standby setup where a single master carries all the traffic, this is more balanced and more robust.

Install haproxy

yum install -y haproxy

System configuration

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
# apply immediately
sysctl -p
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master1 172.16.76.201:6443 check
    server  master2 172.16.76.202:6443 check
    server  master3 172.16.76.203:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
# start
systemctl start haproxy.service && systemctl enable haproxy.service
# check status
systemctl status haproxy.service
# check the ports
ss -lnt | grep -E "16443|1080"
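Two quick checks of the proxy itself: the stats page answers immediately, while the apiserver path only works after kubeadm init below has run on at least one master.

# stats page (credentials from the config above)
curl -s -u admin:awesomePassword -o /dev/null -w "%{http_code}\n" http://172.16.76.201:1080/admin?stats
# after kubeadm init: the apiserver answering through the VIP
curl -k https://kubernetes.haproxy.com:16443/version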

Build the kubernetes cluster

Basic setup: configure the Aliyun repos (all nodes)

# standard Aliyun mirror URLs
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
Install the docker environment
# install a pinned version
yum -y install docker-ce-18.09.9-3.el7
# or list the available versions first
yum list docker-ce --showduplicates | sort -r
# configure daemon.json; add your private registry address to "insecure-registries" if you use one
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker",
  "insecure-registries": [],
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "mtu": 1450
}
EOF
# start docker
systemctl enable docker
systemctl start docker
systemctl status docker
Install the kubernetes components (all machines): configure the kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet-1.18.0, kubeadm-1.18.0, and kubectl-1.18.0
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
Make sure the kubelet cgroup driver matches docker's. The daemon.json above already sets docker to systemd, and kubeadm 1.18 configures kubelet to match docker's driver automatically; only if your docker is left on cgroupfs would you flip kubelet over as well:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
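To check what each side actually uses: docker reports its driver directly, and once kubeadm has initialized or joined the node, kubelet's driver is recorded in /var/lib/kubelet/config.yaml.

docker info 2>/dev/null | grep -i "cgroup driver"
grep cgroupDriver /var/lib/kubelet/config.yaml   # present after kubeadm init/join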
Pitfall: a kubelet/docker cgroup-driver mismatch is the usual cause of "cgroupfs" errors during init.

Load the ipvs modules
# if kube-proxy will run in ipvs mode, load the ipvs modules
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
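modprobe does not survive a reboot. One way to make the modules persist on a systemd host such as CentOS 7 is a modules-load.d entry:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# confirm they are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4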
Configure kernel parameters
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
# after changing kernel parameters, reboot the server; otherwise the init step later can fail

Initialize k8s

Run the following on the node that currently holds the VIP

vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.0
controlPlaneEndpoint: "kubernetes.haproxy.com:16443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "20.20.0.0/16"
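Optionally pre-pull the control-plane images with the same config file, so the init step itself is faster and any registry problem surfaces early:

kubeadm config images pull --config kubeadm-config.yaml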
kubeadm init --config=kubeadm-config.yaml  
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join kubernetes.haproxy.com:16443 --token eygkav.muh57hcy5ah1a1sw \
    --discovery-token-ca-cert-hash sha256:b34b0dbdda5c1427e2832f70a6f1b93d9c63c30e6f4b2771d84c00a337a75629 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubernetes.haproxy.com:16443 --token eygkav.muh57hcy5ah1a1sw \
    --discovery-token-ca-cert-hash sha256:b34b0dbdda5c1427e2832f70a6f1b93d9c63c30e6f4b2771d84c00a337a75629
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install the network plugin

Download the flannel network plugin manifest directly

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
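One caveat: the stock kube-flannel.yml sets the flannel Network to 10.244.0.0/16, while the kubeadm config above uses podSubnet 20.20.0.0/16. Edit the manifest so the two match, then apply again:

sed -i 's#10.244.0.0/16#20.20.0.0/16#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml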
kubectl get pod -n kube-system -w
kubectl get nodes
Join the other control-plane nodes to the cluster

Copy the certificates and related files to the other two master nodes
# on master2 and master3:
mkdir /etc/kubernetes/pki/etcd/ -p
# on master1:
scp /etc/kubernetes/admin.conf master2:/etc/kubernetes/
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf master3:/etc/kubernetes/
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* master3:/etc/kubernetes/pki/etcd/
Control-plane node 2
kubeadm join kubernetes.haproxy.com:16443 --token eygkav.muh57hcy5ah1a1sw \
    --discovery-token-ca-cert-hash sha256:b34b0dbdda5c1427e2832f70a6f1b93d9c63c30e6f4b2771d84c00a337a75629 \
    --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Control-plane node 3
kubeadm join kubernetes.haproxy.com:16443 --token eygkav.muh57hcy5ah1a1sw \
    --discovery-token-ca-cert-hash sha256:b34b0dbdda5c1427e2832f70a6f1b93d9c63c30e6f4b2771d84c00a337a75629 \
    --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Special case: if the join token is lost, regenerate it as follows
# 1. generate a new join token on a master
kubeadm token create --print-join-command
# 2. re-upload the control-plane certificates and print the certificate key used by new masters
#    (on 1.18 the flag is --upload-certs; --experimental-upload-certs is from pre-1.15 releases)
kubeadm init phase upload-certs --upload-certs
# 3. to add a new master, append the certificate part to the join command:
#    --control-plane --certificate-key <key from step 2>
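Putting the three steps together, a small sketch for 1.18 (the certificate key is the last line that upload-certs prints):

JOIN_CMD=$(kubeadm token create --print-join-command)
CERT_KEY=$(kubeadm init phase upload-certs --upload-certs | tail -n 1)
# run the printed command on the new master
echo "$JOIN_CMD --control-plane --certificate-key $CERT_KEY"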
Verify the master cluster (run on master1)
kubectl get pod -n kube-system -w
kubectl get nodes
Join worker node node1 to the cluster
kubeadm join kubernetes.haproxy.com:16443 --token tbszga.jmx2rrkenzv1qqa2 \
    --discovery-token-ca-cert-hash sha256:13cb23be6e3e058e3f0cf07dddec365e91701d7a9b78ada177ebd6b2d09f56af
The whole cluster is now up; check the result (run on any master)
kubectl get pods --all-namespaces
kubectl get nodes

Add a dashboard to the cluster

# Kuboard manifest (URL assumed from Kuboard's standard install path)
wget https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f kuboard.yaml
kubectl get po -n kube-system
# get a dashboard token: create a service account and bind it to the cluster-admin role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
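To reach the dashboard, check how the Kuboard service is exposed; its stock manifest uses a NodePort (32567 in the versions I have seen, so treat that as an assumption to verify), then log in with the token printed above:

kubectl get svc -n kube-system | grep -i kuboard
# browse to http://<any-node-ip>:<nodeport> and paste the token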
Log in and verify.
