
Deploying a k8s cluster with kubeadm

fana 163


Introduction to kubeadm

kubeadm is an official Kubernetes tool for quickly bootstrapping and initializing a Kubernetes cluster.

kubeadm covers the following main areas:

Initialization: kubeadm init initializes a Kubernetes control plane. It installs the necessary components and configuration on the node where it runs and records the cluster's bootstrap information in the Kubernetes configuration files.
Joining: kubeadm join adds a new node to an existing cluster. It generates the certificates and keys the node needs and records the node's information in the Kubernetes configuration.
Components: kubeadm also manages several core cluster components, such as kubelet and kube-proxy, installing them and making sure they run correctly.
Configuration: kubeadm lets administrators apply basic cluster-wide settings, such as the network plugin and authorization policy. These settings are stored in the Kubernetes configuration and applied to the whole cluster.
Reset: kubeadm reset makes a best effort to undo the changes that init or join made to the host.

# Common commands
kubeadm init      # Bootstrap a Kubernetes control-plane node.
kubeadm join      # Bootstrap a Kubernetes worker node and connect it to the cluster.
kubeadm upgrade   # Upgrade a Kubernetes cluster to a newer version.
kubeadm config    # For clusters initialized with kubeadm v1.7.x or lower, prepare them for kubeadm upgrade.
kubeadm token     # Manage the tokens used by kubeadm join.
kubeadm reset     # Revert all changes made to this host by kubeadm init or kubeadm join.
kubeadm version   # Print the kubeadm version.
kubeadm alpha     # Preview features made available to gather community feedback.
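Before running kubeadm init on a real host, it can help to inspect the defaults kubeadm would use and to pre-pull the control-plane images. The sketch below is not from the original article; it only uses standard kubeadm subcommands and reuses the Aliyun image repository chosen later in this guide.

# Sketch: inspect kubeadm defaults and pre-pull control-plane images
# (assumes kubeadm and a running container runtime are already installed)
kubeadm config print init-defaults > kubeadm-config.yaml   # edit the fields you need, then: kubeadm init --config kubeadm-config.yaml
kubeadm config images list                                 # images the current kubeadm version deploys
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers   # pre-pull so init does not stall on downloads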
Environment preparation
# Machine plan: AlmaLinux 8.6
192.168.157.21 - master
192.168.157.22 - master
192.168.157.23 - master
192.168.157.24 - node
192.168.157.25 - node
Upgrade the kernel
# 1. Add the elrepo repository
dnf -y install

# 2. List available kernel versions
dnf --disablerepo="*" --enablerepo="elrepo-kernel" list available | grep kernel-lt

# 3. Install the long-term-support kernel
dnf --enablerepo=elrepo-kernel install kernel-lt -y
## If yum/dnf cannot reach the repository, download the kernel RPMs and install them locally.

# 4. List all installed kernels
grubby --info=ALL | grep ^kernel

# 5. Show the default kernel
grubby --default-kernel

# 6. Set the default kernel
grubby --set-default "/boot/vmlinuz-5.4.225-1.el8.elrepo.x86_64"

# 7. Reboot
reboot

# 8. Verify the running kernel
uname -r
System configuration
# 1. Stop and disable the firewall
systemctl disable --now firewalld

# 2. Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# 3. Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

# 4. Time synchronization
yum -y install chrony

vim /etc/chrony.conf
server ntp.aliyun.com iburst

systemctl enable --now chronyd            # start time synchronization
timedatectl set-ntp true
timedatectl set-timezone Asia/Shanghai    # set the timezone to Shanghai

# 5. Install common tools
yum -y install \
  net-tools nmap-ncat sysstat \
  unzip lrzsz lsof vim telnet \
  git wget ipset ipvsadm \
  bash-completion

# 6. Set the hostname (use the matching name on each host: alma-21 ... alma-25)
hostnamectl set-hostname alma-21

# Add the hosts entries
cat <<EOF>> /etc/hosts
192.168.157.21 alma-21
192.168.157.22 alma-22
192.168.157.23 alma-23
EOF

# 7. Load the ipvs modules

# Load on boot
cat <<EOF> /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
EOF

# Load them now
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack br_netfilter; do
  modprobe $mod
done

# 8. Kernel parameters
tee /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
# make bridged traffic traverse the iptables/ip6tables/arptables rules
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches=89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
# avoid swap; it is only used when the system is out of memory
vm.swappiness=0
# ipvs tuning
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
EOF

sysctl --system

# Tune log handling to reduce disk IO
sed -ri 's/^\$ModLoad imjournal/#&/' /etc/rsyslog.conf
sed -ri 's/^\$IMJournalStateFile/#&/' /etc/rsyslog.conf

sed -ri 's/^#(DefaultLimitCORE)=/\1=100000/' /etc/systemd/system.conf
sed -ri 's/^#(DefaultLimitNOFILE)=/\1=100000/' /etc/systemd/system.conf

# 9. File descriptor limits
cat <<EOF>> /etc/security/limits.conf
*    hard    nproc      1048576
*    soft    nofile     1048576
*    hard    nofile     1048576
*    soft    memlock    unlimited
*    hard    memlock    unlimited
EOF

# 10. Configure yum repositories
yum install -y yum-utils

## Docker repository
yum-config-manager --add-repo

## Kubernetes repository
cat <<EOF >> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = kubernetes
baseurl =
enabled = 1
gpgcheck = 1
gpgkey =
EOF

## AlmaLinux mirrors (Aliyun):
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  -e 's|^# baseurl=; \
  -i.bak /etc/yum.repos.d/almalinux*.repo

dnf makecache
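A quick way to confirm the configuration above actually took effect (a minimal check sketch, not part of the original article):

# Verify kernel modules, sysctl values, swap, SELinux and time sync
lsmod | grep -E 'ip_vs|nf_conntrack|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
swapon --show        # prints nothing when swap is fully disabled
getenforce           # should report Permissive or Disabled
chronyc sources -v   # time sources reachable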
# SSH tuning and passwordless login
sed -ri 's/^#(UseDNS )yes/\1no/' /etc/ssh/sshd_config

sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config

cat <<EOF>> /etc/profile
export TMOUT=900
EOF

source /etc/profile

# Generate a key pair for passwordless login
ssh-keygen -t rsa

# Copy the key in a loop: create a text file with "ip port password" on each line
cat <<EOF> ip.txt
192.168.157.21 22 fana
192.168.157.22 22 fana
192.168.157.23 22 fana
192.168.157.24 22 fana
192.168.157.25 22 fana
EOF

# while loop to copy the public key to every host (requires sshpass)
cat ip.txt | while read ip port password; do sshpass -p $password ssh-copy-id -p $port root@$ip; done
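To make sure the key was copied everywhere, a short loop over the same ip.txt can confirm passwordless login before moving on (a hypothetical check, not in the original article):

# Verify passwordless SSH to every host listed in ip.txt
while read ip port _; do
  ssh -p "$port" -o BatchMode=yes root@"$ip" hostname \
    || echo "passwordless login to $ip failed"
done < ip.txt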
Deploy components
# 1. Install the packages
yum install containerd.io kubeadm kubectl kubelet -y

# 2. Configure containerd
containerd config default > /etc/containerd/config.toml

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# If the pause image cannot be pulled, switch it to the Aliyun mirror
# sandbox_image = "registry.k8s.io/pause:3.6"
sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#g' /etc/containerd/config.toml

# 3. Start containerd
systemctl enable --now containerd

# 4. Enable kubelet
systemctl enable --now kubelet
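To confirm the CRI endpoint is usable before kubeadm init, you can point crictl (installed alongside kubeadm via the cri-tools package) at the containerd socket. This is a minimal sketch under that assumption, not part of the original article:

# Point crictl at containerd and verify the runtime responds
cat <<EOF> /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF

crictl info | grep -i cgroup                      # should show the systemd cgroup driver in use
grep sandbox_image /etc/containerd/config.toml    # confirm the pause image override
systemctl is-active containerd kubelet            # kubelet restarts in a loop until init/join runs; that is expected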
Configure the load balancer
# 1. haproxy
yum install -y haproxy

# 2. Configuration file
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     10000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode        http
    log         global
    option      httplog
    option      dontlognull
    option      http-server-close
    option      redispatch
    retries     3
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

listen  admin_stats
    bind 0.0.0.0:9090
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen  k8s-apiserver
    bind *:8443
    mode tcp
    timeout client 1h
    timeout connect 1h
    log global
    option tcplog
    balance roundrobin
    server alma-21 192.168.157.21:6443 check
    server alma-22 192.168.157.22:6443 check
    server alma-23 192.168.157.23:6443 check
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
EOF

# 3. Start haproxy
systemctl enable --now haproxy
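A couple of quick checks confirm haproxy is up before layering keepalived on top (a sketch, not from the original article; the backends stay DOWN until the kube-apiservers exist):

# Verify haproxy is listening and the stats page answers
ss -lntp | grep -E ':(8443|9090)'
curl -s -u admin:123456 http://127.0.0.1:9090/status | head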
# 1. keepalived
yum install -y keepalived

# 2. Configuration
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id Haproxy
}

vrrp_script chk_haproxy {
    script "nc -nvz -w 2 127.0.0.1 8443"
    interval 1
    timeout 3
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 110
    priority 100
    advert_int 1
    nopreempt
    mcast_src_ip 192.168.157.21
    unicast_peer {
        192.168.157.22
        192.168.157.23
    }
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.157.110/24
    }
    track_script {
        chk_haproxy
    }
}
EOF

# 3. Start keepalived
systemctl enable --now keepalived
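On the MASTER you can check that the VIP is bound and, optionally, simulate a failover by stopping haproxy so the chk_haproxy script fails (a rough sketch, not part of the original article):

# VIP bound on the current MASTER and reachable
ip addr show ens160 | grep 192.168.157.110
nc -nvz -w 2 192.168.157.110 8443

# Optional failover test: stop haproxy, wait for keepalived to demote this node, then restore
systemctl stop haproxy
sleep 5
ip addr show ens160 | grep 192.168.157.110 || echo "VIP has moved to a peer"
systemctl start haproxy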
Initialize the cluster
#1. Run kubeadm init on the first master to initialize the cluster
## The cluster is deployed at whatever version of kubelet yum installed
kubelet --version
kubeadm init \
  --kubernetes-version=v1.25.4 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=192.168.157.110:8443 \
  --service-cidr=172.16.0.0/16 \
  --pod-network-cidr=10.240.0.0/12 \
  --cri-socket=/run/containerd/containerd.sock \
  --upload-certs \
  --v=5

## Reference init parameters (single-node variant, advertising the apiserver address directly):
kubeadm init \
  --kubernetes-version=v1.26.0 \
  --apiserver-advertise-address=192.168.10.10 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --service-cidr=172.16.0.0/16 \
  --pod-network-cidr=10.240.0.0/12 \
  --cri-socket=unix:///run/containerd/containerd.sock \
  --upload-certs \
  --v=5

## Output like the following means the initialization succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.157.110:8443 --token dhlgh2.mm87d78wyqtcxggr \
        --discovery-token-ca-cert-hash sha256:7310d4ce77046a73b4bbc6e1799e8560b59b1147e7c271f84f3ebfcb9f2fe3b6 \
        --control-plane --certificate-key 9a04e12c57999c7606c2ab560408fcf234a691caad13d8c786e80ecde379d9bd

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.157.110:8443 --token dhlgh2.mm87d78wyqtcxggr \
        --discovery-token-ca-cert-hash sha256:7310d4ce77046a73b4bbc6e1799e8560b59b1147e7c271f84f3ebfcb9f2fe3b6

# If init fails, reset the host and try again
kubeadm reset -f

#2. Copy the kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#3. Following the printed hint, join the other master nodes
kubeadm join 192.168.157.110:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:cc1404b57b51c5669ee5519cd4b7057a4f32668c93262b6438ff81767ccb3e3a \
  --control-plane --certificate-key 0bc421206b1412049d6543540522c2572239a4fe0b0a78455135fc3cd90b970b

## After the other masters have joined, update the etcd manifest so --initial-cluster lists all three etcd members
vim /etc/kubernetes/manifests/etcd.yaml
--initial-cluster=alma-21=

## Update the kube-apiserver manifest so --etcd-servers includes the other two etcd endpoints as well
vim /etc/kubernetes/manifests/kube-apiserver.yaml
--etcd-servers=

#4. Join the worker nodes
kubeadm join 192.168.157.110:8443 --token yc4ns0.fv61quv3rsm6qe7m \
  --discovery-token-ca-cert-hash sha256:cc1404b57b51c5669ee5519cd4b7057a4f32668c93262b6438ff81767ccb3e3a
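Once the control plane is up and the other masters have joined, a few standard kubectl commands confirm the cluster is healthy (a minimal sketch, not part of the original walkthrough):

# Post-init health checks, run on a master with the kubeconfig in place
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide
kubectl cluster-info                                # apiserver reachable through 192.168.157.110:8443
kubectl -n kube-system get pods -l component=etcd   # one etcd pod per control-plane node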
Calico plugin
#1. Download the manifest
curl  -o calico.yaml

#2. Set the pod CIDR
vim calico.yaml
...
- name: CALICO_IPV4POOL_CIDR
  value: "10.240.0.0/12"

#3. Deploy
kubectl apply -f calico.yaml

#4. Check the cluster status
kubectl get node
kubectl get pod -A
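After applying the manifest you can confirm Calico is running on every host and that pod networking and cluster DNS work end to end (a hypothetical check; the test pod name is an example):

# Calico pods and node readiness
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl get nodes

# End-to-end test: a throwaway pod resolving the in-cluster API service
kubectl run net-test --image=busybox --restart=Never --rm -it -- nslookup kubernetes.default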
#1. Allow the master nodes to be scheduled
# The taint effect supports three options:
#   NoSchedule:       Kubernetes will not schedule Pods onto a node carrying this taint
#   PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto the node
#   NoExecute:        Kubernetes will not schedule Pods onto the node and will evict Pods already running on it

#2. Show the master and node labels
kubectl get node --show-labels
NAME      STATUS   ROLES           AGE   VERSION   LABELS
alma-23   Ready    control-plane   33h   v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=alma-23,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
alma-24   Ready    <none>          31h   v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=alma-24,kubernetes.io/os=linux

#3. Show the taints
kubectl describe node alma-21 | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule

#4. Remove the taint so the master can schedule Pods
kubectl taint nodes alma-21 node-role.kubernetes.io/control-plane-

# Remove it from all nodes at once
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
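If you later want a master to go back to control-plane-only duty, the taint can be re-added just as easily (a sketch, not in the original article; the node name is an example):

# Re-add the control-plane taint and verify it
kubectl taint nodes alma-21 node-role.kubernetes.io/control-plane=:NoSchedule
kubectl describe node alma-21 | grep Taints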
Add nodes
#1. Add a worker node
kubeadm token create --print-join-command   # generate a new token on a master node

# On the new worker: install containerd.io, kubeadm and kubelet, then run
kubeadm join 192.168.157.110:8443 \
  --token 2674k1.sh6px9stju1537vr \
  --discovery-token-ca-cert-hash sha256:cc1404b57b51c5669ee5519cd4b7057a4f32668c93262b6438ff81767ccb3e3a

#2. Add a master node
kubeadm token create --print-join-command          # generate a new token
kubeadm init phase upload-certs --upload-certs     # generate the certificate key for the new master

###### Output looks like ######
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
19e2daa5a0090ecfd210d97925479e010a29a04196fee577416c19fa0c6d1bdd

# Run on the new master
kubeadm join 192.168.157.110:8443 \
  --token 2674k1.sh6px9stju1537vr \
  --discovery-token-ca-cert-hash sha256:cc1404b57b51c5669ee5519cd4b7057a4f32668c93262b6438ff81767ccb3e3a \
  --control-plane --certificate-key 19e2daa5a0090ecfd210d97925479e010a29a04196fee577416c19fa0c6d1bdd
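After a join completes it is worth confirming that the node registered and, optionally, giving it a worker role label (a sketch, not from the original; alma-25 is the hostname assumed for 192.168.157.25):

# Confirm the new node and label it as a worker
kubectl get nodes -o wide
kubectl label node alma-25 node-role.kubernetes.io/worker=            # shows "worker" in the ROLES column
kubectl -n kube-system get pods -o wide --field-selector spec.nodeName=alma-25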
# Other commands
kubeadm init phase certs apiserver                        # regenerate the apiserver certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text  # inspect the newly generated certificate

# Migrate kubeadm-config.yaml to the current configuration version
kubeadm config migrate --old-config kubeadm-config.yaml --new-config kubeadm-config-migrate.yaml

# When the token has expired, generate a new join command for worker nodes
kubeadm token create --print-join-command    # add --ttl=0 to make the token never expire

# Check whether the token has expired
kubectl get configmap cluster-info --namespace=kube-public -o yaml
apiVersion: v1
data:
  jws-kubeconfig-gdtrlm: eyJhbGciOiJIUzI1NiIsImtpZCI6ImdkdHJsbSJ9..IK_YSH-y_oSd1jdW8kPOl1Pe-5AQeJ7aTptqDEjWNq0
  kubeconfig: |
      apiVersion: v1
## If the token has expired, the jws-kubeconfig-gdtrlm entry is no longer shown

# Tear down a node
kubeadm reset && rm -rf /etc/cni/net.d && ipvsadm --clear && rm -rf $HOME/.kube && rm -rf /etc/kubernetes/*

# Check which mode kube-proxy is using
curl 127.0.0.1:10249/proxyMode
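Related housekeeping the article does not show: kubeadm can also report and renew the control-plane certificates (standard kubeadm subcommands on 1.20+, run on a control-plane node; a sketch, not from the original):

# Check certificate expiry and renew when needed
kubeadm certs check-expiration
kubeadm certs renew all      # restart the static control-plane pods (or kubelet) afterwards so they pick up the new certs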
