
Deploying a Highly Available Kubernetes Cluster from Binaries


Preface:


For a single-node kubeadm deployment, see: Kubeadm single-node deployment of kubernetes v1.23.

This deployment uses kubeasz, an open-source project on github, and installs everything from binaries. Supported systems: CentOS/RedHat 7, Debian 9/10, Ubuntu 1604/1804/2004.

Note 1: Make sure all nodes use the same time zone and have synchronized clocks; if your environment does not provide NTP time synchronization, set it up first.
Note 2: Start from a clean system; never reuse an environment where kubeadm or another k8s distribution was once installed.
Note 3: Upgrade the operating system to a recent stable kernel; read the kernel upgrade document in the kubeasz docs.
Note 4: Set up passwordless SSH between all nodes.
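For example, the time zone and clock status from Note 1 can be checked on each node with timedatectl:

# confirm time zone and clock synchronization status on a node
timedatectl status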

Architecture diagram

1. Cluster System Environment

root@k8s-master:~# cat /etc/issue
Ubuntu 20.04.4 LTS \n \l

- docker: 20.10.9
- k8s: v1.23.1
2. IP and Role Planning

Below is the IP and role plan for this virtual machine cluster. Because resources are limited, some nodes host multiple roles. With sufficient resources, run an odd number of master nodes (three or more) and three etcd nodes, and consider a dedicated virtual machine per service. This article uses the following environment:

IP             HostName              Role              VIP
172.31.7.2     k8s-master.host.com   master/etcd/HA1   172.31.7.188
172.31.7.3     k8s-node1.host.com    node
172.31.7.4     k8s-node2.host.com    node
172.31.7.5     k8s-node3.host.com    node
172.31.7.252   harbor1.host.com      harbor/HA2
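Since the nodes are addressed by hostname as well as by IP, name resolution must work across the cluster. A minimal /etc/hosts for every node, assuming no internal DNS server, might look like this (a hypothetical sketch following the plan above; adjust to your environment):

# /etc/hosts entries for the cluster (skip if you run an internal DNS)
172.31.7.2    k8s-master.host.com
172.31.7.3    k8s-node1.host.com
172.31.7.4    k8s-node2.host.com
172.31.7.5    k8s-node3.host.com
172.31.7.252  harbor1.host.com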

3. System Initialization and Global Variables

3.1 Set hostnames (abbreviated here)

~# hostnamectl set-hostname k8s-master.host.com    # on the other nodes, set their respective hostnames

3.2 Configure IP settings on Ubuntu (one node shown as an example)

root@k8s-master:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: no
      addresses: [172.31.7.2/16]        # ip
      gateway4: 172.31.7.254
      nameservers:
        addresses: [114.114.114.114]    # dns
  version: 2
  renderer: networkd

# after editing, apply the new network config
root@k8s-master:~# netplan apply

3.3 Set the system time zone and clock synchronization

timedatectl set-timezone Asia/Shanghai

root@k8s-master:/etc/default# cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8

# sync time from NTP every 5 minutes and write it to the hardware clock
root@k8s-master:~# cat /var/spool/cron/crontabs/root
*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w

3.4 Kernel parameter tuning

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
modprobe ip_conntrack
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf
reboot

# take a snapshot of each node after the reboot
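After the reboot it is worth confirming that the settings survived; for example:

# spot-check a few of the kernel parameters after reboot
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness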

3.5 Passwordless SSH

root@k8s-master:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:MQhTEGzxTr9rnP412bdROTZVgW6ZfnUiU4b6b6VIzok root@k8s-master1-etcd1.host.com
The key's randomart image is:
+---[RSA 3072]----+
|   .*=.      ...o|
|    o+ .    ..o .|
|   .  + o  ..oo .|
|     o . o. o=. =|
|      . S  .oo *+|
|         .  o+..=|
|       ... =++o+.|
|        +.E.=.+.o|
|       oo..  . . |
+----[SHA256]-----+

# $IPs stands for the addresses of all nodes, including this one; answer yes and enter the root password when prompted
root@k8s-master:~# ssh-copy-id $IPs
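ssh-copy-id accepts one host at a time, so with the node plan above the key can be distributed in a loop (a sketch; adjust the IP list to your environment):

# push the public key to all nodes, including this one
for ip in 172.31.7.2 172.31.7.3 172.31.7.4 172.31.7.5 172.31.7.252; do
    ssh-copy-id root@${ip}
done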
4. High-Availability Load Balancer Deployment

On the k8s-master and harbor nodes:

# on k8s-master.host.com and harbor1.host.com
# install keepalived and haproxy
apt install keepalived haproxy -y
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

# keepalived configuration on the master node
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER           # use BACKUP on the other node
    interface ens33        # must match the host's interface name
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51   # must be unique per virtual router; all keepalived nodes of the same virtual router must share it
    priority 100           # use 80 on the other node
    advert_int 1
    authentication {
        auth_type PASS     # pre-shared key authentication; must match on all keepalived nodes of the same virtual router
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.7.188 dev ens33 label ens33:0
        172.31.7.189 dev ens33 label ens33:1
        172.31.7.190 dev ens33 label ens33:2
    }
}

# copy the keepalived config to the other node and adjust it as noted above
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

# restart keepalived
systemctl restart keepalived.service && systemctl enable keepalived
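Whether the VIPs came up can be verified on the MASTER node, for example:

# the VIPs should appear on ens33 with labels ens33:0, ens33:1, ens33:2
ip addr show dev ens33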

The VIP is now active.

Edit the haproxy configuration file

# add a listen section
# cat /etc/haproxy/haproxy.cfg
listen k8s-cluster-01-6443
        bind 172.31.7.188:6443   # listen on port 6443 of the VIP
        mode tcp                 # tcp mode
        # add every master here
        server k8s-master.host.com 172.31.7.2:6443 check inter 3s fall 3 rise 1
        #server k8s-master2.host.com 172.31.7.X:6443 check inter 3s fall 3 rise 1

# restart haproxy
root@k8s-master:~# systemctl restart haproxy.service
root@k8s-master:~# systemctl enable haproxy.service

# copy the haproxy config to the other HA node
scp /etc/haproxy/haproxy.cfg 172.31.7.252:/etc/haproxy/

# add the following to the kernel configuration
net.ipv4.ip_nonlocal_bind = 1    # lets haproxy start even when the VIP is not present on this node

Check the listening IP and port.
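For example, ss shows whether haproxy is bound to the VIP:

# haproxy should be listening on 172.31.7.188:6443
ss -tnlp | grep 6443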

5. Deploy the Local Harbor Registry

See the separate article "Docker registries: deploying harbor".

6. Deploy Kubernetes

6.1 Install ansible on the deployment node (here it is deployed directly on the master node)

apt install ansible -y

# create a python symlink for every node
root@k8s-master:~# ln -s /usr/bin/python3.8 /usr/bin/python
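The symlink is needed on every node that ansible will manage; with passwordless SSH already configured it can be pushed out in a loop (a sketch assuming python3.8 is what is installed on each node):

# create the python symlink on the remaining nodes
for ip in 172.31.7.3 172.31.7.4 172.31.7.5; do
    ssh root@${ip} "ln -s /usr/bin/python3.8 /usr/bin/python"
done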

6.2 Download the project source, binaries, and offline images

Download the ezdown tool script; see the kubeasz documentation for the mapping between kubeasz releases and kubernetes versions.

# example: use kubeasz 3.2.0
export release=3.2.0
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown

# run the tool script
root@k8s-master1:~# ./ezdown --help
./ezdown: illegal option -- -
Usage: ezdown [options] [args]
  option: -{DdekSz}
    -C         stop&clean all local containers
    -D         download all into "/etc/kubeasz"
    -P         download system packages for offline installing
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -d <ver>   set docker-ce version, default "19.03.15"
    -e <ver>   set kubeasz-ext-bin version, default "1.0.0"
    -k <ver>   set kubeasz-k8s-bin version, default "v1.23.1"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -p <ver>   set kubeasz-sys-pkg version, default "0.4.2"
    -z <ver>   set kubeasz version, default "3.2.0"

# ./ezdown -D downloads everything into /etc/kubeasz
root@k8s-master:~# ./ezdown -D
............
60775238382e: Pull complete
528677575c0b: Pull complete
Digest: sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423
Status: Downloaded newer image for easzlab/nfs-subdir-external-provisioner:v4.0.2
docker.io/easzlab/nfs-subdir-external-provisioner:v4.0.2
3.2.0: Pulling from easzlab/kubeasz
Digest: sha256:55910c9a401c32792fa4392347697b5768fcc1fd5a346ee099e48f5ec056a135
Status: Image is up to date for easzlab/kubeasz:3.2.0
docker.io/easzlab/kubeasz:3.2.0
2022-04-14 14:14:57 INFO Action successed: download_all

# once the script succeeds, everything (kubeasz code, binaries, offline images) is placed in /etc/kubeasz
root@k8s-master:~# ll /etc/kubeasz/
total 120
drwxrwxr-x  11 root root  4096 Apr 14 13:32 ./
drwxr-xr-x 101 root root  4096 Apr 14 13:08 ../
-rw-rw-r--   1 root root   301 Jan  5 20:19 .gitignore
-rw-rw-r--   1 root root  6137 Jan  5 20:19 README.md
-rw-rw-r--   1 root root 20304 Jan  5 20:19 ansible.cfg
drwxr-xr-x   3 root root  4096 Apr 14 13:32 bin/
drwxrwxr-x   8 root root  4096 Jan  5 20:28 docs/
drwxr-xr-x   2 root root  4096 Apr 14 14:14 down/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 example/
-rwxrwxr-x   1 root root 24716 Jan  5 20:19 ezctl*
-rwxrwxr-x   1 root root 15350 Jan  5 20:19 ezdown*
drwxrwxr-x  10 root root  4096 Jan  5 20:28 manifests/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 pics/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 playbooks/
drwxrwxr-x  22 root root  4096 Jan  5 20:28 roles/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 tools/

6.3 Create a cluster configuration instance

root@k8s-master:/etc/kubeasz# ./ezctl new k8s-cluster-01
2022-04-14 14:34:07 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-cluster-01
2022-04-14 14:34:07 DEBUG set versions
2022-04-14 14:34:07 DEBUG cluster k8s-cluster-01: files successfully created.
2022-04-14 14:34:07 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-cluster-01/hosts'
2022-04-14 14:34:07 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-cluster-01/config.yml'
root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# vim hosts
root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# vim config.yml

Then, following the prompts, edit /etc/kubeasz/clusters/k8s-cluster-01/hosts and /etc/kubeasz/clusters/k8s-cluster-01/config.yml: adjust the hosts file according to the node plan above, along with the main cluster-level options; other component settings can be changed in config.yml.

6.4 Edit the ansible hosts file

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# cat hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.31.7.2

# master node(s)
[kube_master]
172.31.7.2

# work node(s)
[kube_node]
172.31.7.3
172.31.7.4
172.31.7.5

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
# left commented out here because I already have my own harbor registry
#172.31.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]  # change these to your own VIP address and port
172.31.1.6 LB_ROLE=backup EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443
172.31.1.7 LB_ROLE=master EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
# not needed either; time sync was set up during environment preparation
#172.31.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"    # docker is used in this deployment

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"      # calico is the network plugin here

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"  # service network

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"  # pod network

# NodePort Range (used later when exposing services)
NODE_PORT_RANGE="30000-65525"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

6.5 Edit the ansible config file

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# cat config.yml
############################
# prepare
############################
# optional: install system packages offline or online (offline|online)
INSTALL_SOURCE: "online"

# optional: system security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers [important: clocks must be synchronized across the cluster]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# networks allowed to sync time from the cluster, e.g. "10.0.0.0/8"; the default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# CA certificate validity
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig settings
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.23.1"

############################
# role:etcd
############################
# a separate wal directory avoids disk io contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.]enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd]base container image; you can supply your own pause image -- mine lives in the local harbor registry
SANDBOX_IMAGE: "harbor.host.com/base/pause:3.6"

# [containerd]container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker]container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker]enable the Restful API
ENABLE_REMOTE_API: false

# [docker]trusted HTTP registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# k8s cluster master certificate configuration; multiple ips and domains may be added (e.g. a public ip and domain)
MASTER_CERT_HOSTS:
  - "172.31.7.188"
  - "10.1.1.1"
  - "k8s.test.io"

# pod subnet mask length per node (determines how many pod ips one node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this value to assign a pod subnet to each node
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# Kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# k8s upstream advises against enabling system-reserved casually, unless long-term monitoring
# tells you the system's actual resource usage; reservations should grow with uptime.
# see templates/kubelet-config.yaml.j2 for the values.
# the defaults assume a 4c/8g VM with a minimal install; increase them on high-end physical machines.
# also note apiserver and friends briefly use a lot of resources during installation; reserve at least 1g of memory
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel]flannel backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel]offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico]CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the prerequisites
CALICO_IPV4POOL_IPIP: "Always"

# [calico]host IP used by calico-node; bgp peerings are established over it; set manually or auto-detect
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico]calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico]supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.3"

# [calico]calico major version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico]offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium]number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium]image version
cilium_ver: "v1.4.1"

# [cilium]offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn]OVN DB and OVN Control Plane node, default: the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn]offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router]public clouds have restrictions and generally need ipinip kept on; self-hosted environments may use "subnet"
OVERLAY_TYPE: "full"

# [kube-router]NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router]kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router]kube-router offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# automatic coredns installation
dns_install: "no"
corednsVer: "1.8.6"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# automatic metrics server installation
metricsserver_install: "no"
metricsVer: "v0.5.2"

# automatic dashboard installation
dashboard_install: "no"
dashboardVer: "v2.4.0"
dashboardMetricsScraperVer: "v1.0.7"

# automatic ingress installation
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "10.3.0"

# automatic prometheus installation
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# automatic nfs-provisioner installation
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true
6.6 Deploy the kubernetes cluster

6.6.1 Environment initialization

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 --help
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
          ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 01

6.6.2 Deploy the etcd cluster

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 02

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# export NODE_IPS="172.31.7.2"
root@k8s-master:/etc/kubeasz# for ip in ${NODE_IPS}; do \
    ETCDCTL_API=3 /usr/local/bin/etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    endpoint health; done
# expected result: ... is healthy: successfully committed proposal: took = 4.734365ms
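The same certificate flags work for other etcdctl queries; for example, listing the cluster membership (a sketch for this single-member deployment):

# list etcd members; this deployment has a single member
ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://172.31.7.2:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  member list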

6.6.3 Deploy Docker

For a manual installation, see the Docker-ce installation article.

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 03

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.9
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
..............
 Live Restore Enabled: true
 Product License: Community Engine

6.6.4 Deploy the master

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 04

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get nodes
NAME         STATUS                     ROLES    AGE   VERSION
172.31.7.2   Ready,SchedulingDisabled   master   41s   v1.23.1

6.6.5 Deploy the nodes

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 05

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get nodes
NAME         STATUS                     ROLES    AGE     VERSION
172.31.7.2   Ready,SchedulingDisabled   master   2m25s   v1.23.1
172.31.7.3   Ready                      node     18s     v1.23.1
172.31.7.4   Ready                      node     18s     v1.23.1
172.31.7.5   Ready                      node     18s     v1.23.1

# on a node: each node runs a local kube-lb (nginx) that proxies to the master(s)
root@k8s-node1:~# cat /etc/kube-lb/conf/kube-lb.conf
user root;
worker_processes 1;

error_log  /etc/kube-lb/logs/error.log warn;

events {
    worker_connections  3000;
}

stream {
    upstream backend {
        server 172.31.7.2:6443    max_fails=2 fail_timeout=3s;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
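Each node therefore reaches the apiserver through its local kube-lb proxy on 127.0.0.1:6443. A quick sanity check from any node (assuming anonymous access to /healthz is permitted, which is the default RBAC; otherwise expect a 401):

# the local kube-lb should proxy through to a healthy apiserver
curl -k https://127.0.0.1:6443/healthz
# expected output: ok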

6.6.6 Deploy the calico network service

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 06

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-754966f84c-fmtkl   1/1     Running   0          13m
kube-system   calico-node-gbgnn                          1/1     Running   0          13m
kube-system   calico-node-n6scc                          1/1     Running   0          13m
kube-system   calico-node-tdw75                          1/1     Running   0          13m
kube-system   calico-node-vzw96                          1/1     Running   0          13m

root@k8s-master:/etc/kubeasz# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.31.7.3   | node-to-node mesh | up    | 05:01:55 | Established |
| 172.31.7.4   | node-to-node mesh | up    | 05:03:48 | Established |
| 172.31.7.5   | node-to-node mesh | up    | 05:04:31 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Verify the network

# create several pods
kubectl run net-test1 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000
kubectl run net-test2 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000
kubectl run net-test3 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000

root@k8s-master:/etc/kubeasz# kubectl get pods -owide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          3m32s   10.200.12.1      172.31.7.5   <none>           <none>
net-test2   1/1     Running   0          3m32s   10.200.228.129   172.31.7.3   <none>           <none>
net-test3   1/1     Running   0          3m29s   10.200.111.129   172.31.7.4   <none>           <none>

root@k8s-master:/etc/kubeasz# kubectl exec -it -n default net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# ping 10.200.111.129
PING 10.200.111.129 (10.200.111.129) 56(84) bytes of data.
64 bytes from 10.200.111.129: icmp_seq=1 ttl=62 time=0.402 ms
64 bytes from 10.200.111.129: icmp_seq=2 ttl=62 time=0.452 ms
64 bytes from 10.200.111.129: icmp_seq=3 ttl=62 time=0.457 ms
^C
--- 10.200.111.129 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 0.402/0.437/0.457/0.024 ms
[root@net-test1 /]# ping 10.200.228.129
PING 10.200.228.129 (10.200.228.129) 56(84) bytes of data.
64 bytes from 10.200.228.129: icmp_seq=1 ttl=62 time=0.463 ms
64 bytes from 10.200.228.129: icmp_seq=2 ttl=62 time=0.977 ms
64 bytes from 10.200.228.129: icmp_seq=3 ttl=62 time=0.705 ms
64 bytes from 10.200.228.129: icmp_seq=4 ttl=62 time=0.483 ms
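As the deprecation warning in the output above notes, the current form of kubectl exec separates the command from the pod name with --; for example:

# current kubectl exec syntax (command goes after --)
kubectl exec -it net-test1 -- ping -c 3 10.200.111.129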

Pod-to-pod network connectivity is verified.
