
Best Practices for Quickly Installing and Deploying k8s v1.27

By 掌控K8s


I. Environment Configuration

Demo environment:

Master          192.168.33.151
Node            192.168.33.152
OS version      CentOS 7
Kubernetes      v1.27.1
containerd      1.6.20
1. Basic system configuration

1.1 Set the hostnames
# master node
hostnamectl set-hostname master
# worker node
hostnamectl set-hostname node
1.2 Configure hosts
cat >> /etc/hosts << EOF
192.168.33.151 master
192.168.33.152 node
EOF
1.3 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
1.4 Disable SELinux
getenforce
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
1.5 Disable swap

Swap must be disabled for the kubelet to work properly.

sed -i '/swap/s/^/#/' /etc/fstab
swapoff -a
1.6 Tune file-handle limits

Process-level tuning file:

vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535

System-level tuning:

Add the following parameter to /etc/sysctl.conf:

fs.file-max=65535
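
The limits take effect for new login sessions. A quick check (standard tools; the values should match what was configured above):

sysctl -p | grep file-max    # reload /etc/sysctl.conf and show fs.file-max
ulimit -n                    # per-process open-file limit in a fresh shell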
1.7 Load kernel modules

Forward IPv4 and let iptables see bridged traffic:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
# verify the br_netfilter module
[root@master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                155432  1 br_netfilter

Configure sysctl parameters:

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
user.max_user_namespaces=28633
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system

Verify:

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration by running:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
1.8 Configure time synchronization

Install and configure chrony for time synchronization (recommended).

Server:

# Get the local IP address
IP=`ip addr | grep 'state UP' -A2 | grep inet | egrep -v '(127.0.0.1|inet6|docker)' | awk '{print $2}' | tr -d "addr:" | head -n 1 | cut -d / -f1`
# Install chrony
yum install -y chrony
# Back up the config file
cp /etc/chrony.conf{,.bak}
# Edit the config file
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i "7s|^|server $IP iburst|g" /etc/chrony.conf
echo "allow all" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
systemctl restart chronyd
systemctl enable chronyd
timedatectl set-ntp true
sleep 5
systemctl restart chronyd
chronyc sources

server - specifies a time server to use. With the iburst option, when the server is reachable chronyd sends a burst of eight packets instead of the usual one, at roughly 2-second intervals, which speeds up initial synchronization.
driftfile - records the rate at which the system clock gains or loses time into a file, so the clock can be compensated after restarts.
rtcsync - enables kernel mode, in which the system time is copied to the real-time clock (RTC) every 11 minutes.
allow / deny - allows or denies a host, subnet, or network access to this server.
cmdallow / cmddeny - specifies which hosts may issue control commands through chronyd.
bindcmdaddress - specifies which interfaces chronyd listens on for commands issued by chronyc.
makestep - normally chronyd corrects any time offset gradually by slowing down or speeding up the clock; if the clock has drifted far, that gradual adjustment can take a very long time. This directive forces chronyd to step the system clock when the adjustment exceeds a given threshold.
local stratum 10 - allows the local clock to be served as authoritative time to clients even when the servers listed in the server directives are unreachable.
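
For reference, a minimal sketch of what the server-side /etc/chrony.conf contains after the edits above (driftfile, makestep, and rtcsync come from the stock CentOS 7 file):

server 192.168.33.151 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow all
local stratum 10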

Client:

Only the server setting needs to change: comment out the original server lines, add a line server 192.168.33.151 iburst, then restart the service.

sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i "7s|^|server 192.168.33.151 iburst|g" /etc/chrony.conf
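
Then restart and verify on the client, same as on the server:

systemctl restart chronyd
chronyc sources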
II. Installing containerd

2.1 Download

Download the containerd-<VERSION>-<OS>-<ARCH>.tar.gz archive from https://github.com/containerd/containerd/releases, verify its SHA256 sum, and extract it under /usr/local:

wget https://github.com/containerd/containerd/releases/download/v1.6.20/containerd-1.6.20-linux-amd64.tar.gz

2.2 Install

root@jial:~# tar zxvf containerd-1.6.20-linux-amd64.tar.gz -C /usr/local/
bin/
bin/containerd-shim
bin/containerd-shim-runc-v1
bin/containerd-stress
bin/containerd
bin/ctr
bin/containerd-shim-runc-v2
root@jial:~# ls /usr/local/bin/
containerd  containerd-shim  containerd-shim-runc-v1  containerd-shim-runc-v2  containerd-stress  ctr

2.3 Start via systemd

To start containerd via systemd, download the unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service into the /usr/local/lib/systemd/system/ directory:

wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
mkdir -p /usr/local/lib/systemd/system/
cp containerd.service /usr/local/lib/systemd/system/containerd.service
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd.service

The containerd.service file:

# cat containerd.service 
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
2.4 Generate the default configuration file
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
2.5 Verify
root@jial:~# ctr version
Client:
  Version:  v1.6.20
  Revision: 2806fc1057397dbaeefbea0e4e17bddfbd388f38
  Go version: go1.19.7
Server:
  Version:  v1.6.20
  Revision: 2806fc1057397dbaeefbea0e4e17bddfbd388f38
  UUID: 1aaccbf7-893b-484a-a079-d6017c9c6abf

2.6 Install crictl

Kubernetes manages containers with crictl, not ctr.

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

crictl package download: Releases · kubernetes-sigs/cri-tools (github.com)

VERSION="v1.27.0"wget  tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin

Configure crictl:

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

This generates the following configuration file:

[root@master k8s]# cat /etc/crictl.yaml 
runtime-endpoint: "unix:///var/run/containerd/containerd.sock"
image-endpoint: ""
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false

Verify:

root@jial:~# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.20
RuntimeApiVersion:  v1
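
With crictl configured, a few everyday subcommands for inspecting the runtime (standard crictl CLI; output depends on what is running):

crictl images                # list images known to containerd
crictl pods                  # list pod sandboxes
crictl ps -a                 # list all containers
crictl logs <container-id>   # show a container's logs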
2.7 Install runc

Download the runc.<ARCH> binary from https://github.com/opencontainers/runc/releases, verify its SHA256 sum, and install it as /usr/local/sbin/runc:

wget https://github.com/opencontainers/runc/releases/download/<VERSION>/runc.amd64
wget https://github.com/opencontainers/runc/releases/download/<VERSION>/runc.sha256sum
grep -r $(sha256sum runc.amd64 |awk '{print $1}') runc.sha256sum
install -m 755 runc.amd64 /usr/local/sbin/runc

The binaries are statically built and should work on any Linux distribution.

2.8 Install the CNI plugins

Download the cni-plugins-<OS>-<ARCH>-<VERSION>.tgz archive from https://github.com/containernetworking/plugins/releases, verify its SHA256 sum, and extract it to /opt/cni/bin:

wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin

2.9 Configure an Alibaba Cloud registry mirror (optional)

# Reference: set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
mkdir /etc/containerd/certs.d/docker.io -p
# replace <your-id> with your own Alibaba Cloud accelerator ID
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"

[host."https://<your-id>.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF
systemctl daemon-reload && systemctl restart containerd

III. Configuring the cgroup driver

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with cgroups to manage resources for pods and containers, including setting CPU and memory requests and limits. To do so, the kubelet and the container runtime each use a cgroup driver. The critical point is that the kubelet and the container runtime must use the same cgroup driver with the same configuration.
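
After the sed commands below, the relevant fragment of /etc/containerd/config.toml should look like this (a sketch of the runc options block in containerd 1.6):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true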

# Change SystemdCgroup = false to SystemdCgroup = true
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g' /etc/containerd/config.toml
# Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.9"/g' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
systemctl daemon-reload 
systemctl restart containerd

IV. Installing kubectl, kubelet, and kubeadm

Configure the Alibaba Cloud package mirror.

Kubernetes package downloads and installation guide: Alibaba Cloud open-source mirror site (aliyun.com).

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
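
To pin the exact version used in this guide rather than the newest one in the repo, the packages can also be installed with explicit versions (standard yum version syntax):

yum install -y kubelet-1.27.1 kubeadm-1.27.1 kubectl-1.27.1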

Enable and start the kubelet service:

systemctl enable kubelet && systemctl start kubelet

V. Pulling images locally

If the machines can reach the internet, this step can be skipped and the installation done online directly.

5.1 Pull the images

Use kubeadm config images pull to pull the images to the local node. --image-repository sets the registry to pull from, and --kubernetes-version sets the cluster version.

[root@master containerd]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
W0419 13:29:46.707179    3380 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.27.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.7-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

5.2 Export the images (for offline installation)

# List the image IDs to export
[root@master ~]# nerdctl -n k8s.io image list |grep -v none |awk '{print $3}' |grep -v IMAGE
bf4b62b13166
373a63e92c04
a0ead06651cf
51eae8381dcb
a6daed8429c5
ed43c8f8a78f
958ddb03a4d4
0b942e32d0d3
7031c1b28338
# Export the images
nerdctl -n k8s.io image save -o kubernetes-v1.27.1.tar \
bf4b62b13166 \
373a63e92c04 \
a0ead06651cf \
51eae8381dcb \
a6daed8429c5 \
ed43c8f8a78f \
958ddb03a4d4 \
0b942e32d0d3 \
7031c1b28338 
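
On the offline node, the archive can be imported back with nerdctl's load subcommand (a sketch; it assumes nerdctl is available on that node as well):

nerdctl -n k8s.io image load -i kubernetes-v1.27.1.tar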

VI. Configuring IPVS

Load the IPVS kernel modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# On Ubuntu, use nf_conntrack instead of nf_conntrack_ipv4

Run the script and check that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Next, make sure the ipset package is installed on every node. To inspect IPVS proxy rules conveniently, it is also worth installing the management tool ipvsadm.

yum install -y ipset ipvsadm

# View the rules
ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

If these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration enables IPVS mode.
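
Once the cluster is up, whether kube-proxy really runs in IPVS mode can be confirmed from its logs and the IPVS rule table (a sketch; pod names differ per cluster):

kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system <kube-proxy-pod> | grep -i ipvs
ipvsadm -Ln    # populated virtual servers indicate IPVS is active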

Configure the kubelet:

cat >>  /etc/sysconfig/kubelet << EOF
# KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
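
Alternatively, the proxy mode can be set declaratively by appending a KubeProxyConfiguration document to the kubeadm.yaml generated in the next section (the upstream kubeadm mechanism for configuring kube-proxy):

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs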

VII. Initializing the cluster with kubeadm

Check the kubeadm version:

# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:20:04Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}

Generate a default configuration file:

kubeadm config print init-defaults > kubeadm.yaml

Modify the relevant settings:

[root@master ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.33.151  # set to the master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master  # set to the master node hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # use the Alibaba Cloud image registry
kind: ClusterConfiguration
kubernetesVersion: 1.27.1    # match the kubeadm version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   # set the pod subnet
scheduler: {}
#########################################################################
# Added content: set the kubelet cgroup driver to systemd
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

Initialize the cluster with the configuration file:

kubeadm init --config kubeadm.yaml

Or initialize with command-line flags:

kubeadm init  \
    --apiserver-advertise-address=192.168.33.151 \
    --pod-network-cidr=10.244.0.0/16 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.27.1

After initialization, the cluster prints the following output:

[root@master ~]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0419 14:40:22.988535    8656 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.33.151]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.33.151 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.33.151 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0419 14:40:26.179390    8656 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.506536 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.33.151:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:76c21fa45295c342a0c9987369257fe38f177a4be65f34bda338c83aac0d15dc
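
As instructed by the output above, configure kubectl access on the master before continuing:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config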

Tear down the cluster (only if the deployment fails):

kubeadm reset
VIII. Deploying Flannel

For Kubernetes v1.17+, deploy directly with the yml file below. If you use a custom subnet, change podCIDR 10.244.0.0/16 accordingly.
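
The subnet is defined in the net-conf.json section of kube-flannel.yml and must match the podSubnet / --pod-network-cidr used during kubeadm init (a sketch of the relevant fragment):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }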

wget --no-check-certificate https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the pods:

[root@master k8s]# kubectl get pod -o wide --all-namespaces
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-d2ngt            1/1     Running   0          54s   192.168.33.151   master   <none>           <none>
kube-system    coredns-7bdc4cb885-znxx6         1/1     Running   0          97m   10.244.0.3       master   <none>           <none>
kube-system    coredns-7bdc4cb885-zsxv5         1/1     Running   0          97m   10.244.0.2       master   <none>           <none>
kube-system    etcd-master                      1/1     Running   1          97m   192.168.33.151   master   <none>           <none>
kube-system    kube-apiserver-master            1/1     Running   1          97m   192.168.33.151   master   <none>           <none>
kube-system    kube-controller-manager-master   1/1     Running   1          97m   192.168.33.151   master   <none>           <none>
kube-system    kube-proxy-ktct2                 1/1     Running   0          97m   192.168.33.151   master   <none>           <none>
kube-system    kube-scheduler-master            1/1     Running   1          97m   192.168.33.151   master   <none>           <none>

IX. Deploying Helm

Package download: Releases · helm/helm (github.com)

wget https://get.helm.sh/helm-v3.11.3-linux-amd64.tar.gz
tar zxvf helm-v3.11.3-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
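
A quick sanity check that the binary works:

helm version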
X. Adding a worker node

Check the token:

[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   7h          2023-04-20T06:49:13Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

After a token expires, create a new one with kubeadm token create, or use kubeadm token create --print-join-command to get the complete join command.

kubeadm join 192.168.33.151:6443 --token au3vy4.pznwtrnrkttihhyx --discovery-token-ca-cert-hash sha256:5c6b4c9fff89dcb7d60fe4cf9f5a22fcf5723433453b41039a59e4216f4e5797 
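
Back on the master, verify that the node has joined; it turns Ready once Flannel is running on it:

kubectl get nodes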

XI. Deploying nginx for verification

[root@master nginx]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx-deployment created

The nginx yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells the Deployment to run 2 pods matching this template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Check:

[root@master nginx]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
nginx-deployment-557856cd54-wwckh   1/1     Running   0          3m29s   10.244.1.8   node   <none>           <none>
[root@master nginx]# kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP   17h     <none>
nginx-service   ClusterIP   10.102.221.145   <none>        80/TCP    3m33s   app=nginx
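
To confirm the service actually answers, curl the ClusterIP shown above from any cluster node (the IP will differ in your environment; the nginx welcome page should come back):

curl -s 10.102.221.145 | grep -i '<title>'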

Delete a pod:

kubectl delete pod nginx-deployment-5745b9dfd8-fvl58 
# Force delete
kubectl delete pod nginx-deployment-5745b9dfd8-fvl58 --force --grace-period=0

Articles on highly available Kubernetes cluster deployment and on scaling a single-master cluster out to high availability, along with more hands-on practice, will follow in future updates.
