龙空技术网

Deleting and re-adding a node in a k8s cluster

金沙湖一哥


A node joined the k8s cluster before its hostname was set, so it shows up under a hard-to-read default name. To fix this, we remove the node from the cluster, set a proper hostname, and then rejoin it.

Check the nodes

[root@k8s-master centos]# kubectl get nodes
NAME                                              STATUS   ROLES    AGE   VERSION
ip-172-31-30-17.ap-southeast-1.compute.internal   Ready    <none>   51m   v1.18.0
k8s-master                                        Ready    master   15h   v1.18.0

Drain and delete the node

[root@k8s-master centos]# kubectl drain ip-172-31-30-17.ap-southeast-1.compute.internal --delete-local-data --force --ignore-daemonsets
node/ip-172-31-30-17.ap-southeast-1.compute.internal cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-drrf5, kube-system/kube-proxy-qxdxw
node/ip-172-31-30-17.ap-southeast-1.compute.internal drained
[root@k8s-master centos]#
[root@k8s-master centos]# kubectl delete node ip-172-31-30-17.ap-southeast-1.compute.internal
node "ip-172-31-30-17.ap-southeast-1.compute.internal" deleted

Set the hostname on the node

[root@ip-172-31-30-17 centos]# hostnamectl set-hostname k8s-node1
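Setting the hostname before rejoining matters because the kubelet derives its default node name from the machine hostname, lowercased. A minimal sketch, assuming no `--hostname-override` kubelet flag is set:

```shell
# The kubelet's default node name is the machine hostname, lowercased
# (assumption: the --hostname-override kubelet flag is not set).
hostname | tr '[:upper:]' '[:lower:]'
```

On this node the command would print k8s-node1, which is the name the node will register under when it rejoins.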

Log out and log back in so the shell session picks up the new hostname k8s-node1.

Reset the node

[root@k8s-node1 centos]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0604 12:50:03.807521   18749 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
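As the output warns, `kubeadm reset` leaves the CNI configuration, iptables/IPVS rules, and stale kubeconfig files behind. A minimal sketch of that remaining manual cleanup; `cleanup_node` is a hypothetical helper written here for illustration (it is not a kubeadm command), and its defaults assume kubeadm's standard paths:

```shell
# Hypothetical helper wrapping the manual cleanup that `kubeadm reset` skips.
# Defaults assume kubeadm's standard paths; arguments exist only for testing.
cleanup_node() {
  local cni_dir="${1:-/etc/cni/net.d}"          # CNI configuration directory
  local kubeconfig="${2:-$HOME/.kube/config}"   # stale kubeconfig file
  rm -rf "$cni_dir"
  rm -f "$kubeconfig"
  # Flushing iptables/IPVS needs root and live tables, so only print the commands:
  echo 'run as root: iptables -F && iptables -t nat -F; ipvsadm --clear'
}
```

On the real node you would run `cleanup_node` with no arguments as root, then execute the printed iptables/ipvsadm commands manually.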

Rejoin the cluster

[root@k8s-node1 centos]# kubeadm join 172.31.17.189:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:77274b5fa4b6338713eef4323e40aa80695ef24ae6646468ea20789706cb4d4f
W0604 12:50:12.481716   19006 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
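The `--discovery-token-ca-cert-hash` value above is the SHA-256 digest of the cluster CA certificate's DER-encoded public key, so it can be recomputed from the CA cert if the original join command is lost. A sketch using a throwaway self-signed cert so it runs anywhere; on a real control plane you would point openssl at /etc/kubernetes/pki/ca.crt instead:

```shell
# Create a throwaway CA cert purely for illustration (assumption: on a real
# cluster you would use /etc/kubernetes/pki/ca.crt on the control plane).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
  -out demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# The join hash is sha256 of the CA certificate's DER-encoded public key:
openssl x509 -pubkey -in demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | sed 's/^.* /sha256:/'
```

If the token itself has expired, a fresh one (with the matching hash) can be printed on the master with `kubeadm token create --print-join-command`.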

Check the cluster nodes

[root@k8s-master centos]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   15h     v1.18.0
k8s-node1    Ready    <none>   9m40s   v1.18.0

Tag: #kubernetes delete node