
Quickly Deploying a K8S Cluster with RKE


Preface:


RKE is a CNCF-certified open-source Kubernetes distribution that runs entirely inside Docker containers. It removes most host dependencies and provides a stable path for deployment, upgrades, rollbacks, and node scaling, which solves the most common sources of installation complexity in Kubernetes.

This article walks through building a highly available K8S cluster with RKE and managing it through Rancher.

Now, on to the main content:

# 1. Environment Preparation

* rancher 2.7.1

* rke 1.4

* kubernetes v1.26.6

* docker 24.0.4

| No. | HostName | IP | Role |
| --- | --- | --- | --- |
| 1 | rke-k8s-master1 | 172.16.213.95 | controlplane,etcd |
| 2 | rke-k8s-master2 | 172.16.213.96 | controlplane,etcd |
| 3 | rke-k8s-master3 | 172.16.213.97 | controlplane,etcd |
| 4 | rke-k8s-worker1 | 172.16.213.161 | worker,rancher |
| 5 | rke-k8s-worker2 | 172.16.213.163 | worker,rancher |
| 6 | rke-k8s-worker3 | 172.16.213.165 | worker,rancher |

## 1.1 Host Setup

* hosts configuration

```
cat >> /etc/hosts << EOF
172.16.213.95 RKE-k8s-master1
172.16.213.96 RKE-k8s-master2
172.16.213.97 RKE-k8s-master3
172.16.213.161 RKE-k8s-worker1
172.16.213.163 RKE-k8s-worker2
172.16.213.165 RKE-k8s-worker3
172.16.213.104 RKE-k8s-nginx
EOF
```

* User configuration

```
# Add the user. RKE must run as a regular (non-root) user; running as root will break the next steps.
useradd rancher
usermod -aG docker rancher    # the docker group exists once Docker from section 1.2 is installed
# firewalld interferes with syncing images, so it needs to be disabled
systemctl stop firewalld && systemctl disable firewalld
```

* Install dependencies

```
yum install -y bash-completion conntrack-tools ipset ipvsadm libseccomp nfs-utils psmisc socat wget vim net-tools ntpdate
```

* Time synchronization

echo '*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null' |tee /var/spool/cron/root

* Disable the firewall, SELinux, and swap

```
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0 && sed -i '/swap/d' /etc/fstab
systemctl stop firewalld && systemctl disable firewalld
```

* Load kernel modules

$ modprobe br_netfilter
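To make this survive a reboot (the modprobe above is not persistent), one option is the standard systemd modules-load.d mechanism:

```
# Load br_netfilter automatically on every boot
echo 'br_netfilter' | tee /etc/modules-load.d/br_netfilter.conf
```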

* Adjust sysctl kernel parameters

```
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the new settings immediately
sysctl -p
```

* Raise the ulimits (open-file and process limits)

```
mkdir -pv /etc/systemd/system.conf.d/
cat > /etc/systemd/system.conf.d/30-k8s-ulimits.conf <<EOF
[Manager]
DefaultLimitCORE=infinity
DefaultLimitNOFILE=100000
DefaultLimitNPROC=100000
EOF
```

## 1.2 Container Runtime Configuration

* Install the container runtime (Docker)

```
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker CE repository and switch it to the Aliyun mirror
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce-23.0.6-1.el7 -y
yum makecache fast && systemctl enable docker && systemctl start docker
mkdir /etc/docker && mkdir -pv /data/docker-root
# Write the daemon configuration; add your own registry mirror to "registry-mirrors" if you use one
echo '{
  "oom-score-adjust": -1000,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "bip": "172.20.1.0/16",
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "registry-mirrors": [],
  "data-root": "/data/docker-root",
  "exec-opts": ["native.cgroupdriver=systemd"]
}' | tee /etc/docker/daemon.json
systemctl daemon-reload && systemctl restart docker
# Change the owner of the docker socket file
chown -R rancher.docker /run/docker.sock
```
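As an optional sanity check, confirm that the daemon.json settings actually took effect:

```
# Should report the systemd cgroup driver, overlay2 storage, and /data/docker-root
docker info 2>/dev/null | grep -E 'Cgroup Driver|Storage Driver|Docker Root Dir'
```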

* Configure passwordless SSH login

```
# Create the rancher user on every node, then generate a key on master1 and sync it to the
# other hosts with ssh-copy-id
ssh-keygen    # generate the key
# Copy the key; note that the local machine itself must be included as well
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@172.16.213.95
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@172.16.213.96
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@172.16.213.97
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@172.16.213.161
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@172.16.213.163
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@172.16.213.165
```
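Before running RKE it is worth confirming the two prerequisites it relies on: passwordless SSH as the rancher user and access to the Docker daemon on every node. A minimal check loop, assuming the node list above:

```
# Run as the rancher user on master1; every host should print its hostname and "docker OK"
for host in 172.16.213.95 172.16.213.96 172.16.213.97 172.16.213.161 172.16.213.163 172.16.213.165; do
  ssh -o BatchMode=yes rancher@$host 'hostname && docker ps >/dev/null && echo docker OK'
done
```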

# Upload the rke binary

```
# Upload rke to /usr/local/bin/rke, then make it executable
chmod a+x /usr/local/bin/rke
```
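If you prefer to download the binary on the host instead of uploading it, a sketch along these lines works. The release tag below is an assumption; pick the exact tag you need from the rancher/rke GitHub releases page:

```
# Assumption: using the v1.4.6 release; adjust RKE_VERSION to the tag you actually want
RKE_VERSION=v1.4.6
sudo wget -O /usr/local/bin/rke \
  https://github.com/rancher/rke/releases/download/${RKE_VERSION}/rke_linux-amd64
sudo chmod a+x /usr/local/bin/rke
rke --version
```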

## 1.3 Configure RKE and Install Kubernetes

* Initialize the cluster configuration

```

rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 6
[+] SSH Address of host (1) [none]: 172.16.213.95
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (172.16.213.95) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (172.16.213.95) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (172.16.213.95) [ubuntu]: rancher
[+] Is host (172.16.213.95) a Control Plane host (y/n)? [y]: y
[+] Is host (172.16.213.95) a Worker host (y/n)? [n]: n
[+] Is host (172.16.213.95) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (172.16.213.95) [none]: rke-k8s-master1
[+] Internal IP of host (172.16.213.95) [none]:
[+] Docker socket path on host (172.16.213.95) [/run/docker.sock]:
[+] SSH Address of host (2) [none]:
...
[+] Docker socket path on host (172.16.213.96) [/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.24.8-rancher1]: registry.cn-hangzhou.aliyuncs.com/rancher/hyperkube:v1.24.8-rancher1
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
```

```
$ cat /home/rancher/cluster.yml    #114.161 rke-k8s-master1
# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 172.16.213.95
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: "rke-k8s-master1"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.96
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: "rke-k8s-master2"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.97
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: "rke-k8s-master3"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.161
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: "rke-k8s-worker1"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.163
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: "rke-k8s-worker2"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.165
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: "rke-k8s-worker3"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    pod_security_configuration: ""
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
  tolerations: []
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/mirrored-coreos-etcd:v3.5.6
  alpine: rancher/rke-tools:v0.1.89
  nginx_proxy: rancher/rke-tools:v0.1.89
  cert_downloader: rancher/rke-tools:v0.1.89
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.89
  kubedns: rancher/mirrored-k8s-dns-kube-dns:1.22.20
  dnsmasq: rancher/mirrored-k8s-dns-dnsmasq-nanny:1.22.20
  kubedns_sidecar: rancher/mirrored-k8s-dns-sidecar:1.22.20
  kubedns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.6
  coredns: rancher/mirrored-coredns-coredns:1.9.4
  coredns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.6
  nodelocal: rancher/mirrored-k8s-dns-node-cache:1.22.20
  kubernetes: rancher/hyperkube:v1.26.6-rancher1
  flannel: rancher/mirrored-flannel-flannel:v0.21.4
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher8
  calico_node: rancher/mirrored-calico-node:v3.25.0
  calico_cni: rancher/calico-cni:v3.25.0-rancher1
  calico_controllers: rancher/mirrored-calico-kube-controllers:v3.25.0
  calico_ctl: rancher/mirrored-calico-ctl:v3.25.0
  calico_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.25.0
  canal_node: rancher/mirrored-calico-node:v3.25.0
  canal_cni: rancher/calico-cni:v3.25.0-rancher1
  canal_controllers: rancher/mirrored-calico-kube-controllers:v3.25.0
  canal_flannel: rancher/mirrored-flannel-flannel:v0.21.4
  canal_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.25.0
  weave_node: weaveworks/weave-kube:2.8.1
  weave_cni: weaveworks/weave-npc:2.8.1
  pod_infra_container: rancher/mirrored-pause:3.7
  ingress: rancher/nginx-ingress-controller:nginx-1.7.0-rancher1
  ingress_backend: rancher/mirrored-nginx-ingress-controller-defaultbackend:1.5-rancher1
  ingress_webhook: rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
  metrics_server: rancher/mirrored-metrics-server:v0.6.3
  windows_pod_infra_container: rancher/mirrored-pause:3.7
  aci_cni_deploy_container: noiro/cnideploy:5.2.7.1.81c2369
  aci_host_container: noiro/aci-containers-host:5.2.7.1.81c2369
  aci_opflex_container: noiro/opflex:5.2.7.1.81c2369
  aci_mcast_container: noiro/opflex:5.2.7.1.81c2369
  aci_ovs_container: noiro/openvswitch:5.2.7.1.81c2369
  aci_controller_container: noiro/aci-containers-controller:5.2.7.1.81c2369
  aci_gbp_server_container: noiro/gbp-server:5.2.7.1.81c2369
  aci_opflex_server_container: noiro/opflex-server:5.2.7.1.81c2369
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
enable_cri_dockerd: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
  tolerations: []
  default_backend: null
  default_http_backend_priority_class_name: ""
  nginx_ingress_controller_priority_class_name: ""
  default_ingress_class: null
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  ignore_proxy_env_vars: false
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
  tolerations: []
  metrics_server_priority_class_name: ""
restore:
  restore: false
  snapshot_name: ""
rotate_encryption_key: false
dns: null
```

* Bring up / update the cluster

$ rke up --config cluster.yml
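When `rke up` completes it writes two files next to cluster.yml: the cluster state file `cluster.rkestate` and the admin kubeconfig `kube_config_cluster.yml`. Keep both safe; copying the kubeconfig into place is equivalent to the manual config file created in section 2.1:

```
# Confirm the generated files exist, then install the kubeconfig for kubectl
ls cluster.rkestate kube_config_cluster.yml
mkdir -p ~/.kube && cp kube_config_cluster.yml ~/.kube/config
```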

# 2. Verify the Kubernetes Cluster

## 2.1 Configure the Kubernetes Command-Line Tool

* Install kubectl

```
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=
EOF
# baseurl: fill in the Kubernetes yum repository you want to use (for example, a local mirror)
```

# Install on the master nodes (kubectl is used to manage the k8s cluster; install it wherever you need it, though preferably only on the masters)

yum install kubectl-1.26.6 -y
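Optionally enable shell completion for kubectl; bash-completion was already installed with the dependencies in section 1.1:

```
# Generate and load kubectl bash completion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
source /etc/bash_completion.d/kubectl
```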

* Kubernetes kubeconfig authentication file. Keep it somewhere safe: this is a critical cluster credential, so do not leak it. The content below is test data.

```
[rancher@rke-k8s-master1 ~]$ mkdir -pv ~/.kube
# Create the file manually (the same content is available in the kube_config_cluster.yml generated by rke up)
cat > ~/.kube/config << EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEl6TURjd09ERTBNRFkwTTFvWERUTXpNRGN3TlRFME1EWTBNMW93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt0UDlOb1JLbmZNCmtEalpDTVpVak56STgzYXpjUnI2NW9FUjVYQnI4cnJMaThKamQ3OWdrMWMzZElkSTBCN056Q2dvZTROc25oTzUKZ2pKcnk4N1E0YVMvdERSWGNWRXZGSENUMVZxL0xpSUlCRml3RVFQdXc0SGVBSmZHdnRaZVdPUkEvMUMveDdpNQpKdEVlUTB3djhlTVpoc3M5RHB2OVhJOHE1SGJtUzRRVStITzQ2eXNPYjR4Rk5YRkY4RE1OWnNoaFFLY1hDRHQ1CkZNT0xWS0lXU3JmM1BYbkRtV1JBbFcwNEdWV3lFZ2p0TWJZbjllVFM4RUQwTldIVWZzOXIwcDlqRUtMb1BEL04KeDRRSndxQ3BvcmVBdEd0THA4RmV0UUJvQldaRUo2UUxIYWpiaE9paWdMY2l6WGtyY3F2KzVueHlMRDNHM1hrcApWQkUzbER2QStXOENBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGRG5keDh5aDV6aUh0YTVWcTJFRk9KU00yZGkrTUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQlN5K205SnpPSitXcVhlbjRWMUVFU1BJOWsvbm5XKzVaUGZDUkptOWxhTEozN1JSTDI3YlpGZnBRcQoxbGVMN3pHZGUzWnc4VTNJNk5uc1g2SmF6NlJDS2d4YWxoYjBKYnJQazJGcTZXNWdPVk0xZ3hFSm9Xd2tRVHVwCkM1bnB5TW12ZUJyMjFJMUZOOEVKOHpJUmliNVRDTmR3ZmNOVmR1SmtJY2Nld0hSV3oyNTJEMHNiVzB1aXNOWm8KMlpQbWpYUUwrSEtWdTgxN2NBNU11U3UxSjZVZTluNThodEhWbnpxL3VrRDJ4Zzh6Z0dBeTYxQWk1ZFF3dGJlbQpIbjhXSXZ5NzlKMU5aYnh1V1RkZmV3MklYQWFJNUZzUzAxUlJJWEZmNTIxcFVJZDdQNS9BYTFhWE02eU9JZXI2Cm9kb3BwbGlVK2I2bStsa1laVzhFd1J0T3lCZ3UKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: ""    # fill in the API server address, for example https://<controlplane-ip>:6443
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-admin-local"
  name: "local"
current-context: "local"
users:
- name: "kube-admin-local"
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lJVDhBeWIvRHJFVEl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHlNekEzTURneE5EQTJORE5hRncwek16QTNNRFV4TkRFNE1qUmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXU3OGpmVjBwRFByRnoxZS91a0pwakJkWmppYmIKUVlOMWtualJrbC9TSDdDdlVLUisrNXl0cnZoVHFUcHE5VWp0b1k0Q3lHZVFmVCtUeDlRdEUrTGVMakhPRUZSUQpUbkEyWWd4VksvbGFOT1FGNUtlM24ydW9RQndpdFFubXpRTEs3U3ZiVlQ3Kzhwa2FLaUpOVk96WmltRGg4WkhKClhwU0JFWlY3US91aG0yMnF0cTlIYVdPay9BSmRZR3g1MjVMc3lsdURHY3RVZ0g1cC9LVlpJNU9sNCs0NGdQWloKcDg1eFRvWThuNXkyeWNqMG9JVzByR2dDdUQrcnFyZmhtMG0xU3NwMG1VNjU0N3dPMTNCWTNjNGhjWHRBNjZWUQpGeWlTaWxPNW03QVVkVm5wWnRMaEk4WTdxNGFWcE02YThuaUZVandjdjZYdE15Y0Q4a2xsakFnbW5RSURBUUFCCm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3SHdZRFZSMGoKQkJnd0ZvQVVPZDNIektIbk9JZTFybFdyWVFVNGxJeloyTDR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUg0dQpaU3BOa0NFYkEzOHR6cmdXUy9HYmI0UXRPR2N5TXlmTkZ3ejczcnJaTmJSUTFPajNPdi81bHdYYWg1cCtQMzBRClRtRFVDb3prSW56T004dXV3R00vQVFXKzJDQTFRTEJ6Uk8vbFlyQkNhWnpxL2ljbWI4YnNUMzhhdjFRR3dncncKRnpjWTVjM21iYUZNTnFrb1pCZVloWDN3aDcyT2RDU0xLaWZkYmY0RUNuU0g5MTc1U050VC9QSGNiMzNwVC8yRwpEOFpOU2xxclhEZWhJZzFPN0lmU2wwS0RkL2NRY2lOUkZDT1pMQVFsSThkV2dvWVVUK1RRSGlSWGswQWdYeVRrCk5qZFFIaXQyNkdkV2k2b2laZjlPL0hjbzZ4K0s1WnJ5NFZjbVNCbnVxSnV3WjZ0QmVIdUpMK0NNVm9ybjR5bWQKN3BjUEZOQUZLcFJPUjBlcEZrcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdTc4amZWMHBEUHJGejFlL3VrSnBqQmRaamliYlFZTjFrbmpSa2wvU0g3Q3ZVS1IrCis1eXRydmhUcVRwcTlVanRvWTRDeUdlUWZUK1R4OVF0RStMZUxqSE9FRlJRVG5BMllneFZLL2xhTk9RRjVLZTMKbjJ1b1FCd2l0UW5telFMSzdTdmJWVDcrOHBrYUtpSk5WT3paaW1EaDhaSEpYcFNCRVpWN1EvdWhtMjJxdHE5SAphV09rL0FKZFlHeDUyNUxzeWx1REdjdFVnSDVwL0tWWkk1T2w0KzQ0Z1BaWnA4NXhUb1k4bjV5MnljajBvSVcwCnJHZ0N1RCtycXJmaG0wbTFTc3AwbVU2NTQ3d08xM0JZM2M0aGNYdEE2NlZRRnlpU2lsTzVtN0FVZFZucFp0TGgKSThZN3E0YVZwTTZhOG5pRlVqd2N2Nlh0TXljRDhrbGxqQWdtblFJREFRQUJBb0lCQUNqNjZxTTFqMzFPeTVpYgpmYlVKUkFLWklpb2VIeU9vcnlRZWpSZ1hKRVZZaXB2ZW0vME4wUGR0S3MyNGU1bzRwZTNxa243dDVDTUNtcDQyCm1QUkxROVh2ZHh3bld6UVQyRHNFbUI2MkdkT0xwaUduM2pQRkN2K2JaSlFCcWtnN2dOSE9EZDBJbUJ1YUFaVUsKMGJoa3pvTWU3SktQRU5ZOU1nTUZqdGRpK0g1MVRDcDEwMzZXOXlvYkprVzEwcEtocjhST1VRN2l5TWQySkljMgpQSENXaHUzRnVLUHRsNUxmTHRmY3BUMW55STBVb24yNlE5c05wajFtWjJsLzZmbFJQWXRDRVlpZERFM0YwZ3NwClVYeWowNzNtaEQyejNEMEVHWXJLMDdBaC9lMmhzU0RlaU02SFp2a0RVa0FON3VISytXZi9iYlplRURsbFU5ZmgKejc4VGtrRUNnWUVBd2J3azhSRlBMNFhmU2Q3RVJ1K21Ea0lWNk41ekFYSzF6Z0grWGtabThleTFTMzN3eHBFdAptditSNnVNWTd6Q3ZhVGxhSVdZUVNMQkE0eTJJazRnSW1pV2RFN0xtcCtDVmZhWmJtd0lHbXFoZ0Y1Y2lmWEs3ClpoeWVDVUF0T21Lb0psMlJwN29EdzJ1Q29QR04zcS91MGY4TUxhdVJWNWtlNjdNTU0zQUZ3SzBDZ1lFQStCWk0KQmxlYlBKOTlHbmh3WUhEQzdxT0NBeWJKRjVUaGQ0QkVUdnBCaGtXenJ4U2dpeWI0dGZoR3JKTWpvRWlIaGF0NwpwaFBlRERDdHlqb0VEMlpiMkV1YlI5VnB0OS9KUjRoamF3VDRVbFoxOGN1ZnVjVHY2UnM0bjhsaElvN1RwSnBwCitwajdwK1kxZUF5SUxUM0RKSCt0NDlmdnZvRGNOYmU4NkRnd2k3RUNnWUVBc2NXY1BGeit4WVBaYmVadFF3NWEKMk5DSlhFTHJVeFBZZ2UzUVpOL0RUUkZCRnNHODgraDU2YlhFUnI0V3ZqMTFhRi9KTmNaN0FNaEM4bk53MUxmSgo5UEM0M3orVmFjeXFRRDhyNWVRSS9WZXR2VmZndlM1UGlaYU82YndyQkYxTklNOVJmWkF5TGRyMFpnemhlc3NECm9VeWc5ek5zemUzaXNyTjhhYUxNbEkwQ2dZRUF2ZnI5THlJcGUrdzZ0bW1pelFldEQyaGhLSjZzQWdYOS96QlgKbnc5ZjNENUdVbjMrVDNHQnBvQkJSdWpLc0hTNmEyK2RtZG0vQWlESkJZTVdGdUR3MXB0WGgxUHp5RjUwV2ZZbApCQkJqUlZKMnNicVlUMzl6cFZRMk1ZN2Fkc2RmWmI3bUI0VGR1bjY5VlhoclZCSG0vVzFWTVpUc1FEdVg1djhVCmg5UjN3SkVDZ1lCcjNRVjAvSUFxYVpINzZ6VWxmaWVKeEZSUFplZFl4TTYvcGl3MjdmOHVYWCs3YUVCM1VSSUUKSXR6QlJKcmNPb1ZoVEZITjdDaDdiVnZIbndhdTdYWkVmRWZEOWZmbjAvWnozbEdzSmIraEdsdXlrVzQ4L2RXawpRa3BNQ1AwU2tyVHRXdk0vbk1sUTBNOGxmWXd3ZTZBL1dEdVN3WDIvc3NGMldWQkV4UVJZMEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
EOF
```

# Check node status with kubectl

```
[root@rke-k8s-master1 ~]# kubectl get nodes
NAME              STATUS   ROLES               AGE   VERSION
rke-k8s-master1   Ready    controlplane,etcd   46m   v1.26.6
rke-k8s-master2   Ready    controlplane,etcd   46m   v1.26.6
rke-k8s-master3   Ready    controlplane,etcd   46m   v1.26.6
rke-k8s-worker1   Ready    worker              46m   v1.26.6
rke-k8s-worker2   Ready    worker              46m   v1.26.6
rke-k8s-worker3   Ready    worker              32m   v1.26.6
```

## 2.2 View Resource Information

```
[root@rke-k8s-master1 ~]# kubectl get pods -A
NAMESPACE       NAME                                       READY   STATUS      RESTARTS      AGE
ingress-nginx   nginx-ingress-controller-gl2nv             1/1     Running     6 (36m ago)   46m
ingress-nginx   nginx-ingress-controller-x9bvz             1/1     Running     6 (40m ago)   46m
ingress-nginx   nginx-ingress-controller-z4m2r             1/1     Running     0             33m
kube-system     calico-kube-controllers-777cbf8f9c-94kpj   1/1     Running     0             46m
kube-system     canal-8stv9                                2/2     Running     0             33m
kube-system     canal-cnn56                                2/2     Running     2 (40m ago)   46m
kube-system     canal-thlr7                                2/2     Running     2 (36m ago)   46m
kube-system     canal-vrhlj                                2/2     Running     0             46m
kube-system     canal-wtq6g                                2/2     Running     0             46m
kube-system     canal-znbd9                                2/2     Running     0             46m
kube-system     coredns-66b64c55d4-ftn6v                   1/1     Running     0             40m
kube-system     coredns-66b64c55d4-kdn9r                   1/1     Running     1 (36m ago)   46m
kube-system     coredns-autoscaler-5567d8c485-4288l        1/1     Running     1 (36m ago)   46m
kube-system     metrics-server-7886b5f87c-d4gx6            1/1     Running     1 (36m ago)   41m
kube-system     rke-coredns-addon-deploy-job-m5fcz         0/1     Completed   0             46m
kube-system     rke-ingress-controller-deploy-job-9kqb8    0/1     Completed   0             46m
kube-system     rke-metrics-addon-deploy-job-g4xx9         0/1     Completed   0             46m
kube-system     rke-network-plugin-deploy-job-zzfnl        0/1     Completed   0             46m
```

# 3. Deploy Applications

## 3.1 Install Rancher, the Kubernetes Web Management Tool

# Install Helm

```
cd /usr/local/bin \
  && sudo wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz \
  && sudo tar -zxf helm-v3.10.3-linux-amd64.tar.gz \
  && sudo cp linux-amd64/helm ./ \
  && sudo chmod +x helm \
  && sudo chown -R rancher.rancher helm
```

# Create the certificate directory

```
mkdir -p /data1/rancher/cert
cd /data1/rancher/cert
```

## 3.2 One-Click Certificate Generation Script (copy the generated certificates into the directory above)

```
#!/bin/bash -e
help ()
{
    echo ' ================================================================ '
    echo ' --ssl-domain: the primary domain for the SSL certificate; defaults to rancher.toutiao.com if not given; can be ignored if the service is accessed by IP;'
    echo ' --ssl-trusted-ip: an SSL certificate normally only trusts domain-name requests; if the server is also accessed by IP, add the extra IPs here, separated by commas;'
    echo ' --ssl-trusted-domain: to allow access via additional domains, add them here (SSL_TRUSTED_DOMAIN), separated by commas;'
    echo ' --ssl-size: SSL key size in bits, default 2048;'
    echo ' --ssl-cn: country code (2-letter code), default CN;'
    echo ' Usage example:'
    echo ' ./create_self-signed-cert.sh --ssl-domain=rancher.toutiao.com --ssl-trusted-domain=rancher.toutiao.com \ '
    echo ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=36500'
    echo ' ================================================================'
}
case "$1" in
    -h|--help) help; exit;;
esac
if [[ $1 == '' ]];then
    help;
    exit;
fi
CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done
# CA settings
CA_DATE=${CA_DATE:-36500}
CA_KEY=${CA_KEY:-cakey.pem}
CA_CERT=${CA_CERT:-cacerts.pem}
CA_DOMAIN=cattle-ca
# SSL settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'rancher.toutiao.com'}
SSL_DATE=${SSL_DATE:-36500}
SSL_SIZE=${SSL_SIZE:-2048}
## country code (2-letter code), default CN
CN=${CN:-CN}
SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt
echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m      | Generate SSL Cert |     \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"
if [[ -e ./${CA_KEY} ]]; then
    echo -e "\033[32m ====> 1. Existing CA private key found, backing up ${CA_KEY} as ${CA_KEY}-bak and recreating it \033[0m"
    mv ${CA_KEY} "${CA_KEY}"-bak
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
else
    echo -e "\033[32m ====> 1. Generating a new CA private key ${CA_KEY} \033[0m"
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
fi
if [[ -e ./${CA_CERT} ]]; then
    echo -e "\033[32m ====> 2. Existing CA certificate found, backing up ${CA_CERT} as ${CA_CERT}-bak and recreating it \033[0m"
    mv ${CA_CERT} "${CA_CERT}"-bak
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
else
    echo -e "\033[32m ====> 2. Generating a new CA certificate ${CA_CERT} \033[0m"
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
fi
echo -e "\033[32m ====> 3. Generating the OpenSSL config file ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM
if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} || -n ${SSL_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
        echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done
    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
            echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi
echo -e "\033[32m ====> 4. Generating the server SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}
echo -e "\033[32m ====> 5. Generating the server SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}
echo -e "\033[32m ====> 6. Generating the server SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}
echo -e "\033[32m ====> 7. Certificates created successfully \033[0m"
echo
echo -e "\033[32m ====> 8. Printing the results in YAML format \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/ /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/ /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/ /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/ /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/ /'
echo
echo -e "\033[32m ====> 9. Appending the CA certificate to the Cert file \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/ /'
echo
echo -e "\033[32m ====> 10. Renaming the server certificates \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt
```

* Generate the certificates (these are Rancher's certificates, not the k8s cluster certificates), valid for 100 years

```
[root@rke-k8s-master1 data]# chmod a+x create_self-signed-cert.sh
[root@rke-k8s-master1 data]# ./create_self-signed-cert.sh --ssl-domain=rancher.toutiao.com --ssl-trusted-domain=rancher.toutiao.com --ssl-size=2048 --ssl-date=36500
```

* Copy the certificates

```
[root@rke-k8s-master1 create]# cp tls.* ../cert/
[root@rke-k8s-master1 data]# cp ca.* rancher/cert/
[root@rke-k8s-master1 data]# chown -R rancher.docker rancher
```

## 3.3 Create the Secrets in Kubernetes

```
cd /data1/rke/rancher/cert/
kubectl create ns cattle-system
# Create the ingress TLS secret
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=./tls.crt --key=./tls.key
# Create the CA certificate secret
kubectl -n cattle-system create secret generic tls-ca --from-file=./cacerts.pem
# Point helm at the Rancher chart repository
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
```
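As a quick check before installing the chart, confirm that both secrets exist:

```
# Both tls-rancher-ingress and tls-ca should be listed
kubectl -n cattle-system get secret tls-rancher-ingress tls-ca
```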

## 3.4 Install Rancher

```
[rancher@rke-k8s-master1 cert]$ helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
"rancher-latest" has been added to your repositories

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set systemDefaultRegistry=registry.cn-hangzhou.aliyuncs.com \
  --set hostname=rancher.toutiao.com \
  --set ingress.tls.source=secret \
  --set privateCA=true
```

* Wait for the installation to complete

[root@rke-k8s-master1 cert]# helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set systemDefaultRegistry=registry.cn-hangzhou.aliyuncs.com \
  --set hostname=rancher.toutiao.com \
  --set ingress.tls.source=secret \
  --set privateCA=true
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME: rancher
LAST DEPLOYED: Sat Jul 8 23:17:22 2023
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://rancher.toutiao.com to get started.

If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:

```
echo https://rancher.toutiao.com/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
```

To get just the bootstrap password on its own, run:

```
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```

Happy Containering!

[root@rke-k8s-master1 cert]# kubectl get ingress -A
NAMESPACE       NAME      CLASS   HOSTS                 ADDRESS                                         PORTS     AGE
cattle-system   rancher   nginx   rancher.toutiao.com   172.16.213.161,172.16.213.163,172.16.213.165   80, 443   10m
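Rather than polling by hand, the sketch below waits for the rancher Deployment (the name the chart uses in cattle-system) to finish rolling out:

```
# Block until the rancher Deployment is fully rolled out, then list its pods
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get pods
```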

## 3.5 Install Monitoring (install it from the Rancher UI) and Check the Pods

kubectl --namespace cattle-monitoring-system get pods -l "release=rancher-monitoring"

## 3.6 Procedure for Adding or Removing Nodes

1. Edit cluster.yml: add the new node(s) or change the role of an existing node, then save the file.
2. Run RKE again: `rke up --update-only --config ./cluster.yml`

## 3.7 Cluster Certificate Management

```
# Rotate all service certificates
rke cert rotate [--config cluster.yml]
# Rotate the CA certificate plus all service certificates
rke cert rotate --rotate-ca
# Rotate the certificate of a single service
rke cert rotate --service etcd
# Check how long a certificate is valid
openssl x509 -in /etc/kubernetes/ssl/kube-apiserver.pem -noout -dates
```

After rotating all certificates the kubeconfig changes as well, so replace your local copy:

```
cp kube_config_cluster.yml $HOME/.kube/config
```
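To check every certificate at once instead of just kube-apiserver.pem, a small loop over the RKE certificate directory works; the *-key.pem files are skipped because they are private keys rather than certificates:

```
# Run on a controlplane node: print the expiry date of each RKE-managed certificate
for cert in /etc/kubernetes/ssl/*.pem; do
  case "$cert" in
    *-key.pem) continue ;;   # skip private keys
  esac
  printf '%-60s ' "$cert"
  openssl x509 -in "$cert" -noout -enddate
done
```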

# 4. Configure hosts and Access Rancher

Once Rancher is installed it is reachable through any of the worker node addresses, so all that is left is to configure nginx as a reverse proxy to the workers.

```
cat > /etc/nginx/conf.d/rancher.conf <<'EOF'
upstream rke-worker {
    server 172.16.213.161:443;
    server 172.16.213.163:443;
    server 172.16.213.165:443;
}
server {
    listen 443 ssl;
    server_name rancher.toutiao.com;
    access_log /usr/share/nginx/logs/rancher.access.log main;
    error_log  /usr/share/nginx/logs/rancher.error.log;
    index index.html index.htm;
    ssl_certificate cert/server.pem;
    ssl_certificate_key cert/server.key;
    ssl_session_timeout 5m;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass https://rke-worker;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
```
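Two follow-up steps, assuming the nginx proxy runs on the RKE-k8s-nginx machine (172.16.213.104) from the /etc/hosts block in section 1.1: reload nginx with the new configuration, and make the client you browse from resolve rancher.toutiao.com to that host:

```
# On the nginx host: validate and reload the configuration
nginx -t && systemctl reload nginx
# On the client machine: point rancher.toutiao.com at the nginx host
echo '172.16.213.104 rancher.toutiao.com' | sudo tee -a /etc/hosts
```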

On first access you will be asked to set a password; the default user is admin.
