Preface:
kubeadm is the official tool provided by Kubernetes for quickly deploying a Kubernetes cluster. It is updated in step with every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest upstream best practices for cluster configuration.
1. Preparation
1.1 System configuration
Before installing, complete the following preparation. The three CentOS 7.9 hosts are:
cat /etc/hosts
192.168.96.151 node1
192.168.96.152 node2
192.168.96.153 node3
Complete the following system configuration on each host.
If a firewall is enabled on the hosts, the ports required by the Kubernetes components must be opened; see "Ports and Protocols" in the Kubernetes documentation, then either open the relevant ports or disable the host firewall.
Disable SELinux:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the /etc/modules-load.d/containerd.conf configuration file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Run the following commands to load the modules:
modprobe overlay
modprobe br_netfilter
Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
Run the following command to apply the configuration:
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
1.2 Prerequisites for enabling IPVS
Since IPVS has been merged into the mainline kernel, enabling IPVS mode for kube-proxy requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on every node:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules have been loaded correctly.
Also make sure the ipset package is installed on every node. To make it easier to inspect the IPVS proxy rules, it is best to install the management tool ipvsadm as well.
yum install -y ipset ipvsadm
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if IPVS mode is configured.
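As a quick sanity check once the cluster is up (a suggested verification step, not part of the original walkthrough), the IPVS rules created by kube-proxy and the configured proxy mode can be inspected:

# list IPVS virtual servers; in ipvs mode kube-proxy creates entries for each Service
ipvsadm -Ln
# the configured proxy mode can also be read back from the kube-proxy ConfigMap (expect "mode: ipvs")
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"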
1.3 Deploying the container runtime containerd
Install the containerd container runtime on every node.
Download the containerd release tarball:
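A sketch of the download step, assuming the tarball is taken from the official containerd GitHub release for v1.6.4:

# download the CRI + CNI bundle published with containerd v1.6.4 (assumed URL)
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz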
The cri-containerd-cni-1.6.4-linux-amd64.tar.gz archive is already laid out according to the directory structure recommended for the official binary deployment. It contains the systemd unit file, containerd itself, and the CNI deployment files. Extract it into the system root directory /:
tar -zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/critest
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/ctd-decoder
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
usr/local/bin/ctr
usr/local/bin/crictl
......
opt/cni/
opt/cni/bin/
opt/cni/bin/bridge
......
Note: in testing, the runc bundled in cri-containerd-cni-1.6.4-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7. Download runc separately from the runc GitHub releases and use it to replace the runc installed by the containerd package above:
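A sketch of the replacement, assuming a statically linked runc taken from the opencontainers/runc GitHub releases (the exact version is an assumption; v1.1.x was current at the time):

# download the statically linked runc binary (version assumed)
wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
# replace the runc that was installed from the containerd tarball
install -m 755 runc.amd64 /usr/local/sbin/runc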
Next, generate the containerd configuration file:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
According to the "Container runtimes" documentation, for Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure, so configure containerd on each node to use systemd as its cgroup driver.
Edit the /etc/containerd/config.toml generated above:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
Then modify the following section of /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
Enable containerd to start on boot and start it now:
systemctl enable containerd --now
Test with crictl and make sure it prints version information without any error output:
crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.4
RuntimeApiVersion:  v1alpha2
2. Deploying Kubernetes with kubeadm
2.1 Installing kubeadm and kubelet
Install kubeadm and kubelet on every node:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
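The repository definition above is not shown in full; for reference, a complete repo file might look like the following sketch, which assumes the Aliyun mirror of the Kubernetes packages:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
# assumed mirror; substitute your preferred Kubernetes package repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF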
yum makecache fast
yum install kubelet kubeadm kubectl
Running kubelet --help shows that most of kubelet's command-line flags are now DEPRECATED. The official recommendation is to use --config to specify a configuration file and put what those flags used to configure into that file; see "Set Kubelet parameters via a config file". Kubernetes originally did this to support Dynamic Kubelet Configuration, but that feature was deprecated in Kubernetes 1.22 and removed in 1.24. If you need to adjust the kubelet configuration on all nodes of a cluster, it is still recommended to use a tool such as Ansible to distribute the configuration to each node.
The kubelet configuration file must be in JSON or YAML format; see the documentation for details.
Starting with Kubernetes 1.8, swap must be turned off on the system; with the default configuration, kubelet will not start otherwise. Turn off swap as follows:
swapoff -a
Edit /etc/fstab and comment out the swap auto-mount entry, then confirm swap is off with free -m.
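A sketch of the change, assuming the swap entry only needs to be commented out in place:

# comment out every fstab line that mentions swap so it stays off after reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab
# verify that swap now shows 0
free -m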
Adjust the swappiness parameter by adding the following line to /etc/sysctl.d/99-kubernetes-cri.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.
2.2 Initializing the cluster with kubeadm init
Enable the kubelet service on boot on every node:
systemctl enable kubelet.service
Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
From the default configuration you can see that imageRepository controls where the images required for cluster initialization are pulled from. Based on the defaults, the kubeadm.yaml configuration file used for this cluster initialization is:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.96.151
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
Here imageRepository is set to the Aliyun registry so that images can be pulled even though gcr.io is blocked. criSocket points the container runtime at containerd, the kubelet cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs.
Before starting the initialization, you can run kubeadm config images pull --config kubeadm.yaml on each node to pre-pull the container images Kubernetes needs.
kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
Next, initialize the cluster with kubeadm. node1 is chosen as the master node; run the following command on node1:
kubeadm init --config kubeadm.yaml
W0526 10:22:26.657615   24076 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0526 10:22:26.660300   24076 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
        [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.96.151]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.506804 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: uufqmm.bvtfj4drwfvvbcev
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.96.151:6443 --token uufqmm.bvtfj4drwfvvbcev \
    --discovery-token-ca-cert-hash sha256:5814415567d93f6d2d41fe4719be8221f45c29c482b5059aec2e27a832ac36e6
The above is the complete output of the initialization. From it you can see most of the key steps required to set up a Kubernetes cluster by hand. The key items are:
[certs] generates the various certificates
[kubeconfig] generates the kubeconfig files
[kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
[control-plane] creates the static pods for the apiserver, controller-manager and scheduler from the yaml files in /etc/kubernetes/manifests
[bootstrap-token] generates the bootstrap token; record it, as it will be needed later when adding nodes to the cluster with kubeadm join

The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, the output gives the command for joining nodes to the cluster:
kubeadm join 192.168.96.151:6443 --token uufqmm.bvtfj4drwfvvbcev \
    --discovery-token-ca-cert-hash sha256:5814415567d93f6d2d41fe4719be8221f45c29c482b5059aec2e27a832ac36e6
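If the bootstrap token later expires or the join command is lost, a fresh join command can be printed on the control-plane node (a standard kubeadm command, not part of the original output):

kubeadm token create --print-join-command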
Check the cluster status and confirm that all components are Healthy:
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
If problems are encountered during cluster initialization, kubeadm reset can be used to clean up.
2.3 Installing the package manager Helm 3
Helm is the package manager for Kubernetes, and the following steps will use Helm to install common Kubernetes components. Install helm on the master node node1 first.
wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
Run helm list and confirm there is no error output.
2.4 Deploying the Pod network component Calico
Calico is chosen as the Pod network component for the cluster; the steps below install Calico with helm.
Download the tigera-operator helm chart:
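A sketch of the download, assuming the chart tarball is published with the Calico v3.23.1 GitHub release (the original URL is not shown, so this path is an assumption):

wget https://github.com/projectcalico/calico/releases/download/v3.23.1/tigera-operator-v3.23.1.tgz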
View the configurable values of this chart:
helm show values tigera-operator-v3.23.1.tgz
imagePullSecrets: {}
installation:
  enabled: true
  kubernetesProvider: ""
apiServer:
  enabled: true
certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:
resources: {}
# Configuration for the tigera operator
tigeraOperator:
  image: tigera/operator
  version: v1.27.1
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.23.1
The customized values.yaml is as follows:
# The values above can be customized, e.g. to pull the Calico images from a private registry.
# Since this is only a local test of the new Kubernetes version, values.yaml is simply left empty.
Install Calico with helm:
helm install calico tigera-operator-v3.23.1.tgz -n kube-system --create-namespace -f values.yaml
Wait until all pods are in the Running state:
kubectl get pod -n kube-system | grep tigera-operator
tigera-operator-5fb55776df-wxbph   1/1     Running   0     5m10s

kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-68884f975d-5d7p9   1/1     Running   0          5m24s
calico-node-twbdh                          1/1     Running   0          5m24s
calico-typha-7b4bdd99c5-ssdn2              1/1     Running   0          5m24s
Take a look at the API resources Calico adds to Kubernetes:
kubectl api-resources | grep calico
bgpconfigurations                                                      crd.projectcalico.org/v1   false   BGPConfiguration
bgppeers                                                               crd.projectcalico.org/v1   false   BGPPeer
blockaffinities                                                        crd.projectcalico.org/v1   false   BlockAffinity
caliconodestatuses                                                     crd.projectcalico.org/v1   false   CalicoNodeStatus
clusterinformations                                                    crd.projectcalico.org/v1   false   ClusterInformation
felixconfigurations                                                    crd.projectcalico.org/v1   false   FelixConfiguration
globalnetworkpolicies                                                  crd.projectcalico.org/v1   false   GlobalNetworkPolicy
globalnetworksets                                                      crd.projectcalico.org/v1   false   GlobalNetworkSet
hostendpoints                                                          crd.projectcalico.org/v1   false   HostEndpoint
ipamblocks                                                             crd.projectcalico.org/v1   false   IPAMBlock
ipamconfigs                                                            crd.projectcalico.org/v1   false   IPAMConfig
ipamhandles                                                            crd.projectcalico.org/v1   false   IPAMHandle
ippools                                                                crd.projectcalico.org/v1   false   IPPool
ipreservations                                                         crd.projectcalico.org/v1   false   IPReservation
kubecontrollersconfigurations                                          crd.projectcalico.org/v1   false   KubeControllersConfiguration
networkpolicies                                                        crd.projectcalico.org/v1   true    NetworkPolicy
networksets                                                            crd.projectcalico.org/v1   true    NetworkSet
bgpconfigurations               bgpconfig,bgpconfigs                   projectcalico.org/v3       false   BGPConfiguration
bgppeers                                                               projectcalico.org/v3       false   BGPPeer
caliconodestatuses              caliconodestatus                       projectcalico.org/v3       false   CalicoNodeStatus
clusterinformations             clusterinfo                            projectcalico.org/v3       false   ClusterInformation
felixconfigurations             felixconfig,felixconfigs               projectcalico.org/v3       false   FelixConfiguration
globalnetworkpolicies           gnp,cgnp,calicoglobalnetworkpolicies   projectcalico.org/v3       false   GlobalNetworkPolicy
globalnetworksets                                                      projectcalico.org/v3       false   GlobalNetworkSet
hostendpoints                   hep,heps                               projectcalico.org/v3       false   HostEndpoint
ippools                                                                projectcalico.org/v3       false   IPPool
ipreservations                                                         projectcalico.org/v3       false   IPReservation
kubecontrollersconfigurations                                          projectcalico.org/v3       false   KubeControllersConfiguration
networkpolicies                 cnp,caliconetworkpolicy,caliconetworkpolicies   projectcalico.org/v3   true   NetworkPolicy
networksets                     netsets                                projectcalico.org/v3       true    NetworkSet
profiles                                                               projectcalico.org/v3       false   Profile
These API resources belong to Calico, so it is not recommended to manage them with kubectl; use calicoctl instead. Install calicoctl as a kubectl plugin by placing the binary on the PATH as kubectl-calico:
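A sketch of the installation and a quick check, assuming the calicoctl binary is taken from the Calico v3.23.1 GitHub release (the asset URL is an assumption; the original download link is not shown):

cd /usr/local/bin
# assumed release asset for calicoctl v3.23.1
curl -o kubectl-calico -L "https://github.com/projectcalico/calico/releases/download/v3.23.1/calicoctl-linux-amd64"
chmod +x kubectl-calico
# once installed as a kubectl plugin, calicoctl subcommands are available through kubectl, e.g.
kubectl calico get ippools -o wide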
Verify that the plugin works:
kubectl calico -h
2.5 Verifying that cluster DNS is working
kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$
Inside the container, run nslookup kubernetes.default and confirm that it resolves correctly:
nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
2.6 Adding nodes to the Kubernetes cluster
Now add node2 and node3 to the Kubernetes cluster by running the following on node2 and node3 respectively:
kubeadm join 192.168.96.151:6443 --token uufqmm.bvtfj4drwfvvbcev \
    --discovery-token-ca-cert-hash sha256:5814415567d93f6d2d41fe4719be8221f45c29c482b5059aec2e27a832ac36e6
node2 and node3 join the cluster without problems. On the master node, check the nodes in the cluster:
kubectl get node
NAME    STATUS   ROLES                  AGE   VERSION
node1   Ready    control-plane,master   29m   v1.24.0
node2   Ready    <none>                 70s   v1.24.0
node3   Ready    <none>                 58s   v1.24.0
3. Deploying common Kubernetes components
3.1 Deploying ingress-nginx with Helm
To make it easy to expose services in the cluster to the outside world, Ingress is needed. The following uses Helm to deploy ingress-nginx onto Kubernetes. The Nginx Ingress Controller is deployed on the cluster's edge nodes.
Here node1 (192.168.96.151) is used as the edge node and labeled accordingly:
kubectl label node node1 node-role.kubernetes.io/edge=
Download the ingress-nginx helm chart:
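A sketch of the download, assuming the chart tarball comes from the ingress-nginx GitHub release for chart version 4.1.2 (the original URL is not shown):

wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.1.2/ingress-nginx-4.1.2.tgz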
View the configurable values of the ingress-nginx-4.1.2.tgz chart:
helm show values ingress-nginx-4.1.2.tgz
Customize values.yaml as follows:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: k8s.gcr.io
    # image: ingress-nginx/controller
    # tag: "v1.1.0"
    registry: docker.io
    image: unreachableg/k8s.gcr.io_ingress-nginx_controller
    tag: "v1.2.0"
    digest: sha256:314435f9465a7b2973e3aa4f2edad7465cc7bcdc8304be5d146d70e4da136e51
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
The nginx ingress controller replicaCount is 1 and the pod will be scheduled onto the edge node node1. No externalIPs are specified for the nginx ingress controller service; instead hostNetwork: true makes the controller use the host network. Because k8s.gcr.io is blocked, the image is replaced with unreachableg/k8s.gcr.io_ingress-nginx_controller; pull it in advance:
crictl pull unreachableg/k8s.gcr.io_ingress-nginx_controller:v1.2.0
helm install ingress-nginx ingress-nginx-4.1.2.tgz --create-namespace -n ingress-nginx -f values.yaml
kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7f574989bc-xwbf4   1/1     Running   0          117s
If a test request returns nginx's default 404 page, the deployment is complete.
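A quick check from any machine that can reach the edge node; the expected 404 comes from nginx's default backend, since no Ingress rules exist yet:

curl -i http://192.168.96.151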
3.2 Deploying the dashboard with Helm
First deploy metrics-server. Download its components.yaml manifest:
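A sketch of the download, assuming the v0.5.2 manifest attached to the metrics-server GitHub release (the original URL is not shown):

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.2/components.yaml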
In components.yaml, change the image to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.2, and add --kubelet-insecure-tls to the container's startup arguments.
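For reference, after the change the metrics-server container in components.yaml would look roughly like the fragment below; the surrounding arguments are those shipped in the v0.5.2 manifest and may differ slightly in your copy:

      containers:
      - name: metrics-server
        image: docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.2
        args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        # added: accept the kubelet's self-signed serving certificate
        - --kubelet-insecure-tls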
kubectl apply -f components.yaml
Once the metrics-server pod has started normally, after a short while you can use kubectl top to view metrics for the cluster and its pods:
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   509m         12%    3654Mi          47%
node2   133m         3%     1786Mi          23%
node3   117m         2%     1810Mi          23%

kubectl top pod -n kube-system
NAME                               CPU(cores)   MEMORY(bytes)
coredns-74586cf9b6-575nl           6m           16Mi
coredns-74586cf9b6-mbn8s           5m           17Mi
etcd-node1                         49m          91Mi
kube-apiserver-node1               142m         490Mi
kube-controller-manager-node1      38m          54Mi
kube-proxy-k5lzs                   26m          19Mi
kube-proxy-rb5pf                   9m           15Mi
kube-proxy-w5zpk                   27m          16Mi
kube-scheduler-node1               7m           18Mi
metrics-server-8dfd488f5-r8pbh     8m           21Mi
tigera-operator-5fb55776df-wxbph   10m          38Mi
Next use helm to deploy the Kubernetes dashboard. Add the chart repo:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
View the configurable values of the chart:
helm show values kubernetes-dashboard/kubernetes-dashboard
Customize values.yaml as follows:
image:
  repository: kubernetesui/dashboard
  tag: v2.5.1
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
  - k8s.example.com
  tls:
  - secretName: example-com-tls-secret
    hosts:
    - k8s.example.com
metricsScraper:
  enabled: true
First create the secret that holds the TLS certificate for k8s.example.com:
kubectl create secret tls example-com-tls-secret \
  --cert=cert.pem \
  --key=key.pem \
  -n kube-system
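If no real certificate for k8s.example.com is at hand, a self-signed one can be generated for testing before creating the secret above (a hypothetical step, not part of the original text):

# generate a self-signed certificate and key for the dashboard hostname
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=k8s.example.com"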
Deploy the dashboard with helm:
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  -n kube-system \
  -f values.yaml
Confirm that the command above completes successfully.
Create an administrator ServiceAccount:
kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system
kubectl create clusterrolebinding kube-dashboard-admin-sa \
  --clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa
Create the token the cluster administrator will use to log in to the dashboard:
kubectl create token kube-dashboard-admin-sa -n kube-system --duration=87600h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlU1SlpSTS1YekNuVzE0T1k5TUdTOFFqN25URWxKckt6OUJBT0xzblBsTncifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxOTY4OTA4MjgyLCJpYXQiOjE2NTM1NDgyODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsInVpZCI6IjY0MmMwMmExLWY1YzktNDFjNy04Mjc5LWQ1ZmI3MGRjYTQ3ZSJ9fSwibmJmIjoxNjUzNTQ4MjgyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1kYXNoYm9hcmQtYWRtaW4tc2EifQ.Xqxlo2vJ9Hb6UUVIqwvc8I5bahdxKzSRSaQI_67Yt7_YEHmkkHApxUGlwJYTKF9ufww3btlCmM8PtRn5_Q1yv-HAFyTOYKo8WHZ9UCm1bT3X8V8g4GQwZIl2dwmlUmKb1unBz2-em2uThQ015bMPDE8a42DV_bOwWjljVXat0nwV14nGorC8vKLjXbohrIJ3G1pgCJvlBn99F1RelmSUSQLlolUFoxpN6MamYTElwR6FfD-AGmFXvZSbcFaqVW0oxJHV70Gjs2igOtpqHFxxPlHT8aQzlRiybPtFyBf9Ll87TmVJimT89z8wv2si2Nee8bB2jhsApLn8TJyUSlbTXA
Use the token above to log in to the Kubernetes dashboard.
References:
Installing kubeadm
Creating a cluster with kubeadm