
A Summary of Kubernetes (K8S) Installation and Deployment Methods

琼杰笔记


Prerequisites for installing and deploying Kubernetes/K8S:

Prepare the system environment (a relatively recent Linux kernel is required); install the docker, kubeadm, kubectl, and kubelet programs on all master and node hosts and start their services; run the kubeadm initialization on a master host, then have the remaining nodes run the join operation to form the cluster.

I. System environment requirements

1. Configure time synchronization on all nodes

Install chrony to synchronize with network time sources:

[root@master1 ~]# yum install chrony -y
[root@master1 ~]# systemctl enable --now chronyd

2. Host name resolution

Edit the /etc/hosts file and add name-resolution records for every host.
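A minimal sketch of the kind of records involved — the IP addresses below are hypothetical placeholders and must be replaced with your own; only the host names (master1, node1, …) and the control-plane name k8s-api.ilinux.io appear later in this article:

```
192.168.222.150  master1 k8s-api.ilinux.io
192.168.222.151  master2
192.168.222.152  node1
192.168.222.153  node2
```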

3. Disable the firewall

Stop and disable the iptables, ufw, or firewalld service, whichever applies:

[root@master1 ~]# systemctl stop firewalld
[root@master1 ~]# systemctl disable firewalld

On Ubuntu:

root@master2:~# ufw disable
Firewall stopped and disabled on system startup

4. Disable SELinux

Check the SELinux status with getenforce or sestatus:

[root@master1 ~]# getenforce
Enforcing
[root@master1 ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31

Edit the SELINUX=enforcing option in /etc/sysconfig/selinux and change the value to disabled; this takes effect after a reboot. To avoid a reboot, it can also be set temporarily:

[root@master1 ~]# setenforce
usage:  setenforce [ Enforcing | Permissive | 1 | 0 ]
[root@master1 ~]# setenforce 0
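The permanent config-file edit can also be scripted; the following sketch wraps it in a small helper (the function name is mine, not part of any tool):

```shell
# Hypothetical helper: permanently disable SELinux in a config file
# (normally /etc/selinux/config, which /etc/sysconfig/selinux points to).
disable_selinux_config() {
  sed -E -i 's/^SELINUX=(enforcing|permissive)/SELINUX=disabled/' "$1"
}

# On a real host (as root) you would run:
#   setenforce 0                                 # immediate, until reboot
#   disable_selinux_config /etc/selinux/config   # permanent, after reboot
```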

5. Disable swap devices

Run swapoff -a, then comment out the swap mount record line in /etc/fstab.
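The fstab edit can likewise be scripted; this sketch (the helper name is mine) comments out any uncommented swap mount line so swap stays off across reboots:

```shell
# Hypothetical helper: comment out swap entries in an fstab-style file.
comment_out_swap() {
  sed -E -i 's|^([^#].*[[:space:]]swap[[:space:]].*)$|# \1|' "$1"
}

# On a real host (as root) you would run:
#   swapoff -a                    # disable swap immediately
#   comment_out_swap /etc/fstab   # keep it disabled after reboot
```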

6. Configure bridge-nf-call-iptables

Set the contents of /proc/sys/net/bridge/bridge-nf-call-iptables and /proc/sys/net/bridge/bridge-nf-call-ip6tables to 1. (These files only exist once the br_netfilter kernel module is loaded; run modprobe br_netfilter if they are missing.) Create the file /etc/sysctl.d/k8s.conf with the following content:

vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

sysctl -p /etc/sysctl.d/k8s.conf  # apply the settings
II. Install the applications

Perform the same steps on every host.

1. Install Docker

See: Docker Installation and Deployment Summary (Part II)

2. Configure Docker

Install and enable the Docker container runtime on each node, and configure a registry mirror service.

An Alibaba Cloud registry mirror is recommended for hosts in mainland China, and the cgroup driver must be specified as systemd. Edit /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": ["…", "…", "…"]
}

Apply the configuration

After editing the Docker config file, run daemon-reload and restart Docker for the changes to take effect; verify the result with docker info:

[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart docker
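A syntax error in daemon.json will stop the Docker daemon from starting at all, so it can be worth sanity-checking the file before the restart. A sketch (the helper name is mine, and it assumes python3 is available for the JSON parse):

```shell
# Hypothetical helper: verify a daemon.json file is syntactically valid JSON
# before restarting Docker; prints "valid" or "invalid".
check_daemon_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid"
  else
    echo "invalid"
  fi
}

# On a real host: check_daemon_json /etc/docker/daemon.json
```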

3. Install kubeadm, kubectl, and kubelet

Because Kubernetes development is led by Google, its program images and packages (e.g. on gcr.io) cannot be reached from mainland China and must be obtained through domestic mirrors such as those run by Alibaba Cloud or Tsinghua University. See Alibaba Cloud's Kubernetes mirror instructions, configure the repository as described there, and then install.

Reference:

1. For Debian/Ubuntu

apt-get update && apt-get install -y apt-transport-https
curl  | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb  kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl  # installs the latest version by default
root@master2:/etc/docker# apt-get install -y kubectl=1.18.1-00 kubeadm=1.18.1-00 kubelet=1.18.1-00  # install a specific version

2. For CentOS / RHEL / Fedora

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet  # installs the latest version by default
[root@node1 system]# yum install -y kubectl-1.18.1 kubeadm-1.18.1 kubelet-1.18.1  # install a specific version

Note: because the upstream site does not publish a sync mechanism, the GPG index check may fail; in that case install with yum install -y --nogpgcheck kubelet kubeadm kubectl.

4. Enable the services at boot

root@master2:~# systemctl enable docker kubelet
III. Initialize the control plane

1. View the help for the init command options

[root@master1 lib]# kubeadm init --help
Run this command in order to set up the Kubernetes control plane

The "init" command executes the following phases:
```
preflight                     Run pre-flight checks
kubelet-start                 Write kubelet settings and (re)start the kubelet
certs                         Certificate generation
  /ca                         Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                  Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client   Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca             Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client         Generate the certificate for the front proxy client
  /etcd-ca                    Generate the self-signed CA to provision identities for etcd
  /etcd-server                Generate the certificate for serving etcd
  /etcd-peer                  Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client    Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client      Generate the certificate the apiserver uses to access etcd
  /sa                         Generate a private key for signing service account tokens along with its public key
kubeconfig                    Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                      Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                    Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager         Generate a kubeconfig file for the controller manager to use
  /scheduler                  Generate a kubeconfig file for the scheduler to use
control-plane                 Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                  Generates the kube-apiserver static Pod manifest
  /controller-manager         Generates the kube-controller-manager static Pod manifest
  /scheduler                  Generates the kube-scheduler static Pod manifest
etcd                          Generate static Pod manifest file for local etcd
  /local                      Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                 Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                    Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                    Upload the kubelet component config to a ConfigMap
upload-certs                  Upload certificates to kubeadm-certs
mark-control-plane            Mark a node as a control-plane
bootstrap-token               Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize              Updates settings relevant to the kubelet after TLS bootstrap
  /experimental-cert-rotation Enable kubelet client certificate rotation
addon                         Install required addons for passing Conformance tests
  /coredns                    Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                 Install the kube-proxy addon to a Kubernetes cluster
```

Usage:
  kubeadm init [flags]
  kubeadm init [command]

Available Commands:
  phase       Use this command to invoke single phase of the init workflow

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --control-plane-endpoint string        Specify a stable IP address or DNS name for the control plane.
      --cri-socket string                    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
      --dry-run                              Don't apply any changes; just output what would be done.
  -k, --experimental-kustomize string        The path where kustomize patches for static pod manifests are stored.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. Options are:
                                             IPv6DualStack=true|false (ALPHA - default=false)
                                             PublicKeysECDSA=true|false (ALPHA - default=false)
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Choose a container registry to pull control plane images from (default "k8s.gcr.io")
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  List of phases to be skipped
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.

Global Flags:
      --add-dir-header           If true, adds the file directory to the header
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm init [command] --help" for more information about a command.

2. Run the initialization

The init command (adjust it to your own environment using the option meanings explained below):


[root@master1 ~]# kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.1 \
--control-plane-endpoint k8s-api.ilinux.io \
--apiserver-advertise-address 192.168.222.153 \
--pod-network-cidr 10.244.0.0/16 \
--token-ttl 0

Option meanings:

--image-repository: the image registry to pull component images from;
--kubernetes-version: the version of the Kubernetes components, which should match the kubelet version;
--control-plane-endpoint: a stable endpoint for the control plane, either an IP address or a DNS name; it becomes the API Server address in the kubeconfig files used by cluster administrators and cluster components. It may be omitted for a single-control-plane deployment;
--pod-network-cidr: the Pod network address range, in CIDR format; the flannel network plugin defaults to 10.244.0.0/16 and the calico plugin to 192.168.0.0/16;
--service-cidr: the Service network address range, in CIDR format; the default is 10.96.0.0/12;
--apiserver-advertise-address: the IP address the apiserver advertises to the other components, normally the master node's IP on the cluster-internal network; 0.0.0.0 means all available addresses on the node;
--token-ttl: the lifetime of the shared bootstrap token, 24 hours by default; 0 means it never expires. Since a token leaked through insecure storage endangers cluster security, setting an expiry is recommended.
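The same options can also be recorded in a kubeadm configuration file and passed with --config instead of individual flags. A sketch for the v1.18 API, with the values copied from the command line above (--apiserver-advertise-address and --token-ttl map to a separate InitConfiguration object, omitted here):

```yaml
# kubeadm-config.yaml -- file-based equivalent of the flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.1
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: k8s-api.ilinux.io
networking:
  podSubnet: 10.244.0.0/16
```

It would then be applied with kubeadm init --config kubeadm-config.yaml.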

1. If initialization reports errors, you can add parameters to the /etc/sysconfig/kubelet file to ignore them.

[root@master1 package]# more /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=

For example, setting KUBELET_EXTRA_ARGS="--fail-swap-on=false" suppresses the swap error, i.e. the kubelet will not treat enabled swap as fatal. During initialization you can also add the option --ignore-preflight-errors=Swap.

2. If you see an error like the following:

May 14 13:44:55 master1 kubelet: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See  for more information.
May 14 13:44:55 master1 kubelet: F0514 13:44:55.571137 16841 server.go:199] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

This appears because initialization has not yet finished and the relevant config files have not been generated; it can be ignored and goes away once initialization succeeds.


On success, initialization prints output like the following (covering the environment preflight checks and the generation of certificates, private keys, config files, and static Pod manifests, plus addon deployment):

W0514 16:26:38.198975 3142 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-api.ilinux.io] and IPs [10.96.0.1 192.168.222.150]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.222.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.222.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0514 16:28:24.490184 3142 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0514 16:28:24.494827 3142 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.512491 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ezl1f1.ha4fc5werojbu359
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
--discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
--discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43

Steps after successful initialization

Simply run the commands given in the success message:

[root@master1 opt]# mkdir -p $HOME/.kube
[root@master1 opt]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 opt]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 opt]# ll -h $HOME/.kube/config
-rw------- 1 root root 5.4K May 14 16:58 /root/.kube/config

Joining nodes to the cluster

Run the join command on each node to add it to the cluster:

[root@node1 system]# kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
> --discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43
W0514 17:42:23.395760    4416 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
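If the --discovery-token-ca-cert-hash value is ever lost, it can be recomputed from the cluster CA certificate; the pipeline below is the recipe from the kubeadm join documentation, wrapped in a helper whose name is mine:

```shell
# Recompute the sha256 discovery hash of a CA certificate's public key, as
# used by 'kubeadm join --discovery-token-ca-cert-hash sha256:<hash>'.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the control plane: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

If the bootstrap token itself has expired (the default TTL is 24 hours when --token-ttl 0 is not used), running kubeadm token create --print-join-command on the control plane prints a complete, fresh join command.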
IV. Install the Flannel network plugin

1. The Flannel project on GitHub

URL:

2. Install and deploy flannel

[root@master1 opt]# kubectl apply -f 

Because of network restrictions in mainland China, it can be very hard to download kube-flannel.yml or pull the flannel images. With considerable effort I downloaded kube-flannel.yml and pulled the images on an overseas server, then used docker save to pack them into a single file that can be copied to the target servers and imported there with docker load. The steps are recorded below:

[root@ecs-e84a ~]# docker pull quay.io/coreos/flannel:v0.12.0-arm64 && \
> docker pull quay.io/coreos/flannel:v0.12.0-arm && \
> docker pull quay.io/coreos/flannel:v0.12.0-ppc64le && \
> docker pull quay.io/coreos/flannel:v0.12.0-s390x
v0.12.0-arm64: Pulling from coreos/flannel
8fa90b21c985: Pull complete
c4b41df13d81: Pull complete
a73758d03943: Pull complete
d09921139b63: Pull complete
17ca61374c07: Pull complete
6da2b4782d50: Pull complete
Digest: sha256:a2f5081b71ee4688d0c7693d7e5f2f95e9eea5ea3b4147a12179f55ede42c185
Status: Downloaded newer image for quay.io/coreos/flannel:v0.12.0-arm64
quay.io/coreos/flannel:v0.12.0-arm64
v0.12.0-arm: Pulling from coreos/flannel
832e07764099: Pull complete
d0888001d791: Pull complete
f0e1b4ffe531: Pull complete
a0af26dd2937: Pull complete
8701ece6be98: Pull complete
bf9c02e8240d: Pull complete
Digest: sha256:ca8249ab50424e07d15298a2fda29003be3fd930750565d7b068b5ef9d36bd53
Status: Downloaded newer image for quay.io/coreos/flannel:v0.12.0-arm
quay.io/coreos/flannel:v0.12.0-arm
v0.12.0-ppc64le: Pulling from coreos/flannel
cd95c8a93e39: Pull complete
324d08c752b6: Pull complete
7e5cf18f4805: Pull complete
eef736355019: Pull complete
9d84e68c2ffe: Pull complete
aeb40b202208: Pull complete
Digest: sha256:28314b287f84b62f62e2bdb30e0cdba17ac97dd4822efb4664a7391637b0f052
Status: Downloaded newer image for quay.io/coreos/flannel:v0.12.0-ppc64le
quay.io/coreos/flannel:v0.12.0-ppc64le
v0.12.0-s390x: Pulling from coreos/flannel
176bad61a3a4: Pull complete
13b80a37370b: Pull complete
42d8e66fa893: Pull complete
266247e2e603: Pull complete
1b56fbc8a8e1: Pull complete
85ecb68de469: Pull complete
Digest: sha256:3ce5b8d40451787e1166bf6b207c7834c13f7a0712b46ddbfb591d8b5906bfa6
Status: Downloaded newer image for quay.io/coreos/flannel:v0.12.0-s390x
quay.io/coreos/flannel:v0.12.0-s390x
[root@ecs-e84a ~]# docker images
REPOSITORY               TAG               IMAGE ID       CREATED        SIZE
quay.io/coreos/flannel   v0.12.0-s390x     57eade024bfb   2 months ago   56.9MB
quay.io/coreos/flannel   v0.12.0-ppc64le   9225b871924d   2 months ago   70.3MB
quay.io/coreos/flannel   v0.12.0-arm64     7cf4a417daaa   2 months ago   53.6MB
quay.io/coreos/flannel   v0.12.0-arm       767c3d1f8cba   2 months ago   47.8MB
quay.io/coreos/flannel   v0.12.0-amd64     4e9f801d2217   2 months ago   52.8MB
[root@ecs-e84a flannel]# docker save quay.io/coreos/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-s390x -o flannelv0.12.0.tar
[root@ecs-e84a flannel]# ll -h
total 274M
-rw------- 1 root root 274M May 15 22:24 flannelv0.12.0.tar

Check the node status. A STATUS of Ready as shown below means the cluster is healthy and Kubernetes has been installed successfully. (For the management and day-to-day use of Kubernetes, please refer to other related articles...)

[root@master1 flannel]# kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   2d5h   v1.18.1
node1     Ready    <none>   9h     v1.18.1
node2     Ready    <none>   2d4h   v1.18.1
