Part 1: Deploy ingress-nginx
Rancher exposes its UI outside the cluster through an Ingress by default, so you need to deploy an ingress controller yourself. This guide uses ingress-nginx-controller as the example.
1. Install Helm
version=v3.3.1
# Download from the Huawei open-source mirror
curl -LO https://mirrors.huaweicloud.com/helm/${version}/helm-${version}-linux-amd64.tar.gz
tar -zxvf helm-${version}-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64
Add the ingress-nginx Helm repo:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Deploy ingress-nginx with Helm. The default images are hosted on gcr.io; you can search Docker Hub for a mirror and substitute it:
helm install ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.image.repository=lizhenliang/nginx-ingress-controller \
  --set controller.image.tag=0.30.0 \
  --set controller.image.digest=null \
  --set controller.service.type=NodePort \
  ingress-nginx/ingress-nginx
Confirm that ingress-nginx is ready:
[root@k8s-master2 ~]# kubectl -n ingress-nginx get pods
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-98bc9c78c-gpklh   1/1     Running   0          45m
[root@k8s-master2 ~]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.0.0.98    <none>        80:30381/TCP,443:32527/TCP   45m
ingress-nginx-controller-admission   ClusterIP   10.0.0.47    <none>        443/TCP                      45m
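From output like the above, the HTTP and HTTPS NodePorts can be pulled out with a little awk. A minimal sketch, run here against the captured text rather than a live cluster (`svc_line` is simply the controller line shown above):

```shell
# The `kubectl -n ingress-nginx get svc` line for the controller, captured as text
svc_line='ingress-nginx-controller NodePort 10.0.0.98 <none> 80:30381/TCP,443:32527/TCP 45m'

# Splitting on ":", "," and "/" puts the HTTP node port in field 2
# and the HTTPS node port in field 5
http_port=$(echo "$svc_line" | awk -F'[:,/]' '{print $2}')
https_port=$(echo "$svc_line" | awk -F'[:,/]' '{print $5}')
echo "HTTP NodePort:  $http_port"
echo "HTTPS NodePort: $https_port"
```

On a live cluster the same awk can be fed directly from `kubectl -n ingress-nginx get svc | grep NodePort`.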
2. Deploy the Rancher container platform
For reference, see the official Rancher installation documentation.
Add the rancher Helm chart repo:
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
Install cert-manager:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v0.15.0 \
  jetstack/cert-manager
Deploy Rancher. Note that hostname must be a DNS domain name:
helm install rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.bbdops.com \
  --version 2.5.1 \
  rancher-latest/rancher
Check the resources that were created:
[root@k8s-master2 ~]# kubectl get pods -A -o wide
NAMESPACE                 NAME                                         READY   STATUS              RESTARTS   AGE    IP                NODE          NOMINATED NODE   READINESS GATES
cattle-system             helm-operation-45x2l                         1/2     Error               0          15m    10.244.36.69      k8s-node1     <none>           <none>
cattle-system             helm-operation-7cr5c                         2/2     Running             0          76s    10.244.224.7      k8s-master2   <none>           <none>
cattle-system             helm-operation-cwbl4                         0/2     Completed           0          12m    10.244.224.6      k8s-master2   <none>           <none>
cattle-system             helm-operation-l5tlm                         0/2     Completed           0          11m    10.244.107.199    k8s-node3     <none>           <none>
cattle-system             helm-operation-n6t87                         0/2     Completed           0          12m    10.244.107.198    k8s-node3     <none>           <none>
cattle-system             helm-operation-npdhj                         0/2     Completed           0          14m    10.244.107.197    k8s-node3     <none>           <none>
cattle-system             helm-operation-tq8hl                         2/2     Running             0          9s     10.244.107.202    k8s-node3     <none>           <none>
cattle-system             helm-operation-vg45m                         0/2     Completed           0          16s    10.244.36.70      k8s-node1     <none>           <none>
cattle-system             helm-operation-zlm7b                         0/2     Completed           0          13m    10.244.224.5     k8s-master2   <none>           <none>
cattle-system             rancher-7797675cb7-hqw8n                     1/1     Running             3          26m    10.244.224.4      k8s-master2   <none>           <none>
cattle-system             rancher-7797675cb7-w58nw                     1/1     Running             2          26m    10.244.159.131    k8s-master1   <none>           <none>
cattle-system             rancher-7797675cb7-x24jm                     1/1     Running             2          26m    10.244.169.131    k8s-node2     <none>           <none>
cattle-system             rancher-webhook-6f69f5fd94-qkhg8             1/1     Running             0          12m    10.244.159.133    k8s-master1   <none>           <none>
cattle-system             rancher-webhook-b5b7b76c4-wrrcz              0/1     ContainerCreating   0          73s    <none>            k8s-node3     <none>           <none>
cert-manager              cert-manager-86b8b4f4b7-v5clv                1/1     Running             3          36m    10.244.224.3      k8s-master2   <none>           <none>
cert-manager              cert-manager-cainjector-7f6686b94-whmsn      1/1     Running             5          36m    10.244.107.195    k8s-node3     <none>           <none>
cert-manager              cert-manager-webhook-66d786db8c-jfz7g        1/1     Running             0          36m    10.244.107.196    k8s-node3     <none>           <none>
fleet-system              fleet-agent-64d854c5b-jh4tf                  1/1     Running             0          11m    10.244.107.200    k8s-node3     <none>           <none>
fleet-system              fleet-controller-5db6bcbb9-5tz6f             1/1     Running             0          14m    10.244.169.132    k8s-node2     <none>           <none>
fleet-system              fleet-controller-ccc95b8cd-xq28r             0/1     ContainerCreating   0          6s     <none>            k8s-node1     <none>           <none>
fleet-system              gitjob-5997858b9c-tp6kz                      0/1     ContainerCreating   0          6s     <none>            k8s-master1   <none>           <none>
fleet-system              gitjob-68cbf8459-bcbgt                       1/1     Running             0          14m    10.244.159.132    k8s-master1   <none>           <none>
ingress-nginx             ingress-nginx-controller-98bc9c78c-gpklh     1/1     Running             0          47m    10.244.159.130    k8s-master1   <none>           <none>
kube-system               calico-kube-controllers-97769f7c7-4h8sp      1/1     Running             1          165m   10.244.224.2      k8s-master2   <none>           <none>
kube-system               calico-node-29f2b                            1/1     Running             1          165m   192.168.112.112   k8s-node1     <none>           <none>
kube-system               calico-node-854tr                            1/1     Running             1          165m   192.168.112.114   k8s-node3     <none>           <none>
kube-system               calico-node-n4b54                            1/1     Running             1          165m   192.168.112.113   k8s-node2     <none>           <none>
kube-system               calico-node-qdcc9                            1/1     Running             1          165m   192.168.112.110   k8s-master1   <none>           <none>
kube-system               calico-node-zf6gt                            1/1     Running             1          165m   192.168.112.111   k8s-master2   <none>           <none>
kube-system               coredns-6cc56c94bd-hdzmb                     1/1     Running             1          160m   10.244.169.130    k8s-node2     <none>           <none>
kubernetes-dashboard      dashboard-metrics-scraper-7b59f7d4df-plg6v   1/1     Running             1          163m   10.244.36.67      k8s-node1     <none>           <none>
kubernetes-dashboard      kubernetes-dashboard-5dbf55bd9d-kq4sw        1/1     Running             1          163m   10.244.36.68      k8s-node1     <none>           <none>
rancher-operator-system   rancher-operator-6659bbb889-88fz6            1/1     Running             0          12m    10.244.169.133    k8s-node2     <none>           <none>
Check the Ingress that Rancher created for itself:
[root@k8s-master2 ~]# kubectl -n cattle-system get ingress
NAME      CLASS    HOSTS                ADDRESS     PORTS     AGE
rancher   <none>   rancher.bbdops.com   10.0.0.98   80, 443   26m
Check the NodePort service exposed by the ingress controller:
[root@k8s-master2 ~]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.0.0.98    <none>        80:30381/TCP,443:32527/TCP   48m
ingress-nginx-controller-admission   ClusterIP   10.0.0.47    <none>        443/TCP                      48m
On Windows, edit the local hosts file to add the domain mapping; because the service is of type NodePort, any node IP will do.
C:\Windows\System32\drivers\etc\hosts
192.168.112.110 rancher.bbdops.com
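On a Linux workstation the same mapping can be added idempotently from the shell. A sketch, with a temp file standing in for the real hosts file (`C:\Windows\System32\drivers\etc\hosts` on Windows, `/etc/hosts` on Linux):

```shell
# Idempotently add a hosts entry; a temp file stands in for the real hosts file
hosts_file=$(mktemp)

add_host() {
  ip=$1; name=$2
  # only append when the name is not mapped yet
  grep -qw "$name" "$hosts_file" || echo "$ip $name" >> "$hosts_file"
}

add_host 192.168.112.110 rancher.bbdops.com
add_host 192.168.112.110 rancher.bbdops.com   # second call changes nothing
cat "$hosts_file"
```

Running the function twice leaves a single entry, so the script is safe to re-run after node changes.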
Log in to the Rancher UI in a browser.
Change the password first, then choose to work directly with the existing cluster:
3. Delete or update the release
[root@k8s-master1 ~]# helm list -n cattle-system
NAME              NAMESPACE       REVISION   UPDATED                                   STATUS     CHART                         APP VERSION
rancher           cattle-system   1          2021-04-12 05:35:35.715062637 -0400 EDT   deployed   rancher-2.5.1                 v2.5.1
rancher-webhook   cattle-system   1          2021-04-12 10:00:18.825402952 +0000 UTC   deployed   rancher-webhook-0.1.0-beta9   0.1.0-beta9
[root@k8s-master1 ~]# helm delete rancher rancher-webhook -n cattle-system
release "rancher" uninstalled
release "rancher-webhook" uninstalled
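The release names passed to `helm delete` above can also be derived by script from the `helm list` output. A sketch against the captured text (trimmed to the columns we need):

```shell
# `helm list -n cattle-system` output captured as text
helm_list='NAME NAMESPACE REVISION STATUS CHART
rancher cattle-system 1 deployed rancher-2.5.1
rancher-webhook cattle-system 1 deployed rancher-webhook-0.1.0-beta9'

# Skip the header row and keep column 1 (the release name)
releases=$(echo "$helm_list" | awk 'NR > 1 {print $1}')
echo $releases
```

On a real cluster, `helm list -q -n cattle-system` prints the names directly without the header.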
Part 2: Install a local registry (on the Docker private-registry server)
1. Edit the configuration file
Harbor private-registry download: https://github.com/goharbor/harbor/releases/download/v1.9.2/harbor-offline-installer-v1.9.2.tgz

tar -zxf harbor-offline-installer-v1.9.2.tgz -C /opt/
cd /opt/
mv harbor harbor-1.9.2
ln -s /opt/harbor-1.9.2 /opt/harbor   # the symlink makes version management easier

Edit the Harbor configuration file:
vim /opt/harbor/harbor.yml
hostname: harbor.bbdops.com   # the domain name that resolves to this registry server
port: 180                     # avoid conflicting with nginx on port 80
data_volume: /data/harbor
location: /data/harbor/logs

Create the data and log directories:
mkdir -p /data/harbor/logs
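The harbor.yml edits above can also be made non-interactively. A sketch using sed on a small stand-in temp file (the real target is /opt/harbor/harbor.yml, where the port key sits under `http:` in some Harbor versions):

```shell
# A stand-in for /opt/harbor/harbor.yml with default-looking values
cfg=$(mktemp)
printf 'hostname: reg.mydomain.com\nport: 80\ndata_volume: /data\n' > "$cfg"

# Point the registry at our domain and move it off port 80
sed -i 's|^hostname: .*|hostname: harbor.bbdops.com|' "$cfg"
sed -i 's|^port: .*|port: 180|' "$cfg"
grep -E '^(hostname|port):' "$cfg"
```

Scripting the edit keeps repeated installs (for example after a version upgrade) reproducible.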
2. Install docker-compose
Docker Compose is a command-line tool from Docker for defining and running applications made up of multiple containers. With Compose you declare an application's services in a YAML file, then create and start the whole application with a single command.
yum install docker-compose -y   # may take a while depending on your network; alternatively, download the binary into /usr/local/bin/ and make it executable

Run the Harbor install script:
sh /opt/harbor/install.sh   # may take a while depending on your network
cd /opt/harbor
docker-compose ps
3. Edit the nginx configuration file
vi /etc/nginx/conf.d/harbor.bbdops.com.conf
server {
    listen 80;
    server_name harbor.bbdops.com;
    client_max_body_size 4000m;
    location / {
        proxy_pass http://127.0.0.1:180;   # Harbor listens on port 180 (see harbor.yml)
    }
}
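If you manage several vhosts, the file can be generated from variables instead of edited by hand. A minimal sketch, written to a temp file here so it runs anywhere (on the real server the target is /etc/nginx/conf.d/):

```shell
# Generate the Harbor vhost from variables
domain=harbor.bbdops.com
backend=http://127.0.0.1:180   # assumes Harbor's HTTP port from harbor.yml
conf=$(mktemp)

cat > "$conf" <<EOF
server {
    listen 80;
    server_name $domain;
    client_max_body_size 4000m;
    location / {
        proxy_pass $backend;
    }
}
EOF
grep -E 'server_name|proxy_pass' "$conf"
```

After copying the generated file into conf.d, validate with `nginx -t` before reloading.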
Start nginx and enable it at boot:
systemctl start nginx
systemctl enable nginx

Try opening harbor.bbdops.com in a browser on your workstation. If the name does not resolve, add a hosts entry mapping harbor.bbdops.com to the registry server's IP.
Default account: admin
Default password: Harbor12345
After logging in, create a new project; we will push a test image to it next.
4. Push an image to the private registry
docker pull nginx:1.7.9
docker login harbor.bbdops.com
docker tag 84581e99d807 harbor.bbdops.com/public/nginx:v1.7.9
docker push harbor.bbdops.com/public/nginx:v1.7.9
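The tag-and-push step generalizes to a batch of images. A sketch that only does the string handling, so no docker daemon is needed to try it (the registry and project names follow the article; `redis:6.0` is just an illustrative second image):

```shell
# Compute private-registry names for a batch of images
registry=harbor.bbdops.com
project=public

for image in nginx:1.7.9 redis:6.0; do
  name=${image%%:*}   # part before the ":" -> image name
  tag=${image##*:}    # part after the ":"  -> tag
  target="$registry/$project/$name:v$tag"
  echo "docker tag $image $target && docker push $target"
done
```

Piping the echoed commands into `sh` (after reviewing them) performs the actual retag and push.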
5. Restarting Harbor
cd /opt/harbor
docker-compose down    # stop
./prepare              # regenerate the configuration
docker-compose up -d   # start
6. Edit /etc/docker/daemon.json on every node
{
  "insecure-registries": ["registry.access.redhat.com", "quay.io", "harbor.bbdops.com"],
  "registry-mirrors": []
}

Restart docker afterwards so the change takes effect: systemctl restart docker
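Because a malformed daemon.json stops the docker daemon from starting, it is worth validating the file before restarting. A sketch using python3's stdlib JSON checker, with a temp file standing in for /etc/docker/daemon.json:

```shell
# Validate daemon.json before restarting docker
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "insecure-registries": ["registry.access.redhat.com", "quay.io", "harbor.bbdops.com"],
  "registry-mirrors": []
}
EOF

# `python3 -m json.tool` exits non-zero on invalid JSON
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  result=valid
else
  result=invalid
fi
echo "daemon.json is $result"
```

Run the same check against the real file on each node before `systemctl restart docker`.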
7. Add hosts entries for name resolution
cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.112.110 k8s-master1 harbor.bbdops.com
192.168.112.111 k8s-master2 dashboard.bbdops.com
192.168.112.112 k8s-node1 rancher.bbdops.com
192.168.112.113 k8s-node2
192.168.112.114 k8s-node3
192.168.112.115 k8s-node4
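To double-check which IP a name maps to in a hosts-format file, a short awk scan works on any node. A sketch run against a temp copy of the first three entries above:

```shell
# Look a name up in a hosts-format file with awk
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.112.110 k8s-master1 harbor.bbdops.com
192.168.112.111 k8s-master2 dashboard.bbdops.com
192.168.112.112 k8s-node1 rancher.bbdops.com
EOF

# Print the IP of any line where one of the name columns matches exactly
ip=$(awk -v name=harbor.bbdops.com '{for (i = 2; i <= NF; i++) if ($i == name) print $1}' "$hosts")
echo "$ip"
```

Pointing the script at the real /etc/hosts confirms each bbdops.com name resolves to the intended node.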