Installing Harbor Offline on a Kubernetes Cluster with Helm

Background

Install Harbor via Helm on a Kubernetes cluster running in the company's internal LAN, which is air-gapped (no internet access).

Implementation Steps

I. Install Helm on the Kubernetes cluster (skip this section if Helm is already installed)

1. About Helm

As we know, deploying a containerized application on Kubernetes means creating a number of different resources and writing YAML files of several kinds; for a large, complex application this quickly becomes hard to manage. This is the problem Helm was born to solve: incubated and governed by the CNCF, it is used to define, install, and upgrade complex applications on k8s.

Helm can be thought of as the package manager for Kubernetes: it makes it easy to find, share, and use applications built for Kubernetes. Put simply, Helm's job is to look up the chart you need in a repository and install that chart into the K8S cluster as a release.
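A minimal sketch of that repository-to-release flow, using Helm 2 syntax to match this install (the chart and release names are only examples):

```bash
# Find a chart in the configured repositories
helm search wordpress

# Install it into the cluster as a release named "my-blog" (Helm 2 syntax)
helm install stable/wordpress --name my-blog

# The release now shows up in the cluster
helm ls
```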

The Helm (v2) architecture is shown below:

[Figure: Helm architecture diagram]

2. Install Helm and Tiller

Helm consists of the following two components:

  • Helm client: manages Repository, Chart, Release, and other objects
  • Tiller server: bridges client commands and the K8S cluster, generating and managing the various K8S resource objects defined by a chart

(1) Install the Helm client
Download the latest binary release from https://github.com/helm/helm/releases
Extract it and copy the binaries into /usr/local/bin/:

```bash
[root@k8s-master01 ~]# tar xf helm-v2.11.0-linux-amd64.tar.gz
[root@k8s-master01 ~]# cp linux-amd64/helm linux-amd64/tiller /usr/local/bin/

# If the machine has internet access, you can also download and copy directly:
# wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.9.1-linux-amd64.tar.gz | tar -zx
# mv linux-amd64/helm /usr/local/bin
```

(2) On the K8S master, create a ServiceAccount (sa) for Helm and set up RBAC:

```bash
$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
```
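If you prefer to keep cluster RBAC under version control, here is a declarative equivalent of the two commands above (a sketch using the same names):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
```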

(3) Install Tiller
On all K8S nodes, pull the tiller:v[helm-version] image, where helm-version matches the Helm version above:

```bash
docker pull dotbalo/tiller:v2.11.0
```

Since ours is an offline internal environment, we download the image in advance, push it into the internal registry, and then pull it on all nodes:

```bash
# Download on a machine with internet access
docker pull dotbalo/tiller:v2.11.0

# Save the image to a tar archive
docker save -o tiller-v2.11.0.tar dotbalo/tiller:v2.11.0

# Copy the tar archive to an intranet machine and load it
docker load -i tiller-v2.11.0.tar
docker images | grep tiller
docker tag dotbalo/tiller:v2.11.0 harbor.xxx.com.cn/baseimg/tiller:v2.11.0
docker push harbor.xxx.com.cn/baseimg/tiller:v2.11.0

# Pull the image on all K8S nodes
docker pull harbor.xxx.com.cn/baseimg/tiller:v2.11.0
```

Then install Tiller with helm init (in this offline setup you can point --tiller-image at the internal copy, e.g. harbor.xxx.com.cn/baseimg/tiller:v2.11.0, instead of the Docker Hub tag):

```bash
# Install, specifying the ServiceAccount and the image
[root@k8s-master01 ~]# helm init --service-account tiller --tiller-image dotbalo/tiller:v2.11.0
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```

PS: if you did not specify the sa at the start, you can also patch the deployment after installing Tiller:

```bash
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec": {"template":{"spec":{"serviceAccount":"tiller"}}}}'
```

(4) Verify

```bash
[root@k8s-master01 ~]# helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

[root@k8s-master01 ~]# kubectl get pod,svc -n kube-system | grep tiller
pod/tiller-deploy-5d7c8fcd59-d4djx   1/1   Running   0   3m

service/tiller-deploy   ClusterIP   10.106.28.190   <none>   44134/TCP   5m
```
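Optionally, confirm that the Tiller deployment really picked up the tiller ServiceAccount (a jsonpath sketch against the deployment created by helm init):

```bash
# Should print "tiller"
kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'
```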

3. Common commands

Only the most common commands, and the ones used here, are listed; see the official documentation for more.

```bash
# List helm repositories
helm repo list

# Search for a chart
helm search xxx

# Show details of a chart
helm inspect xxxx/xxxx

# Download a chart package
helm fetch ChartName

# Validate a chart's configuration
helm lint ${chart_name}

# Package a chart
helm package ${chart_name} --debug

# List installed releases
helm ls

# List all releases, including deleted ones
helm ls -a

# Install a chart
helm install --debug

# Uninstall a chart release
helm delete xxxx          # Helm 2: keeps the release name; shows as "deleted" in helm ls -a
helm delete --purge xxxx  # Helm 2: --purge deletes it completely and frees the release name
helm uninstall xxxx       # Helm 3
```

That's it for the Helm introduction; there are plenty of articles online you can consult. Two clearly organized ones are recommended for reference:
(1) Helm installation and project usage
(2) Using helm to deploy releases to kubernetes

II. Download the harbor-helm chart and the Harbor images

Since we are on an internal, offline network, we need to download the chart package and the Harbor images in advance. Pick the release version you want from the harbor-helm releases page: https://github.com/goharbor/harbor-helm/releases

First, find a machine with internet access:

```bash
# Method 1
# Download the source tarball of the desired version from the releases page above
wget https://github.com/goharbor/harbor-helm/archive/v1.1.4.tar.gz
tar zxvf v1.1.4.tar.gz
cd harbor-helm-1.1.4

# Method 2
helm repo add harbor https://helm.goharbor.io
helm fetch harbor/harbor --version 1.1.4
tar xf harbor-1.1.4.tgz

# Method 3
git clone https://github.com/goharbor/harbor-helm
cd harbor-helm
git checkout 1.1.4
```

Inside the chart directory, the chart file structure looks like this:

```
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   └── service.yaml
└── values.yaml
```

Where:
Chart.yaml describes the chart: its name, version, description, and so on.
values.yaml holds the variables consumed by the resources defined under templates.
The templates directory defines the various Kubernetes resources using Go template syntax; combined with the values from values.yaml, it generates the actual resource manifests.
NOTES.txt contains custom information printed to the screen after the chart is installed with helm install.

Optional files:
LICENSE: the chart's license
README.md: a human-readable introduction to the chart
requirements.yaml: the chart's dependencies

What we will mainly modify below is the configuration in values.yaml.

The Harbor images also need to be downloaded in advance; here we use the offline installer package to fetch them all at once:

```bash
[root@localhost harbor]# wget https://github.com/goharbor/harbor/releases/download/v1.8.6/harbor-offline-installer-v1.8.6.tgz

[root@localhost harbor]# tar zxvf harbor-offline-installer-v1.8.6.tgz
[root@localhost harbor]# cd harbor-offline-installer-v1.8.6
[root@localhost harbor-offline-installer-v1.8.6]# ls
common  docker-compose.yml  harbor.v1.8.6.tar.gz  harbor.yml  install.sh  LICENSE  prepare

# harbor.v1.8.6.tar.gz contains all the Harbor 1.8.6 images; just load them
[root@localhost harbor-offline-installer-v1.8.6]# docker load -i harbor.v1.8.6.tar.gz
```
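With the images loaded locally, they still have to be reachable by every k8s node. A sketch of pushing them all into the internal registry (harbor.xxx.com.cn/baseimg reuses the project from the Tiller step above; adjust the registry path and image filter to your environment):

```bash
# Re-tag every loaded goharbor/* image into the internal registry and push it
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^goharbor/'); do
  target="harbor.xxx.com.cn/baseimg/${img#goharbor/}"
  docker tag "$img" "$target"
  docker push "$target"
done
```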

III. Architecture design and installation

(1) Design

At this point you need to design Harbor's architecture around your own k8s cluster. The main points to think through:

  • Images

Harbor uses quite a lot of container images, as many as ten (the registry component alone runs several containers), and the cluster will schedule the pods across multiple nodes, so every node must have the images it needs. Pulling them everywhere generates heavy download traffic and makes the first full startup slow. It is best to download the images once on a single node and then distribute them to all nodes, e.g. through the internal registry as in the tag-and-push loop in Part II.

  • Data persistence

Pods on K8S are ephemeral, so you need to decide how to persist data. Harbor can use local storage, external storage, or network storage.
Our cluster already provides Ceph RBD, so the data of the db/redis pods is persisted through internal PVCs, i.e. mounted Ceph RBD volumes. Image data, on the other hand, goes to Ceph S3 storage.

  • Service access

Expose the service externally through an Ingress.

  • Whether to enable TLS

For our internal environment we do not plan to enable https for now; plain http is enough.

  • DNS resolution and trust

DNS was already configured earlier, so there is nothing to do there; just use the newly requested domain. But since we use http, /etc/docker/daemon.json must be configured on every node:

```bash
# The entries are the Harbor addresses
cat <<EOF > /etc/docker/daemon.json
{
  "insecure-registries": [
    "harbor.xxx.com.cn:80",
    "harbor.xxx.com.cn"
  ]
}
EOF

# Then reload systemd and restart docker
systemctl daemon-reload
systemctl restart docker
```

(2) Custom configuration

Now we can configure the values file. Below, changed lines and sensitive values are annotated or replaced with the [REDACTED] marker. The main changes are as follows; everything else is left at its default, e.g. the internal db and redis are used:

  • Use ingress for service exposure

  • Disable tls

  • Configure the harbor and notary domains

  • Use the "ceph-rbd" StorageClass for data PVCs

  • Store image data in ceph s3

```yaml
expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP", "nodePort" or "loadBalancer" and fill the information
  # in the corresponding section
  type: ingress
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: false  # [REDACTED]
    # Fill the name of secret if you want to use your own TLS certificate.
    # The secret must contain keys named:
    # "tls.crt" - the certificate
    # "tls.key" - the private key
    # "ca.crt" - the certificate of CA
    # These files will be generated automatically if the "secretName" is not set
    secretName: ""
    # By default, the Notary service will use the same cert and key as
    # described above. Fill the name of secret if you want to use a
    # separated one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate, it's necessary
    # when the type isn't "ingress" and "secretName" is null
    commonName: ""
  ingress:
    hosts:
      core: harbor2.xxxx.com.cn    # [REDACTED]
      notary: notary2.xxxx.com.cn  # [REDACTED]
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "false"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
#    the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
#    the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
#    the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: http://harbor2.xxxx.com.cn  # [REDACTED]

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you have already existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "ceph-rbd"  # [REDACTED]
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: ""
      storageClass: "ceph-rbd"  # [REDACTED]
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: "ceph-rbd"  # [REDACTED]
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "ceph-rbd"  # [REDACTED]
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "ceph-rbd"  # [REDACTED]
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify whether to disable `redirect` for images and chart storage, for
    # backends which not supported it (such as using minio for `s3` storage type), please disable
    # it. To disable redirects, simply set `disableredirect` to `true` instead.
    # Refer to
    # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: s3
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: default
      bucket: [REDACTED]
      accesskey: [REDACTED]
      secretkey: [REDACTED]
      regionendpoint: [REDACTED]
      #encrypt: false
      #keyid: mykeyid
      secure: false
      #v4auth: true
      #chunksize: "5242880"
      rootdirectory: /registry
      #storageclass: STANDARD

imagePullPolicy: IfNotPresent

logLevel: debug
# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "Harbor12345"
# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"

database:
  # if external database is used, set "type" to "external"
  # and fill the connection information in "external" section
  type: internal
  internal:
    image:
      repository: [REDACTED]
      tag: v1.8.6
    # The initial superuser password for internal database
    password: [REDACTED]
    # resources:
    #   requests:
    #     memory: 256Mi
    #     cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
```
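One more offline consideration: since the cluster cannot reach Docker Hub, every component's image repository in values.yaml has to point at the internal registry (only the database override is shown above; the other components have the same image: block). A hedged sketch, assuming the chart's default goharbor/ repositories and the baseimg project used earlier:

```bash
# Rewrite "repository: goharbor/..." lines to the internal registry mirror
sed -i 's#repository: goharbor/#repository: harbor.xxx.com.cn/baseimg/#' values.yaml

# Double-check that no component still points at Docker Hub
grep -n 'repository:' values.yaml
```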

(3) Install the chart

With the configuration done, we can install:

```bash
cd harbor-helm
# Dry-run first with --debug to check the rendered output
helm install --debug --dry-run --namespace goharbor --name harbor-1-8-6 .

# Install for real, writing the rendered manifests to deploy.yaml for later inspection and reuse
helm install --namespace goharbor --name harbor-1-8-6 . | sed 'w ../deploy.yaml'
```
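Once the install command returns, you can watch the release converge (release and namespace names taken from the command above):

```bash
# Release status as Helm sees it
helm status harbor-1-8-6

# Watch the pods come up
kubectl -n goharbor get pods -w
```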

IV. Verification

```bash
[root@SYSOPS00065318 bankdplyop]# kubectl -n harbor get po
NAME                                               READY   STATUS    RESTARTS   AGE
harbor-1-8-harbor-chartmuseum-59687f9974-rxk4q     1/1     Running   0          3d8h
harbor-1-8-harbor-clair-6c65bd97b-lbnb5            1/1     Running   0          3d8h
harbor-1-8-harbor-core-f9d44d6b9-pl5fl             1/1     Running   0          3d8h
harbor-1-8-harbor-database-0                       1/1     Running   0          3d8h
harbor-1-8-harbor-jobservice-d98454d6c-tt77v       1/1     Running   0          3d8h
harbor-1-8-harbor-notary-server-86895f5744-x6v49   1/1     Running   0          3d8h
harbor-1-8-harbor-notary-signer-59d4bf5b58-kl778   1/1     Running   0          3d8h
harbor-1-8-harbor-portal-77d57c4764-qqgw9          1/1     Running   0          3d8h
harbor-1-8-harbor-redis-0                          1/1     Running   0          3d8h
harbor-1-8-harbor-registry-848bc49fb7-prwkw        2/2     Running   0          3d8h


[root@SYSOPS00065318 bankdplyop]# kubectl -n harbor get pvc
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-1-8-harbor-redis-0               Bound    pvc-ce7f3e96-7440-11ea-b523-06687c008cdb   1Gi        RWO            ceph-rbd       3d9h
database-data-harbor-1-8-harbor-database-0   Bound    pvc-ce7a7fb9-7440-11ea-b523-06687c008cdb   1Gi        RWO            ceph-rbd       3d9h
harbor-1-8-harbor-jobservice                 Bound    pvc-23459419-741e-11ea-aaee-005056918741   1Gi        RWO            ceph-rbd       3d9h
```

PS: after deploying, allow some time for all the pods to come up. If you use a Ceph StorageClass, configure the Ceph secret in advance; likewise, create any ImagePullSecret beforehand. In the meantime, you can check each pod's startup with describe and logs:

```bash
kubectl logs harbor-1-8-harbor-database-0 -n goharbor

kubectl describe po harbor-1-8-harbor-database-0 -n goharbor
```

If the database pod never comes up and keeps reporting errors that the databases shown below cannot be found, the database had not finished starting: the pod's probe failed and force-restarted the pod before the three database init scripts could complete. The fix is to lengthen the probe's initial delay and timeout.

[Screenshot: database pod error log]
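A sketch of lengthening those probe windows with a JSON patch. The statefulset name below follows the <release>-harbor-database pattern of this chart and the container index/values are assumptions; verify the names with kubectl -n goharbor get statefulset first:

```bash
kubectl -n goharbor patch statefulset harbor-1-8-6-harbor-database --type='json' -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds", "value": 300},
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 10}
]'
```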

Log in to verify:

[Screenshot: Harbor web login page]


Further reading

Add the following as needed to fit your own environment.

1. Helm uninstall, upgrade, and rollback

```bash
# Uninstall
helm delete xxxxxx

# Upgrade; pay a little attention to how resources are reused here
helm upgrade --set "key=value" ${release_name} ${chart_repo/name}

# Roll back to revision 1
helm rollback ${release_name} 1
```
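To decide which revision to roll back to, list the release history first (the release name is an example):

```bash
# Show all revisions of the release
helm history harbor-1-8-6

# Roll back to a specific revision from the history
helm rollback harbor-1-8-6 2
```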

2. About certificates

If you expose the service through ingress with https enabled, the browser will ask you to trust the certificate, and the docker CLI will fail against Harbor with an error:

```bash
[root@k8s-master harbor-helm]# docker login harbor.xxxx.com.cn
Username: admin
Password:
Error response from daemon: Get https://harbor.xxxx.com.cn/v2/: x509: certificate signed by unknown authority
[root@k8s-master harbor-helm]#
```

This happens because we have not provided the certificate to docker. We need to copy Harbor's ca.crt to every k8s node. Where can you find Harbor's certificate? In a secret:

```bash
$ kubectl get secret harbor-harbor-ingress -n goharbor -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5VENDQWQyZ0F3SUJBZ0lSQUtNbWp6QUlHcFZKUmZxNnJDdDMySGN3RFFZSktvWklodmNOQVFFTEJRQXcK...(base64 data trimmed)
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLakNDQWhLZ0F3SUJBZ0lRTFNicmtJYlEzVFQrT3hrSlBEMldMREFOQmdrcWhraUc5dzBCQVFzRkFEQVUK...(base64 data trimmed)
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBOURGUXpEdDBUZlRZcjlLc1RWMy9OOVI4cXRyTFNuRFpBMmxVejhqaFBFeEp3YUJlCg==...(base64 data trimmed)
kind: Secret
metadata:
  creationTimestamp: "2019-10-21T06:53:21Z"
  labels:
    app: harbor
    chart: harbor
    heritage: Tiller
    release: harbor
  name: harbor-harbor-ingress
  namespace: kube-ops
  resourceVersion: "7297296"
  selfLink: /api/v1/namespaces/kube-ops/secrets/harbor-harbor-ingress
  uid: c35d0829-3a35-425c-ac45-fdd5db7d7694
type: kubernetes.io/tls

# The ca.crt value in the data section is the certificate we need;
# note that it still has to be base64-decoded
$ kubectl get secrets/harbor-harbor-ingress -n goharbor -o jsonpath="{.data.ca\.crt}" | base64 --decode
-----BEGIN CERTIFICATE-----
MIIC9DCCAdygAwIBAgIQffFj8E2+DLnbT3a3XRXlBjANBgkqhkiG9w0BAQsFADAU
...(certificate body trimmed)
-----END CERTIFICATE-----
```

Save the certificate locally as ca.crt and copy it to /etc/docker/certs.d/harbor.xxxx.com.cn/ca.crt on every k8s node; once the certificate is in place, access works normally.

```bash
# 1. Create ca.crt locally, writing the decoded data into it
kubectl get secrets/harbor-harbor-ingress -n goharbor -o jsonpath="{.data.ca\.crt}" | base64 --decode | sed 'w ca.crt'

# 2. Create the target directory on every node in the k8s cluster
for n in `seq -w 01 06`; do ssh node-$n "mkdir -p /etc/docker/certs.d/harbor.xxxx.com.cn"; done

# 3. Copy the Harbor CA certificate to /etc/docker/certs.d/harbor.xxxx.com.cn/ca.crt on each node
for n in `seq -w 01 06`; do scp ca.crt node-$n:/etc/docker/certs.d/harbor.xxxx.com.cn/ca.crt; done
```

Restart docker and run docker login again.

If you still get the trust error x509: certificate signed by unknown authority, you can add the CA to the system trust store:

```bash
chmod 644 /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```

Append the ca.crt from above to /etc/pki/tls/certs/ca-bundle.crt:

```bash
# Append rather than cp: overwriting ca-bundle.crt would clobber the system CAs
cat /etc/docker/certs.d/harbor.xxxx.com.cn/ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
chmod 444 /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```
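On CentOS/RHEL there is also a cleaner variant of the same idea: drop the CA into the trust anchors directory and regenerate the bundle instead of editing it in place:

```bash
# Install the Harbor CA as a trust anchor and rebuild the extracted bundles
cp ca.crt /etc/pki/ca-trust/source/anchors/harbor-ca.crt
update-ca-trust extract
```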

Since the method above is fairly tedious, you can instead do what the non-https setup does: have the docker CLI skip certificate verification by listing the registry under insecure-registries in the docker daemon config /etc/docker/daemon.json (the equivalent of the --insecure-registry startup flag):

```bash
$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["harbor.xxxx.com.cn"]
}

# Then restart docker
$ systemctl restart docker

# Log in again
$ docker login harbor.xxxx.com.cn
```

After a successful login, the auth information is stored in /root/.docker/config.json:

```bash
$ cat /root/.docker/config.json
{
    "auths": {
        "h.cnlinux.club": {
            "auth": "YWRtaW46SGFyYm9yMTIzNDU="
        }
    }
}
```
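The auth field is simply base64("username:password"); decoding the value above recovers the default admin credentials:

```bash
$ echo 'YWRtaW46SGFyYm9yMTIzNDU=' | base64 --decode
admin:Harbor12345
```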

3. About pull secrets

If the k8s cluster has many nodes, does someone have to log in on every single node before it can pull images from the Harbor registry? Wouldn't that be very tedious?

No. K8S provides a secret type, kubernetes.io/dockerconfigjson, exactly for this problem. How is it used?

Method 1: create the secret manually
(1) First, base64-encode the docker login config produced by the successful login above:

```bash
[root@node-06 ~]# cat .docker/config.json | base64
ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
```

(2) Create the secret
The secret manifest looks like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry-secret
  namespace: default
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
```

Create the secret:

```bash
[root@node-01 ~]# kubectl create -f harbor-registry-secret.yaml
secret/harbor-registry-secret created
```

Method 2: create the pull secret with a command

```bash
$ kubectl create secret docker-registry harbor-registry-secret --docker-server=${server} --docker-username=${username} --docker-password=${pwd}
secret/harbor-registry-secret created
```

Using the secret
When deploying a Deployment, declare imagePullSecrets in the pod spec, as shown below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.xxxx.com.cn/test/nginx:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-registry-secret
```
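As an alternative to listing imagePullSecrets in every Deployment, you can attach the secret to the namespace's default ServiceAccount once, and every pod in that namespace will inherit it. A sketch:

```bash
# Pods in the "default" namespace will now pull with this secret automatically
kubectl patch serviceaccount default -n default \
  -p '{"imagePullSecrets": [{"name": "harbor-registry-secret"}]}'
```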

