文章目录
- 提前准备
- 什么是 Harbor
- Harbor 架构描述
- Harbor 安装的先决条件
- 硬件资源
- 软件依赖
- 端口依赖
- Harbor 在 k8s 的高可用
- Harbor 部署
- Helm 编排
- YAML 编排
- 创建 namespace
- 导入镜像
- 部署 Redis
- 部署 PostgreSQL
- 部署 Harbor core
- 部署 Harbor trivy
- 部署 Harbor jobservice
- 部署 Harbor registry
- 部署 Harbor portal
- 部署 Harbor exporter
- Harbor 的配置和验证
- 创建用户
- 创建项目
- 项目分配成员
- docker login 配置
- containerd 配置
- 推送镜像验证
- docker 验证
- containerd 验证
- 拉取镜像验证
- docker 验证
- containerd 验证
- 遗留问题
提前准备
- 考虑到镜像拉取可能超时的问题,建议提前去 github 下载好 harbor 的离线包,离线包里面包含了所需镜像,可以提前导入好,避免部署时镜像拉取超时,下载地址:harbor-offline-installer-v2.11.1.tgz
- 我的实验环境是下面这几个博客部署的
- k8s 部署可以参考我之前的博客:openeuler 22.03 lts sp4 使用 kubeadm 部署 k8s-v1.28.2 高可用集群
- ingress 部署可以参考我之前的博客:k8s 1.28.2 集群部署 ingress 1.11.1 包含 admission-webhook
- MinIO 部署可以参考我之前的博客:k8s 1.28.2 集群部署 MinIO 分布式集群
什么是 Harbor
- Harbor 官网
- Harbor Github
- Harbor 是一个开源的制品仓库
- 相比较 docker registry,它可以通过策略和基于角色的访问控制来保护镜像
- Harbor 是 CNCF 毕业项目,提供合规性、性能和互操作性,帮助您跨 Kubernetes 和 Docker 等云原生计算平台一致、安全地管理镜像
Harbor 架构描述
- Architecture Overview of Harbor
Proxy
- 由 Nginx Server 组成的反向代理,提供 API 路由能力
- Harbor 的组件,如 core、registry、Web portal 和 token 服务等,都位于这个反向代理的后面
Core
- Harbor 的核心服务,主要提供以下功能
  - API Server:接受 REST API 请求并响应的 HTTP 服务器,依赖"身份验证和授权"、"中间件"和"API 处理程序"等子模块
  - Config Manager:涵盖所有系统配置的管理,如身份验证类型设置、电子邮件设置和证书等
  - Project Management:管理项目的基础数据和相应的元数据,项目用于隔离托管的制品
  - Quota Manager:管理项目的配额设置,并在发生新推送时执行配额验证
  - Chart Controller:将 chart 相关请求代理到后端 chartmuseum,并提供多个扩展来改善 chart 管理体验
  - Retention Manager:管理标签保留策略,并执行和监控标签保留流程
  - Content Trust:为后端 Notary 提供的信任能力添加扩展,以支持内容信任流程的顺利进行,目前仅支持对容器镜像进行签名
  - Replication Controller:管理复制策略和 registry 适配器,触发和监控并发的复制过程
  - Scan Manager:管理由不同提供商适配的多个已配置扫描器,并为指定对象提供扫描摘要和报告
  - Notification Manager(webhook):Harbor 中配置的一种机制,可以把 Harbor 中制品的状态变化推送到配置好的 Webhook 端点,相关方通过监听对应的 webhook 事件来触发后续操作
  - OCI Artifact Manager:管理整个 Harbor registry 中所有 OCI 制品生命周期的核心组件,提供 CRUD 操作来管理制品的元数据和相关附加信息,例如扫描报告、容器镜像的构建历史、依赖项以及 helm chart 的 values.yaml 等,还支持管理制品标签等操作
  - Registry Driver:以 Registry Client SDK 的形式实现,用于与底层 Registry(目前为 docker distribution)通信,"OCI Artifact Manager" 依赖此驱动从清单甚至指定制品的配置 JSON 中获取附加信息
Job Service
- 通用作业执行队列服务,允许其他组件 / 服务通过简单的 RESTful API 并发提交运行异步任务的请求
Log collector
- 日志收集器,负责将其他模块的日志收集到一个地方
GC Controller
- 管理在线 GC 计划设置,并启动和跟踪 GC 进度
Chart Museum
- 第三方 chart 仓库服务器,提供 chart 管理和访问 API
Docker Registry
- 第三方 registry 服务器,负责存储 Docker 镜像和处理 docker push / pull 命令。由于 Harbor 需要对镜像实施访问控制,Registry 会将客户端定向到 Token 服务,为每个 pull 或 push 请求获取有效的 Token
Notary
- 第三方内容信任服务器,负责安全地发布和验证内容
Web Portal
- 图形用户界面,帮助用户管理 Registry 上的镜像
数据存储相关
- k-v storage:由 Redis 组成,提供数据缓存功能,并支持 Job 服务临时持久化 Job 元数据
- data storage:支持多种存储,作为 Registry 和 Chart Museum 的后端存储进行数据持久化(比如兼容 S3 的 MinIO)
- Database:存储 Harbor 模型的相关元数据,如项目、用户、角色、复制策略、标签保留策略、扫描器、chart 和镜像,采用 PostgreSQL
- 下面是 Harbor 2.11.1 版本相关组件对应的版本
组件 | 版本 |
---|---|
Postgresql | 14.10 |
Redis | 7.2.2 |
Beego | 2.0.6 |
Distribution/Distribution | 2.8.3 |
Helm | 2.9.1 |
Swagger-ui | 5.9.1 |
Harbor 安装的先决条件
硬件资源
硬件类型 | 最小配置 | 推荐配置 |
---|---|---|
CPU | 2 CPU | 4 CPU |
内存 | 4 GB | 8 GB |
磁盘存储 | 40 GB | 160 GB |
软件依赖
软件 | 版本 | 描述 |
---|---|---|
Docker | 20.10.10-ce+ | Docker 安装手册 Docker Engine documentation |
Docker Compose | v1.18.0+ 或者 docker compose v2 (docker-compose-plugin) | Docker Compose 安装手册 Docker Compose documentation |
OpenSSL | 越新越好 | 用于为 Harbor 生成证书和密钥 |
端口依赖
端口可以在配置文件中定义
端口 | 协议 | 描述 |
---|---|---|
443 | HTTPS | 用户访问页面和接口 api 的 https 请求 |
4443 | HTTPS | 与 Harbor 的 Docker Content Trust 服务的连接 |
80 | HTTP | 用户访问页面和接口 api 的 http 请求 |
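如果是用 docker-compose 方式安装,这些端口在 harbor.yml 里定义,下面是一个片段示例(证书路径是假设的,要按实际情况修改);本文的 k8s 部署则是通过 Service 和 Ingress 暴露的
hostname: harbor.devops.icu
http:
  port: 80
https:
  port: 443
  certificate: /data/cert/harbor.crt
  private_key: /data/cert/harbor.key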
Harbor 在 k8s 的高可用
- Harbor 的大部分组件现在是无状态的。因此,我们可以简单地增加 Pod 的副本,以确保组件分布到多个 worker 节点,并利用 K8S 的 “Service” 机制来确保 Pod 之间的连接
- 至于存储层,预计由用户自行提供高可用的 PostgreSQL、Redis 集群来存放应用数据,以及用于存储镜像和 chart 的 PVC 或对象存储
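比如按照本文后面的 YAML 部署完成后,想把无状态组件扩成多副本,直接调整 Deployment 的副本数就可以(示例命令,namespace 用的是后文创建的 registry):
kubectl -n registry scale deployment harbor-core harbor-portal harbor-registry --replicas=2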
Harbor 部署
Helm 编排
这块可以直接看官方文档,这边不做详细的操作:Deploying Harbor with High Availability via Helm
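这里只放一个最简的安装草稿,chart 的可配置项很多,externalURL、域名这些都要换成自己的:
helm repo add harbor https://helm.goharbor.io
helm repo update
helm install harbor harbor/harbor \
  --namespace registry --create-namespace \
  --set expose.ingress.hosts.core=harbor.devops.icu \
  --set externalURL=http://harbor.devops.icu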
YAML 编排
因为我的 pvc 是 MinIO 提供的,直接用 helm 会有很多问题,这里只能通过 YAML 编排来慢慢调整,下面的 YAML 文件都是基于 helm template 生成后做的修改
创建 namespace
namespace 的名字大家可以自己定义,没有什么强制要求
kubectl create ns registry
导入镜像
如果没有针对机器做规划的话,可以每个节点先都导入进去
ctr -n k8s.io image import harbor.v2.11.1.tar.gz
这个时候会有下面的报错
ctr: archive/tar: invalid tar header
通过 file 命令查看压缩包
file harbor.v2.11.1.tar.gz
可以看到是一个 gzip 压缩类型的,这个不是 ctr 支持的格式,ctr 要求的是无压缩类型的 tar 包
harbor.v2.11.1.tar.gz: gzip compressed data, was "harbor.v2.11.1.tar", last modified: Thu Aug 15 10:07:54 2024, from Unix, original size modulo 2^32 1811445248
这个时候需要先解压,再重新打一个不压缩的 tar 包
tar xvf harbor.v2.11.1.tar.gz
rm -f harbor.v2.11.1.tar.gz
tar cvf harbor.v2.11.1.tar.gz ./
可以用 file 命令检查一下,正常是返回类似下面这样的内容,然后重新 import 导入镜像就可以了
harbor.v2.11.1.tar.gz: POSIX tar archive (GNU)
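节点比较多的话,可以用类似下面的循环批量分发并导入(IP 换成自己的节点,假设节点间已经做了免密 ssh):
for node in 192.168.22.123 192.168.22.124 192.168.22.125; do
  scp harbor.v2.11.1.tar.gz root@${node}:/tmp/
  ssh root@${node} "ctr -n k8s.io image import /tmp/harbor.v2.11.1.tar.gz"
done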
部署 Redis
- 问了下 GPT,因为 MinIO 是对象存储,并不提供完全符合 POSIX 标准的文件系统功能(比如常规文件系统的权限管理),而 Redis 依赖于传统文件系统(如 ext4、xfs 等)来存储其数据文件(RDB、AOF)
- 由于是自己练习的,这里 Redis 就直接绑定节点,用 local pv 来处理持久化
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-harbor-redis-0
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: data-harbor-redis-0
namespace: registry
hostPath:
path: /approot/k8s_data/harbor-redis
type: DirectoryOrCreate
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- 192.168.22.125
---
# Source: harbor/templates/redis/service.yaml
apiVersion: v1
kind: Service
metadata:
name: harbor-redis
namespace: registry
labels:
app: harbor
spec:
ports:
- port: 6379
selector:
app: harbor
component: redis
---
# Source: harbor/templates/redis/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: harbor-redis
namespace: registry
labels:
app: harbor
component: redis
spec:
replicas: 1
serviceName: harbor-redis
selector:
matchLabels:
app: harbor
component: redis
template:
metadata:
labels:
app: harbor
component: redis
spec:
securityContext:
runAsUser: 999
fsGroup: 999
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
initContainers:
- name: init-dir
image: goharbor/redis-photon:v2.11.1
imagePullPolicy: IfNotPresent
command: ["sh", "-c", "chown -R 999:999 /var/lib/redis"]
securityContext:
runAsUser: 0
volumeMounts:
- name: data
mountPath: /var/lib/redis
containers:
- name: redis
image: goharbor/redis-photon:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
livenessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 1
periodSeconds: 10
volumeMounts:
- name: data
mountPath: /var/lib/redis
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "1Gi"
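apply 之后可以先确认一下 Pod 状态,后面每个组件部署完都可以用类似的方式检查:
kubectl -n registry get pods -l component=redis
kubectl -n registry logs harbor-redis-0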
部署 PostgreSQL
PostgreSQL 和 Redis 一样,数据持久化目录涉及权限问题,这里也先绑定节点
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: database-data-harbor-database-0
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: database-data-harbor-database-0
namespace: registry
hostPath:
path: /approot/k8s_data/harbor-database
type: DirectoryOrCreate
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- 192.168.22.124
---
# Source: harbor/templates/database/database-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-database
namespace: registry
labels:
app: harbor
type: Opaque
data:
POSTGRES_PASSWORD: "Y2hhbmdlaXQ="
---
# Source: harbor/templates/database/database-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: harbor-database
namespace: registry
labels:
app: harbor
spec:
ports:
- port: 5432
selector:
app: harbor
component: database
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: harbor-database
namespace: registry
labels:
app: harbor
component: database
spec:
replicas: 1
serviceName: harbor-database
selector:
matchLabels:
app: harbor
component: database
template:
metadata:
labels:
app: harbor
component: database
spec:
securityContext:
runAsUser: 999
fsGroup: 999
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
initContainers:
# with "fsGroup" set, each time a volume is mounted, Kubernetes must recursively chown() and chmod() all the files and directories inside the volume
# this causes the postgresql reports the "data directory /var/lib/postgresql/data/pgdata has group or world access" issue when using some CSIs e.g. Ceph
# use this init container to correct the permission
# as "fsGroup" applied before the init container running, the container has enough permission to execute the command
- name: "data-permissions-ensurer"
image: goharbor/harbor-db:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
command: ["sh", "-c", "mkdir -p /var/lib/postgresql/data/pgdata && chmod -R 700 /var/lib/postgresql/data/pgdata && chown -R 999:999 /var/lib/postgresql/data"]
volumeMounts:
- name: database-data
mountPath: /var/lib/postgresql/data
subPath:
containers:
- name: database
image: goharbor/harbor-db:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
livenessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 1
readinessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 1
envFrom:
- secretRef:
name: harbor-database
env:
# put the data into a sub directory to avoid the permission issue in k8s with restricted psp enabled
# more detail refer to https://github.com/goharbor/harbor-helm/issues/756
- name: PGDATA
value: "/var/lib/postgresql/data/pgdata"
volumeMounts:
- name: database-data
mountPath: /var/lib/postgresql/data
subPath:
- name: shm-volume
mountPath: /dev/shm
volumes:
- name: shm-volume
emptyDir:
medium: Memory
sizeLimit: 512Mi
volumeClaimTemplates:
- metadata:
name: "database-data"
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "1Gi"
部署 Harbor core
---
# Source: harbor/templates/core/core-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-core
  namespace: registry
labels:
app: "harbor"
type: Opaque
data:
secretKey: "bm90LWEtc2VjdXJlLWtleQ=="
secret: "OVQzOHVXZmtybTRTZFVUcQ=="
tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMENVM0Y1bXFHSktra0ZVd09aNzZoRGpJSjhEUlo4cm5hY1p4YXNpa2ZnK2RkZndlCk5SWUI3RTFRbzZDQlA4bk9CdWN0L3JiRVNVank4S1JlaWRUTFZwRG41OFk4dW9iN3BqSTVLbENMdVE2NEN0QW0KSVVEUzJWMTdqeTRFTmFRWjlNK0V4NjNpVnlNVGFpMjhYeHhZaytCMXp0ZEtxR2dhRXhHY2x5bG5Xb1FXU3RCUgpaeUlpblNUblJmdGFIeDlSQ0x6Z0lFUDlKamVSUEV4NmRLa0FFMHdaVmJSOWRUWWd6M21XSFdLc3BFbS8vWGVYClpzTTFYR3ZLQzNPNHo4Q3lWZ2FIMzdRMWNPM3NxSFNTMGwwaVVtZCs2azBoZGd0VGxUVU5mNVFMQWx6d25SV1oKbDR6d3I3N2x1bTh1c0h2emNra3NnalUzSU5XVHBnQmFGV25oOHdJREFRQUJBb0lCQUN1OVpsSmpURWRWcVpkYgpENE5NVVVDdjNmL2NtU1RDa3RhN2lPSHp2LzF0c3AwMG1mUjE1M21NMWNGTTNWeFdRQ0ZiTzJNbmJTQXBZRVFKCmhvUllYMUtWcU9ZZjFtc3NLbjNHV0JUNFVDUlhYMzJHT0QwTXJrSlhUcnZMNDc2UitaSmtlWGFzcDcrLzh6aUEKMi9Ed3QveDdVc1pnbjZPOEhKNmRPTmJiTUlqb2o1enVLSVdCampleFMybHVCdEFYNzduZXhmUzNpV3RrQS9USgpwcUpsNEJETFV1WEtralJKQzVEWnBBdHdtTVpQeGQrSTQzYnc3bVRpemppaEEzaXo4SkJKclBnTTE2b1V3SnQ2CmdMVVp5ZkZGTFNPbjEyZHhPZUxPNXZFNTJKV0JtVzRuRW5IOUxxb3hDWExic00xT04zN0ZwcmhzUXUvdko0M0wKaFJoMWFtRUNnWUVBMUh3Z241V1pjd1l1Vkw0TnRvMTlzVytncFZrUzloem1ObGtNcXR5bnFoUWE1ODh4Sk95dwpLUDdncEdZOGhIbnNmQ0NGbUpDV21CdUJRbUxrRWlNUU83eG8xQ1dMdzYydTRZeGdtenlJZDQxWmVDdVlpZHNFCnVPMlpjVEUrazc4Qy9CcmFUZHlqVk9SMnIvbk1IZGh6SzlhSkM0WlFOU2tudURwNUpVY091NDhDZ1lFQStzV1YKRElsTVBjNGtpaWNYeXdMbE50L1pQWGJjbUtRWWZJNFZsclpWQXpFRlFaQ3NGMzY0K2p1NEFlZUttdGJhMkZ4RApEMFdmaWxWOVpTczNCUDloc3ZpWk42eExaVjJZMEJHSlN6Mlp5L0x5aTRaNXk3MnB0aW83bGxyMWx4azZBTFVVCmkrQ3c4RmlQVElHMlozS1BSVko5b1B1S3JnUzUvSDFqNktzUTBWMENnWUI4NXlwV0pKNDdHeHNJL1Y4YVBEbnkKbjJlVFNyVDJyeTQwTEV4aDg2c3JNdjVOM1dGS0QwZk9FV1VEdm9VOGFsODA1L2tnSVg0a2s2WjcyNTJ0ZTZjRApObEY0dzBsUkVUdUhvZmozeDdHQWRUcHVoVkg1VnlHRGcwZDdYak1tcmxXVzFFSVhHdWQzODRSQkZWbURBY1ZSCnM1NkRnOFNLTzFMNTNJVnlBRDhNeVFLQmdERzBoQXlPRWp5VjVZdzBuM1N2eURzT040TUZVa2czRGx0eDFqbWYKUGs1NW91OFIrK3BVUmRuamlGOW9RNExaWDF0UFBrT0NxMUxDQ3k3SVdBbDNqU2ZxT29SY2REMU5SZ0xIMXd6QQowd0VuMElkelNpVG1IUU5zYjQ4bnpGSDh3QkJ2MC9pOXVwU0pHUzR5NzdLbGRGeHJNMWQ3UkV1bHlDK1Jzd0hsCkZscEpBb0dBZFRmSzZKTVJabWNBbGZCdTlUSnh2NjVKdFcrMDI5Wi94eDJ1NUprVzFPUnVwNVJlekJBM2NiQzQKRmc0Y1h6SHJ1S0sxWVhGcERyS0tGYTFMSzFGaFpjMkZCSnN5dGVLNHFQeVNLOTZVb1BlbHA1VzVBMDVZTjBBaQpLTDB5MzhNYWlYb1AyTWFvb2pSR29xWU9sTVVXRlU5RnJQSm9aSXNGKzRuUjVWZHNUUzQ9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg=="
tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lRTGRpZ2xmZXNGaFhvVldOaTRKYkNwVEFOQmdrcWhraUc5dzBCQVFzRkFEQWEKTVJnd0ZnWURWUVFERXc5b1lYSmliM0l0ZEc5clpXNHRZMkV3SGhjTk1qUXhNREUwTURFMU16QTRXaGNOTWpVeApNREUwTURFMU16QTRXakFhTVJnd0ZnWURWUVFERXc5b1lYSmliM0l0ZEc5clpXNHRZMkV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURRSlRjWG1hb1lrcVNRVlRBNW52cUVPTWdud05Gbnl1ZHAKeG5GcXlLUitENTExL0I0MUZnSHNUVkNqb0lFL3ljNEc1eTMrdHNSSlNQTHdwRjZKMU10V2tPZm54ank2aHZ1bQpNamtxVUl1NURyZ0swQ1loUU5MWlhYdVBMZ1ExcEJuMHo0VEhyZUpYSXhOcUxieGZIRmlUNEhYTzEwcW9hQm9UCkVaeVhLV2RhaEJaSzBGRm5JaUtkSk9kRisxb2ZIMUVJdk9BZ1EvMG1ONUU4VEhwMHFRQVRUQmxWdEgxMU5pRFAKZVpZZFlxeWtTYi85ZDVkbXd6VmNhOG9MYzdqUHdMSldCb2ZmdERWdzdleW9kSkxTWFNKU1ozN3FUU0YyQzFPVgpOUTEvbEFzQ1hQQ2RGWm1YalBDdnZ1VzZieTZ3ZS9OeVNTeUNOVGNnMVpPbUFGb1ZhZUh6QWdNQkFBR2pZVEJmCk1BNEdBMVVkRHdFQi93UUVBd0lDcERBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXcKRHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVR2dIa3dDQ1JZaGhTTEFGNDAvdkJTczVPbHd3dwpEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRzVJajhkZjZOcDY0NjZlYTJjcFlDZk9Vc3BRc21kMFJNd0dRTHZ2CndIak5kSDR5NWw2TjkwQUNQNHBEWmF4MUx4TEJqcHlNeGVCbzJ6TkF6NjFYQ0tJZ3RQU1RsK0NmTStqenRkVVQKVlRUNmw4emZRbVZCQk56WlVwMlhUTXdyVkowUHZML2FIbk94NGRDb0pxd2tobGNrY3JRM0ErN1haNmtGYnl1WQpBQ200cnppSHRJVWpyZ25veUVtUGFxWTJTYzJ3a3JRZklLVXRDVkl4WFdZbW51WHF6d0MwSVdqOXV5VGlTNzdECkg0V1NFdjh4ajVId3ZkK1JvaGtYaGQrbkM5WUhVQVRGSWpsclpxYkRUZU5vdjBQNG81d3N5RmJMOFN4YTFJNVoKRENqc2ZUeGx3NTJCYUI1V0YxZEJLYnBtUmRPWWprN2xEVHpqd0tRSmVkVHhnYW89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
# Harbor admin 用户的密码
HARBOR_ADMIN_PASSWORD: "MXFAVzNlJFI="
POSTGRESQL_PASSWORD: "Y2hhbmdlaXQ="
REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk"
CSRF_KEY: "b0wxSjdQZ2F1OFBxWWNLYXpkU2plUDNNemtzdG9nZ1U="
---
# Source: harbor/templates/core/core-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-core
  namespace: registry
labels:
app: "harbor"
data:
app.conf: |+
appname = Harbor
runmode = prod
enablegzip = true
[prod]
httpport = 8080
PORT: "8080"
DATABASE_TYPE: "postgresql"
POSTGRESQL_HOST: "harbor-database"
POSTGRESQL_PORT: "5432"
POSTGRESQL_USERNAME: "postgres"
POSTGRESQL_DATABASE: "registry"
POSTGRESQL_SSLMODE: "disable"
POSTGRESQL_MAX_IDLE_CONNS: "100"
POSTGRESQL_MAX_OPEN_CONNS: "900"
EXT_ENDPOINT: "http://harbor.devops.icu"
CORE_URL: "http://harbor-core:80"
JOBSERVICE_URL: "http://harbor-jobservice"
REGISTRY_URL: "http://harbor-registry:5000"
TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
CORE_LOCAL_URL: "http://127.0.0.1:8080"
WITH_TRIVY: "true"
TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
REGISTRY_STORAGE_PROVIDER_NAME: "s3"
LOG_LEVEL: "info"
CONFIG_PATH: "/etc/core/app.conf"
CHART_CACHE_DRIVER: "redis"
_REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
_REDIS_URL_REG: "redis://harbor-redis:6379/2?idle_timeout_seconds=30"
PORTAL_URL: "http://harbor-portal"
REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"
HTTP_PROXY: ""
HTTPS_PROXY: ""
NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE: "docker-hub,harbor,azure-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory"
METRIC_ENABLE: "true"
METRIC_PATH: "/metrics"
METRIC_PORT: "8001"
METRIC_NAMESPACE: harbor
METRIC_SUBSYSTEM: core
QUOTA_UPDATE_PROVIDER: "db"
---
# Source: harbor/templates/core/core-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-core
  namespace: registry
labels:
app: "harbor"
spec:
ports:
- name: http-web
port: 80
targetPort: 8080
- name: http-metrics
port: 8001
selector:
app: "harbor"
component: core
---
# Source: harbor/templates/core/core-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-core
  namespace: registry
labels:
app: "harbor"
component: core
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: "harbor"
component: core
template:
metadata:
labels:
app: "harbor"
component: core
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
containers:
- name: core
image: goharbor/harbor-core:v2.11.1
imagePullPolicy: IfNotPresent
startupProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 360
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
envFrom:
- configMapRef:
name: "harbor-core"
- secretRef:
name: "harbor-core"
env:
- name: CORE_SECRET
valueFrom:
secretKeyRef:
name: harbor-core
key: secret
- name: JOBSERVICE_SECRET
valueFrom:
secretKeyRef:
name: harbor-jobservice
key: JOBSERVICE_SECRET
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
ports:
- containerPort: 8080
volumeMounts:
- name: config
mountPath: /etc/core/app.conf
subPath: app.conf
- name: secret-key
mountPath: /etc/core/key
subPath: key
- name: token-service-private-key
mountPath: /etc/core/private_key.pem
subPath: tls.key
- name: psc
mountPath: /etc/core/token
volumes:
- name: config
configMap:
name: harbor-core
items:
- key: app.conf
path: app.conf
- name: secret-key
secret:
secretName: harbor-core
items:
- key: secretKey
path: key
- name: token-service-private-key
secret:
secretName: harbor-core
- name: psc
emptyDir: {}
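注意:core 和后面 jobservice 的环境变量里都引用了一个名为 harbor-jobservice 的 Secret(JOBSERVICE_SECRET 这个 key),上面的 YAML 里没有单独列出来,如果环境里还没有,可以先用类似下面的命令创建一个,JOBSERVICE_SECRET 用随机串即可,REGISTRY_CREDENTIAL_PASSWORD 要和 registry 的 htpasswd 密码一致(本文用的是 harbor_registry_password):
kubectl -n registry create secret generic harbor-jobservice \
  --from-literal=JOBSERVICE_SECRET=$(openssl rand -hex 8) \
  --from-literal=REGISTRY_CREDENTIAL_PASSWORD=harbor_registry_password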
部署 Harbor trivy
同 Redis,不支持 MinIO 对象存储,只能先绑定到本地
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-harbor-trivy-0
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: data-harbor-trivy-0
namespace: registry
hostPath:
path: /approot/k8s_data/harbor-trivy
type: DirectoryOrCreate
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- 192.168.22.123
---
# Source: harbor/templates/trivy/trivy-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-trivy
namespace: registry
labels:
app: "harbor"
type: Opaque
data:
redisURL: cmVkaXM6Ly9oYXJib3ItcmVkaXM6NjM3OS81P2lkbGVfdGltZW91dF9zZWNvbmRzPTMw
gitHubToken: ""
---
# Source: harbor/templates/trivy/trivy-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: "harbor-trivy"
namespace: registry
labels:
app: "harbor"
spec:
ports:
- name: http-trivy
protocol: TCP
port: 8080
selector:
app: "harbor"
component: trivy
---
# Source: harbor/templates/trivy/trivy-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: harbor-trivy
namespace: registry
labels:
app: "harbor"
component: trivy
spec:
replicas: 1
serviceName: harbor-trivy
selector:
matchLabels:
app: "harbor"
component: trivy
template:
metadata:
labels:
app: "harbor"
component: trivy
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
automountServiceAccountToken: false
initContainers:
- name: init-dir
image: goharbor/trivy-adapter-photon:v2.11.1
imagePullPolicy: IfNotPresent
command: ["sh", "-c", "chown -R 10000:10000 /home/scanner/.cache"]
securityContext:
runAsUser: 0
volumeMounts:
- name: data
mountPath: /home/scanner/.cache
containers:
- name: trivy
image: goharbor/trivy-adapter-photon:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
env:
- name: HTTP_PROXY
value: ""
- name: HTTPS_PROXY
value: ""
- name: NO_PROXY
value: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
- name: "SCANNER_LOG_LEVEL"
value: "info"
- name: "SCANNER_TRIVY_CACHE_DIR"
value: "/home/scanner/.cache/trivy"
- name: "SCANNER_TRIVY_REPORTS_DIR"
value: "/home/scanner/.cache/reports"
- name: "SCANNER_TRIVY_DEBUG_MODE"
value: "false"
- name: "SCANNER_TRIVY_VULN_TYPE"
value: "os,library"
- name: "SCANNER_TRIVY_TIMEOUT"
value: "5m0s"
- name: "SCANNER_TRIVY_GITHUB_TOKEN"
valueFrom:
secretKeyRef:
name: harbor-trivy
key: gitHubToken
- name: "SCANNER_TRIVY_SEVERITY"
value: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
- name: "SCANNER_TRIVY_IGNORE_UNFIXED"
value: "false"
- name: "SCANNER_TRIVY_SKIP_UPDATE"
value: "false"
- name: "SCANNER_TRIVY_SKIP_JAVA_DB_UPDATE"
value: "false"
- name: "SCANNER_TRIVY_OFFLINE_SCAN"
value: "false"
- name: "SCANNER_TRIVY_SECURITY_CHECKS"
value: "vuln"
- name: "SCANNER_TRIVY_INSECURE"
value: "false"
- name: "SCANNER_API_SERVER_ADDR"
value: ":8080"
- name: "SCANNER_REDIS_URL"
valueFrom:
secretKeyRef:
name: harbor-trivy
key: redisURL
- name: "SCANNER_STORE_REDIS_URL"
valueFrom:
secretKeyRef:
name: harbor-trivy
key: redisURL
- name: "SCANNER_JOB_QUEUE_REDIS_URL"
valueFrom:
secretKeyRef:
name: harbor-trivy
key: redisURL
ports:
- name: api-server
containerPort: 8080
volumeMounts:
- name: data
mountPath: /home/scanner/.cache
readOnly: false
livenessProbe:
httpGet:
scheme: HTTP
path: /probe/healthy
port: api-server
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 10
readinessProbe:
httpGet:
scheme: HTTP
path: /probe/ready
port: api-server
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 200m
memory: 512Mi
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "5Gi"
部署 Harbor jobservice
---
# Source: harbor/templates/jobservice/jobservice-cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "harbor-jobservice-env"
namespace: registry
labels:
app: "harbor"
data:
CORE_URL: "http://harbor-core:80"
TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
REGISTRY_URL: "http://harbor-registry:5000"
REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"
JOBSERVICE_WEBHOOK_JOB_MAX_RETRY: "3"
JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT: "3"
HTTP_PROXY: ""
HTTPS_PROXY: ""
NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
METRIC_NAMESPACE: harbor
METRIC_SUBSYSTEM: jobservice
---
# Source: harbor/templates/jobservice/jobservice-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "harbor-jobservice"
namespace: registry
labels:
app: "harbor"
data:
config.yml: |+
#Server listening port
protocol: "http"
port: 8080
worker_pool:
workers: 10
backend: "redis"
redis_pool:
redis_url: "redis://harbor-redis:6379/1"
namespace: "harbor_job_service_namespace"
idle_timeout_second: 3600
job_loggers:
- name: "FILE"
level: INFO
settings: # Customized settings of logger
base_dir: "/var/log/jobs"
sweeper:
duration: 14 #days
settings: # Customized settings of sweeper
work_dir: "/var/log/jobs"
metric:
enabled: true
path: /metrics
port: 8001
#Loggers for the job service
loggers:
- name: "STD_OUTPUT"
level: INFO
reaper:
# the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24
max_update_hours: 24
# the max time for execution in running state without new task created
max_dangling_hours: 168
---
# Source: harbor/templates/jobservice/jobservice-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: harbor-jobservice
namespace: registry
annotations:
helm.sh/resource-policy: keep
labels:
app: "harbor"
component: jobservice
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: csi-s3
---
# Source: harbor/templates/jobservice/jobservice-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: "harbor-jobservice"
namespace: registry
labels:
app: "harbor"
spec:
ports:
- name: http-jobservice
port: 80
targetPort: 8080
- name: http-metrics
port: 8001
selector:
app: "harbor"
component: jobservice
---
# Source: harbor/templates/jobservice/jobservice-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: "harbor-jobservice"
namespace: registry
labels:
app: "harbor"
component: jobservice
spec:
replicas: 1
revisionHistoryLimit: 10
strategy:
type: RollingUpdate
selector:
matchLabels:
app: "harbor"
component: jobservice
template:
metadata:
labels:
app: "harbor"
component: jobservice
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
containers:
- name: jobservice
image: goharbor/harbor-jobservice:v2.11.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/v1/stats
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v1/stats
scheme: HTTP
port: 8080
initialDelaySeconds: 20
periodSeconds: 10
env:
- name: CORE_SECRET
valueFrom:
secretKeyRef:
name: harbor-core
key: secret
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
envFrom:
- configMapRef:
name: "harbor-jobservice-env"
- secretRef:
name: "harbor-jobservice"
ports:
- containerPort: 8080
volumeMounts:
- name: jobservice-config
mountPath: /etc/jobservice/config.yml
subPath: config.yml
- name: job-logs
mountPath: /var/log/jobs
subPath:
volumes:
- name: jobservice-config
configMap:
name: "harbor-jobservice"
- name: job-logs
persistentVolumeClaim:
claimName: harbor-jobservice
部署 Harbor registry
以下几个参数的解释摘抄自 ChatGPT,仅用作参考
- chunksize: 10485760
  - 解释:每个分块上传的大小,单位是字节,10,485,760 字节等于 10 MB
  - 建议值:默认建议设置在 5 MB 到 50 MB 之间,取决于你的网络环境和负载。太小会增加分块的数量和请求次数,太大会增加单次上传的时间
  - 如果网络速度较快且可靠,可以增大 chunksize,比如设置为 25 MB 或 50 MB;如果网络不稳定,可以适当减小,但避免设置过低
- multipartcopychunksize: 33554432
  - 解释:拷贝大文件时每个分块的大小,单位是字节,33,554,432 字节等于 32 MB
  - 建议值:默认建议设置在 16 MB 到 64 MB 之间,具体取决于文件大小和存储性能。拷贝大文件时,较大的 multipartcopychunksize 可以加快进度,但会增加内存和带宽占用
  - 对于大型文件,可以增大到 64 MB,甚至更高;对于频繁的小文件传输,32 MB 是比较合适的值
- multipartcopymaxconcurrency: 100
  - 解释:并发上传的最大线程数,控制了同时上传的分块数量
  - 建议值:并发数应根据网络带宽和服务器的吞吐能力来设置,建议值通常为 5 到 20
  - 100 并发是非常高的值,适合超高性能的网络环境,大多数情况下 50 以下更为合适;如果服务器资源有限或者网络带宽受限,建议减少到 10 或 20,以避免因过多的并发导致超时或失败
- multipartcopythresholdsize: 5368709120
  - 解释:决定何时启用分块上传,5,368,709,120 字节等于 5 GB,当文件超过这个大小时,会启用分块上传
  - 建议值:5 GB 是一个较为常见的默认值。如果大多数文件都远小于 5 GB,可以调小这个值,例如 1 GB 或 2 GB,以更早触发分块上传;对于经常传输大型文件的环境,5 GB 是合适的阈值
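如果要启用这些参数,对应的就是下面 ConfigMap 里 storage.s3 那一段,大概长这样(数值仅为示例,按自己的网络和存储情况调整):
s3:
  regionendpoint: http://minio-svc.storage.svc.cluster.local:9000
  bucket: harbor
  chunksize: 10485760
  multipartcopychunksize: 33554432
  multipartcopymaxconcurrency: 10
  multipartcopythresholdsize: 5368709120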
---
# Source: harbor/templates/registry/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: "harbor-registry"
namespace: registry
labels:
app: "harbor"
type: Opaque
data:
REGISTRY_HTTP_SECRET: "Rk9MZGxiTnVhVUNWMW9Naw=="
REGISTRY_REDIS_PASSWORD: ""
REGISTRY_STORAGE_S3_ACCESSKEY: "N2l2a3VCcnJwWWZMWU50NUZvNUw="
REGISTRY_STORAGE_S3_SECRETKEY: "aDF6NHFFQUc5Y3U2M1lWRTE2eDZsTXNtckREbVEzeGN3dGdmQmJ4cw=="
---
# Source: harbor/templates/registry/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: "harbor-registry-htpasswd"
namespace: registry
labels:
app: "harbor"
type: Opaque
data:
REGISTRY_HTPASSWD: "aGFyYm9yX3JlZ2lzdHJ5X3VzZXI6JDJhJDEwJGpGWm93Qk94NC5iZ3JGQnR4dlBidHVEQmhmWlhUV0tiTnVoamd6bXVLZ0xvVm1seXU3MzRt"
---
# Source: harbor/templates/registry/registryctl-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: "harbor-registryctl"
namespace: registry
labels:
app: "harbor"
type: Opaque
data:
---
# Source: harbor/templates/registry/registry-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "harbor-registry"
namespace: registry
labels:
app: "harbor"
data:
config.yml: |+
version: 0.1
log:
level: info
fields:
service: registry
storage:
s3:
region: "default"
# region: "asia-east1"
bucket: harbor
regionendpoint: http://minio-svc.storage.svc.cluster.local:9000
# v4auth: true
# chunksize: 5242880
rootdirectory: /
# multipartcopychunksize: 8388608
multipartcopymaxconcurrency: 10
# multipartcopythresholdsize: 5368709120
cache:
layerinfo: redis
maintenance:
uploadpurging:
enabled: true
age: 168h
interval: 24h
dryrun: false
delete:
enabled: true
redirect:
disable: true
redis:
addr: harbor-redis:6379
db: 2
readtimeout: 10s
writetimeout: 10s
dialtimeout: 10s
pool:
maxidle: 100
maxactive: 500
idletimeout: 60s
http:
timeout: 300s
addr: :5000
relativeurls: false
# set via environment variable
# secret: placeholder
debug:
addr: :8001
prometheus:
enabled: true
path: /metrics
auth:
htpasswd:
realm: harbor-registry-basic-realm
path: /etc/registry/passwd
validation:
disabled: true
compatibility:
schema1:
enabled: true
ctl-config.yml: |+
---
protocol: "http"
port: 8080
log_level: info
registry_config: "/etc/registry/config.yml"
---
# Source: harbor/templates/registry/registryctl-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "harbor-registryctl"
namespace: registry
labels:
app: "harbor"
data:
---
# Source: harbor/templates/registry/registry-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: "harbor-registry"
namespace: registry
labels:
app: "harbor"
spec:
ports:
- name: http-registry
port: 5000
- name: http-controller
port: 8080
- name: http-metrics
port: 8001
selector:
app: "harbor"
component: registry
---
# Source: harbor/templates/registry/registry-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: "harbor-registry"
namespace: registry
labels:
app: "harbor"
component: registry
spec:
replicas: 1
revisionHistoryLimit: 10
strategy:
type: RollingUpdate
selector:
matchLabels:
app: "harbor"
component: registry
template:
metadata:
labels:
app: "harbor"
component: registry
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
fsGroupChangePolicy: OnRootMismatch
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
containers:
- name: registry
image: goharbor/registry-photon:v2.11.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 5000
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
scheme: HTTP
port: 5000
initialDelaySeconds: 1
periodSeconds: 10
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
args: ["serve", "/etc/registry/config.yml"]
envFrom:
- secretRef:
name: "harbor-registry"
env:
ports:
- containerPort: 5000
- containerPort: 8001
volumeMounts:
- name: registry-data
mountPath: /storage
subPath:
- name: registry-htpasswd
mountPath: /etc/registry/passwd
subPath: passwd
- name: registry-config
mountPath: /etc/registry/config.yml
subPath: config.yml
- name: registryctl
image: goharbor/harbor-registryctl:v2.11.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/health
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
scheme: HTTP
port: 8080
initialDelaySeconds: 1
periodSeconds: 10
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
envFrom:
- configMapRef:
name: "harbor-registryctl"
- secretRef:
name: "harbor-registry"
- secretRef:
name: "harbor-registryctl"
env:
- name: CORE_SECRET
valueFrom:
secretKeyRef:
name: harbor-core
key: secret
- name: JOBSERVICE_SECRET
valueFrom:
secretKeyRef:
name: harbor-jobservice
key: JOBSERVICE_SECRET
ports:
- containerPort: 8080
volumeMounts:
- name: registry-data
mountPath: /storage
subPath:
- name: registry-config
mountPath: /etc/registry/config.yml
subPath: config.yml
- name: registry-config
mountPath: /etc/registryctl/config.yml
subPath: ctl-config.yml
volumes:
- name: registry-htpasswd
secret:
secretName: harbor-registry-htpasswd
items:
- key: REGISTRY_HTPASSWD
path: passwd
- name: registry-config
configMap:
name: "harbor-registry"
- name: registry-data
emptyDir: {}
部署 Harbor portal
到这一步结束,就可以打开浏览器输入自己配置的 Harbor 域名来访问了
- 用户名:admin
- 密码:1q@W3e$R(在 Harbor core 的 Secret 里的 HARBOR_ADMIN_PASSWORD 字段可以自己定义,注意是 base64 编码)
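如果要改密码,可以像下面这样生成 base64 再替换 Secret 里的 HARBOR_ADMIN_PASSWORD(示例用的就是本文的密码):
echo -n '1q@W3e$R' | base64
返回的 MXFAVzNlJFI= 就是 Secret 里对应的值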
---
# Source: harbor/templates/portal/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "harbor-portal"
namespace: registry
labels:
app: "harbor"
data:
nginx.conf: |+
worker_processes auto;
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
client_body_temp_path /tmp/client_body_temp;
proxy_temp_path /tmp/proxy_temp;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
server {
listen 8080;
listen [::]:8080;
server_name localhost;
# server_name harbor.devops.icu;
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
location /devcenter-api-2.0 {
try_files $uri $uri/ /swagger-ui-index.html;
}
location / {
try_files $uri $uri/ /index.html;
}
location = /index.html {
add_header Cache-Control "no-store, no-cache, must-revalidate";
}
}
}
---
# Source: harbor/templates/portal/service.yaml
apiVersion: v1
kind: Service
metadata:
name: "harbor-portal"
namespace: registry
labels:
app: "harbor"
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: "harbor"
component: portal
---
# Source: harbor/templates/portal/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: "harbor-portal"
namespace: registry
labels:
app: "harbor"
component: portal
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: "harbor"
component: portal
template:
metadata:
labels:
app: "harbor"
component: portal
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
automountServiceAccountToken: false
containers:
- name: portal
image: goharbor/harbor-portal:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
scheme: HTTP
port: 8080
initialDelaySeconds: 1
periodSeconds: 10
ports:
- containerPort: 8080
volumeMounts:
- name: portal-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: portal-config
configMap:
name: "harbor-portal"
---
# Source: harbor/templates/ingress/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "harbor-ingress"
namespace: registry
labels:
app: "harbor"
annotations:
ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /api/
pathType: Prefix
backend:
service:
name: harbor-core
port:
number: 80
- path: /service/
pathType: Prefix
backend:
service:
name: harbor-core
port:
number: 80
- path: /v2/
pathType: Prefix
backend:
service:
name: harbor-core
port:
number: 80
- path: /chartrepo/
pathType: Prefix
backend:
service:
name: harbor-core
port:
number: 80
- path: /c/
pathType: Prefix
backend:
service:
name: harbor-core
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: harbor-portal
port:
number: 80
host: harbor.devops.icu
登录后就可以看到 Harbor 的管理页面了
部署 Harbor exporter
exporter 是提供给 prometheus 的,这个不是必须的,看大家自己的情况
---
# Source: harbor/templates/exporter/exporter-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-exporter
namespace: registry
labels:
app: "harbor"
type: Opaque
data:
HARBOR_ADMIN_PASSWORD: "MXFAVzNlJFI="
HARBOR_DATABASE_PASSWORD: "Y2hhbmdlaXQ="
---
# Source: harbor/templates/exporter/exporter-cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: "harbor-exporter-env"
namespace: registry
labels:
app: "harbor"
data:
HTTP_PROXY: ""
HTTPS_PROXY: ""
NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
LOG_LEVEL: "info"
HARBOR_EXPORTER_PORT: "8001"
HARBOR_EXPORTER_METRICS_PATH: "/metrics"
HARBOR_EXPORTER_METRICS_ENABLED: "true"
HARBOR_EXPORTER_CACHE_TIME: "23"
HARBOR_EXPORTER_CACHE_CLEAN_INTERVAL: "14400"
HARBOR_METRIC_NAMESPACE: harbor
HARBOR_METRIC_SUBSYSTEM: exporter
HARBOR_REDIS_URL: "redis://harbor-redis:6379/1"
HARBOR_REDIS_NAMESPACE: harbor_job_service_namespace
HARBOR_REDIS_TIMEOUT: "3600"
HARBOR_SERVICE_SCHEME: "http"
HARBOR_SERVICE_HOST: "harbor-core"
HARBOR_SERVICE_PORT: "80"
HARBOR_DATABASE_HOST: "harbor-database"
HARBOR_DATABASE_PORT: "5432"
HARBOR_DATABASE_USERNAME: "postgres"
HARBOR_DATABASE_DBNAME: "registry"
HARBOR_DATABASE_SSLMODE: "disable"
HARBOR_DATABASE_MAX_IDLE_CONNS: "100"
HARBOR_DATABASE_MAX_OPEN_CONNS: "900"
---
# Source: harbor/templates/exporter/exporter-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: "harbor-exporter"
namespace: registry
labels:
app: "harbor"
spec:
ports:
- name: http-metrics
port: 8001
selector:
app: "harbor"
component: exporter
---
# Source: harbor/templates/exporter/exporter-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: harbor-exporter
namespace: registry
labels:
app: "harbor"
component: exporter
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: "harbor"
component: exporter
template:
metadata:
labels:
app: "harbor"
component: exporter
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
automountServiceAccountToken: false
containers:
- name: exporter
image: goharbor/harbor-exporter:v2.11.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
port: 8001
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 8001
initialDelaySeconds: 30
periodSeconds: 10
args: ["-log-level", "info"]
envFrom:
- configMapRef:
name: "harbor-exporter-env"
- secretRef:
name: "harbor-exporter"
env:
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
ports:
- containerPort: 8001
volumeMounts:
volumes:
- name: config
secret:
secretName: "harbor-exporter"
Harbor 的配置和验证
创建用户
尽量不要直接使用 admin 用户来拉取镜像
创建项目
个人有强迫症,喜欢分类。这里的配额可以设定一个大小,限制这个项目占用的空间,避免把磁盘跑满出现问题;我是测试用的,就没配置,-1 表示不限制
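页面上点几下就能创建;如果想用 API 创建,也可以参考下面这个草稿(域名、项目名换成自己的,storage_limit 单位是字节,-1 表示不限制):
curl -u admin:'1q@W3e$R' -X POST "http://harbor.devops.icu/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "baseimage", "storage_limit": -1, "metadata": {"public": "false"}}'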
项目分配成员
给项目分配成员,同时配置应有的权限
关于权限这块,可以查看官方文档:User Permissions By Role,下面是翻译的内容
Action(动作) | Limited Guest(受限访客) | Guest(访客) | Developer(开发者) | Maintainer(维护人员) | Project Admin(项目管理员) | Admin(系统管理员) |
---|---|---|---|---|---|---|
查看项目配置 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
编辑项目配置 | | | | | ✓ | ✓ |
查看项目成员列表 | | ✓ | ✓ | ✓ | ✓ | ✓ |
创建/编辑/删除项目成员 | | | | | ✓ | ✓ |
查看项目日志列表 | | ✓ | ✓ | ✓ | ✓ | ✓ |
查看项目复制列表 | | | | ✓ | ✓ | ✓ |
查看项目复制作业列表 | | | | | ✓ | ✓ |
查看项目标签列表 | | | | ✓ | ✓ | ✓ |
创建/编辑/删除项目标签 | | | | ✓ | ✓ | ✓ |
查看仓库列表 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
创建仓库 | | | ✓ | ✓ | ✓ | ✓ |
编辑/删除仓库 | | | | ✓ | ✓ | ✓ |
查看镜像列表 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
重新打镜像 tag | | ✓ | ✓ | ✓ | ✓ | ✓ |
拉取镜像 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
推送镜像 | | | ✓ | ✓ | ✓ | ✓ |
扫描/删除镜像 | | | | ✓ | ✓ | ✓ |
将扫描器添加到 Harbor | | | | | | ✓ |
在项目中编辑扫描器 | | | | | ✓ | ✓ |
查看镜像漏洞列表 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
查看镜像构建历史记录 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
添加/删除镜像标签 | | | ✓ | ✓ | ✓ | ✓ |
查看 Helm chart 列表 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
下载 Helm chart | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
上传 Helm chart | | | ✓ | ✓ | ✓ | ✓ |
删除 Helm chart | | | | ✓ | ✓ | ✓ |
查看 Helm chart 版本列表 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
下载 Helm chart 版本 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
上传 Helm chart 版本 | | | ✓ | ✓ | ✓ | ✓ |
删除 Helm chart 版本 | | | | ✓ | ✓ | ✓ |
为 Helm chart 版本添加/删除标签 | | | ✓ | ✓ | ✓ | ✓ |
查看项目机器人账户列表 | | | | ✓ | ✓ | ✓ |
创建/编辑/删除项目机器人账户 | | | | | ✓ | ✓ |
查看配置的 Webhook | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
创建/编辑/删除 Webhook | | | | | ✓ | ✓ |
启用/禁用 Webhook | | | ✓ | ✓ | ✓ | ✓ |
创建/删除 Tag 保留规则 | | | ✓ | ✓ | ✓ | ✓ |
启用/禁用 Tag 保留规则 | | | ✓ | ✓ | ✓ | ✓ |
创建/删除 Tag 不可变性规则 | | | | ✓ | ✓ | ✓ |
启用/禁用 Tag 不可变性规则 | | | | ✓ | ✓ | ✓ |
查看项目配额 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
编辑项目配额 | | | | | | ✓ |
添加新扫描器 | | | | | | ✓ |
docker login 配置
在 /etc/docker/daemon.json 文件里面追加下面的内容,注意 json 格式,内容要改成自己配置的 Harbor 地址,insecure-registries 里只写域名即可,不需要带 http:// 前缀
"insecure-registries": ["harbor.devops.icu"]
配置完成后,需要重启 docker 服务
systemctl restart docker
登录 Harbor
docker login http://harbor.devops.icu
输入用户名和密码,登录成功后,会有类似如下的返回
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
containerd 配置
暂时没验证出来和 docker login 一样的方式,只能每次 push 的时候,加上用户名和密码来推送
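不过拉取这块,如果是 kubelet 通过 CRI 走 containerd 拉镜像,可以给 containerd 配置 http 访问,下面是一个草稿(假设 containerd 的 config.toml 里已经把 registry 的 config_path 指到了 /etc/containerd/certs.d),写到 /etc/containerd/certs.d/harbor.devops.icu/hosts.toml 里:
server = "http://harbor.devops.icu"

[host."http://harbor.devops.icu"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
注意 ctr 命令默认不会读这份配置,所以下面的验证还是要带 --plain-http 和 --user 参数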
推送镜像验证
docker 验证
修改镜像 tag,这里选自己本地有的就行
docker tag m.daocloud.io/busybox:1.37 harbor.devops.icu/baseimage/busybox:1.37
推送镜像,返回 Pushed 后,可以去 Harbor 页面查看
docker push harbor.devops.icu/baseimage/busybox:1.37
containerd 验证
修改镜像 tag,这里选自己本地有的就行
ctr -n k8s.io image tag m.daocloud.io/busybox:1.37 harbor.devops.icu/baseimage/busybox:1.37
推送镜像,返回 Pushed 后,可以去 Harbor 页面查看
ctr -n k8s.io image push --user harboruser --plain-http harbor.devops.icu/baseimage/busybox:1.37
拉取镜像验证
docker 验证
先把本地 tag 过的镜像删了,或者换个机器直接 pull 也可以
docker rmi harbor.devops.icu/baseimage/busybox:1.37
拉取镜像
docker pull harbor.devops.icu/baseimage/busybox:1.37
containerd 验证
ctr -n k8s.io image pull --user harboruser --plain-http harbor.devops.icu/baseimage/busybox:1.37
遗留问题
Harbor 上删除镜像后,MinIO 的数据不会被删除,尝试过 Harbor 的垃圾清理,没有触发,如果有大佬知道的,希望赐教,后期如果有找到问题,再更新博客
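一个可以尝试的方向:先确认制品真的删除了(包括未打 tag 的),再手动触发一次 GC,比如用下面这个 Harbor v2 的 API(参数仅供参考):
curl -u admin:'1q@W3e$R' -X POST "http://harbor.devops.icu/api/v2.0/system/gc/schedule" \
  -H "Content-Type: application/json" \
  -d '{"schedule": {"type": "Manual"}}'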