Deploying Harbor v2.11.1 on a k8s 1.28.2 Cluster with MinIO Object Storage

Table of Contents

    • Preparation
    • What is Harbor
    • Harbor architecture overview
    • Prerequisites for installing Harbor
      • Hardware resources
      • Software dependencies
      • Port dependencies
    • Harbor high availability on k8s
    • Harbor deployment
      • Helm orchestration
      • YAML orchestration
        • Create the namespace
        • Import the images
        • Deploy Redis
        • Deploy PostgreSQL
        • Deploy Harbor core
        • Deploy Harbor trivy
        • Deploy Harbor jobservice
        • Deploy Harbor registry
        • Deploy Harbor portal
        • Deploy Harbor exporter
    • Harbor configuration and verification
      • Create a user
      • Create a project
      • Assign project members
      • docker login configuration
      • containerd configuration
      • Push image verification
        • docker verification
        • containerd verification
      • Pull image verification
        • docker verification
        • containerd verification
    • Remaining issues

Preparation

  • To avoid image-pull timeouts, download the Harbor offline installer from GitHub ahead of time — it bundles all the images, so you can import them in advance. Download: harbor-offline-installer-v2.11.1.tgz
  • My lab environment was built following these earlier posts of mine:
    • k8s deployment: deploying a highly available k8s v1.28.2 cluster with kubeadm on openEuler 22.03 LTS SP4
    • ingress deployment: deploying ingress 1.11.1 (including the admission-webhook) on a k8s 1.28.2 cluster
    • MinIO deployment: deploying a distributed MinIO cluster on a k8s 1.28.2 cluster

What is Harbor

  • Harbor website
  • Harbor GitHub
  • Harbor is an open-source artifact registry
  • Compared with Docker Registry, it can secure images with policies and role-based access control
  • Harbor is a CNCF graduated project that delivers compliance, performance, and interoperability, helping you manage images consistently and securely across cloud-native platforms such as Kubernetes and Docker

Harbor architecture overview

  • Architecture Overview of Harbor
  • Proxy
    • A reverse proxy built on Nginx Server that provides API routing
    • Harbor's components — core, registry, web portal, token service, and so on — all sit behind this reverse proxy
  • Core
    • Harbor's core service, providing the following functions
      • API Server: an HTTP server that accepts and responds to REST API requests, relying on submodules such as "authentication and authorization", "middleware", and "API handlers"
      • Config Manager: manages all system configuration, such as authentication type, email settings, and certificates
      • Project Management: manages the base data and metadata of projects, which are created to isolate the hosted artifacts
      • Quota Manager: manages project quota settings and validates quotas whenever a new push occurs
      • Chart Controller: proxies chart-related requests to the backend ChartMuseum and provides several extensions to improve the chart management experience
      • Retention Manager: manages tag retention policies, and executes and monitors tag retention processes
      • Content Trust: extends the trust capability provided by the backend Notary to support a smooth content trust process. Currently only container image signing is supported
      • Replication Controller: manages replication policies and registry adapters, and triggers and monitors concurrent replication processes
      • Scan Manager: manages multiple configured scanners provided by different vendors, and provides scan summaries and reports for the specified artifacts
      • Notification Manager (webhook): a mechanism configured in Harbor so that artifact status changes can be propagated to the webhook endpoints configured in Harbor. Interested parties can trigger follow-up actions by listening for the related webhook events
      • OCI Artifact Manager: the core component that manages the lifecycle of all OCI artifacts across the Harbor registry. It provides CRUD operations for artifact metadata and related additions such as scan reports, the build history and READMEs of container images, dependencies, and the values.yaml of Helm charts; it also supports managing artifact tags and other useful operations
      • Registry Driver: implemented as a registry client SDK used to communicate with the underlying registry (currently Docker Distribution). The "OCI Artifact Manager" relies on this driver to fetch additional information from the manifest, or even from the config JSON of the specified artifact stored in the underlying registry
  • Job Service
    • A general job execution queue service that lets other components/services submit requests to run asynchronous tasks concurrently through a simple RESTful API
  • Log collector
    • Responsible for collecting the logs of the other modules in one place
  • GC Controller: manages the online GC schedule settings, and starts and tracks GC progress
  • ChartMuseum: a third-party chart repository server providing chart management and access APIs
  • Docker Registry: a third-party registry server responsible for storing Docker images and handling docker push/pull commands. Because Harbor needs to enforce access control on images, the Registry directs clients to the token service to obtain a valid token for each pull or push request
  • Notary: a third-party content trust server responsible for securely publishing and verifying content
  • Web Portal: a graphical user interface that helps users manage images in the Registry
  • Data storage
    • k-v storage: Redis, providing data caching and supporting temporary persistence of job metadata for the Job Service
    • data storage: multiple storage backends are supported as the backing store of the Registry and ChartMuseum (for example S3-compatible MinIO)
    • Database: stores Harbor model metadata such as projects, users, roles, replication policies, tag retention policies, scanners, charts, and images. PostgreSQL is used
  • Component versions shipped with Harbor 2.11.1:

Component                  Version
PostgreSQL                 14.10
Redis                      7.2.2
Beego                      2.0.6
Distribution/Distribution  2.8.3
Helm                       2.9.1
Swagger-ui                 5.9.1

Prerequisites for installing Harbor

Hardware resources

Hardware  Minimum  Recommended
CPU       2 CPU    4 CPU
Memory    4 GB     8 GB
Disk      40 GB    160 GB

Software dependencies

Software        Version                                                  Description
Docker          20.10.10-ce+                                             Install guide: Docker Engine documentation
Docker Compose  v1.18.0+, or docker compose v2 (docker-compose-plugin)   Install guide: Docker Compose documentation
OpenSSL         the newer the better                                     Used to generate certificates and keys for Harbor

Port dependencies

The ports can be customized in the configuration file.

Port  Protocol  Description
443   HTTPS     HTTPS requests to the UI and the API
4443  HTTPS     Connections to Harbor's Docker Content Trust service
80    HTTP      HTTP requests to the UI and the API

Harbor high availability on k8s

  • Most of Harbor's components are now stateless, so we can simply increase the Pod replica count to spread the components across multiple worker nodes, and rely on the Kubernetes "Service" mechanism for connectivity between Pods
  • As for the storage layer, users are expected to provide highly available PostgreSQL and Redis clusters for application data, plus PVCs or object storage for storing images and charts

(figure: Harbor high-availability architecture on Kubernetes)

Harbor deployment

Helm orchestration

For this route, just follow the official documentation; I won't cover it in detail here: Deploying Harbor with High Availability via Helm

YAML orchestration

Because my PVCs are backed by MinIO, using Helm directly runs into many problems, so I had to tune things step by step with plain YAML. All the YAML files below are modified from the output of helm template.

Create the namespace

You can pick whatever namespace name you like; nothing depends on a specific one.

kubectl create ns registry
Import the images

If you haven't planned which nodes will run what, you can simply import the images on every node first.

ctr -n k8s.io image import harbor.v2.11.1.tar.gz

At this point you will hit the following error:

ctr: archive/tar: invalid tar header

Inspect the archive with the file command:

file harbor.v2.11.1.tar.gz

It is gzip-compressed, which ctr does not support — ctr expects an uncompressed tar archive.

harbor.v2.11.1.tar.gz: gzip compressed data, was "harbor.v2.11.1.tar", last modified: Thu Aug 15 10:07:54 2024, from Unix, original size modulo 2^32 1811445248

So we need to extract it and re-create the archive without compression:

tar xvf harbor.v2.11.1.tar.gz
rm -f harbor.v2.11.1.tar.gz
tar cvf harbor.v2.11.1.tar.gz ./

Check it again with file; it should now report something like the output below, after which re-running the import succeeds.

harbor.v2.11.1.tar.gz: POSIX tar archive (GNU)
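The gzip-vs-tar distinction above can be reproduced locally with dummy data, without downloading the Harbor bundle — a quick sketch (the demo filenames here are mine, not the offline installer's):

```shell
# Build both archive types from a throwaway directory and compare what `file` says
mkdir -p demo && echo hello > demo/layer.txt

tar czf bundle.tar.gz -C demo .   # gzip-compressed, like the offline installer ships
file bundle.tar.gz                # reports: gzip compressed data ...

tar cf bundle.tar -C demo .       # plain tar, which `ctr image import` accepts
file bundle.tar                   # reports: POSIX tar archive ...
```

Note that the repack step in this post deliberately keeps the `.tar.gz` filename even though the new archive is uncompressed — `file` looks at the content, not the extension, and so does ctr.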
Deploy Redis
  • I asked GPT about this: MinIO is object storage and does not provide a fully POSIX-compliant filesystem (e.g. regular filesystem permission management), while Redis relies on a traditional filesystem (ext4, xfs, etc.) to store its data files (RDB, AOF)
  • Since this is just my own practice environment, I pin Redis to a node and use a local PV for persistence
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-harbor-redis-0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-harbor-redis-0
    namespace: registry
  hostPath:
    path: /approot/k8s_data/harbor-redis
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 192.168.22.125
---
# Source: harbor/templates/redis/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-redis
  namespace: registry
  labels:
    app: harbor
spec:
  ports:
    - port: 6379
  selector:
    app: harbor
    component: redis
---
# Source: harbor/templates/redis/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: harbor-redis
  namespace: registry
  labels:
    app: harbor
    component: redis
spec:
  replicas: 1
  serviceName: harbor-redis
  selector:
    matchLabels:
      app: harbor
      component: redis
  template:
    metadata:
      labels:
        app: harbor
        component: redis
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      initContainers:
      - name: init-dir
        image: goharbor/redis-photon:v2.11.1
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 999:999 /var/lib/redis"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: data
          mountPath: /var/lib/redis
      containers:
      - name: redis
        image: goharbor/redis-photon:v2.11.1
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        livenessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 1
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /var/lib/redis
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: "1Gi"
Deploy PostgreSQL

Like Redis, PostgreSQL's data directory has the same permission issue, so it is also pinned to a node for now.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: database-data-harbor-database-0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: database-data-harbor-database-0
    namespace: registry
  hostPath:
    path: /approot/k8s_data/harbor-database
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 192.168.22.124
---
# Source: harbor/templates/database/database-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-database
  namespace: registry
  labels:
    app: harbor
type: Opaque
data:
  POSTGRES_PASSWORD: "Y2hhbmdlaXQ="
---
# Source: harbor/templates/database/database-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-database
  namespace: registry
  labels:
    app: harbor
spec:
  ports:
    - port: 5432
  selector:
    app: harbor
    component: database
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: harbor-database
  namespace: registry
  labels:
    app: harbor
    component: database
spec:
  replicas: 1
  serviceName: harbor-database
  selector:
    matchLabels:
      app: harbor
      component: database
  template:
    metadata:
      labels:
        app: harbor
        component: database
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      initContainers:
      # with "fsGroup" set, each time a volume is mounted, Kubernetes must recursively chown() and chmod() all the files and directories inside the volume
      # this causes the postgresql reports the "data directory /var/lib/postgresql/data/pgdata has group or world access" issue when using some CSIs e.g. Ceph
      # use this init container to correct the permission
      # as "fsGroup" applied before the init container running, the container has enough permission to execute the command
      - name: "data-permissions-ensurer"
        image: goharbor/harbor-db:v2.11.1
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        command: ["sh", "-c", "mkdir -p /var/lib/postgresql/data/pgdata && chmod -R 700 /var/lib/postgresql/data/pgdata && chown -R 999:999 /var/lib/postgresql/data"]
        volumeMounts:
          - name: database-data
            mountPath: /var/lib/postgresql/data
            subPath:
      containers:
      - name: database
        image: goharbor/harbor-db:v2.11.1
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        livenessProbe:
          exec:
            command:
            - /docker-healthcheck.sh
          initialDelaySeconds: 300
          periodSeconds: 10
          timeoutSeconds: 1
        readinessProbe:
          exec:
            command:
            - /docker-healthcheck.sh
          initialDelaySeconds: 1
          periodSeconds: 10
          timeoutSeconds: 1
        envFrom:
          - secretRef:
              name: harbor-database
        env:
          # put the data into a sub directory to avoid the permission issue in k8s with restricted psp enabled
          # more detail refer to https://github.com/goharbor/harbor-helm/issues/756
          - name: PGDATA
            value: "/var/lib/postgresql/data/pgdata"
        volumeMounts:
        - name: database-data
          mountPath: /var/lib/postgresql/data
          subPath:
        - name: shm-volume
          mountPath: /dev/shm
      volumes:
      - name: shm-volume
        emptyDir:
          medium: Memory
          sizeLimit: 512Mi
  volumeClaimTemplates:
  - metadata:
      name: "database-data"
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: "1Gi"
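The POSTGRES_PASSWORD value in the Secret above is just base64 of the plaintext. To substitute your own password, encode it the same way — a quick sketch (the second example password is a made-up placeholder; pick your own):

```shell
# "Y2hhbmdlaXQ=" in the Secret above is simply base64 of "changeit"
echo -n 'changeit' | base64   # → Y2hhbmdlaXQ=

# Encode your own password the same way; -n matters, otherwise the
# trailing newline becomes part of the password
echo -n 'MyNewDbPassw0rd' | base64
```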
Deploy Harbor core
---
# Source: harbor/templates/core/core-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-core
  labels:
    app: "harbor"
type: Opaque
data:
  secretKey: "bm90LWEtc2VjdXJlLWtleQ=="
  secret: "OVQzOHVXZmtybTRTZFVUcQ=="
  tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMENVM0Y1bXFHSktra0ZVd09aNzZoRGpJSjhEUlo4cm5hY1p4YXNpa2ZnK2RkZndlCk5SWUI3RTFRbzZDQlA4bk9CdWN0L3JiRVNVank4S1JlaWRUTFZwRG41OFk4dW9iN3BqSTVLbENMdVE2NEN0QW0KSVVEUzJWMTdqeTRFTmFRWjlNK0V4NjNpVnlNVGFpMjhYeHhZaytCMXp0ZEtxR2dhRXhHY2x5bG5Xb1FXU3RCUgpaeUlpblNUblJmdGFIeDlSQ0x6Z0lFUDlKamVSUEV4NmRLa0FFMHdaVmJSOWRUWWd6M21XSFdLc3BFbS8vWGVYClpzTTFYR3ZLQzNPNHo4Q3lWZ2FIMzdRMWNPM3NxSFNTMGwwaVVtZCs2azBoZGd0VGxUVU5mNVFMQWx6d25SV1oKbDR6d3I3N2x1bTh1c0h2emNra3NnalUzSU5XVHBnQmFGV25oOHdJREFRQUJBb0lCQUN1OVpsSmpURWRWcVpkYgpENE5NVVVDdjNmL2NtU1RDa3RhN2lPSHp2LzF0c3AwMG1mUjE1M21NMWNGTTNWeFdRQ0ZiTzJNbmJTQXBZRVFKCmhvUllYMUtWcU9ZZjFtc3NLbjNHV0JUNFVDUlhYMzJHT0QwTXJrSlhUcnZMNDc2UitaSmtlWGFzcDcrLzh6aUEKMi9Ed3QveDdVc1pnbjZPOEhKNmRPTmJiTUlqb2o1enVLSVdCampleFMybHVCdEFYNzduZXhmUzNpV3RrQS9USgpwcUpsNEJETFV1WEtralJKQzVEWnBBdHdtTVpQeGQrSTQzYnc3bVRpemppaEEzaXo4SkJKclBnTTE2b1V3SnQ2CmdMVVp5ZkZGTFNPbjEyZHhPZUxPNXZFNTJKV0JtVzRuRW5IOUxxb3hDWExic00xT04zN0ZwcmhzUXUvdko0M0wKaFJoMWFtRUNnWUVBMUh3Z241V1pjd1l1Vkw0TnRvMTlzVytncFZrUzloem1ObGtNcXR5bnFoUWE1ODh4Sk95dwpLUDdncEdZOGhIbnNmQ0NGbUpDV21CdUJRbUxrRWlNUU83eG8xQ1dMdzYydTRZeGdtenlJZDQxWmVDdVlpZHNFCnVPMlpjVEUrazc4Qy9CcmFUZHlqVk9SMnIvbk1IZGh6SzlhSkM0WlFOU2tudURwNUpVY091NDhDZ1lFQStzV1YKRElsTVBjNGtpaWNYeXdMbE50L1pQWGJjbUtRWWZJNFZsclpWQXpFRlFaQ3NGMzY0K2p1NEFlZUttdGJhMkZ4RApEMFdmaWxWOVpTczNCUDloc3ZpWk42eExaVjJZMEJHSlN6Mlp5L0x5aTRaNXk3MnB0aW83bGxyMWx4azZBTFVVCmkrQ3c4RmlQVElHMlozS1BSVko5b1B1S3JnUzUvSDFqNktzUTBWMENnWUI4NXlwV0pKNDdHeHNJL1Y4YVBEbnkKbjJlVFNyVDJyeTQwTEV4aDg2c3JNdjVOM1dGS0QwZk9FV1VEdm9VOGFsODA1L2tnSVg0a2s2WjcyNTJ0ZTZjRApObEY0dzBsUkVUdUhvZmozeDdHQWRUcHVoVkg1VnlHRGcwZDdYak1tcmxXVzFFSVhHdWQzODRSQkZWbURBY1ZSCnM1NkRnOFNLTzFMNTNJVnlBRDhNeVFLQmdERzBoQXlPRWp5VjVZdzBuM1N2eURzT040TUZVa2czRGx0eDFqbWYKUGs1NW91OFIrK3BVUmRuamlGOW9RNExaWDF0UFBrT0NxMUxDQ3k3SVdBbDNqU2ZxT29SY2REMU5SZ0xIMXd6QQowd0VuMElkelNpVG1IUU5zYjQ4bnpGSDh3QkJ2MC9pOXVwU0pHUzR5NzdLbGRGeHJNMWQ3UkV1bHlDK1Jzd0hsCkZscEpBb0dBZFRmSzZKTVJabWNBbGZCdTlUSnh2
NjVKdFcrMDI5Wi94eDJ1NUprVzFPUnVwNVJlekJBM2NiQzQKRmc0Y1h6SHJ1S0sxWVhGcERyS0tGYTFMSzFGaFpjMkZCSnN5dGVLNHFQeVNLOTZVb1BlbHA1VzVBMDVZTjBBaQpLTDB5MzhNYWlYb1AyTWFvb2pSR29xWU9sTVVXRlU5RnJQSm9aSXNGKzRuUjVWZHNUUzQ9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg=="
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lRTGRpZ2xmZXNGaFhvVldOaTRKYkNwVEFOQmdrcWhraUc5dzBCQVFzRkFEQWEKTVJnd0ZnWURWUVFERXc5b1lYSmliM0l0ZEc5clpXNHRZMkV3SGhjTk1qUXhNREUwTURFMU16QTRXaGNOTWpVeApNREUwTURFMU16QTRXakFhTVJnd0ZnWURWUVFERXc5b1lYSmliM0l0ZEc5clpXNHRZMkV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURRSlRjWG1hb1lrcVNRVlRBNW52cUVPTWdud05Gbnl1ZHAKeG5GcXlLUitENTExL0I0MUZnSHNUVkNqb0lFL3ljNEc1eTMrdHNSSlNQTHdwRjZKMU10V2tPZm54ank2aHZ1bQpNamtxVUl1NURyZ0swQ1loUU5MWlhYdVBMZ1ExcEJuMHo0VEhyZUpYSXhOcUxieGZIRmlUNEhYTzEwcW9hQm9UCkVaeVhLV2RhaEJaSzBGRm5JaUtkSk9kRisxb2ZIMUVJdk9BZ1EvMG1ONUU4VEhwMHFRQVRUQmxWdEgxMU5pRFAKZVpZZFlxeWtTYi85ZDVkbXd6VmNhOG9MYzdqUHdMSldCb2ZmdERWdzdleW9kSkxTWFNKU1ozN3FUU0YyQzFPVgpOUTEvbEFzQ1hQQ2RGWm1YalBDdnZ1VzZieTZ3ZS9OeVNTeUNOVGNnMVpPbUFGb1ZhZUh6QWdNQkFBR2pZVEJmCk1BNEdBMVVkRHdFQi93UUVBd0lDcERBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXcKRHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVR2dIa3dDQ1JZaGhTTEFGNDAvdkJTczVPbHd3dwpEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRzVJajhkZjZOcDY0NjZlYTJjcFlDZk9Vc3BRc21kMFJNd0dRTHZ2CndIak5kSDR5NWw2TjkwQUNQNHBEWmF4MUx4TEJqcHlNeGVCbzJ6TkF6NjFYQ0tJZ3RQU1RsK0NmTStqenRkVVQKVlRUNmw4emZRbVZCQk56WlVwMlhUTXdyVkowUHZML2FIbk94NGRDb0pxd2tobGNrY3JRM0ErN1haNmtGYnl1WQpBQ200cnppSHRJVWpyZ25veUVtUGFxWTJTYzJ3a3JRZklLVXRDVkl4WFdZbW51WHF6d0MwSVdqOXV5VGlTNzdECkg0V1NFdjh4ajVId3ZkK1JvaGtYaGQrbkM5WUhVQVRGSWpsclpxYkRUZU5vdjBQNG81d3N5RmJMOFN4YTFJNVoKRENqc2ZUeGx3NTJCYUI1V0YxZEJLYnBtUmRPWWprN2xEVHpqd0tRSmVkVHhnYW89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
  # Password for the Harbor admin user
  HARBOR_ADMIN_PASSWORD: "MXFAVzNlJFI="
  POSTGRESQL_PASSWORD: "Y2hhbmdlaXQ="
  REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk"
  CSRF_KEY: "b0wxSjdQZ2F1OFBxWWNLYXpkU2plUDNNemtzdG9nZ1U="
---
# Source: harbor/templates/core/core-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-core
  labels:
    app: "harbor"
data:
  app.conf: |+
    appname = Harbor
    runmode = prod
    enablegzip = true

    [prod]
    httpport = 8080
  PORT: "8080"
  DATABASE_TYPE: "postgresql"
  POSTGRESQL_HOST: "harbor-database"
  POSTGRESQL_PORT: "5432"
  POSTGRESQL_USERNAME: "postgres"
  POSTGRESQL_DATABASE: "registry"
  POSTGRESQL_SSLMODE: "disable"
  POSTGRESQL_MAX_IDLE_CONNS: "100"
  POSTGRESQL_MAX_OPEN_CONNS: "900"
  EXT_ENDPOINT: "http://harbor.devops.icu"
  CORE_URL: "http://harbor-core:80"
  JOBSERVICE_URL: "http://harbor-jobservice"
  REGISTRY_URL: "http://harbor-registry:5000"
  TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
  CORE_LOCAL_URL: "http://127.0.0.1:8080"
  WITH_TRIVY: "true"
  TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
  REGISTRY_STORAGE_PROVIDER_NAME: "s3"
  LOG_LEVEL: "info"
  CONFIG_PATH: "/etc/core/app.conf"
  CHART_CACHE_DRIVER: "redis"
  _REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
  _REDIS_URL_REG: "redis://harbor-redis:6379/2?idle_timeout_seconds=30"
  PORTAL_URL: "http://harbor-portal"
  REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
  REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"
  HTTP_PROXY: ""
  HTTPS_PROXY: ""
  NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
  PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE: "docker-hub,harbor,azure-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory"
  METRIC_ENABLE: "true"
  METRIC_PATH: "/metrics"
  METRIC_PORT: "8001"
  METRIC_NAMESPACE: harbor
  METRIC_SUBSYSTEM: core
  QUOTA_UPDATE_PROVIDER: "db"
---
# Source: harbor/templates/core/core-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-core
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-web
      port: 80
      targetPort: 8080
    - name: http-metrics
      port: 8001
  selector:
    app: "harbor"
    component: core
---
# Source: harbor/templates/core/core-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-core
  labels:
    app: "harbor"
    component: core
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: "harbor"
      component: core
  template:
    metadata:
      labels:
        app: "harbor"
        component: core
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: core
        image: goharbor/harbor-core:v2.11.1
        imagePullPolicy: IfNotPresent
        startupProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 360
          initialDelaySeconds: 10
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 2
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 2
          periodSeconds: 10
        envFrom:
        - configMapRef:
            name: "harbor-core"
        - secretRef:
            name: "harbor-core"
        env:
          - name: CORE_SECRET
            valueFrom:
              secretKeyRef:
                name: harbor-core
                key: secret
          - name: JOBSERVICE_SECRET
            valueFrom:
              secretKeyRef:
                name: harbor-jobservice
                key: JOBSERVICE_SECRET
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config
          mountPath: /etc/core/app.conf
          subPath: app.conf
        - name: secret-key
          mountPath: /etc/core/key
          subPath: key
        - name: token-service-private-key
          mountPath: /etc/core/private_key.pem
          subPath: tls.key
        - name: psc
          mountPath: /etc/core/token
      volumes:
      - name: config
        configMap:
          name: harbor-core
          items:
            - key: app.conf
              path: app.conf
      - name: secret-key
        secret:
          secretName: harbor-core
          items:
            - key: secretKey
              path: key
      - name: token-service-private-key
        secret:
          secretName: harbor-core
      - name: psc
        emptyDir: {}
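All the values in the harbor-core Secret above are base64-encoded, which is handy to remember when you need the admin password for the first login:

```shell
# Decode the HARBOR_ADMIN_PASSWORD from the harbor-core Secret above
echo 'MXFAVzNlJFI=' | base64 -d; echo   # → 1q@W3e$R

# Once the Secret is applied, the same value can be read from the cluster:
#   kubectl -n registry get secret harbor-core \
#     -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 -d
```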
Deploy Harbor trivy

Same as Redis: it cannot use MinIO object storage, so for now it is bound to local storage.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-harbor-trivy-0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-harbor-trivy-0
    namespace: registry
  hostPath:
    path: /approot/k8s_data/harbor-trivy
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 192.168.22.123
---
# Source: harbor/templates/trivy/trivy-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-trivy
  namespace: registry
  labels:
    app: "harbor"
type: Opaque
data:
  redisURL: cmVkaXM6Ly9oYXJib3ItcmVkaXM6NjM3OS81P2lkbGVfdGltZW91dF9zZWNvbmRzPTMw
  gitHubToken: ""
---
# Source: harbor/templates/trivy/trivy-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-trivy"
  namespace: registry
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-trivy
      protocol: TCP
      port: 8080
  selector:
    app: "harbor"
    component: trivy
---
# Source: harbor/templates/trivy/trivy-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: harbor-trivy
  namespace: registry
  labels:
    app: "harbor"
    component: trivy
spec:
  replicas: 1
  serviceName: harbor-trivy
  selector:
    matchLabels:
      app: "harbor"
      component: trivy
  template:
    metadata:
      labels:
        app: "harbor"
        component: trivy
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      initContainers:
      - name: init-dir
        image: goharbor/trivy-adapter-photon:v2.11.1
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 10000:10000 /home/scanner/.cache"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: data
          mountPath: /home/scanner/.cache
      containers:
        - name: trivy
          image: goharbor/trivy-adapter-photon:v2.11.1
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: HTTP_PROXY
              value: ""
            - name: HTTPS_PROXY
              value: ""
            - name: NO_PROXY
              value: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
            - name: "SCANNER_LOG_LEVEL"
              value: "info"
            - name: "SCANNER_TRIVY_CACHE_DIR"
              value: "/home/scanner/.cache/trivy"
            - name: "SCANNER_TRIVY_REPORTS_DIR"
              value: "/home/scanner/.cache/reports"
            - name: "SCANNER_TRIVY_DEBUG_MODE"
              value: "false"
            - name: "SCANNER_TRIVY_VULN_TYPE"
              value: "os,library"
            - name: "SCANNER_TRIVY_TIMEOUT"
              value: "5m0s"
            - name: "SCANNER_TRIVY_GITHUB_TOKEN"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: gitHubToken
            - name: "SCANNER_TRIVY_SEVERITY"
              value: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
            - name: "SCANNER_TRIVY_IGNORE_UNFIXED"
              value: "false"
            - name: "SCANNER_TRIVY_SKIP_UPDATE"
              value: "false"
            - name: "SCANNER_TRIVY_SKIP_JAVA_DB_UPDATE"
              value: "false"
            - name: "SCANNER_TRIVY_OFFLINE_SCAN"
              value: "false"
            - name: "SCANNER_TRIVY_SECURITY_CHECKS"
              value: "vuln"
            - name: "SCANNER_TRIVY_INSECURE"
              value: "false"
            - name: "SCANNER_API_SERVER_ADDR"
              value: ":8080"
            - name: "SCANNER_REDIS_URL"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: redisURL
            - name: "SCANNER_STORE_REDIS_URL"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: redisURL
            - name: "SCANNER_JOB_QUEUE_REDIS_URL"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: redisURL
          ports:
            - name: api-server
              containerPort: 8080
          volumeMounts:
          - name: data
            mountPath: /home/scanner/.cache
            readOnly: false
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /probe/healthy
              port: api-server
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 10
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /probe/ready
              port: api-server
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 512Mi
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: "5Gi"
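One detail worth noticing across the manifests in this post: each component gets its own Redis logical DB on harbor-redis — core uses db 0, jobservice db 1, registry db 2, and trivy db 5. Trivy's URL is stored base64-encoded in its Secret; decoding it confirms the mapping:

```shell
# Decode the redisURL value from the harbor-trivy Secret above
echo 'cmVkaXM6Ly9oYXJib3ItcmVkaXM6NjM3OS81P2lkbGVfdGltZW91dF9zZWNvbmRzPTMw' | base64 -d; echo
# → redis://harbor-redis:6379/5?idle_timeout_seconds=30
```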
Deploy Harbor jobservice
---
# Source: harbor/templates/jobservice/jobservice-cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-jobservice-env"
  namespace: registry
  labels:
    app: "harbor"
data:
  CORE_URL: "http://harbor-core:80"
  TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
  REGISTRY_URL: "http://harbor-registry:5000"
  REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
  REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"

  JOBSERVICE_WEBHOOK_JOB_MAX_RETRY: "3"
  JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT: "3"
  HTTP_PROXY: ""
  HTTPS_PROXY: ""
  NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
  METRIC_NAMESPACE: harbor
  METRIC_SUBSYSTEM: jobservice
---
# Source: harbor/templates/jobservice/jobservice-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-jobservice"
  namespace: registry
  labels:
    app: "harbor"
data:
  config.yml: |+
    #Server listening port
    protocol: "http"
    port: 8080
    worker_pool:
      workers: 10
      backend: "redis"
      redis_pool:
        redis_url: "redis://harbor-redis:6379/1"
        namespace: "harbor_job_service_namespace"
        idle_timeout_second: 3600
    job_loggers:
      - name: "FILE"
        level: INFO
        settings: # Customized settings of logger
          base_dir: "/var/log/jobs"
        sweeper:
          duration: 14 #days
          settings: # Customized settings of sweeper
            work_dir: "/var/log/jobs"
    metric:
      enabled: true
      path: /metrics
      port: 8001
    #Loggers for the job service
    loggers:
      - name: "STD_OUTPUT"
        level: INFO
    reaper:
      # the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24
      max_update_hours: 24
      # the max time for execution in running state without new task created
      max_dangling_hours: 168
---
# Source: harbor/templates/jobservice/jobservice-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-jobservice
  namespace: registry
  annotations:
    helm.sh/resource-policy: keep
  labels:
    app: "harbor"
    component: jobservice
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-s3
---
# Source: harbor/templates/jobservice/jobservice-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-jobservice"
  namespace: registry
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-jobservice
      port: 80
      targetPort: 8080
    - name: http-metrics
      port: 8001
  selector:
    app: "harbor"
    component: jobservice
---
# Source: harbor/templates/jobservice/jobservice-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-jobservice"
  namespace: registry
  labels:
    app: "harbor"
    component: jobservice
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "harbor"
      component: jobservice
  template:
    metadata:
      labels:
        app: "harbor"
        component: jobservice
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: jobservice
        image: goharbor/harbor-jobservice:v2.11.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/v1/stats
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/v1/stats
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
        env:
          - name: CORE_SECRET
            valueFrom:
              secretKeyRef:
                name: harbor-core
                key: secret
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        envFrom:
        - configMapRef:
            name: "harbor-jobservice-env"
        - secretRef:
            name: "harbor-jobservice"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jobservice-config
          mountPath: /etc/jobservice/config.yml
          subPath: config.yml
        - name: job-logs
          mountPath: /var/log/jobs
          subPath:
      volumes:
      - name: jobservice-config
        configMap:
          name: "harbor-jobservice"
      - name: job-logs
        persistentVolumeClaim:
          claimName: harbor-jobservice
Deploy Harbor registry

The following parameter notes were excerpted from ChatGPT; treat them as reference only.

chunksize: 10485760

  • Meaning: the size of each chunk in a multipart upload, in bytes. 10,485,760 bytes is 10 MB.
  • Suggested value: typically between 5 MB and 50 MB, depending on your network environment and load. Too small increases the number of chunks and requests; too large increases the time each upload takes.
    • If the network is fast and reliable, increase chunksize to, say, 25 MB or 50 MB.
    • If the network is unstable, reduce it somewhat, but avoid setting it too low.

multipartcopychunksize: 33554432

  • Meaning: the size of each chunk when copying large files, in bytes. 33,554,432 bytes is 32 MB.
  • Suggested value: typically between 16 MB and 64 MB, depending on file sizes and storage performance. When copying large files, a bigger multipartcopychunksize speeds things up but uses more memory and bandwidth.
    • For very large files, increase it to 64 MB or even higher.
    • For frequent small-file transfers, 32 MB is a reasonable value.

multipartcopymaxconcurrency: 100

  • Meaning: the maximum number of concurrent copy operations, i.e. how many chunks are transferred at the same time.
  • Suggested value: set the concurrency according to the network bandwidth and the server's throughput; 5 to 20 is typical.
    • 100 is a very high value, suitable only for extremely fast networks. In most cases, staying below 50 is more appropriate.
    • If server resources or bandwidth are limited, reduce it to 10 or 20 to avoid timeouts or failures caused by excessive concurrency.

multipartcopythresholdsize: 5368709120

  • 解释:决定何时启用分块上传。5,368,709,120 字节等于 5 GB。当文件超过这个大小时,会启用分块上传。
  • 建议值:5 GB 是一个较为常见的默认值。如果大多数文件都远小于 5 GB,可以调小这个值,例如 1 G或2 GB,以更早触发分块上传。
    • 如果你处理的文件一般较小,可以调低这个值以利用分块上传的优势。
    • 对于经常传输大型文件的环境,5 GB 是合适的阈值。
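The byte values above are plain powers-of-two multiples of a mebibyte/gibibyte. A quick shell sketch to double-check the conversions (pure arithmetic, nothing Harbor-specific):

```shell
# Convert the tuning values discussed above from MiB/GiB to bytes.
mib=$((1024 * 1024))        # 1 MiB
gib=$((1024 * mib))         # 1 GiB

echo "chunksize (10 MiB):                 $((10 * mib))"   # 10485760
echo "multipartcopychunksize (32 MiB):    $((32 * mib))"   # 33554432
echo "multipartcopythresholdsize (5 GiB): $((5 * gib))"    # 5368709120
```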
---
# Source: harbor/templates/registry/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-registry"
  namespace: registry
  labels:
    app: "harbor"
type: Opaque
data:
  REGISTRY_HTTP_SECRET: "Rk9MZGxiTnVhVUNWMW9Naw=="
  REGISTRY_REDIS_PASSWORD: ""
  REGISTRY_STORAGE_S3_ACCESSKEY: "N2l2a3VCcnJwWWZMWU50NUZvNUw="
  REGISTRY_STORAGE_S3_SECRETKEY: "aDF6NHFFQUc5Y3U2M1lWRTE2eDZsTXNtckREbVEzeGN3dGdmQmJ4cw=="
---
# Source: harbor/templates/registry/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-registry-htpasswd"
  namespace: registry
  labels:
    app: "harbor"
type: Opaque
data:
  REGISTRY_HTPASSWD: "aGFyYm9yX3JlZ2lzdHJ5X3VzZXI6JDJhJDEwJGpGWm93Qk94NC5iZ3JGQnR4dlBidHVEQmhmWlhUV0tiTnVoamd6bXVLZ0xvVm1seXU3MzRt"
---
# Source: harbor/templates/registry/registryctl-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-registryctl"
  namespace: registry
  labels:
    app: "harbor"
type: Opaque
data:
---
# Source: harbor/templates/registry/registry-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-registry"
  namespace: registry
  labels:
    app: "harbor"
data:
  config.yml: |+
    version: 0.1
    log:
      level: info
      fields:
        service: registry
    storage:
      s3:
        region: "default"
        # region: "asia-east1"
        bucket: harbor
        regionendpoint: http://minio-svc.storage.svc.cluster.local:9000
        # v4auth: true
        # chunksize: 5242880
        rootdirectory: /
        # multipartcopychunksize: 8388608
        multipartcopymaxconcurrency: 10
        # multipartcopythresholdsize: 5368709120
      cache:
        layerinfo: redis
      maintenance:
        uploadpurging:
          enabled: true
          age: 168h
          interval: 24h
          dryrun: false
      delete:
        enabled: true
      redirect:
        disable: true
    redis:
      addr: harbor-redis:6379
      db: 2
      readtimeout: 10s
      writetimeout: 10s
      dialtimeout: 10s
      pool:
        maxidle: 100
        maxactive: 500
        idletimeout: 60s
    http:
      timeout: 300s
      addr: :5000
      relativeurls: false
      # set via environment variable
      # secret: placeholder
      debug:
        addr: :8001
        prometheus:
          enabled: true
          path: /metrics
    auth:
      htpasswd:
        realm: harbor-registry-basic-realm
        path: /etc/registry/passwd
    validation:
      disabled: true
    compatibility:
      schema1:
        enabled: true
  ctl-config.yml: |+
    ---
    protocol: "http"
    port: 8080
    log_level: info
    registry_config: "/etc/registry/config.yml"
---
# Source: harbor/templates/registry/registryctl-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-registryctl"
  namespace: registry
  labels:
    app: "harbor"
data:
---
# Source: harbor/templates/registry/registry-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-registry"
  namespace: registry
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-registry
      port: 5000
    - name: http-controller
      port: 8080
    - name: http-metrics
      port: 8001
  selector:
    app: "harbor"
    component: registry
---
# Source: harbor/templates/registry/registry-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-registry"
  namespace: registry
  labels:
    app: "harbor"
    component: registry
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "harbor"
      component: registry
  template:
    metadata:
      labels:
        app: "harbor"
        component: registry
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
        fsGroupChangePolicy: OnRootMismatch
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: registry
        image: goharbor/registry-photon:v2.11.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 5000
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 5000
          initialDelaySeconds: 1
          periodSeconds: 10
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        args: ["serve", "/etc/registry/config.yml"]
        envFrom:
        - secretRef:
            name: "harbor-registry"
        env:
        ports:
        - containerPort: 5000
        - containerPort: 8001
        volumeMounts:
        - name: registry-data
          mountPath: /storage
          subPath:
        - name: registry-htpasswd
          mountPath: /etc/registry/passwd
          subPath: passwd
        - name: registry-config
          mountPath: /etc/registry/config.yml
          subPath: config.yml
      - name: registryctl
        image: goharbor/harbor-registryctl:v2.11.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/health
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/health
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 10
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        envFrom:
        - configMapRef:
            name: "harbor-registryctl"
        - secretRef:
            name: "harbor-registry"
        - secretRef:
            name: "harbor-registryctl"
        env:
        - name: CORE_SECRET
          valueFrom:
            secretKeyRef:
              name: harbor-core
              key: secret
        - name: JOBSERVICE_SECRET
          valueFrom:
            secretKeyRef:
              name: harbor-jobservice
              key: JOBSERVICE_SECRET
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: registry-data
          mountPath: /storage
          subPath:
        - name: registry-config
          mountPath: /etc/registry/config.yml
          subPath: config.yml
        - name: registry-config
          mountPath: /etc/registryctl/config.yml
          subPath: ctl-config.yml
      volumes:
      - name: registry-htpasswd
        secret:
          secretName: harbor-registry-htpasswd
          items:
            - key: REGISTRY_HTPASSWD
              path: passwd
      - name: registry-config
        configMap:
          name: "harbor-registry"
      - name: registry-data
        emptyDir: {}
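The Secret manifests above carry base64-encoded values. To substitute your own MinIO access/secret keys, encode them the same way (the minioadmin credentials below are placeholders for illustration, not the values used in this deployment):

```shell
# base64-encode your own values for the harbor-registry Secret
# (minioadmin / minioadmin123 are placeholder credentials).
echo -n 'minioadmin'    | base64    # value for REGISTRY_STORAGE_S3_ACCESSKEY
echo -n 'minioadmin123' | base64    # value for REGISTRY_STORAGE_S3_SECRETKEY

# The harbor-registry-htpasswd Secret holds a bcrypt htpasswd entry,
# base64-encoded. With apache2-utils installed it can be generated like:
#   htpasswd -nbB harbor_registry_user 'yourpassword' | tr -d '\n' | base64 -w0
```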
Deploy the Harbor portal

Once this step is done, you can open a browser and visit the Harbor domain you configured.

  • Username: admin
  • Password: 1q@W3e$R (you can define your own in the Harbor core ConfigMap)
---
# Source: harbor/templates/portal/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-portal"
  namespace: registry
  labels:
    app: "harbor"
data:
  nginx.conf: |+
    worker_processes auto;
    pid /tmp/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        client_body_temp_path /tmp/client_body_temp;
        proxy_temp_path /tmp/proxy_temp;
        fastcgi_temp_path /tmp/fastcgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;
        scgi_temp_path /tmp/scgi_temp;
        server {
            listen 8080;
            listen [::]:8080;
            server_name  localhost;
            # server_name  harbor.devops.icu;
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            include /etc/nginx/mime.types;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            location /devcenter-api-2.0 {
                try_files $uri $uri/ /swagger-ui-index.html;
            }
            location / {
                try_files $uri $uri/ /index.html;
            }
            location = /index.html {
                add_header Cache-Control "no-store, no-cache, must-revalidate";
            }
        }
    }
---
# Source: harbor/templates/portal/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-portal"
  namespace: registry
  labels:
    app: "harbor"
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: "harbor"
    component: portal
---
# Source: harbor/templates/portal/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-portal"
  namespace: registry
  labels:
    app: "harbor"
    component: portal
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: "harbor"
      component: portal
  template:
    metadata:
      labels:
        app: "harbor"
        component: portal
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: portal
        image: goharbor/harbor-portal:v2.11.1
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        livenessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 10
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: portal-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: portal-config
        configMap:
          name: "harbor-portal"
---
# Source: harbor/templates/ingress/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "harbor-ingress"
  namespace: registry
  labels:
    app: "harbor"
  annotations:
    ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: harbor-core
            port:
              number: 80
      - path: /service/
        pathType: Prefix
        backend:
          service:
            name: harbor-core
            port:
              number: 80
      - path: /v2/
        pathType: Prefix
        backend:
          service:
            name: harbor-core
            port:
              number: 80
      - path: /chartrepo/
        pathType: Prefix
        backend:
          service:
            name: harbor-core
            port:
              number: 80
      - path: /c/
        pathType: Prefix
        backend:
          service:
            name: harbor-core
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: harbor-portal
            port:
              number: 80
    host: harbor.devops.icu

After logging in, you will see the Harbor web UI.

(screenshot)

Deploy the Harbor exporter

The exporter serves metrics for Prometheus. It is optional; deploy it only if you need it.

---
# Source: harbor/templates/exporter/exporter-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-exporter
  namespace: registry
  labels:
    app: "harbor"
type: Opaque
data:
  HARBOR_ADMIN_PASSWORD: "MXFAVzNlJFI="
  HARBOR_DATABASE_PASSWORD: "Y2hhbmdlaXQ="
---
# Source: harbor/templates/exporter/exporter-cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-exporter-env"
  namespace: registry
  labels:
    app: "harbor"
data:
  HTTP_PROXY: ""
  HTTPS_PROXY: ""
  NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
  LOG_LEVEL: "info"
  HARBOR_EXPORTER_PORT: "8001"
  HARBOR_EXPORTER_METRICS_PATH: "/metrics"
  HARBOR_EXPORTER_METRICS_ENABLED: "true"
  HARBOR_EXPORTER_CACHE_TIME: "23"
  HARBOR_EXPORTER_CACHE_CLEAN_INTERVAL: "14400"
  HARBOR_METRIC_NAMESPACE: harbor
  HARBOR_METRIC_SUBSYSTEM: exporter
  HARBOR_REDIS_URL: "redis://harbor-redis:6379/1"
  HARBOR_REDIS_NAMESPACE: harbor_job_service_namespace
  HARBOR_REDIS_TIMEOUT: "3600"
  HARBOR_SERVICE_SCHEME: "http"
  HARBOR_SERVICE_HOST: "harbor-core"
  HARBOR_SERVICE_PORT: "80"
  HARBOR_DATABASE_HOST: "harbor-database"
  HARBOR_DATABASE_PORT: "5432"
  HARBOR_DATABASE_USERNAME: "postgres"
  HARBOR_DATABASE_DBNAME: "registry"
  HARBOR_DATABASE_SSLMODE: "disable"
  HARBOR_DATABASE_MAX_IDLE_CONNS: "100"
  HARBOR_DATABASE_MAX_OPEN_CONNS: "900"
---
# Source: harbor/templates/exporter/exporter-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-exporter"
  namespace: registry
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-metrics
      port: 8001
  selector:
    app: "harbor"
    component: exporter
---
# Source: harbor/templates/exporter/exporter-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-exporter
  namespace: registry
  labels:
    app: "harbor"
    component: exporter
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: "harbor"
      component: exporter
  template:
    metadata:
      labels:
        app: "harbor"
        component: exporter
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: exporter
        image: goharbor/harbor-exporter:v2.11.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            port: 8001
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 8001
          initialDelaySeconds: 30
          periodSeconds: 10
        args: ["-log-level", "info"]
        envFrom:
        - configMapRef:
            name: "harbor-exporter-env"
        - secretRef:
            name: "harbor-exporter"
        env:
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        ports:
        - containerPort: 8001
        volumeMounts:
      volumes:
      - name: config
        secret:
          secretName: "harbor-exporter"
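If you do deploy the exporter, a minimal Prometheus scrape job could look like the following. This is a sketch that assumes Prometheus runs in-cluster and can resolve the Service DNS name; the job name and interval are arbitrary choices:

```yaml
scrape_configs:
  - job_name: harbor-exporter
    metrics_path: /metrics
    scrape_interval: 30s
    static_configs:
      - targets:
          # Service "harbor-exporter" in the "registry" namespace, port 8001
          - harbor-exporter.registry.svc.cluster.local:8001
```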

Harbor configuration and verification

Create a user

Avoid pulling images with the admin account directly.

(screenshot)

Create a project

I like to keep things categorized, so I create separate projects. The quota here can cap the project's storage so a runaway project cannot fill the disk; -1 means unlimited, and I left it at that for this test.

(screenshot)

Assign members to the project

Assign members to the project and grant each one the appropriate permissions.

(screenshot)

For details about what each role (Limited Guest, Guest, Developer, Maintainer, Project Admin, and system Admin) is allowed to do, see the official documentation: User Permissions By Role. The permission matrix covers actions such as viewing and editing project configuration; managing project members, labels, and quotas; viewing project logs and replication jobs; creating and deleting repositories; pulling, pushing, retagging, scanning, and deleting images; viewing image build history and vulnerability reports; managing Helm charts, robot accounts, tag retention rules, and tag immutability rules; and adding and configuring scanners.

docker login configuration

Append the following to /etc/docker/daemon.json. Mind the overall JSON syntax, and replace the address with your own Harbor address:

"insecure-registries": ["http://harbor.devops.icu"]
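For reference, a complete /etc/docker/daemon.json with this entry merged in might look like the sketch below. Note that Docker's documentation lists insecure-registries entries as host[:port] without a scheme; the exec-opts line here is a placeholder standing in for whatever settings your file already contains:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["harbor.devops.icu"]
}
```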

After changing the file, restart the Docker service:

systemctl restart docker

Log in to Harbor:

docker login http://harbor.devops.icu

Enter the username and password; a successful login prints something like:

WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

containerd configuration

I have not found a persistent login equivalent to docker login for containerd; for now, pass the username and password on every push.
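One commonly used approach, not verified in this post, is to configure static registry credentials for containerd's CRI plugin in /etc/containerd/config.toml. This only affects kubelet/CRI image pulls, not the ctr command, and harboruser/yourpassword below are placeholders:

```toml
# /etc/containerd/config.toml — CRI registry auth (restart containerd afterwards).
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.devops.icu".auth]
  username = "harboruser"
  password = "yourpassword"

# Mark the registry as plain HTTP for CRI pulls as well.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.devops.icu"]
  endpoint = ["http://harbor.devops.icu"]
```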

Push verification

Verify with docker

Retag an image; any image already on your machine will do:

docker tag m.daocloud.io/busybox:1.37 harbor.devops.icu/baseimage/busybox:1.37

Push the image; once it reports Pushed, the repository shows up in the Harbor UI:

docker push harbor.devops.icu/baseimage/busybox:1.37
Verify with containerd

Retag an image; any image already on your machine will do:

ctr -n k8s.io image tag m.daocloud.io/busybox:1.37 harbor.devops.icu/baseimage/busybox:1.37

Push the image; once it reports Pushed, the repository shows up in the Harbor UI:

ctr -n k8s.io image push --user harboruser --plain-http harbor.devops.icu/baseimage/busybox:1.37

Pull verification

Verify with docker

First delete the locally tagged image, or switch to another machine and pull from there:

docker rmi harbor.devops.icu/baseimage/busybox:1.37

Pull the image:

docker pull harbor.devops.icu/baseimage/busybox:1.37
Verify with containerd
ctr -n k8s.io image pull --user harboruser --plain-http harbor.devops.icu/baseimage/busybox:1.37
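To let the cluster itself pull from the private project, one common approach (a sketch, not something verified in this post) is to create a docker-registry Secret with `kubectl create secret docker-registry harbor-cred --docker-server=harbor.devops.icu --docker-username=harboruser --docker-password=yourpassword`, then reference it from the Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-from-harbor
spec:
  imagePullSecrets:
    - name: harbor-cred          # the docker-registry Secret created above
  containers:
    - name: busybox
      image: harbor.devops.icu/baseimage/busybox:1.37
      command: ["sleep", "3600"]
```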

Open issue

After an image is deleted in Harbor, the corresponding data in MinIO is not removed. I tried Harbor's garbage collection, but it did not seem to trigger. If anyone knows the cause, please share; I will update this post if I find the answer.
