Resource Quotas (ResourceQuota) and Resource Limits (LimitRange)

Resource Quotas: ResourceQuota

ResourceQuota: caps the total resource capacity of a namespace.

When multiple teams or users share a Kubernetes cluster, resource usage becomes uneven; by default it is first come, first served. A ResourceQuota solves this by capping the total resource usage of each namespace.

Workflow: the cluster administrator creates one or more ResourceQuota objects per namespace to define total resource limits. Kubernetes tracks resource usage in each namespace and rejects any request that would exceed the defined quota.


💘 Hands-on: ResourceQuota, 2023.5.25 (tested successfully)


  • Experiment environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: three CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.20.0
   docker://20.10.7
  • Experiment files

Link: https://pan.baidu.com/s/1-NzdrpktfaUOAl6QO_hqsA?pwd=0820
Password: 0820
2023.5.25-ResourceQuota-code


1. Compute resource quota


  • My VMs are currently sized at 2 CPUs and 2 GB of memory each


The cluster has 1 master and 2 worker nodes.

  • Create the test namespace test
[root@k8s-master1 ~]#kubectl create ns test
namespace/test created
  • Create the ResourceQuota resource
[root@k8s-master1 ~]#mkdir ResourceQuota
[root@k8s-master1 ~]#cd ResourceQuota/
[root@k8s-master1 ResourceQuota]#vim compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: test
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    
# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f compute-resources.yaml 
resourcequota/compute-resources configured

# Check the ResourceQuota just configured
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE     REQUEST                                     LIMIT
compute-resources   2m37s   requests.cpu: 0/1, requests.memory: 0/1Gi   limits.cpu: 0/2, limits.memory: 0/2Gi
  • Deploy a pod
[root@k8s-master1 ResourceQuota]#kubectl run web --image=nginx --dry-run=client -oyaml > pod.yaml
[root@k8s-master1 ResourceQuota]#vim pod.yaml
# Strip the unneeded fields and add a resources section
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 0.5
        memory: 0.5Gi
      limits:
        cpu: 1
        memory: 1Gi        
        
# Deploy the pod
[root@k8s-master1 ResourceQuota]#kubectl apply -f pod.yaml 
Error from server (Forbidden): error when creating "pod.yaml": pods "web" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory
# Note: the apply fails with 'pods "web" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory'. Because the test namespace has a ResourceQuota covering limits, a pod that only sets requests is rejected.

# Test: if no resources are configured at all, is the pod still rejected?
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    #resources:
     # requests:
      #  cpu: 0.5
      #  memory: 0.5Gi
[root@k8s-master1 ResourceQuota]#kubectl apply -f pod.yaml 
Error from server (Forbidden): error when creating "pod.yaml": pods "web" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
[root@k8s-master1 ResourceQuota]#kubectl  get po -ntest
No resources found in test namespace.
# Result: it is rejected as well.

# Conclusion: in any namespace with a ResourceQuota configured, every pod must set limits.cpu, limits.memory, requests.cpu and requests.memory, otherwise the request is rejected and the resource cannot be created.
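A common way to avoid this rejection in practice is to pair the ResourceQuota with a LimitRange that injects default requests and limits into pods that omit them (LimitRange is covered in detail later in this document). A minimal sketch, assuming defaulting is acceptable for your workloads; the object name and values here are illustrative, not taken from the experiment:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources   # illustrative name
  namespace: test
spec:
  limits:
  - default:          # injected as limits when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:   # injected as requests when a container sets none
      cpu: 250m
      memory: 256Mi
    type: Container
```

With this in place, a pod without a resources section passes the quota check, because the LimitRanger admission plugin fills in the values before the quota is evaluated.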

# Reconfigure the pod: add the limits section back
[root@k8s-master1 ResourceQuota]#vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 0.5
        memory: 0.5Gi
      limits:
        cpu: 1
        memory: 1Gi 
        
# Redeploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pod.yaml 
pod/web created
  • Verify
# Check:
[root@k8s-master1 ResourceQuota]#kubectl get po -ntest
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          26s
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                            LIMIT
compute-resources   8h    requests.cpu: 500m/1, requests.memory: 512Mi/1Gi   limits.cpu: 1/2, limits.memory: 1Gi/2Gi
# The quota now clearly shows current usage / total for requests.cpu, requests.memory, limits.cpu and limits.memory.
  • Now, if we create another pod in the test namespace whose cpu or memory requests, added to the existing usage, exceed what the ResourceQuota defines, it should be rejected. The same goes for limits. Let's test both cases.

Test: if the sum of cpu or memory requests across pods exceeds what the ResourceQuota defines, creation should fail.

[root@k8s-master1 ResourceQuota]#cp pod.yaml pod1.yaml
[root@k8s-master1 ResourceQuota]#vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web2
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 0.6
        memory: 0.5Gi
      limits:
        cpu: 1
        memory: 1Gi

# Deploy and observe:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pod1.yaml 
Error from server (Forbidden): error when creating "pod1.yaml": pods "web2" is forbidden: exceeded quota: compute-resources, requested: requests.cpu=600m, used: requests.cpu=500m, limited: requests.cpu=1
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                            LIMIT
compute-resources   8h    requests.cpu: 500m/1, requests.memory: 512Mi/1Gi   limits.cpu: 1/2, limits.memory: 1Gi/2Gi

Conclusion: if the sum of cpu or memory requests across pods would exceed the ResourceQuota, creating the new pod fails.

Test: if the sum of cpu or memory limits across pods exceeds what the ResourceQuota defines, creation should fail.

[root@k8s-master1 ResourceQuota]#cp pod.yaml pod2.yaml
[root@k8s-master1 ResourceQuota]#vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web3
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 0.5
        memory: 0.5Gi
      limits:
        cpu: 1.1
        memory: 1Gi


# Deploy and observe:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pod2.yaml 
Error from server (Forbidden): error when creating "pod2.yaml": pods "web3" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1100m, used: limits.cpu=1, limited: limits.cpu=2
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                            LIMIT
compute-resources   8h    requests.cpu: 500m/1, requests.memory: 512Mi/1Gi   limits.cpu: 1/2, limits.memory: 1Gi/2Gi

Conclusion: if the sum of cpu or memory limits across pods would exceed the ResourceQuota, creating the new pod fails.

Therefore:

If a namespace has a ResourceQuota configured, every pod must set limits.cpu, limits.memory, requests.cpu and requests.memory, otherwise it is rejected and cannot be created.

Also, if a pod's cpu or memory requests/limits, added to existing usage, would exceed the requests/limits defined in the ResourceQuota, it is rejected and cannot be created.

[root@k8s-master1 ResourceQuota]#cp pod.yaml pod3.yaml
[root@k8s-master1 ResourceQuota]#vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web3
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 0.5
        memory: 0.5Gi
      limits:
        cpu: 1
        memory: 1Gi

# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pod3.yaml
pod/web3 created

# Check:
[root@k8s-master1 ResourceQuota]#kubectl get po -ntest
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          16m
web3   1/1     Running   0          27s
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                       LIMIT
compute-resources   8h    requests.cpu: 1/1, requests.memory: 1Gi/1Gi   limits.cpu: 2/2, limits.memory: 2Gi/2Gi

End of test. 😘

2. Storage resource quota


  • Deploy the ResourceQuota
[root@k8s-master1 ResourceQuota]#vim storage-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources
  namespace: test
spec:
  hard:
    requests.storage: "10G"  
    
# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f storage-resources.yaml 
resourcequota/storage-resources created

# Check
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                       LIMIT
compute-resources   8h    requests.cpu: 1/1, requests.memory: 1Gi/1Gi   limits.cpu: 2/2, limits.memory: 2Gi/2Gi
storage-resources   8s    requests.storage: 0/10G
  • Create a PVC to test
[root@k8s-master1 ResourceQuota]#vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
      
# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pvc.yaml 
persistentvolumeclaim/pvc created

# Check:
[root@k8s-master1 ResourceQuota]#kubectl get pvc -ntest
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc    Pending     # Pending does not affect this experiment; the PVC stays Pending only because no PV exists
# After deployment, requests.storage in the ResourceQuota has changed accordingly
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE    REQUEST                                       LIMIT
compute-resources   8h     requests.cpu: 1/1, requests.memory: 1Gi/1Gi   limits.cpu: 2/2, limits.memory: 2Gi/2Gi
storage-resources   117s   requests.storage: 8Gi/10G
  • Create one more PVC. If the total requests.storage would exceed what the ResourceQuota defines, it should be rejected.
[root@k8s-master1 ResourceQuota]#cp pvc.yaml pvc1.yaml
[root@k8s-master1 ResourceQuota]#vim pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2.1Gi
      
# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pvc1.yaml 
Error from server (Forbidden): error when creating "pvc1.yaml": persistentvolumeclaims "pvc1" is forbidden: exceeded quota: storage-resources, requested: requests.storage=2254857831, used: requests.storage=8Gi, limited: requests.storage=10G
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE     REQUEST                                       LIMIT
compute-resources   8h      requests.cpu: 1/1, requests.memory: 1Gi/1Gi   limits.cpu: 2/2, limits.memory: 2Gi/2Gi
storage-resources   6m29s   requests.storage: 8Gi/10G
# As expected, the request is rejected.

# Let's deploy another PVC:
[root@k8s-master1 ResourceQuota]#cp pvc.yaml pvc2.yaml
[root@k8s-master1 ResourceQuota]#vim pvc2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
# Deploy. Note: 8Gi already used + 2Gi requested = 10Gi, which exceeds the 10G (decimal) quota, because Gi (binary) and G (decimal) are different units!
[root@k8s-master1 ResourceQuota]#kubectl apply -f pvc2.yaml 
Error from server (Forbidden): error when creating "pvc2.yaml": persistentvolumeclaims "pvc2" is forbidden: exceeded quota: storage-resources, requested: requests.storage=2Gi, used: requests.storage=8Gi, limited: requests.storage=10G
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE     REQUEST                                       LIMIT
compute-resources   8h      requests.cpu: 1/1, requests.memory: 1Gi/1Gi   limits.cpu: 2/2, limits.memory: 2Gi/2Gi
storage-resources   6m29s   requests.storage: 8Gi/10G

# Try once more with pvc3.yaml
[root@k8s-master1 ResourceQuota]#cp pvc.yaml pvc3.yaml
[root@k8s-master1 ResourceQuota]#vim pvc3.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f pvc3.yaml 
persistentvolumeclaim/pvc3 created

# Check: as expected, the PVC is created successfully.
[root@k8s-master1 ResourceQuota]#kubectl get pvc -ntest
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc    Pending                                                     9m27s
pvc3   Pending                                                     9s
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                       LIMIT
compute-resources   8h    requests.cpu: 1/1, requests.memory: 1Gi/1Gi   limits.cpu: 2/2, limits.memory: 2Gi/2Gi
storage-resources   11m   requests.storage: 9Gi/10G
[root@k8s-master1 ResourceQuota]#

End of test. 😘

3. Object count quota


  • Check the current environment
[root@k8s-master1 ResourceQuota]#kubectl get po -ntest
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          41m
web3   1/1     Running   0          25m
  • Deploy the ResourceQuota
[root@k8s-master1 ResourceQuota]#vim object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: test
spec:
  hard:
    pods: "4"
    count/deployments.apps: "3"
    count/services: "3"
    
# Deploy:
[root@k8s-master1 ResourceQuota]#kubectl apply -f object-counts.yaml 
resourcequota/object-counts created

# Check:
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE   REQUEST                                                       LIMIT
compute-resources   9h    requests.cpu: 1/1, requests.memory: 1Gi/1Gi                   limits.cpu: 2/2, limits.memory: 2Gi/2Gi
object-counts       15s   count/deployments.apps: 0/3, count/services: 0/3, pods: 2/4
storage-resources   16m   requests.storage: 9Gi/10G
[root@k8s-master1 ResourceQuota]#
  • Test
# Two pods already exist, and the ResourceQuota caps pods at 4, so let's create some test pods
# To simplify testing, delete compute-resources.yaml first; otherwise new pods without resources would be rejected
[root@k8s-master1 ResourceQuota]#kubectl delete -f compute-resources.yaml 
resourcequota "compute-resources" deleted
[root@k8s-master1 ResourceQuota]#kubectl get quota -ntest
NAME                AGE     REQUEST                                                       LIMIT
object-counts       3m23s   count/deployments.apps: 0/3, count/services: 0/3, pods: 2/4
storage-resources   19m     requests.storage: 9Gi/10G

# Create more test pods and watch what happens
[root@k8s-master1 ResourceQuota]#kubectl get po -ntest
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          47m
web3   1/1     Running   0          32m
[root@k8s-master1 ResourceQuota]#kubectl run web4 --image=nginx -ntest
pod/web4 created
[root@k8s-master1 ResourceQuota]#kubectl run web5 --image=nginx -ntest
Error from server (Forbidden): pods "web5" is forbidden: exceeded quota: object-counts, requested: pods=1, used: pods=4, limited: pods=4
# As expected, the new pod is rejected.

End of test. 😘

Summary

  • Notes
  1. If a namespace has a ResourceQuota configured, every pod must set limits.cpu, limits.memory, requests.cpu and requests.memory, otherwise it is rejected and cannot be created.
  2. If the sum of a pod's cpu or memory requests & limits, added to existing usage, would exceed the requests & limits defined in the ResourceQuota, it is rejected and cannot be created. (Note: for every quota type, usage is allowed to exactly reach the ResourceQuota value; the storage test above failed only because 8Gi + 2Gi = 10Gi is more than the 10G quota, since Gi is a binary unit and G a decimal one.)
  • All of these fields can be combined in a single ResourceQuota

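As a sketch of what combining them looks like, here are all of the quota fields used in this document merged into one ResourceQuota (the object name is illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: all-in-one   # illustrative name
  namespace: test
spec:
  hard:
    # compute resources
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    # storage resources
    requests.storage: 10G
    # object counts
    pods: "4"
    count/deployments.apps: "3"
    count/services: "3"
```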

Resource Limits: LimitRange

LimitRange: constrains the minimum and maximum resources of individual containers.

By default, containers in a Kubernetes cluster have no compute resource limits, so a single oversized container can disrupt other containers. A LimitRange defines default CPU and memory requests and limits for containers, as well as hard upper bounds.

Dimensions a LimitRange can constrain:

• The minimum and maximum values a container may set for requests.cpu/memory and limits.cpu/memory

• The default values of a container's requests.cpu/memory and limits.cpu/memory

• The minimum and maximum values a PVC may set for requests.storage
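All three dimensions can be expressed in a single LimitRange object. A sketch using the values from the experiments below (the object name is illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits-sketch   # illustrative name
  namespace: test
spec:
  limits:
  - type: Container
    min:              # minimum requests a container may set
      cpu: 200m
      memory: 200Mi
    max:              # maximum limits a container may set
      cpu: 1
      memory: 1Gi
    defaultRequest:   # requests injected when a container sets none
      cpu: 300m
      memory: 300Mi
    default:          # limits injected when a container sets none
      cpu: 500m
      memory: 500Mi
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi
    max:
      storage: 10Gi
```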

💘 Hands-on: LimitRange, 2023.5.25 (tested successfully)


  • Experiment environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: three CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.20.0
   docker://20.10.7
  • Experiment files

Link: https://pan.baidu.com/s/1dTuFjqToJaCiHHvtYH9xiw?pwd=0820
Password: 0820

2023.5.25-LimitRange-code


  • To start from a clean slate, delete the ResourceQuota configuration created above
[root@k8s-master1 ~]#cd ResourceQuota/
[root@k8s-master1 ResourceQuota]#ls
compute-resources.yaml  object-counts.yaml  pod1.yaml  pod2.yaml  pod3.yaml  pod.yaml  pvc1.yaml  pvc2.yaml  pvc3.yaml  pvc.yaml  storage-resources.yaml
[root@k8s-master1 ResourceQuota]#kubectl delete -f .
resourcequota "object-counts" deleted
pod "web" deleted
pod "web3" deleted
pod "web3" deleted
persistentvolumeclaim "pvc" deleted
persistentvolumeclaim "pvc3" deleted
resourcequota "storage-resources" deleted
Error from server (NotFound): error when deleting "compute-resources.yaml": resourcequotas "compute-resources" not found
Error from server (NotFound): error when deleting "pod1.yaml": pods "web2" not found
Error from server (NotFound): error when deleting "pvc1.yaml": persistentvolumeclaims "pvc1" not found
Error from server (NotFound): error when deleting "pvc2.yaml": persistentvolumeclaims "pvc2" not found
[root@k8s-master1 ResourceQuota]#

1. Compute resource min/max limits


  • Conclusion:

If a namespace has a LimitRange configured, any pod created afterwards must not set container request values below the LimitRange's min (the minimum request) or limit values above its max (the maximum limit); otherwise creation fails.

Let's verify this conclusion.

  • Deploy the LimitRange
[root@k8s-master1 ~]#mkdir LimitRange
[root@k8s-master1 ~]#cd LimitRange/
[root@k8s-master1 LimitRange]#vim cpu-memory-min-max.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-min-max
  namespace: test
spec:
  limits:
  - max: # maximum limits a container may set
      cpu: 1
      memory: 1Gi
    min: # minimum requests a container may set
      cpu: 200m 
      memory: 200Mi
    type: Container
 
# Deploy
[root@k8s-master1 LimitRange]#kubectl apply -f cpu-memory-min-max.yaml 
limitrange/cpu-memory-min-max created

# Check
[root@k8s-master1 LimitRange]#kubectl get limits -n test
NAME                 CREATED AT
cpu-memory-min-max   2023-05-24T23:26:13Z
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   memory    200Mi  1Gi  1Gi              1Gi            -
Container   cpu       200m   1    1                1              -
# Note: default request and limit values are present here (when unspecified, they fall back to max)
  • Let's create one pod whose request is below the minimum and one whose limit is above the maximum, and see what happens

Create a pod whose request is below the minimum:

[root@k8s-master1 LimitRange]#vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 1
        memory: 1Gi
        
# Deploy (fails as expected):
[root@k8s-master1 LimitRange]#kubectl apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "web" is forbidden: minimum cpu usage per Container is 200m, but request is 100m
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   1    1                1              -
Container   memory    200Mi  1Gi  1Gi              1Gi            -

Create a pod whose limit exceeds the maximum:

[root@k8s-master1 LimitRange]#cp pod.yaml pod1.yaml
[root@k8s-master1 LimitRange]#vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 200m
        memory: 200Mi
      limits:
        cpu: 1.1
        memory: 1Gi
        
# Deploy (fails as expected):
[root@k8s-master1 LimitRange]#kubectl apply -f pod1.yaml 
Error from server (Forbidden): error when creating "pod1.yaml": pods "web" is forbidden: maximum cpu usage per Container is 1, but limit is 1100m
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   memory    200Mi  1Gi  1Gi              1Gi            -
Container   cpu       200m   1    1                1              -

Create a pod within the allowed range:

[root@k8s-master1 LimitRange]#cp pod.yaml pod2.yaml
[root@k8s-master1 LimitRange]#vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
  namespace: test
spec:
  containers:
  - image: nginx
    name: web
    resources:
      requests:
        cpu: 250m
        memory: 200Mi
      limits:
        cpu: 0.9
        memory: 1Gi
        
# Deploy:
[root@k8s-master1 LimitRange]#kubectl apply -f pod2.yaml 
pod/web created

# Check:
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   1    1                1              -
Container   memory    200Mi  1Gi  1Gi              1Gi            -
[root@k8s-master1 LimitRange]#kubectl get po -ntest
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          20s
web4   1/1     Running   0          52m

End of test. 😘

2. Compute resource default values


  • Conclusion:

As long as a namespace has a LimitRange configured, a pod that sets no requests or limits can still be created: Kubernetes assigns it the default request and limit values.

  • First, check whether pods in the default namespace get default request/limit values
[root@k8s-master1 LimitRange]#kubectl get limits 
No resources found in default namespace.
[root@k8s-master1 LimitRange]#kubectl get po
NAME       READY   STATUS    RESTARTS   AGE
busybox    1/1     Running   6          3d10h
busybox2   1/1     Running   6          3d10h
py-k8s     1/1     Running   1          24h
[root@k8s-master1 LimitRange]#kubectl describe po busybox 
Name:         busybox
Namespace:    default
Priority:     0
Node:         k8s-node2/172.29.9.33
Start Time:   Sun, 21 May 2023 20:44:21 +0800
Labels:       run=busybox
Annotations:  cni.projectcalico.org/podIP: 10.244.169.162/32
……

# Pods in the default namespace have no default request/limit values, so they can consume all of the host's resources
  • Now check whether a new pod created in the test namespace gets default request/limit values
[root@k8s-master1 LimitRange]#kubectl get limits -ntest
NAME                 CREATED AT
cpu-memory-min-max   2023-05-24T23:26:13Z
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   1    1                1              -
Container   memory    200Mi  1Gi  1Gi              1Gi            -
[root@k8s-master1 LimitRange]#kubectl run web520 --image=nginx -ntest
pod/web520 created
[root@k8s-master1 LimitRange]#kubectl describe pod web520 -ntest
Name:         web520
Namespace:    test
Priority:     0
Node:         k8s-node2/172.29.9.33
Start Time:   Thu, 25 May 2023 07:44:16 +0800
Labels:       run=web520
Annotations:  cni.projectcalico.org/podIP: 10.244.169.167/32
              cni.projectcalico.org/podIPs: 10.244.169.167/32
              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container web520; cpu, memory limit for container web520
Status:       Running
IP:           10.244.169.167
IPs:
  IP:  10.244.169.167
Containers:
  web520:
    Container ID:   docker://6cf523d8b462fdfcb44348e4af5247f8c35e5cb0cf4c7e5b0dadaeea76aa8bec
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 25 May 2023 07:44:26 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        1
      memory:     1Gi
    Environment:  <none>
    
# The default values were applied
  • Next, change these default values
[root@k8s-master1 LimitRange]#vim default-cpu-memory-min-max.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-memory-min-max 
  namespace: test
spec:
  limits:
  - default:
      cpu: 500m
      memory: 500Mi
    defaultRequest:
      cpu: 300m
      memory: 300Mi
    type: Container
    
# Deploy:
[root@k8s-master1 LimitRange]#kubectl apply -f default-cpu-memory-min-max.yaml 
limitrange/default-cpu-memory-min-max created
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   1    1                1              -
Container   memory    200Mi  1Gi  1Gi              1Gi            -


Name:       default-cpu-memory-min-max
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    300m             500m           -
Container   memory    -    -    300Mi            500Mi          -
# The new default values now appear in the namespace

# Create another pod and observe
[root@k8s-master1 LimitRange]#kubectl run web1314 --image=nginx -ntest
pod/web1314 created
[root@k8s-master1 LimitRange]#kubectl describe pod web1314 -ntest
Name:         web1314
Namespace:    test
Priority:     0
Node:         k8s-node1/172.29.9.32
Start Time:   Thu, 25 May 2023 07:51:28 +0800
Labels:       run=web1314
Annotations:  cni.projectcalico.org/podIP: 10.244.36.101/32
              cni.projectcalico.org/podIPs: 10.244.36.101/32
              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container web1314; cpu, memory limit for container web1314
Status:       Running
IP:           10.244.36.101
IPs:
  IP:  10.244.36.101
Containers:
  web1314:
    Container ID:   docker://9cd7d7d0ebf7adb9e71b33749c26dd9e9c0149c3c1e427b50dd733bd3989d75b
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 25 May 2023 07:51:30 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        1
      memory:     1Gi
    Environment:  <none>
    
# Note: the test did not behave as expected
# There are two LimitRange resources in the test namespace with conflicting default values; judging from this result, the first rule appears to take precedence.
# Let's change the default values directly on the first LimitRange and observe:
[root@k8s-master1 LimitRange]#vim cpu-memory-min-max.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-min-max
  namespace: test
spec:
  limits:
  - max:
      cpu: 1
      memory: 1Gi
    min:
      cpu: 200m
      memory: 200Mi
    default:
      cpu: 600m
      memory: 600Mi
    defaultRequest:
      cpu: 400m
      memory: 400Mi
    type: Container

# Deploy and check:
[root@k8s-master1 LimitRange]#kubectl apply -f cpu-memory-min-max.yaml 
limitrange/cpu-memory-min-max configured
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   1    400m             600m           -
Container   memory    200Mi  1Gi  400Mi            600Mi          -


Name:       default-cpu-memory-min-max
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    300m             500m           -
Container   memory    -    -    300Mi            500Mi          -

# Deploy another new pod
[root@k8s-master1 LimitRange]#kubectl run web1315 --image=nginx -ntest
pod/web1315 created
[root@k8s-master1 LimitRange]#kubectl describe pod web1315  -ntest
Name:         web1315
Namespace:    test
Priority:     0
Node:         k8s-node2/172.29.9.33
Start Time:   Thu, 25 May 2023 07:57:39 +0800
Labels:       run=web1315
Annotations:  cni.projectcalico.org/podIP: 10.244.169.168/32
              cni.projectcalico.org/podIPs: 10.244.169.168/32
              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container web1315; cpu, memory limit for container web1315
Status:       Running
IP:           10.244.169.168
IPs:
  IP:  10.244.169.168
Containers:
  web1315:
    Container ID:   docker://a9e19c184eaa9513078b46d8d2d8ce5dd0a09dca00772052345d59040c57346e
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 25 May 2023 07:57:42 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     600m
      memory:  600Mi
    Requests:
      cpu:        400m
      memory:     400Mi
    Environment:  <none>
    Mounts:
# The new pod now picks up the updated default values, as expected.

End of test. 😘

3. Storage resource min/max limits


  • Same approach as for compute resources; let's test it
[root@k8s-master1 LimitRange]#vim storage-min-max.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-min-max
  namespace: test
spec:
  limits:
    - type: PersistentVolumeClaim
      max:
        storage: 10Gi
      min:
        storage: 1Gi    

# Deploy:
[root@k8s-master1 LimitRange]#kubectl apply -f storage-min-max.yaml 
limitrange/storage-min-max created

# Create a PVC
[root@k8s-master1 LimitRange]#vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage-test
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 11Gi
      
# Deploy the PVC:
[root@k8s-master1 LimitRange]#kubectl apply -f pvc.yaml 
Error from server (Forbidden): error when creating "pvc.yaml": persistentvolumeclaims "pvc-storage-test" is forbidden: maximum storage usage per PersistentVolumeClaim is 10Gi, but request is 11Gi
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   1    400m             600m           -
Container   memory    200Mi  1Gi  400Mi            600Mi          -


Name:       default-cpu-memory-min-max
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    300m             500m           -
Container   memory    -    -    300Mi            500Mi          -


Name:                  storage-min-max
Namespace:             test
Type                   Resource  Min  Max   Default Request  Default Limit  Max Limit/Request Ratio
----                   --------  ---  ---   ---------------  -------------  -----------------------
PersistentVolumeClaim  storage   1Gi  10Gi  -                -              -
# We requested 11Gi of storage, but the LimitRange max allows at most 10Gi, hence the rejection.

# Create another PVC
[root@k8s-master1 LimitRange]#cp pvc.yaml pvc1.yaml
[root@k8s-master1 LimitRange]#vim pvc1.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage-test
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi 
      
# Deploy:
[root@k8s-master1 LimitRange]#kubectl apply -f pvc1.yaml 
persistentvolumeclaim/pvc-storage-test created
[root@k8s-master1 LimitRange]#kubectl get pvc -ntest
NAME               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-storage-test   Pending                                                     30s
[root@k8s-master1 LimitRange]#kubectl describe limits -ntest
Name:       cpu-memory-min-max
Namespace:  test
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   memory    200Mi  1Gi  400Mi            600Mi          -
Container   cpu       200m   1    400m             600m           -


Name:       default-cpu-memory-min-max
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    300Mi            500Mi          -
Container   cpu       -    -    300m             500m           -


Name:                  storage-min-max
Namespace:             test
Type                   Resource  Min  Max   Default Request  Default Limit  Max Limit/Request Ratio
----                   --------  ---  ---   ---------------  -------------  -----------------------
PersistentVolumeClaim  storage   1Gi  10Gi  -                -              -
#The PVC was created successfully (8Gi is within the 10Gi max), as expected. Note it shows Pending only because no PV in the cluster satisfies the claim yet; the LimitRange admission check itself passed.

Test complete. 😘
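If you want the Pending PVC above to actually bind, a matching PV is needed. A minimal hostPath sketch (the PV name and host path are illustrative assumptions, not part of the original exercise):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage-test
spec:
  capacity:
    storage: 8Gi          # at least the 8Gi the PVC requests
  accessModes:
    - ReadWriteOnce       # must include the PVC's access mode
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv-storage-test
```

After `kubectl apply`, the PVC should move from Pending to Bound once the controller matches it to this PV.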
