Kubernetes Resource Limits

Containers and pods in Kubernetes are dynamically distributed across the available nodes. Processes in a container consume CPU cycles and memory. Kubernetes does not know how many resources a container will consume unless the container declares them.

All resources are finite. If too many pods are running in a Kubernetes cluster, no pod will work properly. If Kubernetes can make an informed decision that no more pods should be started, because the next pod can be expected to consume just as many CPU cores, it will not start the pod.

Please note that the cluster is oversubscribed. The admin-enforced limits are rough guidelines to prevent accidental resource exhaustion, not guaranteed resources.

Limits vs Requests

A request is the lower bound Kubernetes will guarantee. A pod will not be started unless Kubernetes has these resources available. If you define 128 CPU cores as a request, the pod will most likely never run, because Kubernetes will need to find a node with at least 128 cores. If you define two pods, one requesting 10 cores and one requesting 7, they will not be able to run together on a 16-core node; only one will be started.
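The 10-plus-7-core scenario can be sketched as two pod specs (the names and image are illustrative placeholders, not from a real cluster). Their CPU requests sum to 17 cores, so a 16-core node can schedule only one of them:

```yaml
apiVersion: v1
kind: Pod
metadata:
    name: big1
spec:
    containers:
        - name: worker
          image: aco/base:8
          resources:
              requests:
                  cpu: "10"
---
apiVersion: v1
kind: Pod
metadata:
    name: big2
spec:
    containers:
        - name: worker
          image: aco/base:8
          resources:
              requests:
                  cpu: "7"
```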

As long as spare CPU cycles are available, any pod can use them. As long as enough memory is available, pods can use it. But if Kubernetes has to reclaim memory, it will restart pods. Remember: Kubernetes treats restarting pods as a valid solution for a lot of things (rebalancing, updates, resource limits, etc.).

A limit is a hard upper bound and will be enforced. Any pod using more CPU cycles than its declared limit will be throttled. Consuming more memory than allowed will result in termination and restart.
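A resources section declaring both kinds of limit might look like this (the values are illustrative):

```yaml
resources:
    limits:
        cpu: 500m     # exceeding this throttles the container
        memory: 128Mi # exceeding this gets the container killed and restarted
```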

Definition

apiVersion: v1
kind: Pod
metadata:
    name: pod1
spec:
    containers:
        - name: container1
          image: aco/base:8
          command: ['/bin/sh']
          tty: true
          resources:
              requests:
                  memory: 16Mi
              limits:
                  memory: 16Mi 

The pod will run and the container will be created. Attach to it with kubectl attach -it pod1 -c container1. Consume memory (for example with tail /dev/zero) and Kubernetes will kill the process.

Checks

What is the limit of the namespace? kubectl describe namespace my-namespace
...
Resource Quotas
  Name:          quota
  Resource       Used  Hard
  --------       ---   ---
  limits.cpu     1     16
  limits.memory  16Mi  16Gi
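The quota shown above would come from a ResourceQuota object along these lines (the name quota matches the output; the exact spec is a reconstruction, not copied from the cluster):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
    name: quota
spec:
    hard:
        limits.cpu: "16"
        limits.memory: 16Gi
```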

What are my pods consuming? kubectl top pod
NAME   CPU(cores)   MEMORY(bytes)   
pod1   0m           0Mi 

Defaults

If a namespace has quota limits, all containers must declare their own. For containers that do not, a default limit is applied:

kubectl describe limits default
Name:       default
Namespace:  
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    100m             1              -
Container   memory    -    -    250Mi            1Gi            -

So by default a container requests 0.1 CPU cores and 250 MiB of RAM, and is limited to 1 CPU core and 1 GiB of RAM.
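Those defaults typically come from a LimitRange object. The following sketch would reproduce the describe output above (reconstructed from that output, not copied from the cluster):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
    name: default
spec:
    limits:
        - type: Container
          defaultRequest:
              cpu: 100m
              memory: 250Mi
          default:
              cpu: "1"
              memory: 1Gi
```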

-- ChristophHandel - 18 May 2022
