Allocate or limit resources for pods in Kubernetes?

The resource limit for the Pod has been set to:

    resources:
      limits:
        cpu: 500m
        memory: 5Gi
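
For context, here is how that limit-only spec sits in a full pod manifest (a minimal sketch; the pod name and image are made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: mem-demo            # hypothetical name
    spec:
      containers:
      - name: app
        image: nginx            # placeholder image
        resources:
          limits:               # hard cap; no request is set here
            cpu: 500m
            memory: 5Gi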

and there is 10G of memory left on the node.

I created 5 pods in a short time successfully, and the node may still have some free memory remaining, e.g. 8G.

Memory usage increases over time until it reaches the limit (5G x 5 = 25G > 10G), and then the node becomes unresponsive.

To ensure usability, is there a way to set a resource limit at the node level?

Update

The main problem is that a pod's memory usage does not always equal its limit, especially when it has just started. So an unlimited number of pods can be created very quickly, and they later drive every node to full load. That's not good. There might be a way to allocate resources up front rather than just set a limit; see the sketch below.
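
What I am looking for is something like the requests field, which reserves capacity on the node at scheduling time rather than only capping usage (a minimal sketch with the same figures as above):

    resources:
      requests:        # reserved on the node when the pod is scheduled
        cpu: 500m
        memory: 5Gi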

Update 2

I tested limits and requests again:

    resources:
      limits:
        cpu: 500m
        memory: 5Gi
      requests:
        cpu: 500m
        memory: 5Gi

The total memory is 15G, and 14G is still available, yet 3 pods are scheduled and run successfully:

    > free -mh
                  total    used    free   shared  buff/cache  available
    Mem:            15G    1.1G    8.3G     3.4M        6.2G        14G
    Swap:            0B      0B      0B

    > docker stats
    CONTAINER      CPU %   MEM USAGE / LIMIT     MEM %    NET I/O     BLOCK I/O
    44eaa3e2d68c   0.63%   1.939 GB / 5.369 GB   36.11%   0 B / 0 B   47.84 MB / 0 B
    87099000037c   0.58%   2.187 GB / 5.369 GB   40.74%   0 B / 0 B   48.01 MB / 0 B
    d5954ab37642   0.58%   1.936 GB / 5.369 GB   36.07%   0 B / 0 B   47.81 MB / 0 B
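
As a side note, the 5.369 GB limit shown by docker stats is just the 5Gi limit converted to decimal bytes:

    # 5Gi = 5 x 1024^3 bytes = 5,368,709,120 bytes ≈ 5.369 GB (decimal)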

It seems the node will crash soon XD

Update 3

Now I change the resources, requesting 5Gi and limiting to 8Gi:

    resources:
      limits:
        cpu: 500m
        memory: 8Gi
      requests:
        cpu: 500m
        memory: 5Gi

Results: (screenshot)

According to the k8s source code for the resource check:

(screenshot of the relevant source)

The total memory is only 15G, while all the pods together require 24G, so the pods may all be killed. (A single one of my containers usually costs more than 16G if not limited.)
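
Spelling out the arithmetic behind that (my own summary of the test above):

    # requests: 3 pods x 5Gi = 15Gi <= 15G capacity -> all 3 pods are scheduled
    # limits:   3 pods x 8Gi = 24Gi >  15G capacity -> actual usage can outgrow
    #           the node, so the pods may all be killed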

This means you had better keep requests exactly equal to limits, to avoid a pod being killed or a node crashing (see the sketch below). If the requests value is not specified, it defaults to the limit, so what exactly are requests used for? I think limits alone are totally enough; or, IMO, contrary to what K8s claims, I would rather set the resource request greater than the limit to ensure the usability of the nodes.
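
A compact sketch of the requests-equal-to-limits shape, with the reasoning in comments (my own summary; same figures as Update 2):

    resources:
      requests:
        cpu: 500m
        memory: 5Gi    # the scheduler reserves this much per pod
      limits:
        cpu: 500m
        memory: 5Gi    # runtime cap equals the request, so a 15G node admits
                       # at most 3 such pods and their usage stays within 15Gi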

Update 4

Kubernetes 1.1 schedules pods' memory requests using the formula:

(capacity - memoryRequested) >= podRequest.memory
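
Plugging in the numbers from Update 2 shows why the third pod was admitted (my own arithmetic, assuming the full 15G counts as capacity):

    # capacity          = 15G
    # memoryRequested   = 10Gi   (two 5Gi pods already scheduled)
    # podRequest.memory = 5Gi
    # 15G - 10Gi >= 5Gi -> the third pod is admitted, even though actual usage
    #                      may later exceed what the node can serve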

Kubernetes does not seem to care about actual memory usage, as Vishnu Kannan said. So a node can still be overloaded if memory is heavily used by other applications.

Fortunately, as of commit e64fe822, the formula has been changed to:

(allocatable - memoryRequested) >= podRequest.memory
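
allocatable subtracts what the node reserves for system daemons, so the scheduler stops earlier (illustrative numbers only; the 1G reservation is an assumption, not a measured value):

    # capacity    = 15G
    # allocatable = 14G   (capacity minus an assumed 1G system reservation)
    # after two 5Gi pods: 14G - 10Gi = 4G < 5Gi -> a third 5Gi pod is rejected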

Waiting for k8s v1.2!

2 answers

Kubernetes resource specifications have two fields: request and limit.

limits specify the maximum amount of the resource a container can use. For memory, a container that exceeds its limit will be OOM-killed. For CPU, its usage may be throttled.

requests are different: they ensure that the node the pod is placed on has at least that much capacity available for it. If you want to make sure your containers can grow to a particular size without the node running out of resources, specify a request of that size. This will limit how many pods you can schedule, though; a 10G node can only fit 2 pods with a 5G memory request.
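
For example, a pod that must be able to grow to 5G safely would carry a request of that size (a sketch; the pod name and image are made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: reserved-mem-pod    # hypothetical name
    spec:
      containers:
      - name: app
        image: myapp:latest     # placeholder image
        resources:
          requests:
            memory: 5Gi         # the scheduler reserves 5Gi on the node
          limits:
            memory: 5Gi         # the container is OOM-killed beyond this

Only two such pods fit on a 10G node; a third one stays Pending until capacity frees up.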


Kubernetes supports Quality of Service. If your pods have limits set, they belong to the Guaranteed class, and the probability of their being killed due to system memory pressure is extremely low. However, if the docker daemon or some other daemon you run on the node consumes a lot of memory, there is a chance that Guaranteed pods get killed.

The Kube scheduler takes memory capacity and allocated memory into account when scheduling. For instance, you cannot schedule more than two pods, each requesting 5 GB, on a 10 GB node.

Actual memory usage is not taken into account by Kubernetes for scheduling purposes at this time.


Source: https://habr.com/ru/post/1243815/

