The resource limit for the pod has been set as:
resources:
  limits:
    cpu: 500m
    memory: 5Gi
and there is 10G of memory left on the node.
I created 5 pods successfully in a short time, and the node may still have some memory left, say 8G.
Memory usage grows over time and approaches the limits (5G x 5 = 25G > 10G), and then the node will go down.
To ensure usability, is there a way to set a resource limit on a node?
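For reference, this is roughly how that limit sits inside a pod spec (the pod name, container name and image below are hypothetical placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: mem-limited-pod          # hypothetical name
spec:
  containers:
  - name: app                    # hypothetical container name
    image: example/app:latest    # hypothetical image
    resources:
      limits:
        cpu: 500m
        memory: 5Gi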
Update
The main problem is that a pod's memory usage does not always equal its limit, especially right after it starts. So pods can be created almost without bound and then drive all nodes to full load, which is not good. There may need to be something that allocates resources, rather than just setting a limit.
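If I understand it right, requests are meant to play that allocation role: the scheduler reserves the requested amount, while the limit only caps the worst case. A sketch (the 2Gi request is a made-up value for a pod that starts small but may grow up to its 5Gi limit):

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 2Gi    # hypothetical typical footprint that the scheduler reserves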
Update 2
I tested limits and requests again:
resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 5Gi
The total memory is 15G, with 14G still available, yet 3 pods are scheduled and run successfully:
> free -mh
              total        used        free      shared  buff/cache   available
Mem:            15G        1.1G        8.3G        3.4M        6.2G         14G
Swap:            0B          0B          0B

> docker stats
CONTAINER       CPU %     MEM USAGE / LIMIT       MEM %     NET I/O      BLOCK I/O
44eaa3e2d68c    0.63%     1.939 GB / 5.369 GB     36.11%    0 B / 0 B    47.84 MB / 0 B
87099000037c    0.58%     2.187 GB / 5.369 GB     40.74%    0 B / 0 B    48.01 MB / 0 B
d5954ab37642    0.58%     1.936 GB / 5.369 GB     36.07%    0 B / 0 B    47.81 MB / 0 B
It seems that the node will crash soon XD
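My guess at why exactly 3 pods fit: the scheduler presumably counts only requests, not actual usage, so with roughly 15G of capacity the checks would go:

1st pod: 15G - 0Gi  >= 5Gi  -> scheduled
2nd pod: 15G - 5Gi  >= 5Gi  -> scheduled
3rd pod: 15G - 10Gi >= 5Gi  -> scheduled
4th pod: 15G - 15Gi <  5Gi  -> would not fit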
Update 3
Now I change the resources: request 5Gi and limit 8Gi:
resources:
  limits:
    cpu: 500m
    memory: 8Gi
  requests:
    cpu: 500m
    memory: 5Gi
Results: 
According to the k8s source code for the resource check:

The total memory is only 15G, and all the pods need 24G in total, so they may all be killed. (A single one of my containers usually costs more than 16G if not limited.)
This means you are better off keeping requests exactly equal to limits, to avoid the pod being killed or the node crashing. And if the requests value is not specified, it is set to the limit by default, so what exactly is requests used for? I think limits alone are fully enough, or, IMO, contrary to what k8s claims, I would rather set the resource request higher than the limit, to ensure the usability of the nodes.
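For example, with only limits specified, the effective requests presumably end up equal to them anyway:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  # requests omitted: defaults to the limits above (cpu: 500m, memory: 5Gi)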
Update 4
Kubernetes 1.1 schedules pods' memory requests using the formula:
(capacity - memoryRequested) >= podRequest.memory
As Vishnu Kannan said, Kubernetes does not seem to care about actual memory usage. So the node will crash if other applications use a lot of memory.
Fortunately, since commit e64fe822, the formula has been changed to:
(allocatable - memoryRequested) >= podRequest.memory
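A made-up example of the difference (all numbers are hypothetical): suppose a node reports 15G capacity but only 14G allocatable once system daemons are accounted for, and 10Gi is already requested by running pods. For a new pod requesting 5Gi:

old check: (15G capacity    - 10Gi requested) >= 5Gi  -> pod is scheduled
new check: (14G allocatable - 10Gi requested) <  5Gi  -> pod is rejected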
waiting for k8s v1.2!