As you know, in EKS each pod gets its own private IP address. This means that the maximum number of pods a node can handle is directly tied to the maximum number of Elastic Network Interfaces (ENIs), and IP addresses per ENI, supported by the EC2 instance type backing the node.
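The per-instance-type number AWS publishes follows from those ENI limits: ENIs × (IPv4 addresses per ENI − 1) + 2, where one IP per ENI is reserved for the node itself and the +2 accounts for pods that use host networking. A minimal sketch, assuming the classic VPC CNI setup without prefix delegation:

```shell
# Max pods a node can host, derived from its ENI limits
# (no VPC CNI prefix delegation assumed).
# Formula: ENIs * (IPv4 addresses per ENI - 1) + 2
max_pods_from_eni() {
  local enis=$1 ips_per_eni=$2
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

# Example: m5.large supports 3 ENIs with 10 IPv4 addresses each.
max_pods_from_eni 3 10   # prints 29
```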
I recently discovered that there are two hard limits at play here, so I am sharing them in this small post in case it saves you some time.
First, take the number listed for your node's instance type in this file.
If your node is inside a managed node group where the AMI is pinned:
- the number above is your max
If your node is inside a managed node group where the AMI is NOT pinned:
- if the number above is < 110, then it is your max
- if the number above is > 110 and your instance type has fewer than 30 vCPUs, then 110 is your max (explained here)
- if the number above is > 110 and your instance type has 30 or more vCPUs, then your max is the lower of that number and 250 (explained here)
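The branching above can be sketched as a small helper. The 110/250 numbers act as upper bounds (the effective max is the lower of the ENI-derived number and the cap), and treating "fewer than 30 vCPUs" as a strict less-than is my assumption here:

```shell
# Effective max pods per node, following the rules above.
#   $1 = number from the per-instance-type file
#   $2 = vCPUs of the instance type
#   $3 = "pinned" if the node group pins its AMI
effective_max_pods() {
  local eni_max=$1 vcpus=$2 ami=$3
  if [ "$ami" = "pinned" ]; then
    echo "$eni_max"                             # pinned AMI: file value applies as-is
  elif [ "$eni_max" -lt 110 ]; then
    echo "$eni_max"                             # already below both caps
  elif [ "$vcpus" -lt 30 ]; then
    echo 110                                    # small instance: capped at 110
  else
    echo $(( eni_max < 250 ? eni_max : 250 ))   # large instance: capped at 250
  fi
}

effective_max_pods 29 2 managed    # m5.large: 29
effective_max_pods 737 96 managed  # m5.24xlarge: capped at 250
effective_max_pods 737 96 pinned   # pinned AMI keeps 737
```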
Multiply this final number by the number of nodes in your cluster (assuming they all use the same instance type) and you get the maximum number of pods that can run in your EKS cluster.
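For example, a hypothetical cluster of 10 m5.large nodes (29 pods each, per the file) tops out at:

```shell
# Cluster-wide ceiling: per-node max * node count (single instance type).
per_node_max=29   # m5.large, from the per-instance-type file
node_count=10
echo $(( per_node_max * node_count ))   # prints 290
```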
If you want to double-check this value for a node, you can simply run these kubectl commands:
kubectl get nodes
kubectl describe node NODE_NAME | grep 'pods\|InternalIP'