How To
Summary
Looking at the guidelines for AKS, there is a limit of 250 pods per node (even though the recommendation is 110).
I was wondering whether this is a soft limit that Microsoft can increase in some way, or whether it is a hard limit. We run single-node installations for several reasons and we're close to reaching this limit, so if there is no alternative we would have to switch to on-prem clusters.
Objective
Is the AKS limit of 250 pods per node (with a recommendation of 110) a hard limit, or can it be increased?
Environment
Azure Kubernetes Service (AKS)
Steps
The maximum number of pods per node in Azure Kubernetes Service (AKS) is 250, regardless of the networking plugin used (Kubenet or Azure CNI, including Azure CNI Overlay).
Unfortunately, this is a hard limit tied to the fixed /24 pod CIDR block allocated per node, which provides 256 IP addresses (some of which are reserved, leaving an effective maximum of 250 pods).
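As a quick sanity check on those numbers, the address counts for the CIDR sizes discussed in this article can be computed directly (a sketch; the exact number of reserved addresses is platform-specific):

```shell
# Address counts for the pod CIDR sizes discussed above.
# A /24 holds 256 addresses; AKS reserves a handful, capping pods at 250.
for prefix in 24 23 22; do
  total=$(( 1 << (32 - prefix) ))
  echo "/$prefix: $total addresses"
done
# Prints: /24: 256 addresses, /23: 512 addresses, /22: 1024 addresses
```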
While you can configure the max pods per node up to 250 during cluster or node pool creation (via the Azure CLI, ARM templates, or the portal), there is no option to request an increase beyond this from Microsoft support; it is not a quota that can be raised like vCPU limits or similar resources.
The recommendation of 110 pods per node exists for performance and stability reasons (e.g., to avoid kubelet overload or networking strain), especially with Windows containers or certain workloads, but configuring up to 250 is allowed if your setup can handle it.
If you're approaching this limit on a single-node cluster and can't scale horizontally (e.g., by adding more nodes), switching to an on-premises Kubernetes setup (like with kubeadm or a managed distribution) would indeed allow more flexibility.
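To see how close a node actually is to its configured limit, one way (assuming kubectl access to the cluster; the node name below is a placeholder) is:

```shell
# Count non-terminated pods scheduled on a given node (node name is a placeholder).
NODE=aks-nodepool1-12345678-vmss000000
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=$NODE,status.phase!=Succeeded,status.phase!=Failed \
  --no-headers | wc -l

# Compare against the node's configured pod capacity:
kubectl get node $NODE -o jsonpath='{.status.capacity.pods}'
```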
In a self-managed cluster, you could adjust the pod CIDR to a larger subnet (e.g., /23 for ~510 pods or /22 for ~1020), though this requires careful planning for IP management, performance testing, and potential kubelet tuning to handle higher densities.
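As a sketch of what that looks like in a self-managed (kubeadm-based) cluster — the CIDR and maxPods values below are illustrative, not a tested production configuration, and must match your own network plan:

```shell
# Illustrative only: bootstrap with a wide pod CIDR, then raise the kubelet's
# per-node pod cap (default 110) in its config file and restart the kubelet.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm writes the kubelet configuration to /var/lib/kubelet/config.yaml;
# set maxPods there (update the line if present, otherwise append it):
grep -q '^maxPods:' /var/lib/kubelet/config.yaml \
  && sudo sed -i 's/^maxPods:.*/maxPods: 500/' /var/lib/kubelet/config.yaml \
  || echo 'maxPods: 500' | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet
```

Densities this high also call for the performance testing and kubelet tuning mentioned above.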
To summarize the above:
Pod Limits in AKS
Hard maximum = 250 pods/node
- This is a fixed platform limit and can’t be raised via support ticket.
Default limits depend on networking plugin:
- Kubenet: 110 pods/node by default (CLI/ARM deployments)
- Azure CNI: 30 pods/node by default
Configuring maxPods
- You can override the default at node pool creation using the --max-pods parameter (CLI or ARM template), up to the 250 limit.
- Existing node pools:
  - You cannot modify maxPods on an existing node pool.
  - Instead, add a new node pool with a higher maxPods, then migrate workloads accordingly.
Limit Explanation
- AKS reserves IP addresses based on the maxPods value per node.
- The cap keeps Kubernetes components (kubelet, the CNI plugin, the control plane, and networking) within tested scale limits.
- Beyond 250, AKS cannot guarantee IP availability or system stability.
Recommendations / next steps:
Given you're near the 250 pods/node ceiling on a single-node cluster:
- Confirm your networking plugin (Kubenet or Azure CNI).
- If not already at the 250 limit, recreate or add a node pool with --max-pods=250.
- If you're already at 250, you're at the ceiling.
Options:
- Scale out by adding more nodes or pools.
- Shift to a multi-node cluster on AKS.
- Or consider on-prem if you require a single node.
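The first two checks above can be done with the Azure CLI (resource group, cluster, and node pool names are placeholders):

```shell
# Confirm the network plugin in use (kubenet or azure):
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query networkProfile.networkPlugin -o tsv

# Confirm the current maxPods setting on the node pool:
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster \
  --name nodepool1 --query maxPods -o tsv
```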
References:
Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)
IP address planning for your Azure Kubernetes Service (AKS) clusters
Azure/AKS GitHub issue #4367 (https://github.com/Azure/AKS/issues/4367)
Document Location
Worldwide
Document Information
Modified date:
28 January 2026
UID
ibm17258191