Overview of Azure CNI networking

Kubernetes supports various plugins that let you add new features and replace or enhance existing cluster behavior. The Container Network Interface (CNI) is a specification that lets developers create plugins to configure container networking. Kubernetes implements the CNI specification, so you can use CNI plugins on your clusters.
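In practice, CNI plugins are driven by JSON configuration files that the container runtime reads from a well-known directory on each node (by default, /etc/cni/net.d). As a rough illustration of what the specification defines, here's a minimal sketch of a CNI network configuration using the reference bridge and host-local plugins; the file name, network name, and subnet are hypothetical:

```bash
# Minimal sketch of a CNI network configuration (hypothetical values).
# The container runtime on each node reads configurations like this from
# /etc/cni/net.d and invokes the named plugin binaries to wire up pod networking.
cat <<'EOF' > /etc/cni/net.d/10-example.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    }
  ]
}
EOF
```

On AKS, you don't author these files yourself; the CNI plugin you select at cluster creation manages them for you.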

Azure CNI plugins

The Azure CNI plugin integrates containers with Azure virtual networks (VNets). Using Azure CNI in a Kubernetes cluster allows pods to be assigned IP addresses from an Azure virtual network. Each pod can then communicate on that virtual network like any other device: it can connect to other pods, to peered virtual networks, to on-premises networks over a VPN or ExpressRoute connection, or to other Azure services through Private Link.
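As a sketch, the following command creates an AKS cluster that uses the traditional Azure CNI plugin, with pods drawing IP addresses directly from an existing VNet subnet. The resource group, cluster name, and subnet resource ID are placeholders:

```bash
# Sketch: create an AKS cluster with the traditional Azure CNI plugin.
# Resource names and the subnet resource ID are hypothetical placeholders.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet" \
  --generate-ssh-keys
```

Because every pod consumes an address from the subnet, plan the subnet's size around the maximum pod count of the cluster.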

In addition to the traditional Azure CNI plugin, Azure Kubernetes Service (AKS) supports the following CNI plugins:

| Plugin | Description | When to use |
| ------ | ----------- | ----------- |
| Azure CNI Overlay | Cluster nodes are deployed into an Azure virtual network subnet. Pods are assigned IP addresses from a private CIDR (Classless Inter-Domain Routing) range that is logically separate from the virtual network hosting the nodes. Pod and node traffic within the cluster uses an overlay network. NAT uses the node's IP address to reach resources outside the cluster. | • You want to scale to a large number of pods, but the IP address space in your virtual network is limited.<br>• Most of the pod communication is within the cluster.<br>• You don't need advanced AKS features, such as virtual nodes. |
| Azure CNI Powered by Cilium | Combines the Azure CNI control plane with the Cilium data plane. Cilium enforces network policies to allow or deny traffic between pods, so you don't need a separate network policy engine. You can choose between two methods for assigning pod IPs: an overlay network or a virtual network. | • You need support for larger clusters.<br>• You want faster service routing, more efficient network policy enforcement, and better observability of cluster traffic.<br>• You want the functionality of the traditional Azure CNI and Azure CNI Overlay plugins with high-performance networking and security. |
| Azure CNI for dynamic allocation of IPs and enhanced subnet support | Extends the traditional Azure CNI plugin to allocate pod IPs from subnets separate from the subnet hosting the AKS cluster. IPs are dynamically allocated to cluster pods from the pod subnet. Node and pod subnets can be scaled independently, and pod subnets can be shared across multiple node pools or clusters in the same virtual network. Because pods have a separate subnet, you can configure virtual network policies for them that differ from the node policies. | • You want the flexibility to scale node and pod subnets independently.<br>• You need support for larger clusters without sacrificing performance.<br>• You want to configure separate virtual network policies for pods. |
| Bring your own (BYO) CNI | AKS clusters are deployed without a preinstalled CNI plugin. From there, you can install your preferred non-Microsoft CNI plugin that's supported in Azure. See Networking concepts for applications in Azure Kubernetes Service (AKS). Keep in mind that Microsoft support can't assist with CNI-related issues in clusters deployed with BYO CNI. | • You want to use the same CNI plugin in AKS that you use in your on-premises Kubernetes environment.<br>• You want to use advanced functionality available in supported non-Microsoft plugins. |
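To make the options concrete, the following sketch shows the az aks create flags that select each of these plugins. The resource group and cluster names are placeholders, and the commands are illustrative rather than complete deployment recipes:

```bash
# Azure CNI Overlay: pods get IPs from a private CIDR separate from the VNet.
az aks create -g myResourceGroup -n overlayCluster \
  --network-plugin azure --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16

# Azure CNI Powered by Cilium: Azure CNI control plane + Cilium data plane.
az aks create -g myResourceGroup -n ciliumCluster \
  --network-plugin azure --network-plugin-mode overlay \
  --network-dataplane cilium

# Dynamic IP allocation: pod IPs come from a dedicated pod subnet
# (subnet resource IDs below are hypothetical placeholders).
az aks create -g myResourceGroup -n dynamicIpCluster \
  --network-plugin azure \
  --vnet-subnet-id <node-subnet-resource-id> \
  --pod-subnet-id <pod-subnet-resource-id>

# BYO CNI: deploy without a CNI plugin, then install your own afterward.
az aks create -g myResourceGroup -n byoCniCluster \
  --network-plugin none
```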