Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)
Azure CNI Powered by Cilium combines the robust control plane of Azure CNI with the data plane of Cilium to provide high-performance networking and security.
By making use of eBPF programs loaded into the Linux kernel and a more efficient API object structure, Azure CNI Powered by Cilium provides the following benefits:
Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
Improved Service routing
More efficient network policy enforcement
Better observability of cluster traffic
Support for larger clusters (more nodes, pods, and services)
IP Address Management (IPAM) with Azure CNI Powered by Cilium
Azure CNI Powered by Cilium can be deployed using two different methods for assigning pod IPs:
Assign IP addresses from an overlay network (similar to Azure CNI Overlay mode)
Assign IP addresses from a virtual network (similar to existing Azure CNI with Dynamic Pod IP Assignment)
If you aren't sure which option to select, read "Choosing a network model to use".
Network Policy Enforcement
Cilium enforces network policies to allow or deny traffic between pods. With Cilium, you don't need to install a separate network policy engine such as Azure Network Policy Manager or Calico.
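For example, a standard Kubernetes NetworkPolicy is enforced directly by Cilium with no additional engine installed. The following sketch (label names are illustrative) allows ingress to pods labeled app: web only from pods labeled app: frontend:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web                  # policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only these pods may connect
```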
Limitations
Azure CNI Powered by Cilium currently has the following limitations:
Available only for Linux and not for Windows.
Cilium L7 policy enforcement is disabled.
Hubble is disabled.
Network policies cannot use ipBlock to allow access to node or pod IPs. See the frequently asked questions for details and the recommended workaround.
Multiple Kubernetes services can't use the same host port with different protocols (for example, TCP and UDP) (Cilium issue #14287).
Network policies may be enforced on reply packets when a pod connects to itself via service cluster IP (Cilium issue #19406).
Network policies are not applied to pods using host networking (spec.hostNetwork: true) because these pods use the host identity instead of having individual identities.
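To illustrate, a pod declared like this hypothetical example runs with spec.hostNetwork: true, uses the host identity, and is therefore exempt from network policy enforcement:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-example   # illustrative name
spec:
  hostNetwork: true        # pod shares the node's network namespace
  containers:
    - name: app
      image: nginx         # illustrative image
```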
Prerequisites
Azure CLI version 2.48.1 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
If you're using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
Note
Previous AKS API versions (2022-09-02-preview through 2023-01-02-preview) used the field networkProfile.ebpfDataplane=cilium. AKS API versions since 2023-02-02-preview use the field networkProfile.networkDataplane=cilium to enable Azure CNI Powered by Cilium.
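For reference, an abridged ARM template fragment using the current field might look like the following sketch; the networkDataplane value is from this article, while the surrounding property names follow the standard managedClusters networkProfile schema:

```json
"properties": {
  "networkProfile": {
    "networkPlugin": "azure",
    "networkPluginMode": "overlay",
    "networkDataplane": "cilium"
  }
}
```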
Create a new AKS Cluster with Azure CNI Powered by Cilium
Option 1: Assign IP addresses from an overlay network
Use the following commands to create a cluster with an overlay network and Cilium. Replace the values for <clusterName>, <resourceGroupName>, and <location>:
az aks create \
--name <clusterName> \
--resource-group <resourceGroupName> \
--location <location> \
--network-plugin azure \
--network-plugin-mode overlay \
--pod-cidr 192.168.0.0/16 \
--network-dataplane cilium \
--generate-ssh-keys
Note
The --network-dataplane cilium flag replaces the deprecated --enable-ebpf-dataplane flag used in earlier versions of the aks-preview CLI extension.
Option 2: Assign IP addresses from a virtual network
Run the following commands to create a resource group and virtual network with a subnet for nodes and a subnet for pods.
# Create the resource group
az group create --name <resourceGroupName> --location <location>
# Create a virtual network with a subnet for nodes and a subnet for pods
az network vnet create --resource-group <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
az network vnet subnet create --resource-group <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
az network vnet subnet create --resource-group <resourceGroupName> --vnet-name <vnetName> --name podsubnet --address-prefixes <address prefix, example: 10.241.0.0/16> -o none
Create the cluster using --network-dataplane cilium:
az aks create \
--name <clusterName> \
--resource-group <resourceGroupName> \
--location <location> \
--max-pods 250 \
--network-plugin azure \
--vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
--pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/podsubnet \
--network-dataplane cilium \
--generate-ssh-keys
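After either create command completes, you can confirm which data plane the cluster uses with a query such as the following (the --query path assumes the networkProfile.networkDataplane field described earlier):

```shell
az aks show \
  --name <clusterName> \
  --resource-group <resourceGroupName> \
  --query networkProfile.networkDataplane \
  --output tsv
```

The command should print cilium when Azure CNI Powered by Cilium is enabled.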
Update an existing cluster to Azure CNI Powered by Cilium
Note
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses either Azure CNI Overlay or Azure CNI with dynamic IP allocation. Clusters using the original Azure CNI (without overlay mode or dynamic IP allocation) aren't eligible.
- The cluster does not have any Windows node pools.
Note
When enabling Cilium in a cluster with a different network policy engine (Azure NPM or Calico), the network policy engine will be uninstalled and replaced with Cilium. See Uninstall Azure Network Policy Manager or Calico for more details.
Warning
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged. Cilium will begin enforcing network policies only after all nodes have been re-imaged.
To perform the upgrade, you need Azure CLI version 2.52.0 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
Use the following command to upgrade an existing cluster to Azure CNI Powered by Cilium. Replace the values for <clusterName> and <resourceGroupName>:
az aks update --name <clusterName> --resource-group <resourceGroupName> \
--network-dataplane cilium
Note
After enabling Azure CNI Powered by Cilium on an AKS cluster, you can't disable it. If you want to use a different network data plane, you must create a new AKS cluster.
Frequently asked questions
Can I customize Cilium configuration?
No, AKS manages the Cilium configuration and it can't be modified. We recommend that customers who require more control use AKS BYO CNI and install Cilium manually.
Can I use CiliumNetworkPolicy custom resources instead of Kubernetes NetworkPolicy resources?
CiliumNetworkPolicy custom resources are partially supported. Customers may use FQDN filtering as part of the Advanced Container Networking Services feature bundle.
This CiliumNetworkPolicy example demonstrates a sample matching pattern for services that match the specified label.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "example-fqdn"
spec:
  endpointSelector:
    matchLabels:
      foo: bar
  egress:
    - toFQDNs:
        - matchPattern: "*.example.com"
Why is traffic being blocked when the NetworkPolicy has an ipBlock that allows the IP address?
A limitation of Azure CNI Powered by Cilium is that a NetworkPolicy's ipBlock cannot select pod or node IPs.
For example, this NetworkPolicy has an ipBlock that allows all egress to 0.0.0.0/0:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ipblock
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0 # This will still block pod and node IPs.
However, when this NetworkPolicy is applied, Cilium blocks egress to pod and node IPs even though those IPs are within the ipBlock CIDR.
As a workaround, you can add a namespaceSelector and podSelector to select pods. The example below selects all pods in all namespaces:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ipblock
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
        - namespaceSelector: {}
        - podSelector: {}
Note
It is not currently possible to specify a NetworkPolicy with an ipBlock to allow traffic to node IPs.
Does AKS configure CPU or memory limits on the Cilium daemonset?
No, AKS doesn't configure CPU or memory limits on the Cilium daemonset because Cilium is a critical system component for pod networking and network policy enforcement.
Does Azure CNI Powered by Cilium use kube-proxy?
No, AKS clusters created with Cilium as the network dataplane don't use kube-proxy. If an AKS cluster using Azure CNI Overlay or Azure CNI with dynamic IP allocation is upgraded to Azure CNI Powered by Cilium, workloads on new nodes are created without kube-proxy. Existing workloads are also migrated to run without kube-proxy as part of this upgrade process.
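You can verify this on a running cluster. The following commands are a sketch; the k8s-app=cilium label follows the upstream Cilium convention and is assumed here:

```shell
# A kube-proxy daemonset should not exist on a Cilium-dataplane cluster;
# this command is expected to return a NotFound error.
kubectl get daemonset kube-proxy -n kube-system

# The Cilium agent runs as a daemonset in kube-system instead.
kubectl get daemonset -n kube-system -l k8s-app=cilium
```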
Next steps
Learn more about networking in AKS in the following articles:
Azure Kubernetes Service