Tutorial: Configure Azure CNI networking in Azure Kubernetes Service (AKS) using Ansible
Important
Ansible 2.8 (or later) is required to run the sample playbooks in this article.
Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You manage and maintain only the agent nodes. As a managed Kubernetes service, AKS is free - you pay only for the agent nodes within your clusters, not for the masters.
Using AKS, you can deploy a cluster using the following network models:
- Kubenet networking - Network resources are typically created and configured as the AKS cluster is deployed.
- Azure CNI networking - The AKS cluster is connected to existing virtual network (VNet) resources and configurations.
For more information about networking to your applications in AKS, see Network concepts for applications in AKS.
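The network model is selected through the network_profile block of the azure_rm_aks module used later in this article. The following is only a comparison sketch - this tutorial uses the azure plugin:

network_profile:
  # Azure CNI (used in this tutorial): pods receive IP addresses from the VNet subnet
  network_plugin: azure
  # For kubenet instead, you would set:
  # network_plugin: kubenet   # pods receive IPs from an AKS-managed pod CIDR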
In this article, you learn how to:
- Create an AKS cluster
- Configure Azure CNI networking
Prerequisites
- Azure subscription: If you don't have an Azure subscription, create a free account before you begin.
- Azure service principal: Create a service principal, making note of the following values: appId, displayName, password, and tenant. A sample Azure CLI command appears after this list.
- Install Ansible: Do one of the following options:
  - Install and configure Ansible on a Linux virtual machine
  - Configure Azure Cloud Shell and - if you don't have access to a Linux virtual machine - create a virtual machine with Ansible.
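If you still need a service principal, one way to create it is with the Azure CLI. The name ansible-aks-sp below is only an example:

# Example only: creates a service principal and prints appId, displayName, password, and tenant
az ad sp create-for-rbac --name ansible-aks-sp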
Create a virtual network and subnet
The sample playbook code in this section is used to:
- Create a virtual network
- Create a subnet within the virtual network
Save the following playbook as vnet.yml:
- name: Create vnet
  azure_rm_virtualnetwork:
    resource_group: "{{ resource_group }}"
    name: "{{ name }}"
    address_prefixes_cidr:
      - 10.0.0.0/8

- name: Create subnet
  azure_rm_subnet:
    resource_group: "{{ resource_group }}"
    name: "{{ name }}"
    address_prefix_cidr: 10.240.0.0/16
    virtual_network_name: "{{ name }}"
  register: subnet
Create an AKS cluster in the virtual network
The sample playbook code in this section is used to:
- Create an AKS cluster within a virtual network.
Save the following playbook as aks.yml:
- name: List supported kubernetes version from Azure
  azure_rm_aks_version:
    location: "{{ location }}"
  register: versions

- name: Create AKS cluster within a VNet
  azure_rm_aks:
    resource_group: "{{ resource_group }}"
    name: "{{ name }}"
    dns_prefix: "{{ name }}"
    kubernetes_version: "{{ versions.azure_aks_versions[-1] }}"
    agent_pool_profiles:
      - count: 3
        name: nodepool1
        vm_size: Standard_D2_v2
        vnet_subnet_id: "{{ vnet_subnet_id }}"
    linux_profile:
      admin_username: azureuser
      ssh_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    service_principal:
      client_id: "{{ lookup('ini', 'client_id section=default file=~/.azure/credentials') }}"
      client_secret: "{{ lookup('ini', 'secret section=default file=~/.azure/credentials') }}"
    network_profile:
      network_plugin: azure
      docker_bridge_cidr: 172.17.0.1/16
      dns_service_ip: 10.2.0.10
      service_cidr: 10.2.0.0/24
  register: aks
Here are some key notes to consider when working with the sample playbook:
- Use the azure_rm_aks_version module to find the supported versions.
- The vnet_subnet_id is the subnet created in the previous section.
- The playbook loads ssh_key from ~/.ssh/id_rsa.pub. If you modify it, use the single-line format - starting with "ssh-rsa" (without the quotes).
- The client_id and client_secret values are loaded from ~/.azure/credentials, which is the default credential file. You can set these values to your service principal or load these values from environment variables:

  client_id: "{{ lookup('env', 'AZURE_CLIENT_ID') }}"
  client_secret: "{{ lookup('env', 'AZURE_SECRET') }}"
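If you rely on the default credential file, its layout follows the standard Ansible Azure credentials format. A minimal sketch with placeholder values:

[default]
# Placeholder values - replace with your subscription and service principal details
subscription_id=<subscription-id>
client_id=<service-principal-appId>
secret=<service-principal-password>
tenant=<service-principal-tenant-id>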
Run the sample playbook
The sample playbook code in this section is used to test various features shown throughout this tutorial.
Save the following playbook as aks-azure-cni.yml:
---
- hosts: localhost
  vars:
    resource_group: aksansibletest
    name: aksansibletest
    location: eastus
  tasks:
    - name: Ensure resource group exists
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"

    - name: Create vnet
      include_tasks: vnet.yml

    - name: Create AKS
      vars:
        vnet_subnet_id: "{{ subnet.state.id }}"
      include_tasks: aks.yml

    - name: Show AKS cluster detail
      debug:
        var: aks
Here are some key notes to consider when working with the sample playbook:
- Change the aksansibletest value of resource_group to your resource group name.
- Change the aksansibletest value of name to your AKS cluster name.
- Change the eastus value of location to your resource group location.
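Alternatively, you can leave the playbook unchanged and override these variables at run time with --extra-vars, which takes precedence over the play's vars section. The values below are placeholders:

ansible-playbook aks-azure-cni.yml --extra-vars "resource_group=myResourceGroup name=myAKSCluster location=eastus2"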
Run the playbook using the ansible-playbook command:
ansible-playbook aks-azure-cni.yml
After running the playbook, you see output similar to the following results:
PLAY [localhost]
TASK [Gathering Facts]
ok: [localhost]
TASK [Ensure resource group exists]
changed: [localhost]
TASK [Create vnet]
included: /home/devops/aks-cni/vnet.yml for localhost
TASK [Create vnet]
changed: [localhost]
TASK [Create subnet]
changed: [localhost]
TASK [Create AKS]
included: /home/devops/aks-cni/aks.yml for localhost
TASK [List supported kubernetes version from Azure]
[WARNING]: Azure API profile latest does not define an entry for
ContainerServiceClient
ok: [localhost]
TASK [Create AKS cluster with vnet]
changed: [localhost]
TASK [Show AKS cluster detail]
ok: [localhost] => {
    "aks": {
        "aad_profile": {},
        "addon": {},
        "agent_pool_profiles": [
            {
                "count": 3,
                "name": "nodepool1",
                "os_disk_size_gb": 100,
                "os_type": "Linux",
                "storage_profile": "ManagedDisks",
                "vm_size": "Standard_D2_v2",
                "vnet_subnet_id": "/subscriptions/BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB/resourceGroups/aksansibletest/providers/Microsoft.Network/virtualNetworks/aksansibletest/subnets/aksansibletest"
            }
        ],
        "changed": true,
        "dns_prefix": "aksansibletest",
        "enable_rbac": false,
        "failed": false,
        "fqdn": "aksansibletest-0272707d.hcp.eastus.azmk8s.io",
        "id": "/subscriptions/BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB/resourcegroups/aksansibletest/providers/Microsoft.ContainerService/managedClusters/aksansibletest",
        "kube_config": "...",
        "location": "eastus",
        "name": "aksansibletest",
        "network_profile": {
            "dns_service_ip": "10.2.0.10",
            "docker_bridge_cidr": "172.17.0.1/16",
            "network_plugin": "azure",
            "network_policy": null,
            "pod_cidr": null,
            "service_cidr": "10.2.0.0/24"
        },
        "node_resource_group": "MC_aksansibletest_aksansibletest_eastus",
        "provisioning_state": "Succeeded",
        "service_principal_profile": {
            "client_id": "AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA"
        },
        "tags": null,
        "type": "Microsoft.ContainerService/ManagedClusters",
        "warnings": [
            "Azure API profile latest does not define an entry for ContainerServiceClient",
            "Azure API profile latest does not define an entry for ContainerServiceClient"
        ]
    }
}
PLAY RECAP
localhost : ok=9 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
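The kube_config value in the registered aks result holds the contents of the cluster's kubeconfig. As an illustrative sketch only - not part of the tutorial's playbooks, and the file name is arbitrary - you could add a task after "Show AKS cluster detail" to write it to disk:

    - name: Save the kubeconfig returned by the AKS module
      copy:
        content: "{{ aks.kube_config }}"
        dest: ./aksansibletest.kubeconfig
        mode: "0600"

You could then verify connectivity with, for example, KUBECONFIG=./aksansibletest.kubeconfig kubectl get nodes.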
Clean up resources
Save the following code as delete_rg.yml:

---
- hosts: localhost
  tasks:
    - name: Deleting resource group - "{{ name }}"
      azure_rm_resourcegroup:
        name: "{{ name }}"
        state: absent
      register: rg

    - debug:
        var: rg
Run the playbook using the ansible-playbook command. Replace the placeholder with the name of the resource group to be deleted. All resources within the resource group will be deleted.
ansible-playbook delete_rg.yml --extra-vars "name=<resource_group>"
Key points:
- Because of the register variable and debug section of the playbook, the results display when the command finishes.