Tutorial: Configure kubenet networking in Azure Kubernetes Service (AKS) using Ansible
Important
Ansible 2.8 (or later) is required to run the sample playbooks in this article.
Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes. As a managed Kubernetes service, AKS is free: you pay only for the agent nodes within your clusters, not for the masters.
Using AKS, you can deploy a cluster using the following network models:
- Kubenet networking - Network resources are typically created and configured as the AKS cluster is deployed.
- Azure Container Networking Interface (CNI) networking - The AKS cluster is connected to existing virtual network resources and configurations.
For more information about networking to your applications in AKS, see Network concepts for applications in AKS.
In this article, you learn how to:
- Create an AKS cluster
- Configure Azure kubenet networking
Prerequisites
- Azure subscription: If you don't have an Azure subscription, create a free account before you begin.
- Azure service principal: Create a service principal, making note of the following values: appId, displayName, password, and tenant. (These values feed the credential file shown after this list.)
- Install Ansible: Choose one of the following options:
  - Install and configure Ansible on a Linux virtual machine
  - Configure Azure Cloud Shell and - if you don't have access to a Linux virtual machine - create a virtual machine with Ansible.
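The sample playbooks in this article authenticate with the service principal through the Azure credential file. If you haven't created one yet, a minimal `~/.azure/credentials` follows the layout below; this is a sketch of the standard Ansible Azure credential file, with placeholders standing in for the appId, password, and tenant values you noted earlier:

[default]
; placeholder values - replace with your subscription and service principal details
subscription_id=<subscription_id>
client_id=<service_principal_app_id>
secret=<service_principal_password>
tenant=<tenant_id>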
Create a virtual network and subnet
The playbook code in this section creates the following Azure resources:
- Virtual network
- Subnet within the virtual network
Save the following playbook as `vnet.yml`:

- name: Create vnet
  azure_rm_virtualnetwork:
    resource_group: "{{ resource_group }}"
    name: "{{ name }}"
    address_prefixes_cidr:
      - 10.0.0.0/8

- name: Create subnet
  azure_rm_subnet:
    resource_group: "{{ resource_group }}"
    name: "{{ name }}"
    address_prefix_cidr: 10.240.0.0/16
    virtual_network_name: "{{ name }}"
  register: subnet
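Note that `vnet.yml` is a task file, not a complete play, so it can't be run directly. If you'd like to test it in isolation before continuing, a minimal wrapper play such as the following sketch works; it assumes the resource group already exists and supplies the same variable names the tasks expect:

- hosts: localhost
  vars:
    resource_group: aksansibletest   # assumes this resource group already exists
    name: aksansibletest
  tasks:
    - name: Create the virtual network and subnet
      include_tasks: vnet.yml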
Create an AKS cluster in the virtual network
The playbook code in this section creates an AKS cluster within a virtual network.
Save the following playbook as `aks.yml`:
- name: List supported kubernetes version from Azure
  azure_rm_aks_version:
    location: "{{ location }}"
  register: versions

- name: Create AKS cluster with vnet
  azure_rm_aks:
    resource_group: "{{ resource_group }}"
    name: "{{ name }}"
    dns_prefix: "{{ name }}"
    kubernetes_version: "{{ versions.azure_aks_versions[-1] }}"
    agent_pool_profiles:
      - count: 3
        name: nodepool1
        vm_size: Standard_D2_v2
        vnet_subnet_id: "{{ vnet_subnet_id }}"
    linux_profile:
      admin_username: azureuser
      ssh_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    service_principal:
      client_id: "{{ lookup('ini', 'client_id section=default file=~/.azure/credentials') }}"
      client_secret: "{{ lookup('ini', 'secret section=default file=~/.azure/credentials') }}"
    network_profile:
      network_plugin: kubenet
      pod_cidr: 192.168.0.0/16
      docker_bridge_cidr: 172.17.0.1/16
      dns_service_ip: 10.0.0.10
      service_cidr: 10.0.0.0/16
  register: aks
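The first task registers every Kubernetes version AKS supports in the given location, and the create task then picks the last (newest) entry. To inspect the list before the cluster is created, you can add a small debug task between the two; this sketch uses the same registered variable:

- name: Show the supported Kubernetes versions (optional)
  debug:
    var: versions.azure_aks_versions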
Here are some key notes to consider when working with the sample playbook:
- Use the `azure_rm_aks_version` module to find the supported versions.
- The `vnet_subnet_id` is the subnet created in the previous section.
- The `network_profile` defines the properties for the kubenet network plug-in.
- The `service_cidr` is used to assign internal services in the AKS cluster to an IP address. This IP address range should be an address space that isn't used outside of the AKS clusters. However, you can reuse the same service CIDR for multiple AKS clusters.
- The `dns_service_ip` address should be the ".10" address of your service IP address range.
- The `pod_cidr` should be a large address space that isn't in use elsewhere in your network environment. The address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed. As with the service CIDR, this IP range shouldn't exist outside of the AKS cluster, but it can safely be reused across clusters.
- The pod IP address range is used to assign a /24 address space to each node in the cluster. In the preceding example, the `pod_cidr` of 192.168.0.0/16 assigns the first node 192.168.0.0/24, the second node 192.168.1.0/24, and the third node 192.168.2.0/24.
- As the cluster scales or upgrades, Azure continues to assign a pod IP address range to each new node.
- The playbook loads `ssh_key` from `~/.ssh/id_rsa.pub`. If you modify it, use the single-line format, starting with "ssh-rsa" (without the quotes).
- The `client_id` and `client_secret` values are loaded from `~/.azure/credentials`, which is the default credential file. You can set these values to your service principal or load these values from environment variables:

  client_id: "{{ lookup('env', 'AZURE_CLIENT_ID') }}"
  client_secret: "{{ lookup('env', 'AZURE_SECRET') }}"
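In context, the environment-variable approach replaces the `service_principal` block of `aks.yml` as follows. This is a sketch that assumes `AZURE_CLIENT_ID` and `AZURE_SECRET` are exported in the shell that runs `ansible-playbook`:

service_principal:
  # read the credentials from the environment instead of ~/.azure/credentials
  client_id: "{{ lookup('env', 'AZURE_CLIENT_ID') }}"
  client_secret: "{{ lookup('env', 'AZURE_SECRET') }}"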
Associate the network resources
When you create an AKS cluster, a network security group and route table are created. These resources are managed by AKS and updated when you create and expose services. Associate the network security group and route table with your virtual network subnet as follows.
Save the following playbook as `associate.yml`:
- name: Get route table
  azure_rm_routetable_facts:
    resource_group: "{{ node_resource_group }}"
  register: routetable

- name: Get network security group
  azure_rm_securitygroup_facts:
    resource_group: "{{ node_resource_group }}"
  register: nsg

- name: Parse subnet id
  set_fact:
    subnet_name: "{{ vnet_subnet_id | regex_search(subnet_regex, '\\1') }}"
    subnet_rg: "{{ vnet_subnet_id | regex_search(rg_regex, '\\1') }}"
    subnet_vn: "{{ vnet_subnet_id | regex_search(vn_regex, '\\1') }}"
  vars:
    subnet_regex: '/subnets/(.+)'
    rg_regex: '/resourceGroups/(.+?)/'
    vn_regex: '/virtualNetworks/(.+?)/'

- name: Associate network resources with the node subnet
  azure_rm_subnet:
    name: "{{ subnet_name[0] }}"
    resource_group: "{{ subnet_rg[0] }}"
    virtual_network_name: "{{ subnet_vn[0] }}"
    security_group: "{{ nsg.ansible_facts.azure_securitygroups[0].id }}"
    route_table: "{{ routetable.route_tables[0].id }}"
Here are some key notes to consider when working with the sample playbook:
- The `node_resource_group` is the name of the resource group in which the AKS nodes are created.
- The `vnet_subnet_id` is the subnet created in the previous section.
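If the association task fails, the parsed values are the first thing to check. A debug task such as this sketch, placed after the Parse subnet id task in `associate.yml`, prints what each regular expression extracted (`regex_search` with a capture group returns a list, hence the `[0]` index):

- name: Show the parsed subnet values (optional)
  debug:
    msg: "subnet={{ subnet_name[0] }}, resource group={{ subnet_rg[0] }}, vnet={{ subnet_vn[0] }}"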
Run the sample playbook
This section lists the complete sample playbook that calls the task files created in this article.
Save the following playbook as `aks-kubenet.yml`:
---
- hosts: localhost
  vars:
    resource_group: aksansibletest
    name: aksansibletest
    location: eastus
  tasks:
    - name: Ensure resource group exists
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"

    - name: Create vnet
      include_tasks: vnet.yml

    - name: Create AKS
      vars:
        vnet_subnet_id: "{{ subnet.state.id }}"
      include_tasks: aks.yml

    - name: Associate network resources with the node subnet
      vars:
        vnet_subnet_id: "{{ subnet.state.id }}"
        node_resource_group: "{{ aks.node_resource_group }}"
      include_tasks: associate.yml

    - name: Get details of the AKS
      azure_rm_aks_facts:
        name: "{{ name }}"
        resource_group: "{{ resource_group }}"
        show_kubeconfig: user
      register: output

    - name: Show AKS cluster detail
      debug:
        var: output.aks[0]
In the `vars` section, make the following changes:
- For the `resource_group` key, change the `aksansibletest` value to your resource group name.
- For the `name` key, change the `aksansibletest` value to your AKS name.
- For the `location` key, change the `eastus` value to your resource group location.
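For example, with hypothetical values substituted, the `vars` section would look like the following; replace the names with your own:

vars:
  resource_group: myResourceGroup   # your resource group name
  name: myAKSCluster                # your AKS cluster name
  location: eastus                  # your resource group location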
Run the complete playbook using the `ansible-playbook` command:
ansible-playbook aks-kubenet.yml
Running the playbook shows results similar to the following output:
PLAY [localhost]
TASK [Gathering Facts]
ok: [localhost]
TASK [Ensure resource group exists]
ok: [localhost]
TASK [Create vnet]
included: /home/devops/aks-kubenet/vnet.yml for localhost
TASK [Create vnet]
ok: [localhost]
TASK [Create subnet]
ok: [localhost]
TASK [Create AKS]
included: /home/devops/aks-kubenet/aks.yml for localhost
TASK [List supported kubernetes version from Azure]
[WARNING]: Azure API profile latest does not define an entry for
ContainerServiceClient
ok: [localhost]
TASK [Create AKS cluster with vnet]
changed: [localhost]
TASK [Associate network resources with the node subnet]
included: /home/devops/aks-kubenet/associate.yml for localhost
TASK [Get route table]
ok: [localhost]
TASK [Get network security group]
ok: [localhost]
TASK [Parse subnet id]
ok: [localhost]
TASK [Associate network resources with the node subnet]
changed: [localhost]
TASK [Get details of the AKS]
ok: [localhost]
TASK [Show AKS cluster detail]
ok: [localhost] => {
"output.aks[0]": {
"id": /subscriptions/BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB/resourcegroups/aksansibletest/providers/Microsoft.ContainerService/managedClusters/aksansibletest",
"kube_config": "apiVersion: ...",
"location": "eastus",
"name": "aksansibletest",
"properties": {
"agentPoolProfiles": [
{
"count": 3,
"maxPods": 110,
"name": "nodepool1",
"osDiskSizeGB": 100,
"osType": "Linux",
"storageProfile": "ManagedDisks",
"vmSize": "Standard_D2_v2",
"vnetSubnetID": "/subscriptions/BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB/resourceGroups/aksansibletest/providers/Microsoft.Network/virtualNetworks/aksansibletest/subnets/aksansibletest"
}
],
"dnsPrefix": "aksansibletest",
"enableRBAC": false,
"fqdn": "aksansibletest-cda2b56c.hcp.eastus.azmk8s.io",
"kubernetesVersion": "1.12.6",
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "ssh-rsa ..."
}
]
}
},
"networkProfile": {
"dnsServiceIP": "10.0.0.10",
"dockerBridgeCidr": "172.17.0.1/16",
"networkPlugin": "kubenet",
"podCidr": "192.168.0.0/16",
"serviceCidr": "10.0.0.0/16"
},
"nodeResourceGroup": "MC_aksansibletest_pcaksansibletest_eastus",
"provisioningState": "Succeeded",
"servicePrincipalProfile": {
"clientId": "AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA"
}
},
"type": "Microsoft.ContainerService/ManagedClusters"
}
}
PLAY RECAP
localhost : ok=15 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
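Because the playbook runs `azure_rm_aks_facts` with `show_kubeconfig: user`, the registered output also carries the cluster's user kubeconfig. If you want kubectl access right away, a task like this sketch saves it to disk; the destination path is an arbitrary choice:

- name: Save the kubeconfig returned by azure_rm_aks_facts (optional)
  copy:
    content: "{{ output.aks[0].kube_config }}"
    dest: ~/.kube/aksansibletest-config   # hypothetical path; point KUBECONFIG at it
    mode: '0600'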
Clean up resources
Save the following code as `delete_rg.yml`:

---
- hosts: localhost
  tasks:
    - name: Deleting resource group - "{{ name }}"
      azure_rm_resourcegroup:
        name: "{{ name }}"
        state: absent
      register: rg
    - debug:
        var: rg
Run the playbook using the `ansible-playbook` command. Replace the placeholder with the name of the resource group to be deleted. All resources within the resource group will be deleted.
ansible-playbook delete_rg.yml --extra-vars "name=<resource_group>"
Key points:
- Because of the `register` variable and `debug` section of the playbook, the results display when the command finishes.
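To confirm the deletion completed, you can query for the resource group afterward. The following sketch assumes the `azure_rm_resourcegroup_facts` module that ships with Ansible 2.8; an empty `azure_resourcegroups` list typically means the group is gone:

- name: Check whether the resource group still exists (optional)
  azure_rm_resourcegroup_facts:
    name: "{{ name }}"
  register: rg_check
- name: Show the lookup result
  debug:
    var: rg_check.ansible_facts.azure_resourcegroups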