Azure Policy extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your cluster components in a centralized, consistent manner. Cluster components include pods, containers, and namespaces.
Azure Policy makes it possible to manage and report on the compliance state of your Kubernetes cluster components from one place. By using Azure Policy's Add-on or Extension, governing your cluster components is enhanced with Azure Policy features, like the ability to use selectors and overrides for safe policy rollout and rollback.
Azure Policy for Kubernetes supports the following cluster environments:

- Azure Kubernetes Service (AKS), through the Azure Policy Add-on for AKS
- Azure Arc-enabled Kubernetes, through the Azure Policy Extension for Arc-enabled Kubernetes
Important
The Azure Policy Add-on Helm model and the add-on for AKS Engine have been deprecated. Follow the instructions to remove the add-ons.
By installing Azure Policy's add-on or extension on your Kubernetes clusters, Azure Policy performs the following functions:

- Checks with the Azure Policy service for policy assignments to the cluster.
- Deploys policy definitions into the cluster as constraint template and constraint custom resources, or as mutation-related resources, depending on the policy definition content.
- Reports auditing and compliance details back to the Azure Policy service.
To enable and use Azure Policy with your Kubernetes cluster, take the following actions:
Configure your Kubernetes cluster and install the Azure Kubernetes Service (AKS) add-on or Azure Policy's Extension for Arc-enabled Kubernetes clusters (depending on your cluster type).
Note
For common issues with installation, see Troubleshoot - Azure Policy Add-on.
Create or use a sample Azure Policy definition for Kubernetes.
Review limitations and recommendations in our FAQ section.
The Azure Policy Add-on for AKS is part of Kubernetes version 1.27 with long term support (LTS).
Register the resource providers and preview features.
Azure portal:
Register the Microsoft.PolicyInsights resource provider. For steps, see Resource providers and types.
Azure CLI:
# Log in first with az login if you're not using Cloud Shell
# Provider register: Register the Azure Policy provider
az provider register --namespace Microsoft.PolicyInsights
You need Azure CLI version 2.12.0 or later installed and configured. To find the version, run the az --version command. If you need to install or upgrade, see How to install the Azure CLI.
The AKS cluster must be a supported Kubernetes version in Azure Kubernetes Service (AKS). Use the following script to validate your AKS cluster version:
# Log in first with az login if you're not using Cloud Shell
# Look for the value in kubernetesVersion
az aks list
Open ports for the Azure Policy extension. The Azure Policy extension uses these domains and ports to fetch policy definitions and assignments and report compliance of the cluster back to Azure Policy.
Domain | Port |
---|---|
data.policy.core.windows.net | 443 |
store.policy.core.windows.net | 443 |
login.windows.net | 443 |
dc.services.visualstudio.com | 443 |
After the prerequisites are completed, install the Azure Policy Add-on in the AKS cluster you want to manage.
Azure portal
Launch the AKS service in the Azure portal by selecting All services, then searching for and selecting Kubernetes services.
Select one of your AKS clusters.
Select Policies on the left side of the Kubernetes service page.
In the main page, select the Enable add-on button.
Azure CLI
# Log in first with az login if you're not using Cloud Shell
az aks enable-addons --addons azure-policy --name MyAKSCluster --resource-group MyResourceGroup
To validate that the add-on installation was successful and that the azure-policy and gatekeeper pods are running, run the following command:
# azure-policy pod is installed in kube-system namespace
kubectl get pods -n kube-system
# gatekeeper pod is installed in gatekeeper-system namespace
kubectl get pods -n gatekeeper-system
Lastly, verify that the latest add-on is installed by running this Azure CLI command, replacing <rg> with your resource group name and <cluster-name> with the name of your AKS cluster:

az aks show --query addonProfiles.azurepolicy -g <rg> -n <cluster-name>

The result should look similar to the following output for clusters using service principals:
{
"config": null,
"enabled": true,
"identity": null
}
And the following output for clusters using managed identity:
{
"config": null,
"enabled": true,
"identity": {
"clientId": "########-####-####-####-############",
"objectId": "########-####-####-####-############",
"resourceId": "<resource-id>"
}
}
Azure Policy for Kubernetes makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place. With Azure Policy's Extension for Arc-enabled Kubernetes clusters, you can govern your Arc-enabled Kubernetes cluster components, like pods and containers.
This article describes how to create the Azure Policy for Kubernetes extension, show its status, and delete it.
For an overview of the extensions platform, see Azure Arc cluster extensions.
If you already deployed Azure Policy for Kubernetes on an Azure Arc cluster using Helm directly without extensions, follow the instructions to delete the Helm chart. After the deletion is done, you can then proceed.
Ensure your Kubernetes cluster is a supported distribution.
Note
The Azure Policy for Arc extension is supported only on certain Kubernetes distributions.
Ensure you meet all the common prerequisites for Kubernetes extensions, including connecting your cluster to Azure Arc.
Note
Azure Policy extension is supported for Arc enabled Kubernetes clusters in these regions.
Open ports for the Azure Policy extension. The Azure Policy extension uses these domains and ports to fetch policy definitions and assignments and report compliance of the cluster back to Azure Policy.
Domain | Port |
---|---|
data.policy.core.windows.net | 443 |
store.policy.core.windows.net | 443 |
login.windows.net | 443 |
dc.services.visualstudio.com | 443 |
Before you install the Azure Policy extension or enable any of the service features, your subscription must register the Microsoft.PolicyInsights resource provider.
Note
To enable the resource provider, follow the steps in Resource providers and types or run either the Azure CLI or Azure PowerShell command.
Azure CLI
# Log in first with az login if you're not using Cloud Shell
# Provider register: Register the Azure Policy provider
az provider register --namespace 'Microsoft.PolicyInsights'
Azure PowerShell
# Log in first with Connect-AzAccount if you're not using Cloud Shell
# Provider register: Register the Azure Policy provider
Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
Note
Note the following for Azure Policy extension creation:

- Any proxy variables passed as parameters to connectedk8s are propagated to the Azure Policy extension to support outbound proxy.

To create an extension instance for your Arc-enabled cluster, run the following command, substituting <> with your values:
az k8s-extension create --cluster-type connectedClusters --cluster-name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --extension-type Microsoft.PolicyInsights --name <EXTENSION_INSTANCE_NAME>
az k8s-extension create --cluster-type connectedClusters --cluster-name my-test-cluster --resource-group my-test-rg --extension-type Microsoft.PolicyInsights --name azurepolicy
{
"aksAssignedIdentity": null,
"autoUpgradeMinorVersion": true,
"configurationProtectedSettings": {},
"configurationSettings": {},
"customLocationSettings": null,
"errorInfo": null,
"extensionType": "microsoft.policyinsights",
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-test-rg/providers/Microsoft.Kubernetes/connectedClusters/my-test-cluster/providers/Microsoft.KubernetesConfiguration/extensions/azurepolicy",
"identity": {
"principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tenantId": null,
"type": "SystemAssigned"
},
"location": null,
"name": "azurepolicy",
"packageUri": null,
"provisioningState": "Succeeded",
"releaseTrain": "Stable",
"resourceGroup": "my-test-rg",
"scope": {
"cluster": {
"releaseNamespace": "kube-system"
},
"namespace": null
},
"statuses": [],
"systemData": {
"createdAt": "2021-10-27T01:20:06.834236+00:00",
"createdBy": null,
"createdByType": null,
"lastModifiedAt": "2021-10-27T01:20:06.834236+00:00",
"lastModifiedBy": null,
"lastModifiedByType": null
},
"type": "Microsoft.KubernetesConfiguration/extensions",
"version": "1.1.0"
}
To check that the extension instance creation was successful and to inspect extension metadata, run the following command, substituting <> with your values:
az k8s-extension show --cluster-type connectedClusters --cluster-name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --name <EXTENSION_INSTANCE_NAME>
az k8s-extension show --cluster-type connectedClusters --cluster-name my-test-cluster --resource-group my-test-rg --name azurepolicy
To validate that the extension installation was successful and that the azure-policy and gatekeeper pods are running, run the following command:
# azure-policy pod is installed in kube-system namespace
kubectl get pods -n kube-system
# gatekeeper pod is installed in gatekeeper-system namespace
kubectl get pods -n gatekeeper-system
To delete the extension instance, run the following command, substituting <> with your values:
az k8s-extension delete --cluster-type connectedClusters --cluster-name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --name <EXTENSION_INSTANCE_NAME>
The Azure Policy language structure for managing Kubernetes follows that of existing policy definitions. There are sample definition files available to assign in Azure Policy's built-in policy library that can be used to govern your cluster components.
Azure Policy for Kubernetes also supports custom definition creation at the component level for both Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters. Constraint template and mutation template samples are available in the Gatekeeper community library. Azure Policy's Visual Studio Code extension can be used to help translate an existing constraint template or mutation template to a custom Azure Policy policy definition.
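As a reference point for what such a constraint template contains, here is the well-known k8srequiredlabels sample from the Gatekeeper community library, lightly abbreviated (check the library for the current version before relying on it):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the parameters that constraints of this kind accept
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```

A custom Azure Policy definition would then reference a template like this through details.templateInfo, rather than applying it to the cluster directly.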
With a Resource Provider mode of Microsoft.Kubernetes.Data, the effects audit, deny, disabled, and mutate are used to manage your Kubernetes clusters. Audit and deny must provide details properties specific to working with the OPA Constraint Framework and Gatekeeper v3.
As part of the details.templateInfo or details.constraintInfo properties in the policy definition, Azure Policy passes the URI or Base64-encoded value of these custom resource definitions (CRDs) to the add-on. Rego is the language that OPA and Gatekeeper support to validate a request to the Kubernetes cluster. By supporting an existing standard for Kubernetes management, Azure Policy makes it possible to reuse existing rules and pair them with Azure Policy for a unified cloud compliance reporting experience. For more information, see What is Rego?
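To make the wiring concrete, here is a sketch of how a Microsoft.Kubernetes.Data policy definition might reference a constraint template via details.templateInfo. The URL is a placeholder and the parameter set is illustrative, not a real built-in:

```json
{
  "properties": {
    "mode": "Microsoft.Kubernetes.Data",
    "policyRule": {
      "if": {
        "field": "type",
        "in": [
          "Microsoft.ContainerService/managedClusters",
          "Microsoft.Kubernetes/connectedClusters"
        ]
      },
      "then": {
        "effect": "[parameters('effect')]",
        "details": {
          "templateInfo": {
            "sourceType": "PublicURL",
            "url": "<URL-OF-CONSTRAINT-TEMPLATE-YAML>"
          },
          "apiGroups": [""],
          "kinds": ["Pod"]
        }
      }
    },
    "parameters": {
      "effect": {
        "type": "String",
        "defaultValue": "audit",
        "allowedValues": ["audit", "deny", "disabled"]
      }
    }
  }
}
```

The add-on downloads the template from the referenced source and installs it as a ConstraintTemplate on the cluster, with a constraint created per assignment.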
To assign a policy definition to your Kubernetes cluster, you must be assigned the appropriate Azure role-based access control (Azure RBAC) policy assignment operations. The Azure built-in roles Resource Policy Contributor and Owner have these operations. To learn more, see Azure RBAC permissions in Azure Policy.
Find the built-in policy definitions for managing your cluster using the Azure portal with the following steps. If using a custom policy definition, search for it by name or the category that you created it with.
Start the Azure Policy service in the Azure portal. Select All services in the left pane and then search for and select Policy.
In the left pane of the Azure Policy page, select Definitions.
From the Category dropdown list box, use Select all to clear the filter and then select Kubernetes.
Select the policy definition, then select the Assign button.
Set the Scope to the management group, subscription, or resource group of the Kubernetes cluster where the policy assignment applies.
Note
When assigning the Azure Policy for Kubernetes definition, the Scope must include the cluster resource.
Give the policy assignment a Name and Description that you can use to identify it easily.
Set the Policy enforcement to one of the following values:
Enabled - Enforce the policy on the cluster. Kubernetes admission requests with violations are denied.
Disabled - Don't enforce the policy on the cluster. Kubernetes admission requests with violations aren't denied, but compliance assessment results are still available. When you roll out new policy definitions to running clusters, the Disabled option is helpful for testing the policy definition, because admission requests with violations aren't denied.
Select Next.
Set parameter values
Select Review + create.
Alternately, use the Assign a policy - Portal quickstart to find and assign a Kubernetes policy. Search for a Kubernetes policy definition instead of the sample audit vms.
Important
Built-in policy definitions are available for Kubernetes clusters in category Kubernetes. For a list of built-in policy definitions, see Kubernetes samples.
The add-on checks in with the Azure Policy service for changes in policy assignments every 15 minutes. During this refresh cycle, any detected changes trigger creates, updates, or deletes of the constraint templates and constraints.
In a Kubernetes cluster, if a namespace has the admission.policy.azure.com/ignore label, admission requests with violations aren't denied. Compliance assessment results are still available.
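For instance, a namespace could opt out of enforcement with a manifest like the following. This is a minimal sketch: the namespace name and the label value "true" are illustrative, so confirm the expected label value for your add-on version:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: policy-exempt          # hypothetical namespace name
  labels:
    # Ignore label checked by the Azure Policy add-on; value shown is illustrative
    admission.policy.azure.com/ignore: "true"
```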
Note
While a cluster admin might have permission to create and update constraint template and constraint resources installed by the Azure Policy Add-on, these aren't supported scenarios, as manual updates are overwritten. Gatekeeper continues to evaluate policies that existed before the add-on was installed and Azure Policy policy definitions were assigned.
Every 15 minutes, the add-on calls for a full scan of the cluster. After gathering details of the full scan and any real-time evaluations by Gatekeeper of attempted changes to the cluster, the add-on reports the results back to Azure Policy for inclusion in compliance details like any Azure Policy assignment. Only results for active policy assignments are returned during the audit cycle. Audit results can also be seen as violations listed in the status field of the failed constraint. For details on Non-compliant resources, see Component details for Resource Provider modes.
Note
Each compliance report in Azure Policy for your Kubernetes clusters includes all violations within the last 45 minutes. The timestamp indicates when a violation occurred.
Some other considerations:
If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud Kubernetes policies are applied on the cluster automatically.
When a deny policy is applied on a cluster with existing Kubernetes resources, any preexisting resource that isn't compliant with the new policy continues to run. When the noncompliant resource gets rescheduled on a different node, Gatekeeper blocks the resource creation.
When a cluster has a deny policy that validates resources, the user doesn't get a rejection message when creating a deployment. For example, consider a Kubernetes deployment that contains ReplicaSets and pods. When a user executes kubectl describe deployment $MY_DEPLOYMENT, it doesn't return a rejection message as part of events. However, kubectl describe replicasets.apps $MY_DEPLOYMENT returns the events associated with the rejection.
Note
Init containers might be included during policy evaluation. To see if init containers are included, review the CRD for the following or a similar declaration:
input_containers[c] {
c := input.review.object.spec.initContainers[_]
}
If constraint templates have the same resource metadata name, but the policy definitions reference the source at different locations, the policy definitions are considered to be in conflict. Example: two policy definitions reference the same template.yaml file stored at different source locations, such as the Azure Policy template store (store.policy.core.windows.net) and GitHub.
When policy definitions and their constraint templates are assigned but aren't already installed on the cluster and are in conflict, they're reported as a conflict and aren't installed into the cluster until the conflict is resolved. Likewise, any existing policy definitions and their constraint templates that are already on the cluster and that conflict with newly assigned policy definitions continue to function normally. If an existing assignment is updated and there's a failure to sync the constraint template, the cluster is also marked as a conflict. For all conflict messages, see AKS Resource Provider mode compliance reasons.
As a Kubernetes controller/container, both the azure-policy and gatekeeper pods keep logs in the Kubernetes cluster. In general, azure-policy logs can be used to troubleshoot issues with policy ingestion onto the cluster and compliance reporting. The gatekeeper-controller-manager pod logs can be used to troubleshoot runtime denies. The gatekeeper-audit pod logs can be used to troubleshoot audits of existing resources. The logs can be exposed in the Insights page of the Kubernetes cluster. For more information, see Monitor your Kubernetes cluster performance with Azure Monitor for containers.
To view the add-on logs, use kubectl:
# Get the azure-policy pod name installed in kube-system namespace
kubectl logs <azure-policy pod name> -n kube-system
# Get the gatekeeper pod name installed in gatekeeper-system namespace
kubectl logs <gatekeeper pod name> -n gatekeeper-system
If you're attempting to troubleshoot a particular ComplianceReasonCode that is appearing in your compliance results, you can search the azure-policy pod logs for that code to see the full accompanying error.
For more information, see Debugging Gatekeeper in the Gatekeeper documentation.
After the add-on downloads the policy assignments and installs the constraint templates and constraints on the cluster, it annotates both with Azure Policy information like the policy assignment ID and the policy definition ID. To configure your client to view the add-on related artifacts, use the following steps:
Set up kubeconfig for the cluster.
For an Azure Kubernetes Service cluster, use the following Azure CLI:
# Set context to the subscription
az account set --subscription <YOUR-SUBSCRIPTION>
# Save credentials for kubeconfig into .kube in your home folder
az aks get-credentials --resource-group <RESOURCE-GROUP> --name <CLUSTER-NAME>
Test the cluster connection. Run the kubectl cluster-info command. A successful run has each service responding with the URL where it's running.
To view constraint templates downloaded by the add-on, run kubectl get constrainttemplates. Constraint templates that start with k8sazure are the ones installed by the add-on.
To view mutation templates downloaded by the add-on, run kubectl get assign, kubectl get assignmetadata, and kubectl get modifyset.
To identify the mapping between a constraint template downloaded to the cluster and the policy definition, use kubectl get constrainttemplates <TEMPLATE> -o yaml. The results look similar to the following output:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
annotations:
azure-policy-definition-id: /subscriptions/<SUBID>/providers/Microsoft.Authorization/policyDefinitions/<GUID>
constraint-template-installed-by: azure-policy-addon
constraint-template: <URL-OF-YAML>
creationTimestamp: "2021-09-01T13:20:55Z"
generation: 1
managedFields:
- apiVersion: templates.gatekeeper.sh/v1beta1
fieldsType: FieldsV1
...
<SUBID> is the subscription ID and <GUID> is the ID of the mapped policy definition. <URL-OF-YAML> is the source location of the constraint template that the add-on downloaded to install on the cluster.
Once you have the names of the constraint templates downloaded by the add-on, you can use a name to see the related constraints. Use kubectl get <constraintTemplateName> to get the list. Constraints installed by the add-on start with azurepolicy-.
The constraint has details about violations and mappings to the policy definition and assignment. To see the details, use kubectl get <CONSTRAINT-TEMPLATE> <CONSTRAINT> -o yaml. The results look similar to the following output:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAzureContainerAllowedImages
metadata:
annotations:
azure-policy-assignment-id: /subscriptions/<SUB-ID>/resourceGroups/<RG-NAME>/providers/Microsoft.Authorization/policyAssignments/<ASSIGNMENT-GUID>
azure-policy-definition-id: /providers/Microsoft.Authorization/policyDefinitions/<DEFINITION-GUID>
azure-policy-definition-reference-id: ""
azure-policy-setdefinition-id: ""
constraint-installed-by: azure-policy-addon
constraint-url: <URL-OF-YAML>
creationTimestamp: "2021-09-01T13:20:55Z"
spec:
enforcementAction: deny
match:
excludedNamespaces:
- kube-system
- gatekeeper-system
- azure-arc
parameters:
imageRegex: ^.+azurecr.io/.+$
status:
auditTimestamp: "2021-09-01T13:48:16Z"
totalViolations: 32
violations:
- enforcementAction: deny
kind: Pod
message: Container image nginx for container hello-world has not been allowed.
name: hello-world-78f7bfd5b8-lmc5b
namespace: default
- enforcementAction: deny
kind: Pod
message: Container image nginx for container hello-world has not been allowed.
name: hellow-world-89f8bfd6b9-zkggg
For more information about troubleshooting the Add-on for Kubernetes, see the Kubernetes section of the Azure Policy troubleshooting article.
For issues related to the Azure Policy extension for Arc, go to:
For Azure Policy related issues, go to:
Azure Policy's Add-on for AKS has a version number that indicates the add-on image version. As new feature support is introduced, the version number increases.
This section helps you identify which add-on version is installed on your cluster and also shares a historical table of the Azure Policy Add-on version installed per AKS cluster.
The Azure Policy Add-on uses the standard Semantic Versioning schema for each version. To identify the Azure Policy Add-on version being used, you can run the following command:
kubectl get pod azure-policy-<unique-pod-identifier> -n kube-system -o json | jq '.spec.containers[0].image'
To identify the Gatekeeper version that your Azure Policy Add-on is using, you can run the following command:
kubectl get pod gatekeeper-controller-<unique-pod-identifier> -n gatekeeper-system -o json | jq '.spec.containers[0].image'
Finally, to identify the AKS cluster version that you're using, follow the linked AKS guidance.
Policy can now be used to evaluate CONNECT operations, for instance, to deny exec operations. Note that there's no brownfield compliance available for noncompliant CONNECT operations, so a policy with the audit effect that targets CONNECT operations is a no-op.
Security improvements.
Introducing CEL and VAP. Common Expression Language (CEL) is a Kubernetes-native expression language that can be used to declare validation rules of a policy. The Validating Admission Policy (VAP) feature provides in-tree policy evaluation, reduces admission request latency, and improves reliability and availability. The supported validation actions include deny, warn, and audit. Custom policy authoring for CEL/VAP is allowed, and existing users won't need to convert their Rego to CEL, as both are supported and used to enforce policies. To use CEL and VAP, users need to enroll in the feature flag AKS-AzurePolicyK8sNativeValidation in the Microsoft.ContainerService namespace. For more information, view the Gatekeeper documentation.
Security improvements.
Introducing expansion, a shift-left feature that lets you know up front whether your workload resources (Deployments, ReplicaSets, Jobs, and so on) will produce admissible pods. Expansion shouldn't change the behavior of your policies; rather, it just shifts Gatekeeper's evaluation of pod-scoped policies to occur at workload admission time rather than pod admission time. However, to perform this evaluation it must generate and evaluate a what-if pod based on the pod spec defined in the workload, which might have incomplete metadata. For instance, the what-if pod won't contain the proper owner references. Because of this small risk of policy behavior changing, we're introducing expansion as disabled by default. To enable expansion for a given policy definition, set .policyRule.then.details.source to All. Built-ins will be updated soon to enable parameterization of this field. If you test your policy definition and find that the what-if pod generated for evaluation purposes is incomplete, you can also use a mutation with source Generated to mutate the what-if pods. For more information on this option, view the Gatekeeper documentation.
Expansion is currently only available on AKS clusters, not Arc clusters.
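For reference, the relevant fragment of a policy rule with expansion enabled might look like the following. This is a sketch only: the templateInfo URL is a placeholder, and built-in definitions may expose this setting differently:

```json
{
  "then": {
    "effect": "[parameters('effect')]",
    "details": {
      "source": "All",
      "templateInfo": {
        "sourceType": "PublicURL",
        "url": "<URL-OF-CONSTRAINT-TEMPLATE-YAML>"
      }
    }
  }
}
```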
Security improvements.
Security improvements.
Security improvements.
Enables mutation and external data by default. The additional mutating webhook and increased validating webhook timeout cap might add latency to calls in the worst case. Also introduces support for viewing policy definition and set definition version in compliance results.
Introduces error state for policies in error, enabling them to be distinguished from policies in noncompliant states. Adds support for v1 constraint templates and use of the excludedNamespaces parameter in mutation policies. Adds an error status check on constraint templates post-installation.
Azure Policy for Kubernetes now supports mutation to remediate AKS clusters at-scale!
To remove the Azure Policy Add-on from your AKS cluster, use either the Azure portal or Azure CLI:
Azure portal
Launch the AKS service in the Azure portal by selecting All services, then searching for and selecting Kubernetes services.
Select your AKS cluster where you want to disable the Azure Policy Add-on.
Select Policies on the left side of the Kubernetes service page.
In the main page, select the Disable add-on button.
Azure CLI
# Log in first with az login if you're not using Cloud Shell
az aks disable-addons --addons azure-policy --name MyAKSCluster --resource-group MyResourceGroup
Note
Azure Policy Add-on Helm model is now deprecated. You should opt for the Azure Policy Extension for Azure Arc enabled Kubernetes instead.
To remove the Azure Policy Add-on and Gatekeeper from your Azure Arc enabled Kubernetes cluster, run the following Helm command:
helm uninstall azure-policy-addon
Note
The AKS Engine product is now deprecated for Azure public cloud customers. Consider using Azure Kubernetes Service (AKS) for managed Kubernetes or Cluster API Provider Azure for self-managed Kubernetes. There are no new features planned; this project will only be updated for CVEs & similar, with Kubernetes 1.24 as the final version to receive updates.
To remove the Azure Policy Add-on and Gatekeeper from your AKS Engine cluster, use the method that aligns with how the add-on was installed:
If installed by setting the addons property in the cluster definition for AKS Engine:
Redeploy the cluster definition to AKS Engine after changing the addons property for azure-policy to false:
"addons": [
{
"name": "azure-policy",
"enabled": false
}
]
For more information, see AKS Engine - Disable Azure Policy Add-on.
If installed with Helm Charts, run the following Helm command:
helm uninstall azure-policy-addon
Using the metadata.gatekeeper.sh/requires-sync-data annotation in a constraint template to configure the replication of data from your cluster into the OPA cache is currently only allowed for built-in policies. This is because it can dramatically increase the Gatekeeper pods' resource usage if not used carefully.

Changing the Gatekeeper config is unsupported, as it contains critical security settings. Edits to the config are reconciled.
Currently, several built-in policies make use of data replication, which enables users to sync existing on-cluster resources to the OPA cache and reference them during evaluation of an AdmissionReview request. Data replication policies can be differentiated by the presence of data.inventory in the Rego and by the metadata.gatekeeper.sh/requires-sync-data annotation, which informs the Azure Policy add-on which resources need to be cached for policy evaluation to work properly. This process differs from standalone Gatekeeper, where this annotation is descriptive, not prescriptive.
Data replication is currently blocked for use in custom policy definitions, because replicating resources with high instance counts can dramatically increase the Gatekeeper pods' resource usage if not used carefully. You'll see a ConstraintTemplateInstallFailed error when attempting to create a custom policy definition containing a constraint template with this annotation. Removing the annotation might appear to mitigate the error, but then the policy add-on won't sync any required resources for that constraint template into the cache. Your policies would then be evaluated against an empty data.inventory (assuming no built-in is assigned that replicates the requisite resources), which leads to misleading compliance results. As noted previously, manually editing the config to cache the required resources is also not permitted.
The following limitations apply only to the Azure Policy Add-on for AKS:
The Azure Policy Add-on requires three Gatekeeper components to run: one audit pod and two webhook pod replicas. One Azure Policy pod and one Azure Policy webhook pod are also installed.
The Azure Policy for Kubernetes components that run on your cluster consume more resources as the count of Kubernetes resources and policy assignments in the cluster increases, because of the audit and enforcement operations required.
The following are estimates to help you plan:
Windows pods don't support security contexts. Thus, some Azure Policy definitions, such as disallowing root privileges, can't be enforced for Windows pods and only apply to Linux pods.
The Azure Policy Add-on for Kubernetes collects limited cluster diagnostic data. This diagnostic data is vital technical data related to software and performance. The data is used in the following ways:
The information collected by the add-on isn't personal data. The following details are currently collected:
Use system node pools with the CriticalAddonsOnly taint to schedule Gatekeeper pods. For more information, see Using system node pools.

In AKS clusters with aad-pod-identity enabled, Node Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI, even if the pod doesn't use aad-pod-identity. The AzurePodIdentityException CRD can be configured to inform aad-pod-identity that any requests to the Metadata endpoint originating from a pod that matches labels defined in the CRD should be proxied without any processing in NMI. The system pods with the kubernetes.azure.com/managedby: aks label in the kube-system namespace should be excluded in aad-pod-identity by configuring the AzurePodIdentityException CRD. For more information, see Disable aad-pod-identity for a specific pod or application. To configure an exception, install the mic-exception YAML.