Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on Azure Container Apps. Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any other container-hosted programs. Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA).
You can write your function code in any language stack supported by Functions. You can use the same Functions triggers and bindings with event-driven scaling. You can also use existing Functions client tools and the Azure portal to create containers, deploy function app containers to Container Apps, and configure continuous deployment.
Container Apps integration also means that network and observability configurations, which are defined at the Container Apps environment level, apply to your function app just as they do to all microservices running in a Container Apps environment. You also get the other cloud-native capabilities of Container Apps, including KEDA, Dapr, and Envoy. You can still use Application Insights to monitor your function executions, and your function app can access the same virtual networking resources provided by the environment.
For a general overview of container hosting options for Azure Functions, see Linux container support in Azure Functions.
There are two primary hosting plans for Container Apps: a serverless Consumption plan and a Dedicated plan, which uses workload profiles to give you better control over your deployment resources. A workload profile determines the amount of compute and memory resources available to container apps deployed in an environment. These profiles are configured to fit the different needs of your applications.
The Consumption workload profile is the default profile added to every Workload profiles environment type. You can add Dedicated workload profiles to your environment as you create an environment or after it's created. To learn more about workload profiles, see Workload profiles in Azure Container Apps.
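For example, you could add a Dedicated workload profile to an environment by using the Azure CLI. The following commands are a minimal sketch: the environment name, profile name, node counts, and the D4 profile type are placeholder choices, and the profile type you pick must be available in your region.

az containerapp env create --name MyContainerappEnvironment --resource-group <MY_RESOURCE_GROUP> \
  --location <LOCATION> --enable-workload-profiles

az containerapp env workload-profile add --name MyContainerappEnvironment --resource-group <MY_RESOURCE_GROUP> \
  --workload-profile-name my-dedicated-profile --workload-profile-type D4 --min-nodes 1 --max-nodes 3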
Container Apps hosting of containerized function apps is supported in all regions that support Container Apps.
If your app doesn't have specific hardware requirements, you can run your environment either in a Consumption plan or in a Dedicated plan using the default Consumption workload profile. When running functions on Container Apps, you're charged only for the Container Apps usage. For more information, see the Azure Container Apps pricing page.
Azure Functions on Azure Container Apps supports GPU-enabled hosting in the Dedicated plan with workload profiles.
To learn how to create and deploy a function app container to Container Apps in the default Consumption plan, see Create your first containerized functions on Azure Container Apps.
To learn how to create a Container Apps environment with workload profiles and deploy a function app container to a specific workload, see Container Apps workload profiles.
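As a sketch of the workload profiles scenario, a containerized function app can be created in a specific Dedicated workload profile with a command like the following. The placeholder values are illustrative, and the --workload-profile-name parameter is assumed to be available in your Azure CLI version:

az functionapp create --name <APP_NAME> --resource-group <MY_RESOURCE_GROUP> \
  --storage-account <STORAGE_NAME> --environment MyContainerappEnvironment \
  --workload-profile-name my-dedicated-profile \
  --image <LOGIN_SERVER>/azurefunctionsimage:v1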
To use Container Apps hosting, your code must run on a function app in a Linux container that you create and maintain. Functions maintains a set of language-specific base images that you can use to generate your containerized function apps.
When you create a code project using Azure Functions Core Tools and include the --docker option, Core Tools generates the Dockerfile with the correct base image, which you can use as a starting point when creating your container.
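For example, the following Core Tools commands sketch out a new containerized project. The project name, the Python language worker, and the HTTP trigger function are example choices; substitute your own values:

func init MyContainerFunctionApp --worker-runtime python --docker
cd MyContainerFunctionApp
func new --name HttpExample --template "HTTP trigger"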
Important
When creating your own containers, you are required to keep the base image of your container updated to the latest supported base image. Supported base images for Azure Functions are language-specific and are found in the Azure Functions base image repos.
The Functions team is committed to publishing monthly updates for these base images. Regular updates include the latest minor version updates and security fixes for both the Functions runtime and languages. You should regularly update your container from the latest base image and redeploy the updated version of your container.
When you make changes to your functions code, you must rebuild and republish your container image. For more information, see Update an image in the registry.
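For example, after updating your Dockerfile to the latest base image or changing your code, you might rebuild and push a new tag with commands like these, where the registry login server and the v2 tag are placeholders:

docker build --tag <LOGIN_SERVER>/azurefunctionsimage:v2 .
docker push <LOGIN_SERVER>/azurefunctionsimage:v2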
Azure Functions currently supports the following methods of deploying a containerized function app to Azure Container Apps:
You can continuously deploy your containerized apps from source code using either Azure Pipelines or GitHub Actions. The continuous deployment feature of Functions isn't currently supported when deploying to Container Apps.
For the best security, you should connect to remote services using Microsoft Entra authentication and managed identity authorization. You can use managed identities for these connections:
When running in Container Apps, you can use Microsoft Entra ID with managed identities for all binding extensions that support managed identities. Currently, only these binding extensions support event-driven scaling when using managed identity authentication:
For other bindings, use fixed replicas when using managed identity authentication. For more information, see the Functions developer guide.
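As a minimal sketch, you could enable a system-assigned managed identity on your function app and then grant that identity a role on the target service. The role and scope shown here are illustrative placeholders; use the role that matches your binding's service:

az functionapp identity assign --name <APP_NAME> --resource-group <MY_RESOURCE_GROUP>

az role assignment create --assignee <PRINCIPAL_ID> --role "Azure Service Bus Data Receiver" \
  --scope <SERVICE_BUS_NAMESPACE_RESOURCE_ID>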
When you host your function apps in a Container Apps environment, your functions can take advantage of both internally and externally accessible virtual networks. To learn more about environment networks, see Networking in Azure Container Apps environment.
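For example, an environment that runs in your own virtual network might be created with a command like the following sketch. The subnet resource ID is a placeholder, and the --internal-only flag is shown only to illustrate restricting the environment to internal ingress:

az containerapp env create --name MyContainerappEnvironment --resource-group <MY_RESOURCE_GROUP> \
  --location <LOCATION> --infrastructure-subnet-resource-id <SUBNET_RESOURCE_ID> --internal-only true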
All Functions triggers can be used in your containerized function app. However, only these triggers can dynamically scale (from zero instances) based on received events when running in a Container Apps environment:
Azure Functions on Container Apps is designed to configure the scale parameters and rules based on the event target, so you don't need to configure KEDA scaled objects yourself. You can still set the minimum and maximum replica counts when creating or modifying your function app. The following Azure CLI command sets the minimum and maximum replica counts when creating a new function app in a Container Apps environment from an Azure Container Registry image:
az functionapp create --name <APP_NAME> --resource-group <MY_RESOURCE_GROUP> \
  --max-replicas 15 --min-replicas 1 \
  --storage-account <STORAGE_NAME> --environment MyContainerappEnvironment \
  --image <LOGIN_SERVER>/azurefunctionsimage:v1 \
  --registry-username <USERNAME> --registry-password <SECURE_PASSWORD> \
  --registry-server <LOGIN_SERVER>
The following command sets the same minimum and maximum replica count on an existing function app:
az functionapp config container set --name <APP_NAME> --resource-group <MY_RESOURCE_GROUP> --max-replicas 15 --min-replicas 1
Azure Functions on Container Apps runs your containerized function app resources in specially managed resource groups. These managed resource groups help protect the consistency of your apps by preventing unintended or unauthorized modification or deletion of resources in the managed group, even by service principals.
A managed resource group is created for you the first time you create function app resources in a Container Apps environment. Container Apps resources required by your containerized function app run in this managed resource group. Any other function apps that you create in the same environment use this existing group.
A managed resource group is removed automatically after all function app container resources are removed from the environment. While the managed resource group exists, any attempt to modify or remove it directly results in an error. To remove a managed resource group from an environment, remove all of the function app container resources, and the group is then removed for you.
If you run into any issues with these managed resource groups, you should contact support.
You can monitor your containerized function app hosted in Container Apps using Azure Monitor Application Insights in the same way you do with apps hosted by Azure Functions. For more information, see Monitor Azure Functions.
For bindings that support event-driven scaling, scale events are logged as FunctionsScalerInfo and FunctionsScalerError events in your Log Analytics workspace. For more information, see Application Logging in Azure Container Apps.
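As a sketch, you could look for these scaler events from the Azure CLI by querying your Log Analytics workspace. The workspace GUID is a placeholder, and the cross-table search used here is an illustrative query rather than a documented one:

az monitor log-analytics query --workspace <WORKSPACE_GUID> \
  --analytics-query "search 'FunctionsScaler*' | take 50"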
Keep in mind the following considerations when deploying your function app containers to Container Apps:
- The ssl protocol value isn't supported when hosted on Container Apps. Use a different protocol value.
- The username property must resolve to an application setting that contains the actual username value. When the default $ConnectionString value is used, the Kafka trigger isn't able to cause the app to scale dynamically.
- Use the WEBSITES_PORT application setting to change the default port used by your container.
- The containerapp extension conflicts with the appservice-kube extension in Azure CLI. If you have previously published apps to Azure Arc, run az extension list and make sure that appservice-kube isn't installed. If it is, you can remove it by running az extension remove -n appservice-kube.