In Azure Functions, all functions share some core technical concepts and components, regardless of your preferred language or development environment.
At the core of Azure Functions is a language-specific code project that implements one or more units of code execution called functions. Functions are simply methods that run in the Azure cloud based on events, in response to HTTP requests, or on a schedule. Think of your Azure Functions code project as a mechanism for organizing, deploying, and collectively managing your individual functions in the project when they're running in Azure. For more information, see Organize your functions.
The way that you lay out your code project and how you indicate which methods in your project are functions depend on the development language of your project. For detailed language-specific guidance, see the C#, Java, Node.js, PowerShell, or Python developer guide.
All functions must have a trigger, which defines how the function starts and can provide input to the function. Your functions can optionally define input and output bindings. These bindings simplify connections to other services without you having to work with client SDKs. For more information, see Azure Functions triggers and bindings concepts.
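For illustration only, here's a sketch of how a trigger and an output binding might be declared for languages that define functions with a function.json file (such as the Node.js version 3 model mentioned later in this article); the queue name, blob path, and connection name are hypothetical:

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "order",
      "queueName": "orders",
      "connection": "MyStorageConnection"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "receipt",
      "path": "receipts/{rand-guid}.json",
      "connection": "MyStorageConnection"
    }
  ]
}
```

Here, MyStorageConnection is the name of an application setting that holds the connection information, a pattern described in the Connections section later in this article.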
Azure Functions provides a set of language-specific project and function templates that make it easy to create new code projects and add functions to your project. You can use any of the tools that support Azure Functions development to generate new apps and functions using these templates.
Development tools
The following tools provide an integrated development and publishing experience for Azure Functions in your preferred language:
There's also an editor in the Azure portal that lets you update your code and your function.json definition file directly in the portal. You should only use this editor for small changes or for creating proof-of-concept functions. When possible, you should develop your functions locally. For more information, see Create your first function in the Azure portal.
Portal editing is only supported for Node.js version 3, which uses the function.json file.
Deployment
When you publish your code project to Azure, you're essentially deploying your project to an existing function app resource. A function app provides an execution context in Azure in which your functions run. As such, it's the unit of deployment and management for your functions. From an Azure Resource perspective, a function app is equivalent to a site resource (Microsoft.Web/sites) in Azure App Service, which is equivalent to a web app.
When the function app and any other required resources don't already exist in Azure, you first need to create these resources before you can deploy your project files. You can create these resources in one of these ways:
In addition to tool-based publishing, Functions supports other technologies for deploying source code to an existing function app. For more information, see Deployment technologies in Azure Functions.
Connect to services
A major requirement of any cloud-based compute service is reading data from and writing data to other cloud services. Functions provides an extensive set of bindings that makes it easier for you to connect to services without having to work with client SDKs.
Whether you use the binding extensions provided by Functions or work with client SDKs directly, you should store connection data securely and never include it in your code. For more information, see Connections.
Bindings
Functions provides bindings for many Azure services and a few third-party services, which are implemented as extensions. For more information, see the complete list of supported bindings.
Binding extensions can support both inputs and outputs, and many triggers also act as input bindings. Bindings let you configure the connection to services so that the Functions host can handle the data access for you. For more information, see Azure Functions triggers and bindings concepts.
While Functions provides bindings to simplify data access in your function code, you can still use a client SDK in your project to directly access a given service, if you prefer. You might need to use a client SDK directly when your functions require functionality of the underlying SDK that isn't supported by the binding extension.
When you create a client SDK instance in your functions, you should get the connection info required by the client from Environment variables.
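For example, here's a minimal sketch in Python; the setting name MY_STORAGE_CONNECTION and the choice of the Blob Storage SDK are illustrative assumptions, not a prescribed pattern:

```python
import os

from azure.storage.blob import BlobServiceClient

# The connection string is stored in an application setting (or in
# local.settings.json during local development), never in the code itself.
# "MY_STORAGE_CONNECTION" is a hypothetical setting name.
connection_string = os.environ["MY_STORAGE_CONNECTION"]

# Create the client from the securely stored connection string.
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
```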
Connections
As a security best practice, Azure Functions takes advantage of the application settings functionality of Azure App Service to help you more securely store strings, keys, and other tokens required to connect to other services. Application settings in Azure are stored encrypted and can be accessed at runtime by your app as environment variable name-value pairs. For triggers and bindings that require a connection property, you set the application setting name instead of the actual connection string. You can't configure a binding directly with a connection string or key.
For example, consider a trigger definition that has a connection property. Instead of the connection string, you set connection to the name of an environment variable that contains the connection string. Using this secrets access strategy both makes your apps more secure and makes it easier for you to change connections across environments. For even more security, you can use identity-based connections.
The default configuration provider uses environment variables. These variables are defined in application settings when running in Azure and in the local.settings.json file when developing locally.
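For example, if a binding sets its connection property to a hypothetical setting name such as MyStorageConnection (as in the earlier binding sketch), the actual connection string is supplied through that setting. A minimal local.settings.json sketch, with placeholder values:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MyStorageConnection": "DefaultEndpointsProtocol=https;AccountName=<account_name>;AccountKey=<account_key>;EndpointSuffix=core.windows.net"
  }
}
```

When running in Azure, the same value is instead defined as an application setting named MyStorageConnection on the function app.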
Connection values
When the connection name resolves to a single exact value, the runtime identifies the value as a connection string, which typically includes a secret. The details of a connection string depend on the service to which you connect.
However, a connection name can also refer to a collection of multiple configuration items, useful for configuring identity-based connections. Environment variables can be treated as a collection by using a shared prefix that ends in double underscores __. The group can then be referenced by setting the connection name to this prefix.
For example, the connection property for an Azure Blob trigger definition might be Storage1. As long as there's no single string value configured by an environment variable named Storage1, an environment variable named Storage1__blobServiceUri could be used to inform the blobServiceUri property of the connection. The connection properties are different for each service. Refer to the documentation for the component that uses the connection.
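Expressed as application setting name/value pairs (the account name is a placeholder), the Storage1 connection described above might look like the following sketch; depending on the extension, additional properties such as a queue service URI might also be required:

```
Storage1__blobServiceUri = https://<storage_account_name>.blob.core.windows.net
```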
Note
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, Storage1:blobServiceUri.
Configure an identity-based connection
Some connections in Azure Functions can be configured to use an identity instead of a secret. Support depends on the runtime version and the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you're connecting supports identity-based connections. For a tutorial on configuring your function apps with managed identities, see the creating a function app with identity-based connections tutorial.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-assigned identity is used by default, although a user-assigned identity can be specified with the credential and clientId properties. Note that configuring a user-assigned identity with a resource ID is not supported. When run in other contexts, such as local development, your developer identity is used instead, although this can be customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. For most Azure services, this means you need to assign a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
Important
The target service might expose some permissions that aren't necessary for all contexts. Where possible, adhere to the principle of least privilege and grant the identity only the privileges it requires. For example, if the app only needs to read from a data source, use a role that has only read permission. It would be inappropriate to assign a role that also allows writing to that service, because write access is excessive for a read operation. Similarly, ensure that the role assignment is scoped only to the resources that need to be read.
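As a sketch (all identifiers are placeholders, and the role name depends on the service and operation), a role assignment scoped to a single resource can be created with the Azure CLI:

```azurecli
az role assignment create \
  --assignee "<principal_id_of_identity>" \
  --role "<role_name>" \
  --scope "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/<resource_provider>/<resource_type>/<resource_name>"
```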
Choose one of these tabs to learn about permissions for each component:
You need to create a role assignment that provides access to your blob container at runtime. Management roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the Blob Storage extension in normal operation. Your application may require further permissions based on the code you write.
You will need to create a role assignment that provides access to your queue at runtime. Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using the Queue Storage extension in normal operation. Your application may require additional permissions based on the code you write.
You'll need to create a role assignment that provides access to your Azure Storage table service at runtime. Management roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the Azure Tables extension against Azure Storage in normal operation. Your application may require additional permissions based on the code you write.
1 If your app is instead connecting to tables in Azure Cosmos DB for Table, using an identity isn't supported and the connection must use a connection string.
You will need to create a role assignment that provides access to your event hub at runtime. The scope of the role assignment can be for an Event Hubs namespace, or the event hub itself. Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using the Event Hubs extension in normal operation. Your application may require additional permissions based on the code you write.
You'll need to create a role assignment that provides access to your topics and queues at runtime. Management roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the Service Bus extension in normal operation. Your application may require additional permissions based on the code you write.
1 For triggering from Service Bus topics, the role assignment needs to have effective scope over the Service Bus subscription resource. If only the topic is included, an error will occur. Some clients, such as the Azure portal, don't expose the Service Bus subscription resource as a scope for role assignment. In such cases, the Azure CLI may be used instead. To learn more, see Azure built-in roles for Azure Service Bus.
You must create a role assignment that provides access to your Event Grid topic at runtime. Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using the Event Grid extension in normal operation. Your application may require additional permissions based on the code you write.
Cosmos DB does not use Azure RBAC for data operations. Instead, it uses a Cosmos DB built-in RBAC system which is built on similar concepts. You will need to create a role assignment that provides access to your database account at runtime. Azure RBAC Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using the Azure Cosmos DB extension in normal operation. Your application may require additional permissions based on the code you write.
1 These roles cannot be used in an Azure RBAC role assignment. See the Cosmos DB built-in RBAC system documentation for details on how to assign these roles.
2 When using identity, Cosmos DB treats container creation as a management operation. It is not available as a data-plane operation for the trigger. You will need to ensure that you create the containers needed by the trigger (including the lease container) before setting up your function.
You need to create a role assignment that provides access to the Azure SignalR Service data plane REST APIs. We recommend using the built-in SignalR Service Owner role. Management roles like Owner aren't sufficient.
You'll need to create a role assignment that provides access to Azure storage at runtime. Management roles like Owner aren't sufficient. The following built-in roles are recommended when using the Durable Functions extension in normal operation:
Your application may require more permissions based on the code you write. If you're using the default behavior or explicitly setting connectionName to "AzureWebJobsStorage", see Connecting to host storage with an identity for other permission considerations.
You will need to create a role assignment that provides access to the storage account for "AzureWebJobsStorage" at runtime. Management roles like Owner are not sufficient. The Storage Blob Data Owner role covers the basic needs of Functions host storage: the runtime needs both read and write access to blobs and the ability to create containers. Several extensions use this connection as a default location for blobs, queues, and tables, and these uses may add requirements as noted in the following table. You may need additional permissions if you use "AzureWebJobsStorage" for any other purposes.

| Extension | Additional requirements |
| --- | --- |
| Blob trigger | The blob trigger internally uses Azure Queues and writes blob receipts. It uses AzureWebJobsStorage for these, regardless of the connection configured for the trigger. |
| Durable Functions | Durable Functions uses blobs, queues, and tables to coordinate activity functions and maintain orchestration state. It uses the AzureWebJobsStorage connection for all of these by default, but you can specify a different connection in the Durable Functions extension configuration. |
Common properties for identity-based connections
An identity-based connection for an Azure service accepts the following common properties, where <CONNECTION_NAME_PREFIX> is the value of your connection property in the trigger or binding definition:
| Property | Environment variable template | Description |
| --- | --- | --- |
| Token Credential | <CONNECTION_NAME_PREFIX>__credential | Defines how a token should be obtained for the connection. This setting should be set to managedidentity if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. |
| Client ID | <CONNECTION_NAME_PREFIX>__clientId | When credential is set to managedidentity, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It's invalid to specify both a resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in local development scenarios, when credential shouldn't be set. |
| Resource ID | <CONNECTION_NAME_PREFIX>__managedIdentityResourceId | When credential is set to managedidentity, this property can be set to specify the resource identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It's invalid to specify both a resource ID and a client ID. If neither is specified, the system-assigned identity is used. This property is used differently in local development scenarios, when credential shouldn't be set. |
Other options may be supported for a given connection type. Refer to the documentation for the component making the connection.
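As a sketch, assuming a hypothetical connection prefix of MyServiceBusConnection and a user-assigned identity (the fullyQualifiedNamespace property shown here is specific to the Service Bus extension), the application settings in Azure might look like this:

```
MyServiceBusConnection__fullyQualifiedNamespace = <namespace_name>.servicebus.windows.net
MyServiceBusConnection__credential = managedidentity
MyServiceBusConnection__clientId = <client_id_of_user_assigned_identity>
```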
Azure SDK Environment Variables
Caution
Using the Azure SDK's EnvironmentCredential environment variables isn't recommended because of their potentially unintentional impact on other connections. They're also not fully supported when deployed to Azure Functions.
The environment variables associated with the Azure SDK's EnvironmentCredential can also be set, but these are not processed by the Functions service for scaling in Consumption plans. These environment variables are not specific to any one connection and apply as a default when a corresponding property isn't set for a given connection. For example, if AZURE_CLIENT_ID is set, it's used as if <CONNECTION_NAME_PREFIX>__clientId had been configured. Explicitly setting <CONNECTION_NAME_PREFIX>__clientId overrides this default.
Local development with identity-based connections
Note
Local development with identity-based connections requires version 4.0.3904 of Azure Functions Core Tools, or a later version.
When you're running your function project locally, the above configuration tells the runtime to use your local developer identity. The connection attempts to get a token from the following locations, in order:
A local cache shared between Microsoft applications
The current user context in Visual Studio
The current user context in Visual Studio Code
The current user context in the Azure CLI
If none of these options are successful, an error occurs.
Your identity may already have some role assignments against Azure resources used for development, but those roles may not provide the necessary data access. Management roles like Owner aren't sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself.
In some cases, you may want to specify the use of a different identity. You can add configuration properties for the connection that point to the alternate identity, based on a client ID and client secret for a Microsoft Entra service principal. This configuration option is not supported when hosted in the Azure Functions service. To use an ID and secret on your local machine, define the connection with the following extra properties:
| Property | Environment variable template | Description |
| --- | --- | --- |
| Tenant ID | <CONNECTION_NAME_PREFIX>__tenantId | The Microsoft Entra tenant (directory) ID. |
| Client ID | <CONNECTION_NAME_PREFIX>__clientId | The client (application) ID of an app registration in the tenant. |
| Client secret | <CONNECTION_NAME_PREFIX>__clientSecret | A client secret that was generated for the app registration. |
Here's an example of local.settings.json properties required for identity-based connection to Azure Blobs:
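This is a minimal sketch: the URI is a placeholder, and the tenantId, clientId, and clientSecret entries are only needed when you use a service principal rather than your developer identity.

```json
{
  "IsEncrypted": false,
  "Values": {
    "<CONNECTION_NAME_PREFIX>__blobServiceUri": "https://<storage_account_name>.blob.core.windows.net",
    "<CONNECTION_NAME_PREFIX>__tenantId": "<tenant_id>",
    "<CONNECTION_NAME_PREFIX>__clientId": "<client_id>",
    "<CONNECTION_NAME_PREFIX>__clientSecret": "<client_secret>"
  }
}
```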
The Azure Functions host uses the storage connection set in AzureWebJobsStorage to enable core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This connection can also be configured to use an identity.
Caution
Other components in Functions rely on AzureWebJobsStorage for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly, AzureWebJobsStorage is used for deployment artifacts when using server-side build in Linux Consumption; if you use an identity-based connection in that scenario, you need to deploy via an external deployment package.
In addition, your function app might be reusing AzureWebJobsStorage for other storage connections in its triggers, bindings, and/or function code. Make sure that all uses of AzureWebJobsStorage are able to use the identity-based connection format before changing this connection from a connection string.
To use an identity-based connection for AzureWebJobsStorage, configure the following app settings:
| Setting | Description | Example value |
| --- | --- | --- |
| AzureWebJobsStorage__blobServiceUri | The data plane URI of the blob service of the storage account, using the HTTPS scheme. | https://<storage_account_name>.blob.core.windows.net |
If you're configuring AzureWebJobsStorage using a storage account that uses the default DNS suffix and service name for global Azure, following the https://<accountName>.[blob|queue|file|table].core.windows.net format, you can instead set AzureWebJobsStorage__accountName to the name of your storage account. The endpoints for each storage service are inferred for this account. This doesn't work when the storage account is in a sovereign cloud or has a custom DNS.
| Setting | Description | Example value |
| --- | --- | --- |
| AzureWebJobsStorage__accountName | The account name of a storage account, valid only if the account isn't in a sovereign cloud and doesn't have a custom DNS. This syntax is unique to AzureWebJobsStorage and can't be used for other identity-based connections. | <storage_account_name> |
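As a sketch (the account name is a placeholder, and whether the queue and table URIs are needed depends on which host features and extensions you use), a storage account in global Azure could be configured either with explicit service URIs:

```
AzureWebJobsStorage__blobServiceUri = https://<storage_account_name>.blob.core.windows.net
AzureWebJobsStorage__queueServiceUri = https://<storage_account_name>.queue.core.windows.net
AzureWebJobsStorage__tableServiceUri = https://<storage_account_name>.table.core.windows.net
```

or with the accountName shortcut:

```
AzureWebJobsStorage__accountName = <storage_account_name>
```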
You will need to create a role assignment that provides access to the storage account for "AzureWebJobsStorage" at runtime. Management roles like Owner are not sufficient. The Storage Blob Data Owner role covers the basic needs of Functions host storage: the runtime needs both read and write access to blobs and the ability to create containers. Several extensions use this connection as a default location for blobs, queues, and tables, and these uses may add requirements as noted in the following table. You may need additional permissions if you use "AzureWebJobsStorage" for any other purposes.

| Extension | Additional requirements |
| --- | --- |
| Blob trigger | The blob trigger internally uses Azure Queues and writes blob receipts. It uses AzureWebJobsStorage for these, regardless of the connection configured for the trigger. |
| Durable Functions | Durable Functions uses blobs, queues, and tables to coordinate activity functions and maintain orchestration state. It uses the AzureWebJobsStorage connection for all of these by default, but you can specify a different connection in the Durable Functions extension configuration. |
Reporting Issues
| Item | Description | Link |
| --- | --- | --- |
| Runtime | Script Host, Triggers & Bindings, Language Support | |