This article is an introduction to working with Azure Functions in .NET, using the isolated worker model. This model allows your project to target versions of .NET independently of other runtime components. For information about specific .NET versions supported, see supported versions.
Use the following links to get started right away building .NET isolated worker model functions.
Getting started | Concepts | Samples
To learn just about deploying an isolated worker model project to Azure, see Deploy to Azure Functions.
There are two modes in which you can run your .NET class library functions: either in the same process as the Functions host runtime (in-process) or in an isolated worker process. When your .NET functions run in an isolated worker process, you can take advantage of benefits that include fewer assembly conflicts with the host process, full control over process start-up and configuration, support for standard .NET dependency injection and middleware, and the ability to target .NET versions not natively supported by the Functions runtime, including .NET Framework.
If you have an existing C# function app that runs in-process, you need to migrate your app to take advantage of these benefits. For more information, see Migrate .NET apps from the in-process model to the isolated worker model.
For a comprehensive comparison between the two modes, see Differences between in-process and isolated worker process .NET Azure Functions.
Versions of the Functions runtime support specific versions of .NET. To learn more about Functions versions, see Azure Functions runtime versions overview. Version support also depends on whether your functions run in-process or isolated worker process.
Note
To learn how to change the Functions runtime version used by your function app, see view and update the current runtime version.
The following table shows the highest level of .NET or .NET Framework that can be used with a specific version of Functions.
Functions runtime version | Isolated worker model | In-process model4 |
---|---|---|
Functions 4.x1 | .NET 9.0, .NET 8.0, .NET Framework 4.82 | .NET 8.0 |
Functions 1.x3 | n/a | .NET Framework 4.8 |
1 .NET 6 was previously supported on both models but reached the end of official support on November 12, 2024. .NET 7 was previously supported on the isolated worker model but reached the end of official support on May 14, 2024.
2 The build process also requires the .NET SDK.
3 Support ends for version 1.x of the Azure Functions runtime on September 14, 2026. For more information, see this support announcement. For continued full support, you should migrate your apps to version 4.x.
4 Support ends for the in-process model on November 10, 2026. For more information, see this support announcement. For continued full support, you should migrate your apps to the isolated worker model.
For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor Azure App Service announcements.
A .NET project for Azure Functions using the isolated worker model is basically a .NET console app project that targets a supported .NET runtime. Every .NET isolated project includes a C# project file that defines the project and its dependencies, a Program.cs file that serves as the entry point for the app, the code files that define your functions, a host.json file, and a local.settings.json file.
For complete examples, see the .NET 8 sample project and the .NET Framework 4.8 sample project.
A .NET project for Azure Functions using the isolated worker model uses a unique set of packages, for both core functionality and binding extensions.
The following core packages are required to run your .NET functions in an isolated worker process: Microsoft.Azure.Functions.Worker and Microsoft.Azure.Functions.Worker.Sdk.
The 2.x versions of the core packages change the supported frameworks and bring in support for new .NET APIs from these later versions. When you target .NET 9 or later, your app needs to reference version 2.0.0 or later of both packages.
When updating to the 2.x versions, note the following changes:
- Your project can be started with dotnet run when the Azure Functions Core Tools is installed.
- Host instance startup can use IHostApplicationBuilder. Some examples in this guide include tabs to show alternatives using IHostApplicationBuilder. These examples require the 2.x versions.
- The EnableUserCodeException option is enabled by default. The property is now marked as obsolete.
- The IncludeEmptyEntriesInMessagePayload option is enabled by default. With this option enabled, trigger payloads that represent collections always include empty entries. For example, if a message is sent without a body, an empty entry would still be present in the string[] for the trigger data. The inclusion of empty entries facilitates cross-referencing with metadata arrays that the function may also reference. You can disable this behavior by setting IncludeEmptyEntriesInMessagePayload to false in the WorkerOptions service configuration.
- The ILoggerExtensions class is renamed to FunctionsLoggerExtensions. The rename prevents an ambiguous call error when using LogMetric() on an ILogger instance.
- When writing to an HttpResponseData, the WriteAsJsonAsync() method no longer sets the status code to 200 OK. In 1.x, this overrode other error codes that had been set.

Because .NET isolated worker process functions use different binding types, they require a unique set of binding extension packages.
You can find these extension packages under Microsoft.Azure.Functions.Worker.Extensions.
When you use the isolated worker model, you have access to the start-up of your function app, which is usually in Program.cs
. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. With the isolated worker model, you can much more easily add configuration, inject dependencies, and run your own middleware.
The following code shows an example of a HostBuilder pipeline:
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureServices(s =>
{
s.AddApplicationInsightsTelemetryWorkerService();
s.ConfigureFunctionsApplicationInsights();
s.AddSingleton<IHttpResponderService, DefaultHttpResponderService>();
s.Configure<LoggerFilterOptions>(options =>
{
// The Application Insights SDK adds a default logging filter that instructs ILogger to capture only Warning and more severe logs. Application Insights requires an explicit override.
// Log levels can also be configured using appsettings.json. For more information, see https://learn.microsoft.com/en-us/azure/azure-monitor/app/worker-service#ilogger-logs
LoggerFilterRule? toRemove = options.Rules.FirstOrDefault(rule => rule.ProviderName
== "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider");
if (toRemove is not null)
{
options.Rules.Remove(toRemove);
}
});
})
.Build();
This code requires using Microsoft.Extensions.DependencyInjection;.
Before calling Build() on the IHostBuilder, you should:

- Call ConfigureFunctionsWebApplication() if using ASP.NET Core integration, or ConfigureFunctionsWorkerDefaults() otherwise. See HTTP trigger for details on these options.
- Call AddApplicationInsightsTelemetryWorkerService() and ConfigureFunctionsApplicationInsights() in the ConfigureServices() delegate. See Application Insights for details.

If your project targets .NET Framework 4.8, you also need to add FunctionsDebugger.Enable(); before creating the HostBuilder. It should be the first line of your Main() method. For more information, see Debugging when targeting .NET Framework.
The HostBuilder is used to build and return a fully initialized IHost
instance, which you run asynchronously to start your function app.
await host.RunAsync();
The type of builder you use determines how you can configure the application.
The ConfigureFunctionsWorkerDefaults method adds the settings required for the function app to run in the isolated worker model, such as default serialization behavior, integration with Azure Functions logging, output binding support, and function execution middleware:
.ConfigureFunctionsWorkerDefaults()
Having access to the host builder pipeline means that you can also set any app-specific configurations during initialization. You can call the ConfigureAppConfiguration method on HostBuilder one or more times to add any configuration sources required by your code. To learn more about app configuration, see Configuration in ASP.NET Core.
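As a minimal sketch (the appsettings.json file name and settings are illustrative, and AddJsonFile requires the Microsoft.Extensions.Configuration.Json package to be referenced), an additional configuration source might be registered like this:
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureAppConfiguration(configBuilder =>
    {
        // Add an optional JSON file as an extra configuration source for worker code.
        configBuilder.AddJsonFile("appsettings.json", optional: true, reloadOnChange: false);
    })
    .Build();
host.Run();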
These configurations only apply to the worker code you author, and they don't directly influence the configuration of the Functions host or triggers and bindings. To make changes to the functions host or trigger and binding configuration, you still need to use the host.json file.
Note
Custom configuration sources cannot be used for configuration of triggers and bindings. Trigger and binding configuration must be available to the Functions platform, and not just your application code. You can provide this configuration through the application settings, Key Vault references, or App Configuration references features.
The isolated worker model uses standard .NET mechanisms for injecting services.
When you use a HostBuilder
, call ConfigureServices on the host builder and use the extension methods on IServiceCollection to inject specific services. The following example injects a singleton service dependency:
.ConfigureServices(services =>
{
services.AddSingleton<IHttpResponderService, DefaultHttpResponderService>();
})
This code requires using Microsoft.Extensions.DependencyInjection;. To learn more, see Dependency injection in ASP.NET Core.
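The registered service can then be consumed through constructor injection in a function class. The following sketch assumes the IHttpResponderService interface from the registration above exposes a hypothetical ProcessRequest method; the class and method names are illustrative:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
public class ResponderFunction
{
    private readonly IHttpResponderService _responderService;
    // The service registered in Program.cs is supplied by the dependency injection container.
    public ResponderFunction(IHttpResponderService responderService)
    {
        _responderService = responderService;
    }
    [Function("ResponderFunction")]
    public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        // Delegate response creation to the injected service (hypothetical method).
        return _responderService.ProcessRequest(req);
    }
}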
Dependency injection can be used to interact with other Azure services. You can inject clients from the Azure SDK for .NET using the Microsoft.Extensions.Azure package. After installing the package, register the clients by calling AddAzureClients()
on the service collection in Program.cs
. The following example configures a named client for Azure Blobs:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Azure;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureServices((hostContext, services) =>
{
services.AddAzureClients(clientBuilder =>
{
clientBuilder.AddBlobServiceClient(hostContext.Configuration.GetSection("MyStorageConnection"))
.WithName("copierOutputBlob");
});
})
.Build();
host.Run();
The following example shows how we can use this registration and SDK types to copy blob contents as a stream from one container to another using an injected client:
using Azure.Storage.Blobs;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Azure;
using Microsoft.Extensions.Logging;
namespace MyFunctionApp
{
public class BlobCopier
{
private readonly ILogger<BlobCopier> _logger;
private readonly BlobContainerClient _copyContainerClient;
public BlobCopier(ILogger<BlobCopier> logger, IAzureClientFactory<BlobServiceClient> blobClientFactory)
{
_logger = logger;
_copyContainerClient = blobClientFactory.CreateClient("copierOutputBlob").GetBlobContainerClient("samples-workitems-copy");
_copyContainerClient.CreateIfNotExists();
}
[Function("BlobCopier")]
public async Task Run([BlobTrigger("samples-workitems/{name}", Connection = "MyStorageConnection")] Stream myBlob, string name)
{
await _copyContainerClient.UploadBlobAsync(name, myBlob);
_logger.LogInformation($"Blob {name} copied!");
}
}
}
The ILogger<T>
in this example was also obtained through dependency injection, so it's registered automatically. To learn more about configuration options for logging, see Logging.
Tip
The example used a literal string for the name of the client in both Program.cs
and the function. Consider instead using a shared constant string defined on the function class. For example, you could add public const string CopyStorageClientName = nameof(_copyContainerClient);
and then reference BlobCopier.CopyStorageClientName
in both locations. You could similarly define the configuration section name with the function rather than in Program.cs
.
The isolated worker model also supports middleware registration, again by using a model similar to what exists in ASP.NET. This model gives you the ability to inject logic into the invocation pipeline, and before and after functions execute.
The ConfigureFunctionsWorkerDefaults extension method has an overload that lets you register your own middleware, as you can see in the following example.
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults(workerApplication =>
{
// Register our custom middlewares with the worker
workerApplication.UseMiddleware<ExceptionHandlingMiddleware>();
workerApplication.UseMiddleware<MyCustomMiddleware>();
workerApplication.UseWhen<StampHttpHeaderMiddleware>((context) =>
{
// We want to use this middleware only for http trigger invocations.
return context.FunctionDefinition.InputBindings.Values
.First(a => a.Type.EndsWith("Trigger")).Type == "httpTrigger";
});
})
.Build();
The UseWhen
extension method can be used to register a middleware that gets executed conditionally. You must pass to this method a predicate that returns a boolean value, and the middleware participates in the invocation processing pipeline when the return value of the predicate is true
.
The following extension methods on FunctionContext make it easier to work with middleware in the isolated model.
Method | Description |
---|---|
GetHttpRequestDataAsync | Gets the HttpRequestData instance when called by an HTTP trigger. This method returns an instance of ValueTask<HttpRequestData?>, which is useful when you want to read message data, such as request headers and cookies. |
GetHttpResponseData | Gets the HttpResponseData instance when called by an HTTP trigger. |
GetInvocationResult | Gets an instance of InvocationResult, which represents the result of the current function execution. Use the Value property to get or set the value as needed. |
GetOutputBindings | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type OutputBindingData. You can use the Value property to get or set the value as needed. |
BindInputAsync | Binds an input binding item for the requested BindingMetadata instance. For example, you can use this method when you have a function with a BlobInput input binding that needs to be used by your middleware. |
This is an example of a middleware implementation that reads the HttpRequestData
instance and updates the HttpResponseData
instance during function execution:
internal sealed class StampHttpHeaderMiddleware : IFunctionsWorkerMiddleware
{
public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
{
var requestData = await context.GetHttpRequestDataAsync();
string correlationId;
if (requestData!.Headers.TryGetValues("x-correlationId", out var values))
{
correlationId = values.First();
}
else
{
correlationId = Guid.NewGuid().ToString();
}
await next(context);
context.GetHttpResponseData()?.Headers.Add("x-correlationId", correlationId);
}
}
This middleware checks for the presence of a specific request header (x-correlationId), and when present, uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header. For a more complete example of using custom middleware in your function app, see the custom middleware reference sample.
The isolated worker model uses System.Text.Json
by default. You can customize the behavior of the serializer by configuring services as part of your Program.cs
file. This section covers general-purpose serialization and won't influence HTTP trigger JSON serialization with ASP.NET Core integration, which must be configured separately.
The following example shows this using ConfigureFunctionsWebApplication
, but it will also work for ConfigureFunctionsWorkerDefaults
:
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWebApplication((IFunctionsWorkerApplicationBuilder builder) =>
{
builder.Services.Configure<JsonSerializerOptions>(jsonSerializerOptions =>
{
jsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
jsonSerializerOptions.DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull;
jsonSerializerOptions.ReferenceHandler = ReferenceHandler.Preserve;
// override the default value
jsonSerializerOptions.PropertyNameCaseInsensitive = false;
});
})
.Build();
host.Run();
You might want to instead use JSON.NET (Newtonsoft.Json
) for serialization. To do this, you would install the Microsoft.Azure.Core.NewtonsoftJson
package. Then, in your service registration, you would reassign the Serializer
property on the WorkerOptions
configuration. The following example shows this using ConfigureFunctionsWebApplication
, but it will also work for ConfigureFunctionsWorkerDefaults
:
using Azure.Core.Serialization;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
var host = new HostBuilder()
.ConfigureFunctionsWebApplication((IFunctionsWorkerApplicationBuilder builder) =>
{
builder.Services.Configure<WorkerOptions>(workerOptions =>
{
var settings = NewtonsoftJsonObjectSerializer.CreateJsonSerializerSettings();
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
settings.NullValueHandling = NullValueHandling.Ignore;
workerOptions.Serializer = new NewtonsoftJsonObjectSerializer(settings);
});
})
.Build();
host.Run();
A function method is a public method of a public class with a Function
attribute applied to the method and a trigger attribute applied to an input parameter, as shown in the following example:
[Function(nameof(QueueFunction))]
[QueueOutput("output-queue")]
public string[] Run([QueueTrigger("input-queue")] Album myQueueItem, FunctionContext context)
The trigger attribute specifies the trigger type and binds input data to a method parameter. The previous example function is triggered by a queue message, and the queue message is passed to the method in the myQueueItem
parameter.
The Function
attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, _
, and -
, up to 127 characters in length. Project templates often create a method named Run
, but the method name can be any valid C# method name. The method must be a public member of a public class. It should generally be an instance method so that services can be passed in via dependency injection.
Besides the trigger and any input binding parameters, a function method signature can also include a FunctionContext parameter and a CancellationToken parameter, as described in the following sections.
.NET isolated passes a FunctionContext object to your function methods. This object lets you get an ILogger
instance to write to the logs by calling the GetLogger method and supplying a categoryName
string. You can use this context to obtain an ILogger
without having to use dependency injection. To learn more, see Logging.
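As a minimal sketch (the function and queue names are illustrative, and the Storage queues binding extension is assumed), a logger can be obtained from the context like this:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
public class ContextLoggerExample
{
    [Function("ContextLoggerExample")]
    public void Run([QueueTrigger("example-queue")] string message, FunctionContext context)
    {
        // Obtain an ILogger from the context instead of using constructor injection.
        var logger = context.GetLogger("ContextLoggerExample");
        logger.LogInformation("Processing message: {message}", message);
    }
}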
A function can accept a CancellationToken parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
Cancellation tokens are supported in .NET functions when running in an isolated worker process. The following example raises an exception when a cancellation request is received:
[Function(nameof(ThrowOnCancellation))]
public async Task ThrowOnCancellation(
[EventHubTrigger("sample-workitem-1", Connection = "EventHubConnection")] string[] messages,
FunctionContext context,
CancellationToken cancellationToken)
{
_logger.LogInformation("C# EventHub {functionName} trigger function processing a request.", nameof(ThrowOnCancellation));
foreach (var message in messages)
{
cancellationToken.ThrowIfCancellationRequested();
await Task.Delay(6000); // task delay to simulate message processing
_logger.LogInformation("Message '{msg}' was processed.", message);
}
}
The following example performs clean-up actions when a cancellation request is received:
[Function(nameof(HandleCancellationCleanup))]
public async Task HandleCancellationCleanup(
[EventHubTrigger("sample-workitem-2", Connection = "EventHubConnection")] string[] messages,
FunctionContext context,
CancellationToken cancellationToken)
{
_logger.LogInformation("C# EventHub {functionName} trigger function processing a request.", nameof(HandleCancellationCleanup));
foreach (var message in messages)
{
if (cancellationToken.IsCancellationRequested)
{
_logger.LogInformation("A cancellation token was received, taking precautionary actions.");
// Take precautions like noting how far along you are with processing the batch
_logger.LogInformation("Precautionary activities complete.");
break;
}
await Task.Delay(6000); // task delay to simulate message processing
_logger.LogInformation("Message '{msg}' was processed.", message);
}
}
Bindings are defined by using attributes on methods, parameters, and return types. Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). For some binding extensions, you can also bind to service-specific types defined in service SDKs.
For HTTP triggers, see the HTTP trigger section.
For a complete set of reference samples using triggers and bindings with isolated worker process functions, see the binding extensions reference sample.
A function can have zero or more input bindings that can pass data to a function. Like triggers, input bindings are defined by applying a binding attribute to an input parameter. When the function executes, the runtime tries to get data specified in the binding. The data being requested is often dependent on information provided by the trigger using binding parameters.
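As a minimal sketch (the queue and container names are illustrative, and the Storage queues and blobs binding extensions are assumed), an input binding can use a binding expression that refers to the trigger data:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
public class ReadBlobFromQueueMessage
{
    [Function("ReadBlobFromQueueMessage")]
    public void Run(
        [QueueTrigger("blob-names")] string blobName,
        [BlobInput("samples-workitems/{queueTrigger}")] string blobContent,
        FunctionContext context)
    {
        // The blob path uses the {queueTrigger} binding expression, so the blob named in
        // the queue message is read and passed to the function as a string.
        var logger = context.GetLogger("ReadBlobFromQueueMessage");
        logger.LogInformation("Blob {name} contains {length} characters.", blobName, blobContent.Length);
    }
}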
To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named output-queue
by using an output binding:
[Function(nameof(QueueFunction))]
[QueueOutput("output-queue")]
public string[] Run([QueueTrigger("input-queue")] Album myQueueItem, FunctionContext context)
{
// Use a string array to return more than one message.
string[] messages = {
$"Album name = {myQueueItem.Name}",
$"Album songs = {myQueueItem.Songs}"};
_logger.LogInformation("{msg1},{msg2}", messages[0], messages[1]);
// Queue Output messages
return messages;
}
The data written to an output binding is always the return value of the function. If you need to write to more than one output binding, you must create a custom return type. This return type must have the output binding attribute applied to one or more properties of the class. The following example is an HTTP-triggered function using ASP.NET Core integration which writes to both the HTTP response and a queue output binding:
public class MultipleOutputBindings
{
private readonly ILogger<MultipleOutputBindings> _logger;
public MultipleOutputBindings(ILogger<MultipleOutputBindings> logger)
{
_logger = logger;
}
[Function("MultipleOutputBindings")]
public MyOutputType Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
{
_logger.LogInformation("C# HTTP trigger function processed a request.");
var myObject = new MyOutputType
{
Result = new OkObjectResult("C# HTTP trigger function processed a request."),
MessageText = "some output"
};
return myObject;
}
public class MyOutputType
{
[HttpResult]
public IActionResult Result { get; set; }
[QueueOutput("myQueue")]
public string MessageText { get; set; }
}
}
When using custom return types for multiple output bindings with ASP.NET Core integration, you must add the [HttpResult]
attribute to the property that provides the result. The HttpResult
attribute is available when using SDK 1.17.3-preview2 or later along with version 3.2.0 or later of the HTTP extension and version 1.3.0 or later of the ASP.NET Core extension.
For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These provide more capability beyond what a serialized string or plain-old CLR object (POCO) can offer. To use the newer types, your project needs to be updated to use newer versions of core dependencies.
Dependency | Version requirement |
---|---|
Microsoft.Azure.Functions.Worker | 1.18.0 or later |
Microsoft.Azure.Functions.Worker.Sdk | 1.13.0 or later |
When testing SDK types locally on your machine, you also need to use Azure Functions Core Tools, version 4.0.5000 or later. You can check your current version using the func version
command.
Each trigger and binding extension also has its own minimum version requirement, which is described in the extension reference articles. The following service-specific bindings provide SDK types:
Service | Trigger | Input binding | Output binding |
---|---|---|---|
Azure Blobs | Generally Available | Generally Available | SDK types not recommended.1 |
Azure Queues | Generally Available | Input binding doesn't exist | SDK types not recommended.1 |
Azure Service Bus | Generally Available | Input binding doesn't exist | SDK types not recommended.1 |
Azure Event Hubs | Generally Available | Input binding doesn't exist | SDK types not recommended.1 |
Azure Cosmos DB | SDK types not used2 | Generally Available | SDK types not recommended.1 |
Azure Tables | Trigger doesn't exist | Generally Available | SDK types not recommended.1 |
Azure Event Grid | Generally Available | Input binding doesn't exist | SDK types not recommended.1 |
1 For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. See Register Azure clients for a dependency injection example.
2 The Cosmos DB trigger uses the Azure Cosmos DB change feed and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario.
Note
When using binding expressions that rely on trigger data, SDK types for the trigger itself cannot be used.
HTTP triggers allow a function to be invoked by an HTTP request. There are two different approaches that can be used: ASP.NET Core integration, which lets you work with familiar ASP.NET Core request and response types, and a built-in model based on HttpRequestData and HttpResponseData.
This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including HttpRequest, HttpResponse, and IActionResult. This model isn't available to apps targeting .NET Framework, which should instead use the built-in model.
Note
Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available. ASP.NET Core integration requires you to use updated packages.
To enable ASP.NET Core integration for HTTP:

1. Add a reference in your project to the Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore package, version 1.0.0 or later.
2. Update your project to use versions of the core packages (Microsoft.Azure.Functions.Worker and Microsoft.Azure.Functions.Worker.Sdk) that support ASP.NET Core integration.
3. In your Program.cs file, update the host builder configuration to call ConfigureFunctionsWebApplication(). This replaces ConfigureFunctionsWorkerDefaults() if you would otherwise use that method. The following example shows a minimal setup without other customizations:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWebApplication()
.Build();
host.Run();
Update any existing HTTP-triggered functions to use the ASP.NET Core types. This example shows the standard HttpRequest
and an IActionResult
used for a simple "hello, world" function:
[Function("HttpFunction")]
public IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
{
return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
}
ASP.NET Core has its own serialization layer, and it is not affected by customizing general serialization configuration. To customize the serialization behavior used for your HTTP triggers, you need to include an .AddMvc()
call as part of service registration. The returned IMvcBuilder
can be used to modify ASP.NET Core's JSON serialization settings. The following example shows how to configure JSON.NET (Newtonsoft.Json
) for serialization using this approach:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWebApplication()
.ConfigureServices(services =>
{
services.AddApplicationInsightsTelemetryWorkerService();
services.ConfigureFunctionsApplicationInsights();
services.AddMvc().AddNewtonsoftJson();
})
.Build();
host.Run();
In the built-in model, the system translates the incoming HTTP request message into an HttpRequestData object that is passed to the function. This object provides data from the request, including Headers
, Cookies
, Identities
, URL
, and optionally a message Body
. This object is a representation of the HTTP request but isn't directly connected to the underlying HTTP listener or the received message.
Likewise, the function returns an HttpResponseData object, which provides data used to create the HTTP response, including message StatusCode
, Headers
, and optionally a message Body
.
The following example demonstrates the use of HttpRequestData
and HttpResponseData
:
[Function(nameof(HttpFunction))]
public static HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestData req,
FunctionContext executionContext)
{
var logger = executionContext.GetLogger(nameof(HttpFunction));
logger.LogInformation("message logged");
var response = req.CreateResponse(HttpStatusCode.OK);
response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
response.WriteString("Welcome to .NET isolated worker !!");
return response;
}
You can write to logs by using an ILogger<T>
or ILogger
instance. The logger can be obtained through dependency injection of an ILogger<T>
or of an ILoggerFactory:
public class MyFunction {
private readonly ILogger<MyFunction> _logger;
public MyFunction(ILogger<MyFunction> logger) {
_logger = logger;
}
[Function(nameof(MyFunction))]
public void Run([BlobTrigger("samples-workitems/{name}", Connection = "")] string myBlob, string name)
{
_logger.LogInformation($"C# Blob trigger function Processed blob\n Name: {name} \n Data: {myBlob}");
}
}
The logger can also be obtained from a FunctionContext object passed to your function. Call the GetLogger<T> or GetLogger method, passing a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the monitoring article.
Use the methods of ILogger<T>
and ILogger
to write various log levels, such as LogWarning
or LogError
. To learn more about log levels, see the monitoring article. You can customize the log levels for components added to your code by registering filters:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureServices(services =>
{
// Registers IHttpClientFactory.
// By default this sends a lot of Information-level logs.
services.AddHttpClient();
})
.ConfigureLogging(logging =>
{
// Disable IHttpClientFactory Informational logs.
// Note -- you can also remove the handler that does the logging: https://github.com/aspnet/HttpClientFactory/issues/196#issuecomment-432755765
logging.AddFilter("System.Net.Http.HttpClient", LogLevel.Warning);
})
.Build();
As part of configuring your app in Program.cs
, you can also define the behavior for how errors are surfaced to your logs. The default behavior depends on the type of builder you're using.
When you use a HostBuilder
, by default, exceptions thrown by your code can end up wrapped in an RpcException
. To remove this extra layer, set the EnableUserCodeException
property to "true" as part of configuring the builder:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults(builder => {}, options =>
{
options.EnableUserCodeException = true;
})
.Build();
host.Run();
You can configure your isolated process application to emit logs directly to Application Insights. This behavior replaces the default behavior of relaying logs through the host. Unless you are using .NET Aspire, configuring direct Application Insights integration is recommended because it gives you control over how those logs are emitted.
Application Insights integration is not enabled by default in all setup experiences. Some templates will create Functions projects with the necessary packages and startup code commented out. If you want to use Application Insights integration, you can uncomment these lines in Program.cs
and the project's .csproj
file. The instructions in the rest of this section also describe how to enable the integration.
If your project is part of a .NET Aspire orchestration, it uses OpenTelemetry for monitoring instead. You should not enable direct Application Insights integration within .NET Aspire projects. Instead, configure the Azure Monitor OpenTelemetry exporter as part of the service defaults project. If your Functions project uses Application Insights integration in a .NET Aspire context, the application will error on startup.
To write logs directly to Application Insights from your code, add references to the Microsoft.ApplicationInsights.WorkerService and Microsoft.Azure.Functions.Worker.ApplicationInsights packages in your project. You can run the following commands to add these references:
dotnet add package Microsoft.ApplicationInsights.WorkerService
dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights
With the packages installed, you must call AddApplicationInsightsTelemetryWorkerService()
and ConfigureFunctionsApplicationInsights()
during service configuration in your Program.cs
file, as in this example:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureServices(services => {
services.AddApplicationInsightsTelemetryWorkerService();
services.ConfigureFunctionsApplicationInsights();
})
.Build();
host.Run();
The call to ConfigureFunctionsApplicationInsights()
adds an ITelemetryModule
, which listens to a Functions-defined ActivitySource
. This creates the dependency telemetry required to support distributed tracing. To learn more about AddApplicationInsightsTelemetryWorkerService()
and how to use it, see Application Insights for Worker Service applications.
Important
The Functions host and the isolated process worker have separate configuration for log levels, etc. Any Application Insights configuration in host.json will not affect the logging from the worker, and similarly, configuration made in your worker code will not impact logging from the host. You need to apply changes in both places if your scenario requires customization at both layers.
The rest of your application continues to work with ILogger
and ILogger<T>
. However, by default, the Application Insights SDK adds a logging filter that instructs the logger to capture only warnings and more severe logs. If you want to disable this behavior, remove the filter rule as part of service configuration:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureServices(services => {
services.AddApplicationInsightsTelemetryWorkerService();
services.ConfigureFunctionsApplicationInsights();
})
.ConfigureLogging(logging =>
{
logging.Services.Configure<LoggerFilterOptions>(options =>
{
LoggerFilterRule defaultRule = options.Rules.FirstOrDefault(rule => rule.ProviderName
== "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider");
if (defaultRule is not null)
{
options.Rules.Remove(defaultRule);
}
});
})
.Build();
host.Run();
This section outlines options you can enable that improve performance around cold start.
In general, your app should use the latest versions of its core dependencies. At a minimum, you should update your project as follows:

- Upgrade Microsoft.Azure.Functions.Worker and Microsoft.Azure.Functions.Worker.Sdk to their latest versions.
- Add a framework reference to Microsoft.AspNetCore.App, unless your app targets .NET Framework.

The following snippet shows this configuration in the context of a project file:
<ItemGroup>
<FrameworkReference Include="Microsoft.AspNetCore.App" />
<PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" />
<PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" />
</ItemGroup>
Placeholders are a platform capability that improves cold start for apps targeting .NET 6 or later. To use this optimization, you must explicitly enable placeholders using these steps:
Update your project configuration to use the latest dependency versions, as detailed in the previous section.
Set the WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED
application setting to 1
, which you can do by using this az functionapp config appsettings set command:
az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
In this example, replace <groupName>
with the name of the resource group, and replace <appName>
with the name of your function app.
Make sure that the netFrameworkVersion
property of the function app matches your project's target framework, which must be .NET 6 or later. You can do this by using this az functionapp config set command:
az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
In this example, also replace <framework>
with the appropriate version string, such as v8.0
, according to your target .NET version.
Make sure that your function app is configured to use a 64-bit process, which you can do by using this az functionapp config set command:
az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
Important
When setting the WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED
to 1
, all other function app configurations must be set correctly. Otherwise, your function app might fail to start.
The function executor is a component of the platform that causes invocations to run. An optimized version of this component is enabled by default starting with version 1.16.2 of the SDK. No other configuration is required.
You can compile your function app as ReadyToRun binaries. ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of cold starts when running in a Consumption plan. ReadyToRun is available in .NET 6 and later versions and requires version 4.0 or later of the Azure Functions runtime.
ReadyToRun requires you to build the project against the runtime architecture of the hosting app. If these are not aligned, your app will encounter an error at startup. Select your runtime identifier from this table:
Operating System | App is 32-bit1 | Runtime identifier |
---|---|---|
Windows | True | win-x86 |
Windows | False | win-x64 |
Linux | True | N/A (not supported) |
Linux | False | linux-x64 |
1 Only 64-bit apps are eligible for some other performance optimizations.
To check if your Windows app is 32-bit or 64-bit, you can run the following CLI command, substituting <group_name>
with the name of your resource group and <app_name>
with the name of your application. An output of "true" indicates that the app is 32-bit, and "false" indicates 64-bit.
az functionapp config show -g <group_name> -n <app_name> --query "use32BitWorkerProcess"
You can change your application to 64-bit with the following command, using the same substitutions:
az functionapp config set -g <group_name> -n <app_name> --use-32bit-worker-process false
To compile your project as ReadyToRun, update your project file by adding the <PublishReadyToRun>
and <RuntimeIdentifier>
elements. The following example shows a configuration for publishing to a Windows 64-bit function app.
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
If you don't want to set the <RuntimeIdentifier>
as part of the project file, you can also configure this as part of the publishing gesture itself. For example, with a Windows 64-bit function app, the .NET CLI command would be:
dotnet publish --runtime win-x64
In Visual Studio, the Target Runtime option in the publish profile should be set to the correct runtime identifier. When set to the default value of Portable, ReadyToRun isn't used.
When you deploy your function code project to Azure, it must run in either a function app or in a Linux container. The function app and other required Azure resources must exist before you deploy your code.
You can also deploy your function app in a Linux container. For more information, see Working with containers and Azure Functions.
You can create your function app and other required resources in Azure by using tools such as Visual Studio, Visual Studio Code, the Azure CLI, Azure PowerShell, ARM or Bicep templates, or the Azure portal.
After creating your function app and other required resources in Azure, you can deploy the code project to Azure by using tools such as Visual Studio, Visual Studio Code, the Azure Functions Core Tools, or a continuous deployment pipeline.
For more information, see Deployment technologies in Azure Functions.
Many of the deployment methods make use of a zip archive. If you're creating the zip archive yourself, it must follow the structure outlined in this section. If it doesn't, your app may experience errors at startup.
The deployment payload should match the output of a dotnet publish
command, though without the enclosing parent folder. The zip archive should be made from the following files:
.azurefunctions/
extensions.json
functions.metadata
host.json
worker.config.json
These files are generated by the build process, and they aren't meant to be edited directly.
When preparing a zip archive for deployment, you should only compress the contents of the output directory, not the enclosing directory itself. When the archive is extracted into the current working directory, the files listed above need to be immediately visible.
There are a few requirements for running .NET functions in the isolated worker model in Azure, depending on the operating system. In all cases, the FUNCTIONS_WORKER_RUNTIME application setting must be set to dotnet-isolated.
When you create your function app in Azure using the methods in the previous section, these required settings are added for you. When you create these resources by using ARM templates or Bicep files for automation, you must make sure to set them in the template.
.NET Aspire is an opinionated stack that simplifies development of distributed applications in the cloud. You can enlist .NET 8 and .NET 9 isolated worker model projects in Aspire 9.0 orchestrations using preview support. This section outlines the core requirements for enlistment.
This integration requires specific setup:

- In the app host project's Program.cs, you must include the Functions project by calling AddAzureFunctionsProject<TProject>() on your IDistributedApplicationBuilder. This method is used instead of the AddProject<TProject>() that you use for other project types. If you just use AddProject<TProject>(), the Functions project won't start properly.
- The Functions project must use the 2.x versions of the core packages. If it uses ASP.NET Core integration, update Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore to the 2.x version as well.
- The Functions project's Program.cs should use the IHostApplicationBuilder version of host instance startup.
- When using IHostApplicationBuilder in Program.cs, you should also include a call to builder.AddServiceDefaults().
- The Functions project shouldn't include configuration in local.settings.json, aside from the FUNCTIONS_WORKER_RUNTIME setting, which should remain "dotnet-isolated". Other configuration should be set through the app host project.

The following example shows a minimal Program.cs
for an App Host project:
var builder = DistributedApplication.CreateBuilder(args);
builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject");
builder.Build().Run();
The following example shows a minimal Program.cs
for a Functions project used in Aspire:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var builder = FunctionsApplication.CreateBuilder(args);
builder.AddServiceDefaults();
builder.ConfigureFunctionsWebApplication();
builder.Build().Run();
This does not include the default Application Insights configuration that you see in many of the other Program.cs
examples in this article. Instead, Aspire's OpenTelemetry integration is configured through the builder.AddServiceDefaults()
call.
Consider the following points when evaluating .NET Aspire with Azure Functions:

- The Functions project's Program.cs should use the IHostApplicationBuilder version of host instance startup. This allows you to call builder.AddServiceDefaults() to add .NET Aspire service defaults to your Functions project.
- Aspire's OpenTelemetry-based monitoring takes the place of direct Application Insights integration through Microsoft.ApplicationInsights.WorkerService. You should remove any direct Application Insights integrations from your Functions project when using Aspire.
- The Functions project shouldn't include configuration in local.settings.json, other than the FUNCTIONS_WORKER_RUNTIME setting. If the same environment variable is set by local.settings.json and Aspire, the system uses the Aspire version.
- Don't configure a storage emulator connection in local.settings.json. Many Functions starter templates include the emulator as a default for AzureWebJobsStorage. However, emulator configuration can prompt some IDEs to start a version of the emulator that can conflict with the version that Aspire uses.

Azure Functions requires a host storage connection (AzureWebJobsStorage) for several of its core behaviors. When you call AddAzureFunctionsProject<TProject>() in your app host project, a default AzureWebJobsStorage connection is created and provided to the Functions project. This default connection uses the Storage emulator for local development runs and automatically provisions a storage account when deployed. For additional control, you can replace this connection by calling .WithHostStorage() on the Functions project resource.
The following example shows a minimal Program.cs
for an app host project that replaces the host storage:
var builder = DistributedApplication.CreateBuilder(args);
var myHostStorage = builder.AddAzureStorage("myHostStorage");
builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject")
.WithHostStorage(myHostStorage);
builder.Build().Run();
Note
When Aspire provisions the host storage in publish mode, it defaults to creating role assignments for the Storage Account Contributor, Storage Blob Data Contributor, Storage Queue Data Contributor, and Storage Table Data Contributor roles.
Your triggers and bindings reference connections by name. Some Aspire integrations are enabled to provide these through a call to WithReference()
on the project resource:
Aspire integration | Notes |
---|---|
Azure Blobs | When Aspire provisions the resource, it defaults to creating role assignments for the Storage Blob Data Contributor, Storage Queue Data Contributor, and Storage Table Data Contributor roles. |
Azure Queues | When Aspire provisions the resource, it defaults to creating role assignments for the Storage Blob Data Contributor, Storage Queue Data Contributor, and Storage Table Data Contributor roles. |
Azure Event Hubs | When Aspire provisions the resource, it defaults to creating a role assignment using the Azure Event Hubs Data Owner role. |
Azure Service Bus | When Aspire provisions the resource, it defaults to creating a role assignment using the Azure Service Bus Data Owner role. |
The following example shows a minimal Program.cs
for an app host project that configures a queue trigger. In this example, the corresponding queue trigger has its Connection
property set to "MyQueueTriggerConnection".
var builder = DistributedApplication.CreateBuilder(args);
var myAppStorage = builder.AddAzureStorage("myAppStorage").RunAsEmulator();
var queues = myAppStorage.AddQueues("queues");
builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject")
.WithReference(queues, "MyQueueTriggerConnection");
builder.Build().Run();
For other integrations, calls to WithReference
set the configuration in a different way, making it available to Aspire client integrations, but not to triggers and bindings. For these integrations, you should call WithEnvironment()
to pass the connection information for the trigger or binding to resolve. The following example shows how to set the environment variable "MyBindingConnection" for a resource that exposes a connection string expression:
builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject")
.WithEnvironment("MyBindingConnection", otherIntegration.Resource.ConnectionStringExpression);
You can configure both WithReference()
and WithEnvironment()
if you want a connection to be used both by Aspire client integrations and the triggers and bindings system.
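As a minimal sketch reusing the illustrative names from the previous examples, the two calls can be chained on the same project resource:
builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject")
    // WithReference makes the connection available to triggers and bindings and to Aspire client integrations;
    // WithEnvironment passes additional connection information as an environment variable.
    .WithReference(queues, "MyQueueTriggerConnection")
    .WithEnvironment("MyBindingConnection", otherIntegration.Resource.ConnectionStringExpression);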
For some resources, the structure of a connection might be different between when you run it locally and when you publish it to Azure. In the previous example, otherIntegration
could be a resource that runs as an emulator, so ConnectionStringExpression
would return an emulator connection string. However, when the resource is published, Aspire might set up an identity-based connection, and ConnectionStringExpression
would return the service's URI. In this case, to set up identity based connections for Azure Functions, you might need to provide a different environment variable name. The following example uses builder.ExecutionContext.IsPublishMode
to conditionally add the necessary suffix:
builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject")
.WithEnvironment("MyBindingConnection" + (builder.ExecutionContext.IsPublishMode ? "__serviceUri" : ""), otherIntegration.Resource.ConnectionStringExpression);
Depending on your scenario, you may also need to adjust the permissions that will be assigned for an identity-based connection. You can use the ConfigureConstruct<T>()
method to customize how Aspire configures infrastructure when it publishes your project.
Consult each binding's reference pages for details on the connection formats it supports and the permissions those formats require.
When running locally using Visual Studio or Visual Studio Code, you're able to debug your .NET isolated worker project as normal. However, there are two debugging scenarios that don't work as expected.
Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see Remote Debugging.
If your isolated project targets .NET Framework 4.8, you need to take manual steps to enable debugging. These steps aren't required if using another target framework.
Your app should start with a call to FunctionsDebugger.Enable();
as its first operation. This occurs in the Main()
method before initializing a HostBuilder. Your Program.cs
file should look similar to this:
using System;
using System.Diagnostics;
using Microsoft.Extensions.Hosting;
using Microsoft.Azure.Functions.Worker;
using NetFxWorker;
namespace MyDotnetFrameworkProject
{
internal class Program
{
static void Main(string[] args)
{
FunctionsDebugger.Enable();
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.Build();
host.Run();
}
}
}
Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do this automatically for isolated worker process .NET Framework apps yet, and the "Start Debugging" operation should be avoided.
In your project directory (or its build output directory), run:
func host start --dotnet-isolated-debug
This starts your worker, and the process stops with the following message:
Azure Functions .NET Worker (PID: <process id>) initialized in debug mode. Waiting for debugger to attach...
Where <process id>
is the ID for your worker process. You can now use Visual Studio to manually attach to the process. For instructions on this operation, see How to attach to a running process.
After the debugger is attached, the process execution resumes, and you'll be able to debug.
Before a generally available release, a .NET version might be released in a Preview or Go-live state. See the .NET Official Support Policy for details on these states.
While it might be possible to target a given release from a local Functions project, function apps hosted in Azure might not have that release available. Azure Functions can only be used with Preview or Go-live releases noted in this section.
Azure Functions doesn't currently work with any "Preview" or "Go-live" .NET releases. See Supported versions for a list of generally available releases that you can use.
To use Azure Functions with a preview version of .NET, you need to update your project by installing the corresponding .NET SDK in your development environment and changing the TargetFramework setting in your .csproj file.

When you deploy to your function app in Azure, you also need to ensure that the framework is made available to the app. During the preview period, some tools and experiences may not surface the new preview version as an option. If you don't see the preview version included in the Azure portal, for example, you can use the REST API, Bicep templates, or the Azure CLI to configure the version manually.
For apps hosted on Windows, use the following Azure CLI command. Replace <groupName>
with the name of the resource group, and replace <appName>
with the name of your function app. Replace <framework>
with the appropriate version string, such as v8.0
.
az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
Keep these considerations in mind when using Functions with preview versions of .NET:

- When you author your functions in Visual Studio, you must use Visual Studio Preview, which supports building Azure Functions projects with .NET preview SDKs.
- Make sure you have the latest Functions tools and templates installed for your development environment.
During a preview period, your development environment might have a more recent version of the .NET preview than the hosted service. This can cause your function app to fail when deployed. To address this, you can specify the version of the SDK to use in global.json
.
1. Run the dotnet --list-sdks command and note the preview version you're currently using during local development.
2. Run the dotnet new globaljson --sdk-version <SDK_VERSION> --force command, where <SDK_VERSION> is the version you're using locally. For example, dotnet new globaljson --sdk-version dotnet-sdk-8.0.100-preview.7.23376.3 --force causes the system to use the .NET 8 Preview 7 SDK when building your project.

Note
Because of the just-in-time loading of preview frameworks, function apps running on Windows can experience increased cold start times when compared against earlier GA versions.