Bindings for Durable Functions (Azure Functions)

The Durable Functions extension introduces three trigger bindings that control the execution of orchestrator, entity, and activity functions. It also introduces an output binding that acts as a client for the Durable Functions runtime.

Make sure to choose your Durable Functions development language at the top of the article.

Important

This article supports both Python v1 and Python v2 programming models for Durable Functions.

Python v2 programming model

Durable Functions is supported in the new Python v2 programming model. To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package azure-functions-durable, version 1.2.2 or a later version. You must also check host.json to make sure your app is referencing Extension Bundles version 4.x to use the v2 model with Durable Functions.
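
For reference, the extensionBundle entry in host.json for Extension Bundles version 4.x typically looks like the following minimal sketch (the rest of your host.json configuration is unchanged):

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}

The SDK requirement can be pinned in requirements.txt, for example as azure-functions-durable>=1.2.2.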

You can provide feedback and suggestions in the Durable Functions SDK for Python repo.

Orchestration trigger

The orchestration trigger enables you to author durable orchestrator functions. This trigger executes when a new orchestration instance is scheduled and when an existing orchestration instance receives an event. Examples of events that can trigger orchestrator functions include durable timer expirations, activity function responses, and events raised by external clients.

When you author functions in .NET, the orchestration trigger is configured using the OrchestrationTriggerAttribute .NET attribute.

For Java, the @DurableOrchestrationTrigger annotation is used to configure the orchestration trigger.

When you write orchestrator functions, the orchestration trigger is defined by the following JSON object in the bindings array of the function.json file:

{
    "name": "<Name of input parameter in function signature>",
    "orchestration": "<Optional - name of the orchestration>",
    "type": "orchestrationTrigger",
    "direction": "in"
}

  • orchestration is the name of the orchestration that clients must use when they want to start new instances of this orchestrator function. This property is optional. If not specified, the name of the function is used.

Azure Functions supports two programming models for Python. The way that you define an orchestration trigger depends on your chosen programming model.

The Python v2 programming model lets you define an orchestration trigger using the orchestration_trigger decorator directly in your Python function code.

In the v2 model, the Durable Functions triggers and bindings are accessed from an instance of DFApp, which is a subclass of FunctionApp that additionally exports Durable Functions-specific decorators.
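
As a minimal sketch (the authorization level shown is just an illustration):

import azure.functions as func
import azure.durable_functions as df

# DFApp extends FunctionApp and adds the Durable Functions decorators used in this
# article, such as orchestration_trigger, activity_trigger, entity_trigger, and
# durable_client_input.
myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)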

Internally, this trigger binding polls the configured durable store for new orchestration events, such as orchestration start events, durable timer expiration events, activity function response events, and external events raised by other functions.

Trigger behavior

Here are some notes about the orchestration trigger:

  • Single-threading - A single dispatcher thread is used for all orchestrator function execution on a single host instance. For this reason, it's important to ensure that orchestrator function code is efficient and doesn't perform any I/O. It is also important to ensure that this thread does not do any async work except when awaiting on Durable Functions-specific task types.
  • Poison-message handling - There's no poison message support in orchestration triggers.
  • Message visibility - Orchestration trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy.
  • Return values - Return values are serialized to JSON and persisted to the orchestration history table in Azure Table storage. These return values can be queried by the orchestration client binding, described later.

Warning

Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules. If you'd like to use other bindings, add them to an activity function called from your orchestrator function. For more information about coding constraints for orchestrator functions, see the Orchestrator function code constraints documentation.

Warning

Orchestrator functions should never be declared async.
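
To illustrate these constraints in the Python v2 model, here's a hedged sketch of a deterministic orchestrator: the current time comes from the orchestration context rather than datetime.now(), and all I/O is delegated to an activity function (the FetchData activity name is hypothetical):

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.orchestration_trigger(context_name="context")
def deterministic_orchestrator(context: df.DurableOrchestrationContext):
    # Replay-safe timestamp taken from the context instead of datetime.now().
    started_at = context.current_utc_datetime
    # I/O belongs in activity functions; the orchestrator only schedules and awaits them.
    data = yield context.call_activity("FetchData", str(started_at))
    return data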

Trigger usage

The orchestration trigger binding supports both inputs and outputs. Here are some things to know about input and output handling:

  • inputs - Orchestration triggers can be invoked with inputs, which are accessed through the context input object. All inputs must be JSON-serializable.
  • outputs - Orchestration triggers support output values as well as inputs. The return value of the function is used to assign the output value and must be JSON-serializable.

Trigger sample

The following example code shows what the simplest "Hello World" orchestrator function might look like. Note that this example orchestrator doesn't actually schedule any tasks.

The specific attribute used to define the trigger depends on whether you are running your C# functions in-process or in an isolated worker process.

[FunctionName("HelloWorld")]
public static string Run([OrchestrationTrigger] IDurableOrchestrationContext context)
{
    string name = context.GetInput<string>();
    return $"Hello {name}!";
}

Note

The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use DurableOrchestrationContext instead of IDurableOrchestrationContext. For more information about the differences between versions, see the Durable Functions Versions article.

const df = require("durable-functions");

module.exports = df.orchestrator(function*(context) {
    const name = context.df.getInput();
    return `Hello ${name}!`;
});

Note

The durable-functions library takes care of calling the synchronous context.done method when the generator function exits.

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.orchestration_trigger(context_name="context")
def my_orchestrator(context):
    result = yield context.call_activity("Hello", "Tokyo")
    return result

param($Context)

$InputData = $Context.Input
$InputData

@FunctionName("HelloWorldOrchestration")
public String helloWorldOrchestration(
        @DurableOrchestrationTrigger(name = "ctx") TaskOrchestrationContext ctx) {
    return String.format("Hello %s!", ctx.getInput(String.class));
}

Most orchestrator functions call activity functions, so here is a "Hello World" example that demonstrates how to call an activity function:

[FunctionName("HelloWorld")]
public static async Task<string> Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    string name = context.GetInput<string>();
    string result = await context.CallActivityAsync<string>("SayHello", name);
    return result;
}

Note

The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use DurableOrchestrationContext instead of IDurableOrchestrationContext. For more information about the differences between versions, see the Durable Functions versions article.

const df = require("durable-functions");

module.exports = df.orchestrator(function*(context) {
    const name = context.df.getInput();
    const result = yield context.df.callActivity("SayHello", name);
    return result;
});

@FunctionName("HelloWorld")
public String helloWorldOrchestration(
        @DurableOrchestrationTrigger(name = "ctx") TaskOrchestrationContext ctx) {
    String input = ctx.getInput(String.class);
    String result = ctx.callActivity("SayHello", input, String.class).await();
    return result;
}
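
In the Python v2 model, a comparable orchestrator might look like the following sketch (it assumes a SayHello activity like the one defined in the next section):

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.orchestration_trigger(context_name="context")
def hello_orchestrator(context: df.DurableOrchestrationContext):
    # Input supplied by the client that started the instance; must be JSON-serializable.
    name = context.get_input()
    result = yield context.call_activity("SayHello", name)
    return result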

Activity trigger

The activity trigger enables you to author functions that are called by orchestrator functions, known as activity functions.

The activity trigger is configured using the ActivityTriggerAttribute .NET attribute.

The activity trigger is configured using the @DurableActivityTrigger annotation.

The activity trigger is defined by the following JSON object in the bindings array of function.json:

{
    "name": "<Name of input parameter in function signature>",
    "activity": "<Optional - name of the activity>",
    "type": "activityTrigger",
    "direction": "in"
}

  • activity is the name of the activity. This value is the name that orchestrator functions use to invoke this activity function. This property is optional. If not specified, the name of the function is used.

The way that you define an activity trigger depends on your chosen programming model.

Using the activity_trigger decorator directly in your Python function code.

Internally, this trigger binding polls the configured durable store for new activity execution events.

Trigger behavior

Here are some notes about the activity trigger:

  • Threading - Unlike the orchestration trigger, activity triggers don't have any restrictions around threading or I/O. They can be treated like regular functions.
  • Poison-message handling - There's no poison message support in activity triggers.
  • Message visibility - Activity trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy.
  • Return values - Return values are serialized to JSON and persisted to the configured durable store.

Trigger usage

The activity trigger binding supports both inputs and outputs, just like the orchestration trigger. Here are some things to know about input and output handling:

  • inputs - Activity triggers can be invoked with inputs from an orchestrator function. All inputs must be JSON-serializable.
  • outputs - Activity functions support output values as well as inputs. The return value of the function is used to assign the output value and must be JSON-serializable.
  • metadata - .NET activity functions can bind to a string instanceId parameter to get the instance ID of the calling orchestration.

Trigger sample

The following example code shows what a simple SayHello activity function might look like.

[FunctionName("SayHello")]
public static string SayHello([ActivityTrigger] IDurableActivityContext helloContext)
{
    string name = helloContext.GetInput<string>();
    return $"Hello {name}!";
}

The default parameter type for the .NET ActivityTriggerAttribute binding is IDurableActivityContext (or DurableActivityContext for Durable Functions v1). However, .NET activity triggers also support binding directly to JSON-serializable types (including primitive types), so the same function could be simplified as follows:

[FunctionName("SayHello")]
public static string SayHello([ActivityTrigger] string name)
{
    return $"Hello {name}!";
}

module.exports = async function(context) {
    return `Hello ${context.bindings.name}!`;
};

JavaScript bindings can also be passed in as additional parameters, so the same function could be simplified as follows:

module.exports = async function(context, name) {
    return `Hello ${name}!`;
};

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.activity_trigger(input_name="myInput")
def my_activity(myInput: str):
    return "Hello " + myInput

param($name)

"Hello $name!"

@FunctionName("SayHello")
public String sayHello(@DurableActivityTrigger(name = "name") String name) {
    return String.format("Hello %s!", name);
}

Using input and output bindings

You can use regular input and output bindings in addition to the activity trigger binding.

For example, you can take the input to your activity binding, and send a message to an Event Hub using the Event Hubs output binding:

{
  "bindings": [
    {
      "name": "message",
      "type": "activityTrigger",
      "direction": "in"
    },
    {
      "type": "eventHub",
      "name": "outputEventHubMessage",
      "connection": "EventhubConnectionSetting",
      "eventHubName": "eh_messages",
      "direction": "out"
    }
  ]
}

module.exports = async function (context) {
    context.bindings.outputEventHubMessage = context.bindings.message;
};
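
In the Python v2 programming model, the same idea might look like the following sketch. It assumes the general-purpose event_hub_output decorator from the Python v2 model, plus the eh_messages hub and EventhubConnectionSetting app setting shown in the function.json above; treat it as an illustration rather than a drop-in sample:

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.activity_trigger(input_name="message")
@myApp.event_hub_output(arg_name="outputEventHubMessage",
                        event_hub_name="eh_messages",
                        connection="EventhubConnectionSetting")
def forward_message(message: str, outputEventHubMessage: func.Out[str]):
    # Forward the activity input to the Event Hub through the output binding.
    outputEventHubMessage.set(message)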

Orchestration client

The orchestration client binding enables you to write functions that interact with orchestrator functions. These functions are often referred to as client functions. For example, you can act on orchestration instances in the following ways:

  • Start them.
  • Query their status.
  • Terminate them.
  • Send events to them while they're running.
  • Purge instance history.
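
As a hedged sketch of what those operations look like with the Python v2 client object (the HTTP route and the Approval event name are hypothetical):

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.route(route="manage/{instanceId}")
@myApp.durable_client_input(client_name="client")
async def manage_instance(req: func.HttpRequest, client) -> func.HttpResponse:
    instance_id = req.route_params["instanceId"]

    # Query the instance's status.
    status = await client.get_status(instance_id)
    # Send an external event to the running instance.
    await client.raise_event(instance_id, "Approval", True)
    # Other client operations, shown commented out:
    # await client.terminate(instance_id, "Canceled by the user")
    # await client.purge_instance_history(instance_id)

    return func.HttpResponse(f"Runtime status: {status.runtime_status}")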

You can bind to the orchestration client by using the DurableClientAttribute attribute (OrchestrationClientAttribute in Durable Functions v1.x).

You can bind to the orchestration client by using the @DurableClientInput annotation.

The durable client binding is defined by the following JSON object in the bindings array of function.json:

{
    "name": "<Name of input parameter in function signature>",
    "taskHub": "<Optional - name of the task hub>",
    "connectionName": "<Optional - name of the connection string app setting>",
    "type": "orchestrationClient",
    "direction": "in"
}

  • taskHub - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from host.json is used. This value must match the value used by the target orchestrator functions.
  • connectionName - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target orchestrator functions. If not specified, the default storage account connection string for the function app is used.

Note

In most cases, we recommend that you omit these properties and rely on the default behavior.

The way that you define a durable client binding depends on your chosen programming model.

Using the durable_client_input decorator directly in your Python function code.

Client usage

You typically bind to IDurableClient (DurableOrchestrationClient in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions.

You typically bind to the DurableClientContext class.

You must use the language-specific SDK to get access to a client object.

Here's an example queue-triggered function that starts a "HelloWorld" orchestration.

[FunctionName("QueueStart")]
public static Task Run(
    [QueueTrigger("durable-function-trigger")] string input,
    [DurableClient] IDurableOrchestrationClient starter)
{
    // Orchestration input comes from the queue message content.
    return starter.StartNewAsync<string>("HelloWorld", input);
}

Note

The previous C# code is for Durable Functions 2.x. For Durable Functions 1.x, you must use the OrchestrationClient attribute instead of the DurableClient attribute, and you must use the DurableOrchestrationClient parameter type instead of IDurableOrchestrationClient. For more information about the differences between versions, see the Durable Functions Versions article.

function.json

{
  "bindings": [
    {
      "name": "input",
      "type": "queueTrigger",
      "queueName": "durable-function-trigger",
      "direction": "in"
    },
    {
      "name": "starter",
      "type": "durableClient",
      "direction": "in"
    }
  ]
}

index.js

const df = require("durable-functions");

module.exports = async function (context) {
    const client = df.getClient(context);
    return await client.startNew("HelloWorld", undefined, context.bindings.input);
};

run.ps1

param([string] $input, $TriggerMetadata)

$InstanceId = Start-DurableOrchestration -FunctionName 'HelloWorld' -Input $input

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.route(route="orchestrators/{functionName}")
@myApp.durable_client_input(client_name="client")
async def durable_trigger(req: func.HttpRequest, client):
    function_name = req.route_params.get('functionName')
    instance_id = await client.start_new(function_name)
    response = client.create_check_status_response(req, instance_id)
    return response

function.json

{
  "bindings": [
    {
      "name": "input",
      "type": "queueTrigger",
      "queueName": "durable-function-trigger",
      "direction": "in"
    },
    {
      "name": "starter",
      "type": "durableClient",
      "direction": "in"
    }
  ]
}

run.ps1

param([string]$InputData, $TriggerMetadata)

$InstanceId = Start-DurableOrchestration -FunctionName 'HelloWorld' -Input $InputData

@FunctionName("QueueStart")
public void queueStart(
        @QueueTrigger(name = "input", queueName = "durable-function-trigger", connection = "Storage") String input,
        @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
    // Orchestration input comes from the queue message content.
    durableContext.getClient().scheduleNewOrchestrationInstance("HelloWorld", input);
}

More details on starting instances can be found in Instance management.

Entity trigger

Entity triggers allow you to author entity functions. This trigger supports processing events for a specific entity instance.

Note

Entity triggers are available starting in Durable Functions 2.x.

Internally, this trigger binding polls the configured durable store for new entity operations that need to be executed.

The entity trigger is configured using the EntityTriggerAttribute .NET attribute.

The entity trigger is defined by the following JSON object in the bindings array of function.json:

{
    "name": "<Name of input parameter in function signature>",
    "entityName": "<Optional - name of the entity>",
    "type": "entityTrigger",
    "direction": "in"
}

By default, the name of an entity is the name of the function.

Note

Entity triggers aren't yet supported for Java.

The way that you define an entity trigger depends on your chosen programming model.

Using the entity_trigger decorator directly in your Python function code.

Trigger behavior

Here are some notes about the entity trigger:

  • Single-threaded: A single dispatcher thread is used to process operations for a particular entity. If multiple messages are sent to a single entity concurrently, the operations will be processed one-at-a-time.
  • Poison-message handling - There's no poison message support in entity triggers.
  • Message visibility - Entity trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy.
  • Return values - Entity functions don't support return values. There are specific APIs that can be used to save state or pass values back to orchestrations.

Any state changes made to an entity during its execution will be automatically persisted after execution has completed.
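
As a hedged illustration of an entity function in the Python v2 model (the counter entity and its add/get operations are hypothetical):

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.entity_trigger(context_name="context")
def counter(context: df.DurableEntityContext):
    # Read the current state, defaulting to 0 for a brand-new entity.
    current = context.get_state(lambda: 0)
    operation = context.operation_name
    if operation == "add":
        current += context.get_input()
    elif operation == "get":
        # Entities don't have return values; results go back through set_result.
        context.set_result(current)
    # State set here is persisted automatically once the operation completes.
    context.set_state(current)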

For more information and examples on defining and interacting with entity triggers, see the Durable Entities documentation.

Entity client

The entity client binding enables you to asynchronously trigger entity functions. These functions are sometimes referred to as client functions.

You can bind to the entity client by using the DurableClientAttribute .NET attribute in .NET class library functions.

Note

The [DurableClientAttribute] can also be used to bind to the orchestration client.

The entity client is defined by the following JSON object in the bindings array of function.json:

{
    "name": "<Name of input parameter in function signature>",
    "taskHub": "<Optional - name of the task hub>",
    "connectionName": "<Optional - name of the connection string app setting>",
    "type": "durableClient",
    "direction": "in"
}

  • taskHub - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from host.json is used. This value must match the value used by the target entity functions.
  • connectionName - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target entity functions. If not specified, the default storage account connection string for the function app is used.

Note

In most cases, we recommend that you omit the optional properties and rely on the default behavior.

The way that you define an entity client depends on your chosen programming model.

Using the durable_client_input decorator directly in your Python function code.

Note

Entity clients aren't yet supported for Java.

For more information and examples on interacting with entities as a client, see the Durable Entities documentation.
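
For example, here's a hedged sketch of signaling an entity from an HTTP-triggered client function in the Python v2 model (it assumes the hypothetical counter entity shown earlier):

import azure.functions as func
import azure.durable_functions as df

myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@myApp.route(route="counters/{entityKey}/add")
@myApp.durable_client_input(client_name="client")
async def add_to_counter(req: func.HttpRequest, client) -> func.HttpResponse:
    # An entity instance is identified by its entity (function) name plus a key.
    entity_id = df.EntityId("counter", req.route_params["entityKey"])
    # signal_entity is one-way: it queues the "add" operation and returns immediately.
    await client.signal_entity(entity_id, "add", 1)
    return func.HttpResponse("Operation queued.", status_code=202)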

host.json settings

Configuration settings for Durable Functions.

Note

All major versions of Durable Functions are supported on all versions of the Azure Functions runtime. However, the schema of the host.json configuration is slightly different depending on the version of the Azure Functions runtime and the Durable Functions extension version you use. The following examples are for use with Azure Functions 2.0 and 3.0. In both examples, if you're using Azure Functions 1.0, the available settings are the same, but the "durableTask" section of the host.json should go in the root of the host.json configuration instead of as a field under "extensions".

{
 "extensions": {
  "durableTask": {
    "hubName": "MyTaskHub",
    "storageProvider": {
      "connectionStringName": "AzureWebJobsStorage",
      "controlQueueBatchSize": 32,
      "controlQueueBufferThreshold": 256,
      "controlQueueVisibilityTimeout": "00:05:00",
      "maxQueuePollingInterval": "00:00:30",
      "partitionCount": 4,
      "trackingStoreConnectionStringName": "TrackingStorage",
      "trackingStoreNamePrefix": "DurableTask",
      "useLegacyPartitionManagement": true,
      "useTablePartitionManagement": false,
      "workItemQueueVisibilityTimeout": "00:05:00",
    },
    "tracing": {
      "traceInputsAndOutputs": false,
      "traceReplayEvents": false,
    },
    "notifications": {
      "eventGrid": {
        "topicEndpoint": "https://topic_name.westus2-1.eventgrid.azure.net/api/events",
        "keySettingName": "EventGridKey",
        "publishRetryCount": 3,
        "publishRetryInterval": "00:00:30",
        "publishEventTypes": [
          "Started",
          "Completed",
          "Failed",
          "Terminated"
        ]
      }
    },
    "maxConcurrentActivityFunctions": 10,
    "maxConcurrentOrchestratorFunctions": 10,
    "extendedSessionsEnabled": false,
    "extendedSessionIdleTimeoutInSeconds": 30,
    "useAppLease": true,
    "useGracefulShutdown": false,
    "maxEntityOperationBatchSize": 50,
    "storeInputsInOrchestrationHistory": false
  }
 }
}

Task hub names must start with a letter and consist of only letters and numbers. If not specified, the default task hub name for a function app is TestHubName. For more information, see Task hubs.

  • hubName - Default: TestHubName (DurableFunctionsHub if using Durable Functions 1.x). Alternate task hub names can be used to isolate multiple Durable Functions applications from each other, even if they're using the same storage backend.
  • controlQueueBatchSize - Default: 32. The number of messages to pull from the control queue at a time.
  • controlQueueBufferThreshold - Default: 32 on the Consumption plan for Python, 128 on the Consumption plan for JavaScript and C#, and 256 on the Dedicated/Premium plan. The number of control queue messages that can be buffered in memory at a time, at which point the dispatcher will wait before dequeuing any additional messages.
  • partitionCount - Default: 4. The partition count for the control queue. May be a positive integer between 1 and 16.
  • controlQueueVisibilityTimeout - Default: 5 minutes. The visibility timeout of dequeued control queue messages.
  • workItemQueueVisibilityTimeout - Default: 5 minutes. The visibility timeout of dequeued work item queue messages.
  • maxConcurrentActivityFunctions - Default: 10 on the Consumption plan; 10 times the number of processors on the current machine on the Dedicated/Premium plan. The maximum number of activity functions that can be processed concurrently on a single host instance.
  • maxConcurrentOrchestratorFunctions - Default: 5 on the Consumption plan; 10 times the number of processors on the current machine on the Dedicated/Premium plan. The maximum number of orchestrator functions that can be processed concurrently on a single host instance.
  • maxQueuePollingInterval - Default: 30 seconds. The maximum control and work-item queue polling interval in the hh:mm:ss format. Higher values can result in higher message processing latencies. Lower values can result in higher storage costs because of increased storage transactions.
  • connectionName (2.7.0 and later), connectionStringName (2.x), or azureStorageConnectionStringName (1.x) - Default: AzureWebJobsStorage. The name of an app setting or setting collection that specifies how to connect to the underlying Azure Storage resources. When a single app setting is provided, it should be an Azure Storage connection string.
  • trackingStoreConnectionName (2.7.0 and later) or trackingStoreConnectionStringName - The name of an app setting or setting collection that specifies how to connect to the History and Instances tables. When a single app setting is provided, it should be an Azure Storage connection string. If not specified, the connectionStringName (Durable 2.x) or azureStorageConnectionStringName (Durable 1.x) connection is used.
  • trackingStoreNamePrefix - The prefix to use for the History and Instances tables when trackingStoreConnectionStringName is specified. If not set, the default prefix value is DurableTask. If trackingStoreConnectionStringName is not specified, then the History and Instances tables use the hubName value as their prefix, and any setting for trackingStoreNamePrefix is ignored.
  • traceInputsAndOutputs - Default: false. A value indicating whether to trace the inputs and outputs of function calls. The default behavior when tracing function execution events is to include the number of bytes in the serialized inputs and outputs for function calls. This behavior provides minimal information about what the inputs and outputs look like without bloating the logs or inadvertently exposing sensitive information. Setting this property to true causes the default function logging to log the entire contents of function inputs and outputs.
  • traceReplayEvents - Default: false. A value indicating whether to write orchestration replay events to Application Insights.
  • eventGridTopicEndpoint - The URL of an Azure Event Grid custom topic endpoint. When this property is set, orchestration life-cycle notification events are published to this endpoint. This property supports App Settings resolution.
  • eventGridKeySettingName - The name of the app setting containing the key used for authenticating with the Azure Event Grid custom topic at EventGridTopicEndpoint.
  • eventGridPublishRetryCount - Default: 0. The number of times to retry if publishing to the Event Grid topic fails.
  • eventGridPublishRetryInterval - Default: 5 minutes. The Event Grid publish retry interval in the hh:mm:ss format.
  • eventGridPublishEventTypes - A list of event types to publish to Event Grid. If not specified, all event types will be published. Allowed values include Started, Completed, Failed, and Terminated.
  • useAppLease - Default: true. When set to true, apps require acquiring an app-level blob lease before processing task hub messages. For more information, see the disaster recovery and geo-distribution documentation. Available starting in v2.3.0.
  • useLegacyPartitionManagement - Default: false. When set to false, uses a partition management algorithm that reduces the possibility of duplicate function execution when scaling out. Available starting in v2.3.0.
  • useTablePartitionManagement - Default: false. When set to true, uses a partition management algorithm designed to reduce costs for Azure Storage V2 accounts. Available starting in v2.10.0. This feature is currently in preview and not yet compatible with the Consumption plan.
  • useGracefulShutdown - Default: false. (Preview) Enable gracefully shutting down to reduce the chance of host shutdowns failing in-process function executions.
  • maxEntityOperationBatchSize (2.6.1 and later) - Default: 50 on the Consumption plan; 5000 on the Dedicated/Premium plan. The maximum number of entity operations that are processed as a batch. If set to 1, batching is disabled, and each operation message is processed by a separate function invocation.
  • storeInputsInOrchestrationHistory - Default: false. When set to true, tells the Durable Task Framework to save activity inputs in the history table. This enables the displaying of activity function inputs when querying orchestration history.

Many of these settings are for optimizing performance. For more information, see Performance and scale.

Next steps