Use bulk operation messages

To get the best performance when you run operations on multiple rows of a Microsoft Dataverse table, use one of the following bulk operation messages:

  • CreateMultiple: Creates multiple records of the same type in a single request.
  • UpdateMultiple: Updates multiple records of the same type in a single request.
  • UpsertMultiple: Creates or updates multiple records of the same type in a single request.
  • DeleteMultiple (preview): For elastic tables only. Deletes multiple records of the same type in a single request.


For guidance about options when performing bulk operations, such as when to use these APIs compared to batch APIs like ExecuteMultiple, see Optimize performance for bulk operations.


The following code samples show how to use bulk operation messages. You can download the samples from Sample: SDK for .NET Use bulk operations.


CreateMultiple

Creates multiple records of the same type in a single request.

Uses the CreateMultipleRequest class.

/// <summary>
/// Demonstrates the use of the CreateMultiple Message
/// </summary>
/// <param name="service">The authenticated IOrganizationService instance.</param>
/// <param name="recordsToCreate">A list of records of the same table to create.</param>
/// <returns>The Guid values of the records created.</returns>
static Guid[] CreateMultipleExample(IOrganizationService service,
    List<Entity> recordsToCreate)
{
    // Create an EntityCollection populated with the list of entities.
    EntityCollection entities = new(recordsToCreate)
    {
        // All the records must be for the same table.
        EntityName = recordsToCreate[0].LogicalName
    };

    // Instantiate CreateMultipleRequest
    CreateMultipleRequest createMultipleRequest = new()
    {
        Targets = entities,
    };

    // Send the request
    CreateMultipleResponse createMultipleResponse =
        (CreateMultipleResponse)service.Execute(createMultipleRequest);

    // Return the Ids of the records created.
    return createMultipleResponse.Ids;
}


UpdateMultiple

Updates multiple records of the same type in a single request.

Just like when you update individual records, the data you send with UpdateMultiple must contain only the values you're changing. Learn how to update records with SDK for .NET and update records with the Web API.
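For example, an entity you pass to UpdateMultiple should carry only its primary key and the columns being changed. This sketch isn't part of the downloadable sample; it reuses the samples_bankaccount table and samples_description column names that appear later in this article, and assumes the record's Guid is already known:

// Send only the primary key plus the columns being changed.
// Retrieved entities often contain every column; don't send those back as-is.
Entity toUpdate = new("samples_bankaccount")
{
    Id = bankAccountId // Guid of the existing record (assumed to be known)
};
toUpdate["samples_description"] = "New description"; // the only changed value
recordsToUpdate.Add(toUpdate);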

Uses the UpdateMultipleRequest class.

/// <summary>
/// Demonstrates the use of the UpdateMultiple message.
/// </summary>
/// <param name="service">The authenticated IOrganizationService instance.</param>
/// <param name="recordsToUpdate">A list of records of the same table to update.</param>
static void UpdateMultipleExample(IOrganizationService service, List<Entity> recordsToUpdate)
{
    // Create an EntityCollection populated with the list of entities.
    EntityCollection entities = new(recordsToUpdate)
    {
        // All the records must be for the same table.
        EntityName = recordsToUpdate[0].LogicalName
    };

    // Use UpdateMultipleRequest
    UpdateMultipleRequest updateMultipleRequest = new()
    {
        Targets = entities,
    };

    // Send the request
    service.Execute(updateMultipleRequest);
}

Duplicate records in UpdateMultiple Targets parameter

Multiple records with the same primary key or alternate key values in the payload are not supported with UpdateMultiple. When more than one record in the Targets parameter is uniquely identified by a primary or alternate key, the operation is performed on the first record only. Any subsequent records with the same key value(s) in the payload are ignored. This behavior is different from UpsertMultiple.
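If your integration might produce duplicate keys, one option is to de-duplicate the records client-side before calling UpdateMultiple, so that "first record wins" is an explicit choice rather than an implicit one. A minimal sketch over plain tuples (stand-ins for Entity instances; DistinctBy requires .NET 6 or later):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TargetDeduplicator
{
    // Keep only the first occurrence of each record Id, mirroring the
    // "first record wins" behavior described for UpdateMultiple.
    public static List<(Guid Id, string Description)> Deduplicate(
        IEnumerable<(Guid Id, string Description)> targets)
    {
        return targets.DistinctBy(t => t.Id).ToList();
    }
}
```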


UpsertMultiple

Use Upsert to integrate data with external sources when you don't know whether a record already exists in Dataverse. Upsert operations frequently depend on alternate keys to identify records. Use UpsertMultiple to perform Upsert operations in bulk.

Uses the UpsertMultipleRequest class.

This static UpsertMultipleExample method depends on a samples_bankaccount table that has a string column named samples_accountname configured as an alternate key. It also has a string column named samples_description. This code uses the Entity constructor that sets the keyName and keyValue to specify the alternate key value.

/// <summary>
/// Demonstrates using UpsertMultiple with alternate key values
/// </summary>
/// <param name="service">The authenticated IOrganizationService instance</param>
static void UpsertMultipleExample(IOrganizationService service)
{
    var tableLogicalName = "samples_bankaccount";
    // samples_accountname string column is configured as an alternate key
    // for the samples_bankaccount table
    var altKeyColumnLogicalName = "samples_accountname";

    // Create one record to update with upsert
    service.Create(new Entity(tableLogicalName)
    {
        Attributes =
        {
            {altKeyColumnLogicalName, "Record For Update"},
            {"samples_description","A record to update using Upsert" }
        }
    });

    // Using the Entity constructor to specify alternate key
    Entity toUpdate = new(
            entityName: tableLogicalName,
            keyName: altKeyColumnLogicalName,
            // Same alternate key value as created record.
            keyValue: "Record For Update");
    toUpdate["samples_description"] = "Updated using Upsert";

    Entity toCreate = new(
        entityName: tableLogicalName,
        keyName: altKeyColumnLogicalName,
        keyValue: "Record For Create");
    toCreate["samples_description"] = "A record to create using Upsert";

    // Add the records to a collection
    EntityCollection records = new()
    {
        EntityName = tableLogicalName,
        Entities = { toUpdate, toCreate }
    };

    // Send the request
    UpsertMultipleRequest request = new()
    {
        Targets = records
    };

    var response = (UpsertMultipleResponse)service.Execute(request);

    // Process the responses:
    foreach (UpsertResponse item in response.Results)
    {
        Console.WriteLine($"Record {(item.RecordCreated ? "Created" : "Updated")}");
    }
}


Output:

Record Updated
Record Created

Whether a record is created or updated in this example depends on whether a record with a matching samples_accountname alternate key value already exists. The RecordCreated property of each UpsertResponse in the results indicates whether that record was created or updated.

SDK examples

Within Sample: SDK for .NET Use bulk operations, look for the UpsertMultiple project


UpsertMultiple is available for tables that support CreateMultiple and UpdateMultiple, including all elastic tables. The queries described in Availability with standard tables don't return results for the UpsertMultiple message, but you can use them to detect whether a table supports both CreateMultiple and UpdateMultiple. A table that supports both CreateMultiple and UpdateMultiple supports UpsertMultiple.

Duplicate records in UpsertMultiple Targets parameter

Multiple records with the same primary key or alternate key values in the payload are not supported with UpsertMultiple. When more than one record in the Targets parameter is uniquely identified by a primary or alternate key, UpsertMultiple will return an error. This behavior is different from UpdateMultiple.

Standard and elastic table usage

Both standard and elastic tables benefit from a significant performance boost when you use bulk operation messages, but you need to use them differently. The following table summarizes the differences.

  • Number of records. Standard: Operations are more efficient with a larger number of records. There's no limit on the number of records, but there are message size and time limits. We recommend sending 100 - 1,000 records at a time. Elastic: We recommend sending 100 records at a time.
  • On Error behavior. Standard: All operations roll back on error. Elastic: Partial success is possible.
  • Availability. Standard: Not all standard tables support these messages. Elastic: These messages are available for all elastic tables.
  • DeleteMultiple. Standard: Not available. Use the SDK BulkDeleteRequest class or the Web API BulkDelete action instead. Learn how to delete data in bulk. Elastic: Available using the SDK OrganizationRequest class. The Web API DeleteMultiple action is currently private, but you can use it now; it will become public.

Standard and elastic table usage is different because standard tables use Azure SQL and support transactions. Elastic tables use Azure Cosmos DB, which doesn't support transactions but is able to handle large amounts of data at high levels of throughput with low latency. The following sections provide more details. Learn more about bulk operations on elastic tables.

Number of records

The number of records that you should include with each request depends on whether you're using standard or elastic tables.


Both standard and elastic tables have greater throughput when you send bulk operation messages in parallel.

Number of records with standard tables

Bulk operations on standard tables are optimized to perform on multiple rows in a single transaction. Operations become more efficient, and performance increases overall, as the number of operations per request increases. This optimization also allows for any plug-in steps that are registered for the bulk operation to be more efficient. Each time a plug-in is invoked for a single operation, some milliseconds are required to invoke the plug-in class containing the logic. When a plug-in is registered for a bulk operation message, the class is invoked once and can process all the operations more efficiently. Learn how to write plug-ins for CreateMultiple and UpdateMultiple.

This performance benefit gives you an incentive to send the largest number of records you can in each request. However, as the number of records increases, the size of the request increases, too, and the request takes longer to process. Eventually, you'll encounter message size and time limits. If you hit these limits, the entire operation fails. There's no set limit on the number of records you can send. You may need to experiment to find the best number. Generally, we expect that 100 - 1,000 records per request is a reasonable place to start if the size of the record data is small and there are no plug-ins. The kinds of errors you may encounter can usually be addressed by sending fewer records with each request. We recommend that you include the ability to configure the number of entities sent so that you can adapt by sending fewer.
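A configurable batch size can be as simple as a helper that splits the full record set into batches before each one is sent as a bulk request. This is a sketch, not part of the official sample; the helper name is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BatchHelper
{
    // Split records into configurable batches so the batch size can be
    // reduced when message size or time limits are encountered.
    public static List<List<T>> ToBatches<T>(IReadOnlyList<T> records, int batchSize)
    {
        if (batchSize < 1) throw new ArgumentOutOfRangeException(nameof(batchSize));

        List<List<T>> batches = new();
        for (int i = 0; i < records.Count; i += batchSize)
        {
            batches.Add(records.Skip(i).Take(batchSize).ToList());
        }
        return batches;
    }
}
```

Each resulting batch would become the Targets parameter of one bulk request, so lowering the batch size directly lowers the request payload size.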

Number of records with elastic tables

Because there's no transaction with elastic tables, there's no performance benefit in trying to send a large number of records per request. We recommend that you send 100 operations per request and send requests in parallel to achieve maximum throughput.
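The batch-then-parallelize pattern can be sketched independently of the Dataverse SDK. In real code, the send function would execute a bulk request such as CreateMultipleRequest on its own connection (for example, one obtained from ServiceClient.Clone), because a single service instance shouldn't be shared across concurrent requests. All names here are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ParallelSender
{
    // Send each batch through a caller-supplied function with bounded
    // parallelism; returns the total number of records submitted.
    public static async Task<int> SendAllAsync<T>(
        IEnumerable<List<T>> batches,
        Func<List<T>, Task> sendBatchAsync,
        int maxParallelism = 4)
    {
        int sent = 0;
        using SemaphoreSlim gate = new(maxParallelism);

        var tasks = batches.Select(async batch =>
        {
            await gate.WaitAsync();
            try
            {
                await sendBatchAsync(batch);
                Interlocked.Add(ref sent, batch.Count);
            }
            finally
            {
                gate.Release();
            }
        }).ToList();

        await Task.WhenAll(tasks);
        return sent;
    }
}
```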

On Error behavior

The behavior when errors occur depends on whether you're using standard tables or elastic tables.

On Error behavior with standard tables

Any error that occurs in a bulk operation with a standard table causes the entire operation to roll back. You should only use bulk operations on standard tables when you have a high degree of confidence that all the operations will succeed. You may want to use the SDK ExecuteMultipleRequest class or Web API $batch to allow the set of operations to fall back if the bulk operation fails. If the success rate for your initial attempts is low, this strategy results in worse performance. Only use this fallback strategy when you expect most operations to succeed.
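The fallback strategy above can be sketched abstractly. The names are illustrative: in real code, sendBulk would issue a bulk request such as CreateMultipleRequest, and trySendSingle would perform an individual operation and report whether it succeeded:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FallbackSender
{
    // Try one bulk send first. If it fails (standard tables roll back the
    // whole batch), retry each record individually so valid records succeed.
    // Returns the number of records that were ultimately written.
    public static int SendWithFallback<T>(
        List<T> batch,
        Action<List<T>> sendBulk,
        Func<T, bool> trySendSingle)
    {
        try
        {
            sendBulk(batch);
            return batch.Count;
        }
        catch (Exception)
        {
            return batch.Count(record => trySendSingle(record));
        }
    }
}
```

As the paragraph above notes, this only pays off when most bulk attempts succeed; a low success rate means most batches are sent twice.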

On Error behavior with elastic tables

With elastic tables, a bulk operation may partially succeed. You can use the error details to identify which records failed.

When you use the SDK to perform a bulk operation on an elastic table, a FaultException of type OrganizationServiceFault is thrown if a failure occurs. Use the following code to get the status of each record.

if (ex.Detail.ErrorDetails.TryGetValue("Plugin.BulkApiErrorDetails", out object errorDetails))
{
    List<BulkApiErrorDetail> bulkApiErrorDetails =
        JsonConvert.DeserializeObject<List<BulkApiErrorDetail>>(errorDetails.ToString());
}

public class BulkApiErrorDetail
{
    public int RequestIndex { get; set; }
    public string Id { get; set; }
    public int StatusCode { get; set; }
}

Availability

Bulk operation message availability depends on whether you're using standard tables or elastic tables. All elastic tables support the CreateMultiple, UpdateMultiple, UpsertMultiple, and DeleteMultiple messages.


Availability with standard tables

You can use the CreateMultiple and UpdateMultiple bulk operation messages with custom standard tables and many common standard tables, but not all. You should test whether individual standard tables support these messages. The following examples show you how to do that.

Use this static method to detect whether a given table supports CreateMultiple or UpdateMultiple.

/// <summary>
/// Detect whether a specified message is supported for the specified table.
/// </summary>
/// <param name="service">The IOrganizationService instance.</param>
/// <param name="entityLogicalName">The logical name of the table.</param>
/// <param name="messageName">The name of the message.</param>
/// <returns>True if the message is supported for the table; otherwise false.</returns>
public static bool IsMessageAvailable(
    IOrganizationService service,
    string entityLogicalName,
    string messageName)
{
    QueryExpression query = new("sdkmessagefilter")
    {
        ColumnSet = new ColumnSet("sdkmessagefilterid"),
        Criteria = new FilterExpression(LogicalOperator.And)
        {
            Conditions = {
                new ConditionExpression(
                    attributeName: "primaryobjecttypecode",
                    conditionOperator: ConditionOperator.Equal,
                    value: entityLogicalName)
            }
        },
        LinkEntities = {
            new LinkEntity(
                linkFromEntityName: "sdkmessagefilter",
                linkToEntityName: "sdkmessage",
                linkFromAttributeName: "sdkmessageid",
                linkToAttributeName: "sdkmessageid",
                joinOperator: JoinOperator.Inner)
            {
                LinkCriteria = new FilterExpression(LogicalOperator.And)
                {
                    Conditions = {
                        new ConditionExpression(
                            attributeName: "name",
                            conditionOperator: ConditionOperator.Equal,
                            value: messageName)
                    }
                }
            }
        }
    };

    EntityCollection entityCollection = service.RetrieveMultiple(query);

    return entityCollection.Entities.Count.Equals(1);
}
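You might call the helper above before choosing a write strategy. This usage sketch uses the account table only as an example:

bool canBulkCreate = IsMessageAvailable(service, "account", "CreateMultiple");
bool canBulkUpdate = IsMessageAvailable(service, "account", "UpdateMultiple");

// UpsertMultiple is supported when both CreateMultiple and UpdateMultiple are.
bool canBulkUpsert = canBulkCreate && canBulkUpdate;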

Message pipelines merged

Each of the bulk operation messages has a corresponding message that operates on individual rows: Create, Update and Delete. These messages have existed for a long time, and many organizations have customized logic that depends on the events that occur when these messages are used.

A key requirement of the bulk operation messages is that organizations must not be required to maintain custom logic in two places. To have the same custom logic and maintain it in one place, we've merged the message processing pipelines for these messages. What does this mean?

  • When a bulk operation message is used, the corresponding individual event (Create or Update) occurs for each Entity instance in the Targets parameter. Any plug-ins or other event handlers for the corresponding individual events continue to work as they always have. You don't need to write new plug-ins to manage events raised by these messages.

  • When a single operation message is used, the respective bulk operation event occurs with an EntityCollection containing a single Entity instance passed in the Targets parameter. You can move any logic that responds to single operation events to the more efficient bulk operation events and the logic is applied for both individual and multiple operations.

Before the introduction of bulk operation messages, all custom logic was on the single operation messages. That logic must continue to be applied when client applications use the bulk operation messages. For tables used with high-volume bulk operations, we recommend that you begin to move any synchronous logic from single message events to bulk operation events. If you're introducing new logic, use the bulk operation events rather than the single operation events.
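To illustrate the direction described above, here's a minimal plug-in sketch, not the official sample: it assumes registration on the CreateMultiple message, and the table column name is hypothetical. Because the logic iterates the Targets collection, the same class handles both single and bulk operations once the pipelines are merged:

// Sketch of a plug-in step registered for the CreateMultiple message.
public class SetDefaultDescriptionPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // CreateMultiple passes an EntityCollection in the Targets parameter;
        // a single Create arrives here as a collection with one Entity.
        if (context.InputParameters["Targets"] is EntityCollection targets)
        {
            foreach (Entity entity in targets.Entities)
            {
                // Hypothetical column: default a description when none is provided.
                if (!entity.Contains("samples_description"))
                {
                    entity["samples_description"] = "(no description)";
                }
            }
        }
    }
}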


With this design, duplicate logic can potentially be applied on both the single and multiple versions of events. Dataverse doesn't try to prevent this because we can't know your intent.

It's your responsibility to make sure that the same logic applied for the single version of events is migrated to the multiple version of the event and removed from the single version of the event. Otherwise, the logic will be applied twice.

Learn how to write plug-ins for CreateMultiple and UpdateMultiple.


Limitations

Keep the following limitations in mind when you use bulk operation messages.

Message size and time limits

With standard tables, there's a performance incentive to send more records with each request. However, the number of records you can send is limited by the size of the payload and the amount of time required to process the operation.

Message size limits

When you have a plug-in registered for any message, you may encounter the "Message size exceeded when sending context to Sandbox" error when the total size of the request exceeds 116.85 MB. With bulk operation messages, you're more likely to hit this limit as you send larger payloads.

This error doesn't occur if no plug-in is registered for the event. To avoid the error, disable any plug-ins or send your request with the BypassCustomPluginExecution optional parameter.

Time limits

If you're using the Dataverse ServiceClient, you may encounter this error:

The request channel timed out attempting to send after 00:04:00. 
Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. 
The time allotted to this operation may have been a portion of a longer timeout.

The default timeout set using ServiceClient is 4 minutes, which is long for any synchronous operation. You can change this value using the static ServiceClient.MaxConnectionTimeout property. The default timeout using CrmServiceClient is 2 minutes.


Before you increase the time limits, you should consider reducing the number of records that are passed in the Targets parameter.

Not supported for use in plug-ins

At this time, we don't support using bulk operation messages in plug-in code. More information: Don't use batch request types in plug-ins and workflow activities.

However, you should write plug-ins for the CreateMultiple and UpdateMultiple messages as described in Write plug-ins for CreateMultiple and UpdateMultiple.

Troubleshooting common errors

If you encounter errors while using bulk operation messages, refer to the troubleshooting articles for the specific error you receive.

Frequently asked questions (FAQ)

If this article doesn't answer your questions about using bulk operation messages, use the button at the bottom of the page to Submit and view feedback for this page. You need a GitHub account to submit feedback.

Will Retrieve and RetrieveMultiple logic be merged?

We don't plan to change Retrieve and RetrieveMultiple message behavior. These have been separate messages for many years and developers have always needed to maintain logic for them separately. Attempting to merge the message pipeline for them would be highly problematic. Also, we discourage applying custom logic for these messages due to the impact they can have on performance.

How are API limits applied?

There are two types of API limits. The bulk operation messages don't provide any way to bypass either type.

Service protection limits

As described in Service protection API limits, limits have three facets. Two of these limits are evaluated on a five-minute sliding window and apply when using these messages.

  • Number of requests: Each bulk operation message counts as a single request that accrues to the limit of 6,000 requests per user, per server, during the five-minute window. Because these requests group many individual operations, the likelihood of hitting this limit is reduced.
  • Execution time: Because each bulk operation request typically takes longer to process than a single-record request, you're more likely to hit the execution time limit of 20 minutes per user, per server, during the five-minute window if you send requests in parallel.

Power Platform Request (API Entitlement) limits

These limits are based on data changes: Create, Update, and Delete operations. Each item included in the Targets parameter of a bulk operation request accrues to this limit. Learn more about request limits and allocations.
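A quick worked example of how the two limit types diverge; the 6,000-requests figure comes from the service protection section above, and the batch numbers here are hypothetical:

```csharp
public static class RequestAccounting
{
    // Service protection counts each bulk message as one request;
    // entitlement limits count every record in the Targets parameter.
    public static (int ServiceProtection, int Entitlement) Count(
        int bulkRequests, int recordsPerRequest)
        => (bulkRequests, bulkRequests * recordsPerRequest);
}
```

For example, 50 UpdateMultiple requests of 500 records each in a five-minute window count as 50 requests against the 6,000-request service protection limit, but as 25,000 requests against entitlement limits.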

See also

Use UpsertMultiple (preview)
Use DeleteMultiple (preview)
Elastic tables
Write plug-ins for CreateMultiple and UpdateMultiple
Sample: SDK for .NET Use bulk operations
Sample: Web API Use bulk operations
Sample: CreateMultiple and UpdateMultiple plug-ins
Use messages with the SDK for .NET
Optimize performance for bulk operations