Use IoT Hub message routing to send device-to-cloud messages to different endpoints

Note

Some of the features mentioned in this article, like cloud-to-device messaging, device twins, and device management, are only available in the standard tier of IoT Hub. For more information about the basic and standard/free IoT Hub tiers, see Choose the right IoT Hub tier for your solution.

Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:

  • Sending device telemetry messages, as well as events such as device lifecycle events, device twin change events, digital twin change events, and device connection state events, to the built-in endpoint and to custom endpoints. Learn about routing endpoints. To learn more about the events sent from IoT Plug and Play devices, see Understand IoT Plug and Play digital twins.

  • Filtering data before routing it to various endpoints by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. Learn more about using queries in message routing.

IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. Make sure you configure your services to support the expected throughput. For example, if you're using Event Hubs as a custom endpoint, you must configure the throughput units for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus Queue as an endpoint, you must configure the maximum size to ensure the queue can hold all the data ingressed, until it's egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.

IoT Hub defines a common format for all device-to-cloud messaging for interoperability across protocols. If a message matches multiple routes that point to the same endpoint, IoT Hub delivers the message to that endpoint only once. Therefore, you don't need to configure deduplication on your Service Bus queue or topic. Use this tutorial to learn how to configure message routing.

Routing endpoints

An IoT hub has a default built-in endpoint (messages/events) that is compatible with Event Hubs. You can create custom endpoints to route messages to by linking other services in your subscription to the IoT hub.

Each message is routed to all endpoints whose routing queries it matches. In other words, a message can be routed to multiple endpoints.

If your custom endpoint has firewall configurations, consider using the Microsoft trusted first party exception.

IoT Hub currently supports the following endpoints:

  • Built-in endpoint
  • Storage containers
  • Service Bus Queues and Service Bus Topics
  • Event Hubs
  • Cosmos DB (preview)

Built-in endpoint as a routing endpoint

You can use standard Event Hubs integration and SDKs to receive device-to-cloud messages from the built-in endpoint (messages/events). Once a route is created, data stops flowing to the built-in endpoint unless a route is created to that endpoint. Even if all existing routes are deleted, the fallback route must be enabled to route messages to the built-in endpoint. The fallback route is enabled by default if you create your hub using the portal or the CLI.
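
If you just want to read the routed or default telemetry from the built-in endpoint, any standard Event Hubs client works. The following minimal sketch assumes the Azure.Messaging.EventHubs package and a placeholder Event Hubs-compatible connection string (including the entity path) copied from the hub's Built-in endpoints page; it reads events from all partitions.

// Minimal sketch: read device-to-cloud messages from the built-in endpoint
// using the Azure.Messaging.EventHubs package. The connection string value
// below is a placeholder; copy the real one from your IoT hub.
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs.Consumer;

class BuiltInEndpointReader
{
    const string EventHubsCompatibleConnectionString =
        "<Event Hubs-compatible connection string>";

    public static async Task ReadAsync(CancellationToken cancellationToken)
    {
        await using var consumer = new EventHubConsumerClient(
            EventHubConsumerClient.DefaultConsumerGroupName,
            EventHubsCompatibleConnectionString);

        // Reads from all partitions; consider EventProcessorClient for production workloads.
        await foreach (var partitionEvent in consumer.ReadEventsAsync(cancellationToken))
        {
            Console.WriteLine(partitionEvent.Data.EventBody.ToString());
        }
    }
}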

Azure Storage as a routing endpoint

There are two storage services IoT Hub can route messages to: Azure Blob Storage and Azure Data Lake Storage Gen2 (ADLS Gen2) accounts. Azure Data Lake Storage accounts are hierarchical namespace-enabled storage accounts built on top of blob storage. Both of these use blobs for their storage.

IoT Hub supports writing data to Azure Storage in the Apache Avro format and the JSON format. The default is Avro. When using JSON encoding, you must set the contentType property to application/json and the contentEncoding property to utf-8 in the message system properties. Both of these values are case-insensitive. If the content encoding isn't set, IoT Hub writes the messages in Base64-encoded format.
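
To illustrate the JSON requirement, the following minimal sketch, which assumes the Microsoft.Azure.Devices.Client device SDK and a placeholder device connection string, sets the contentType and contentEncoding system properties on a telemetry message so that a JSON-configured storage endpoint writes readable JSON rather than Base64.

// Minimal sketch: send a JSON-encoded telemetry message so a blob storage
// route configured for JSON encoding writes readable JSON.
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class JsonTelemetrySender
{
    public static async Task SendAsync(string deviceConnectionString)
    {
        using var deviceClient = DeviceClient.CreateFromConnectionString(
            deviceConnectionString, TransportType.Mqtt);

        var payload = "{\"temperature\": 21.5}";
        using var message = new Message(Encoding.UTF8.GetBytes(payload))
        {
            // Both system properties are required for JSON encoding at the endpoint.
            ContentType = "application/json",
            ContentEncoding = "utf-8"
        };

        await deviceClient.SendEventAsync(message);
    }
}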

The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to first delete the endpoint, and then re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.

You can select the encoding format by using the IoT Hub Create or Update REST API (specifically, RoutingStorageContainerProperties), the Azure portal, the Azure CLI, or Azure PowerShell. The following image shows how to select the encoding format in the Azure portal.

Blob storage endpoint encoding.

IoT Hub batches messages and writes data to storage whenever the batch reaches a certain size or a certain amount of time has elapsed. IoT Hub defaults to the following file naming convention:

{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}

You can use any file naming convention; however, you must use all of the listed tokens. IoT Hub writes an empty blob if there's no data to write.

We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions about partition. The partition range could potentially change during a Microsoft-initiated failover or an IoT Hub manual failover. You can use the List Blobs API to enumerate the list of blobs, or the List ADLS Gen2 API for the list of files. See the following sample as guidance.

// Lists the blobs that IoT Hub wrote for a given hub, using the legacy
// Azure Storage SDK (CloudStorageAccount / CloudBlobClient).
public void ListBlobsInContainer(string containerName, string iothub)
{
    var storageAccount = CloudStorageAccount.Parse(this.blobConnectionString);
    var cloudBlobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
    if (cloudBlobContainer.Exists())
    {
        // Enumerate only the blobs written under this hub's prefix.
        var results = cloudBlobContainer.ListBlobs(prefix: $"{iothub}/");
        foreach (IListBlobItem item in results)
        {
            Console.WriteLine(item.Uri);
        }
    }
}
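
If the endpoint is an ADLS Gen2 account, a similar enumeration can be done with the Azure.Storage.Files.DataLake package. The following is a minimal sketch; the connection string, file system name, and hub prefix are placeholders.

// Minimal sketch: enumerate routed files in an ADLS Gen2 file system using
// the Azure.Storage.Files.DataLake package.
using System;
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;

class DataLakeLister
{
    public static async Task ListFilesAsync(
        string storageConnectionString, string fileSystemName, string iothub)
    {
        var serviceClient = new DataLakeServiceClient(storageConnectionString);
        var fileSystemClient = serviceClient.GetFileSystemClient(fileSystemName);

        // List every path written under the hub's prefix, including subdirectories.
        await foreach (var path in fileSystemClient.GetPathsAsync(iothub, recursive: true))
        {
            Console.WriteLine(path.Name);
        }
    }
}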

To create an Azure Data Lake Gen2-compatible storage account, create a new V2 storage account and select Enable hierarchical namespace from the Data Lake Storage Gen2 section of the Advanced tab, as shown in the following image:

Select Azure Data Lake Storage Gen2.

Service Bus Queues and Service Bus Topics as a routing endpoint

Service Bus queues and topics used as IoT Hub endpoints must not have Sessions or Duplicate Detection enabled. If either of those options is enabled, the endpoint appears as Unreachable in the Azure portal.

Event Hubs as a routing endpoint

Apart from the built-in Event Hubs-compatible endpoint, you can also route data to custom endpoints of type Event Hubs.

Azure Cosmos DB as a routing endpoint (preview)

You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing that require extensive downstream data analysis.

IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary. You can set up a Cosmos DB endpoint for message routing by performing the following steps in the Azure portal:

  1. Navigate to your provisioned IoT hub.

  2. In the resource menu, select Message routing from Hub settings.

  3. Select the Custom endpoints tab in the working pane, then select Add and choose Cosmos DB (preview) from the dropdown list.

    The following image shows the endpoint addition options in the working pane of Azure portal:

    Screenshot that shows how to add a Cosmos DB endpoint.

  4. Type a name for your Cosmos DB endpoint in Endpoint name.

  5. In Cosmos DB account, choose an existing Cosmos DB account from a list of Cosmos DB accounts available for selection, then select an existing database and collection in Database and Collection, respectively.

  6. In Generate a synthetic partition key for messages, select Enable if needed.

    To effectively support high-scale scenarios, you can enable synthetic partition keys for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in Partition key name. The partition key property name is defined at the container level and can't be changed once it has been set.

    You can configure the synthetic partition key value by specifying a template in Partition key template based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum limit of 20 GB within a month. In that case, you can define a synthetic partition key as a combination of the device ID and the month, for example {deviceid}-{YYYY}-{MM}. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record, ensuring logical partitions are created each month for each device.

  7. In Authentication type, choose an authentication type for your Cosmos DB endpoint. You can choose any of the supported authentication types for accessing the database, based on your system setup.

    Caution

    If you're using the system assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor built-in role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see Configure role-based access for Azure Cosmos DB. To understand assigning roles via CLI, see Manage Azure Cosmos DB SQL role resources.

  8. Select Create to complete the creation of your custom endpoint.

To learn more about using the Azure portal to create message routes and endpoints for your IoT hub, see Message routing with IoT Hub — Azure portal.

Reading data that has been routed

You can configure a route by following this tutorial.

Use the following tutorials to learn how to read messages from an endpoint.

Fallback route

The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in endpoint (messages/events), which is compatible with Event Hubs. If message routing is enabled, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in endpoint, unless a route is created to that endpoint. If there are no routes to the built-in endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in endpoint. Also, if all existing routes are deleted, fallback route capability must be enabled to receive all data at the built-in endpoint.

You can enable or disable the fallback route in the Azure portal, from the Message routing blade. You can also use Azure Resource Manager (FallbackRouteProperties) to specify a custom endpoint for the fallback route.

Non-telemetry events

In addition to device telemetry, message routing also enables sending non-telemetry events, including:

  • Device twin change events
  • Device lifecycle events
  • Device job lifecycle events
  • Digital twin change events
  • Device connection state events
  • MQTT broker messages

For example, if a route is created with the data source set to Device Twin Change Events, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to Device Lifecycle Events, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see Device and module lifecycle notifications. When using Azure IoT Plug and Play, a developer can create routes with the data source set to Digital Twin Change Events, and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or a change event happens for the underlying device twin. Finally, if a route is created with the data source set to Device Connection State Events, IoT Hub sends a message indicating whether the device was connected or disconnected.

IoT Hub also integrates with Azure Event Grid to publish device events to support real-time integrations and automation of workflows based on these events. See key differences between message routing and Event Grid to learn which works best for your scenario.

Limitations for device connection state events

Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, a device must first open a connection and then call either the cloud-to-device receive message operation or the device-to-cloud send telemetry operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging topics, and over AMQP they equate to attaching or transferring a message on the appropriate link paths.
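
For illustration, the following minimal sketch, which assumes the Microsoft.Azure.Devices.Client SDK and a placeholder device connection string, opens an MQTT connection and performs a device-to-cloud send, which is the kind of operation that causes IoT Hub to begin reporting connection state events for the device.

// Minimal sketch: after opening an MQTT connection, perform a device-to-cloud
// send (PUBLISH) so IoT Hub starts emitting connection state events.
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class ConnectionStateTrigger
{
    public static async Task ConnectAndSendAsync(string deviceConnectionString)
    {
        using var deviceClient = DeviceClient.CreateFromConnectionString(
            deviceConnectionString, TransportType.Mqtt);

        await deviceClient.OpenAsync();

        // A send telemetry (or cloud-to-device receive) operation is what
        // causes IoT Hub to start reporting connection state for this device.
        using var message = new Message(Encoding.UTF8.GetBytes("{\"ping\": true}"));
        await deviceClient.SendEventAsync(message);
    }
}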

IoT Hub doesn't report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic, 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60-second window.

Testing routes

When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or all routes at once; no messages are routed to the endpoints during the test. You can use the Azure portal, Azure Resource Manager, Azure PowerShell, or the Azure CLI for testing. Outcomes help identify whether the sample message matched or didn't match the query, or whether the test couldn't run because the sample message or query syntax is incorrect. To learn more, see Test Route and Test All Routes.

Latency

When you route device-to-cloud telemetry messages using built-in endpoints, there's a slight increase in the end-to-end latency after the creation of the first route.

In most cases, the average increase in latency is less than 500 milliseconds. However, the latency you experience can vary and can be higher depending on the tier of your IoT hub and your solution architecture. You can monitor the latency using the Routing: message latency for messages/events or d2c.endpoints.latency.builtIn.events IoT Hub metrics. Creating or deleting any route after the first one doesn't impact the end-to-end latency.

Monitoring and troubleshooting

IoT Hub provides several metrics related to routing and endpoints to give you an overview of the health of your hub and messages sent. For a list of all of the IoT Hub metrics broken out by functional category, see the Metrics section of Monitoring Azure IoT Hub data reference. You can track errors that occur during evaluation of a routing query and endpoint health as perceived by IoT Hub with the routes category in IoT Hub resource logs. To learn more about using metrics and resource logs with IoT Hub, see Monitoring Azure IoT Hub.

You can use the REST API Get Endpoint Health to get the health status of the endpoints.

Use the troubleshooting guide for routing for more details and support for troubleshooting routing.

Next steps