Event Grid data connection (Preview)

Important

Azure Synapse Analytics Data Explorer (Preview) will be retired on October 7, 2025. After this date, workloads running on Synapse Data Explorer will be deleted, and the associated application data will be lost. We highly recommend migrating to Eventhouse in Microsoft Fabric.

The Microsoft Cloud Migration Factory (CMF) program is designed to assist customers in migrating to Fabric. The program offers hands-on keyboard resources at no cost to the customer. These resources are assigned for a period of 6-8 weeks, with a predefined and agreed-upon scope. Customer nominations are accepted from the Microsoft account team or directly by submitting a request for help to the CMF team.

Event Grid ingestion is a pipeline that listens to Azure Storage and triggers Azure Data Explorer to pull data when subscribed events occur. Data Explorer offers continuous ingestion from Azure Storage (Blob storage and ADLSv2): an Azure Event Grid subscription captures blob created or blob renamed notifications and streams them to Data Explorer via an event hub.

The Event Grid ingestion pipeline goes through several steps. You create a target table in Data Explorer into which data in a particular format will be ingested. Then you create an Event Grid data connection in Data Explorer. The data connection needs event routing information, such as the table to send the data to and the table mapping. You also specify ingestion properties, which describe the data to be ingested, the target table, and the mapping. To test your connection, you can generate sample data and upload or rename blobs. Delete blobs after ingestion. This process can be managed through the Azure portal.
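The table mapping referenced by the data connection is typically defined as a JSON ingestion mapping on the target table. The sketch below shows the shape such a mapping takes; the table name (`Events`), column names, and paths are illustrative assumptions, not values from this article.

```python
import json

# Illustrative JSON ingestion mapping for a hypothetical `Events` table.
# Each entry maps a JSON path in the incoming blob to a table column.
events_mapping = [
    {"column": "Timestamp", "path": "$.timestamp", "datatype": "datetime"},
    {"column": "Device",    "path": "$.device",    "datatype": "string"},
    {"column": "Payload",   "path": "$.payload",   "datatype": "dynamic"},
]

# The mapping is passed to Kusto as a JSON string when creating the
# named mapping (e.g. "EventsMapping") on the target table.
mapping_json = json.dumps(events_mapping)
print(mapping_json)
```

The data connection then only needs the mapping's name, not its definition, when routing events to the table.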

Data format

  • See supported formats.
  • See supported compressions.
    • The original uncompressed data size should be part of the blob metadata, or else Data Explorer will estimate it. The ingestion uncompressed size limit per file is 4 GB.

Note

Event Grid notification subscription can be set on Azure Storage accounts for BlobStorage, StorageV2, or Data Lake Storage Gen2.

Ingestion properties

You can specify ingestion properties of the blob ingestion via the blob metadata. You can set the following properties:

| Property | Description |
|----------|-------------|
| rawSizeBytes | Size of the raw (uncompressed) data. For Avro/ORC/Parquet, this is the size before format-specific compression is applied. Provide the original data size by setting this property to the uncompressed data size in bytes. |
| kustoTable | Name of the existing target table. Overrides the Table set on the Data Connection blade. |
| kustoDataFormat | Data format. Overrides the Data format set on the Data Connection blade. |
| kustoIngestionMappingReference | Name of the existing ingestion mapping to be used. Overrides the Column mapping set on the Data Connection blade. |
| kustoIgnoreFirstRecord | If set to true, Kusto ignores the first row of the blob. Use in tabular format data (CSV, TSV, or similar) to ignore headers. |
| kustoExtentTags | String representing tags that will be attached to the resulting extent. |
| kustoCreationTime | Overrides $IngestionTime for the blob, formatted as an ISO 8601 string. Use for backfilling. |
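As a sketch of how these properties combine in practice, the following builds blob metadata for backfilling a compressed CSV file. The property names come from the table above; the sizes, date, and scenario are assumptions for illustration.

```python
from datetime import datetime, timezone

# Backfill scenario: kustoCreationTime must be an ISO 8601 string.
creation_time = datetime(2024, 1, 15, tzinfo=timezone.utc).isoformat()

# Blob metadata values are strings, including booleans and numbers.
metadata = {
    "rawSizeBytes": "157286400",         # uncompressed size: 150 MB
    "kustoIgnoreFirstRecord": "true",    # skip the CSV header row
    "kustoCreationTime": creation_time,  # overrides $IngestionTime
}
print(metadata["kustoCreationTime"])
```

Note that all metadata values are passed as strings, even for numeric or boolean properties.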

Events routing

When setting up a blob storage connection to a Data Explorer cluster, specify the target table properties:

  • table name
  • data format
  • mapping

This setup is the default routing for your data, sometimes referred to as static routing. You can also specify target table properties for each blob, using blob metadata. The data is then dynamically routed as specified by the ingestion properties.

The following example shows you how to set ingestion properties on the blob metadata before uploading it. Blobs are routed to different tables.

For more information, see upload blobs.

```csharp
// Blob is dynamically routed to table `Events`, ingested using `EventsMapping` data mapping
blob = container.GetBlockBlobReference(blobName2);
blob.Metadata.Add("rawSizeBytes", "4096"); // the uncompressed size is 4096 bytes
blob.Metadata.Add("kustoTable", "Events");
blob.Metadata.Add("kustoDataFormat", "json");
blob.Metadata.Add("kustoIngestionMappingReference", "EventsMapping");
blob.UploadFromFile(jsonCompressedLocalFileName);
```
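A comparable sketch using the Python `azure-storage-blob` SDK is shown below. The helper and its argument values are illustrative assumptions; the SDK import is deferred inside the upload function so the metadata helper stands on its own.

```python
# Hypothetical helper: builds the same kusto* ingestion-property metadata
# shown in the example above. All values must be strings.
def build_routing_metadata(raw_size_bytes, table, data_format, mapping):
    return {
        "rawSizeBytes": str(raw_size_bytes),  # uncompressed size in bytes
        "kustoTable": table,
        "kustoDataFormat": data_format,
        "kustoIngestionMappingReference": mapping,
    }


def upload_with_routing(connection_string, container, blob_name, local_path):
    # Requires the azure-storage-blob package; imported here so the
    # helper above stays usable without it.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container=container, blob=blob_name)
    metadata = build_routing_metadata(4096, "Events", "json", "EventsMapping")
    with open(local_path, "rb") as data:
        # Default upload type is a block blob; append blobs aren't
        # supported for Event Grid ingestion.
        blob.upload_blob(data, metadata=metadata)
```

Setting the metadata at upload time ensures the ingestion properties are present when the blob-created notification fires.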

Upload blobs

You can create a blob from a local file, set ingestion properties in the blob metadata, and upload it. For examples, see Ingest blobs into Data Explorer by subscribing to Event Grid notifications.

Note

  • Use BlockBlob to generate data. AppendBlob is not supported.
  • Using the Azure Data Lake Storage Gen2 SDK requires using CreateFile to upload files and Flush at the end with the close parameter set to "true".
  • When the Event Hub endpoint doesn't acknowledge receipt of an event, Azure Event Grid activates a retry mechanism. If this retry delivery fails, Event Grid can deliver the undelivered events to a storage account using a process of dead-lettering. For more information, see Event Grid message delivery and retry.

Rename blobs

When using ADLSv2, you can rename a blob to trigger blob ingestion to Data Explorer. For example, see Ingest blobs into Data Explorer by subscribing to Event Grid notifications.

Note

  • Directory renaming is possible in ADLSv2, but it doesn't trigger blob renamed events and ingestion of blobs inside the directory. To ingest blobs following renaming, directly rename the desired blobs.
  • If you defined filters to track specific subjects while creating the data connection, those filters are applied on the destination file path.

Delete blobs using storage lifecycle

Data Explorer won't delete the blobs after ingestion. Use an Azure Blob storage lifecycle policy to manage blob deletion. It's recommended to keep the blobs for three to five days.
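A lifecycle management policy that implements the recommendation above might look like the following sketch. The rule name and container prefix are illustrative assumptions; the JSON shape follows the Azure Storage lifecycle management policy schema.

```python
import json

# Sketch of a lifecycle rule that deletes block blobs five days after
# their last modification; rule name and prefix are hypothetical.
policy = {
    "rules": [
        {
            "enabled": True,
            "name": "delete-ingested-blobs",
            "type": "Lifecycle",
            "definition": {
                "actions": {
                    "baseBlob": {
                        "delete": {"daysAfterModificationGreaterThan": 5}
                    }
                },
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["ingestion-container/"],
                },
            },
        }
    ]
}
print(json.dumps(policy, indent=2))
```

The policy can be applied to the storage account through the Azure portal, CLI, or an ARM template.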

Known Event Grid issues

  • When using Data Explorer to export the files used for Event Grid ingestion, note:
    • Event Grid notifications aren't triggered if the connection string provided to the export command or to an external table uses the ADLS Gen2 format (for example, abfss://filesystem@accountname.dfs.core.windows.net) but the storage account isn't enabled for hierarchical namespace.
    • If the account isn't enabled for hierarchical namespace, the connection string must use the Blob Storage format (for example, https://accountname.blob.core.windows.net). The export works as expected even when using the ADLS Gen2 connection string, but notifications won't be triggered and Event Grid ingestion won't work.
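The two endpoint formats from the issue above can be contrasted with a small hypothetical helper; the account and filesystem names are placeholders.

```python
def blob_endpoint(account_name: str) -> str:
    # Blob Storage format: safe for accounts without hierarchical namespace.
    return f"https://{account_name}.blob.core.windows.net"


def dfs_endpoint(account_name: str, filesystem: str) -> str:
    # ADLS Gen2 format: use only when hierarchical namespace is enabled,
    # otherwise Event Grid notifications are not triggered.
    return f"abfss://{filesystem}@{account_name}.dfs.core.windows.net"
```

Picking the endpoint format based on whether hierarchical namespace is enabled avoids silently breaking Event Grid ingestion while exports continue to succeed.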

Next steps