Migrate from the HTTP Data Collector API to the Log Ingestion API to send data to Azure Monitor Logs

The Azure Monitor Log Ingestion API provides more processing power and greater flexibility in ingesting logs and managing tables than the legacy HTTP Data Collector API. This article describes the differences between the Data Collector API and the Log Ingestion API and provides guidance and best practices for migrating to the new Log Ingestion API.

Note

As a Microsoft MVP, Morten Waltorp Knudsen contributed to and provided material feedback for this article. For an example of how you can automate the setup and ongoing use of the Log Ingestion API, see Morten's publicly available AzLogDcrIngestPS PowerShell module.

Advantages of the Log Ingestion API

The Log Ingestion API provides the following advantages over the Data Collector API:

  • Supports transformations, which enable you to modify the data before it's ingested into the destination table, including filtering and data manipulation (see the sketch after this list).
  • Lets you send data to multiple destinations.
  • Enables you to manage the destination table schema, including column names and whether to add new columns when the source data schema changes.
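
To make the first advantage concrete: a transformation is a KQL query, defined in the data collection rule's data flow, that runs on each incoming record before it lands in the destination table. The fragment below is a minimal sketch expressed as a Python dict mirroring the DCR JSON; the stream name Custom-MyApp_CL, the destination name myWorkspaceDest, and the column names are hypothetical placeholders, not values from this article.

# A "dataFlows" entry from a DCR, expressed as a Python dict that mirrors
# the DCR JSON. All names below are hypothetical placeholders.
data_flow = {
    "streams": ["Custom-MyApp_CL"],        # input stream declared in the DCR
    "destinations": ["myWorkspaceDest"],   # Log Analytics destination name
    # Transformation: drop debug records and derive TimeGenerated from a
    # source property before the data reaches the destination table.
    "transformKql": (
        "source"
        " | where SeverityText != 'Debug'"
        " | extend TimeGenerated = todatetime(Time)"
        " | project-away Time"
    ),
    "outputStream": "Custom-MyApp_CL",
}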

Prerequisites

The migration procedure described in this article assumes you have the permissions and resources described in the following sections.

Permissions required

| Action | Permissions required |
|---|---|
| Create a data collection endpoint. | Microsoft.Insights/dataCollectionEndpoints/write permissions, as provided by the Monitoring Contributor built-in role, for example. |
| Create or modify a data collection rule. | Microsoft.Insights/DataCollectionRules/Write permissions, as provided by the Monitoring Contributor built-in role, for example. |
| Convert a table that uses the Data Collector API to data collection rules and the Log Ingestion API. | Microsoft.OperationalInsights/workspaces/tables/migrate/action permissions, as provided by the Log Analytics Contributor built-in role, for example. |
| Create new tables or modify table schemas. | microsoft.operationalinsights/workspaces/tables/write permissions, as provided by the Log Analytics Contributor built-in role, for example. |
| Call the Log Ingestion API. | See Assign permissions to a DCR. |

Create new resources required for the Log Ingestion API

The Log Ingestion API requires you to create two new types of resources, which the HTTP Data Collector API doesn't require:

  • A data collection endpoint (DCE), which provides the endpoint you send your data to.
  • A data collection rule (DCR), which declares the schema of the incoming data stream, the transformation to apply, and the destination table in your Log Analytics workspace.
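
You can create both resources in the Azure portal or with ARM templates; the sketch below shows one way to create them from Python against the Azure Resource Manager REST API. The subscription, resource group, region, resource names, and workspace resource ID are placeholders, and 2022-06-01 is an api-version known to support these resource types; adjust everything to your environment.

import requests
from azure.identity import DefaultAzureCredential

# Placeholders - replace with your own values.
SUB, RG, REGION = "<subscription-id>", "<resource-group>", "<region>"
WORKSPACE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
                "/providers/Microsoft.OperationalInsights/workspaces/<workspace>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
base = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        "/providers/Microsoft.Insights")

# 1. Create the data collection endpoint (DCE).
resp = requests.put(
    f"{base}/dataCollectionEndpoints/my-dce?api-version=2022-06-01",
    headers=headers,
    json={"location": REGION,
          "properties": {"networkAcls": {"publicNetworkAccess": "Enabled"}}},
)
resp.raise_for_status()
dce_id = resp.json()["id"]

# 2. Create the data collection rule (DCR): declare the input stream schema,
#    a pass-through transformation, and the destination table.
dcr_body = {
    "location": REGION,
    "properties": {
        "dataCollectionEndpointId": dce_id,
        "streamDeclarations": {
            "Custom-MyApp_CL": {"columns": [
                {"name": "TimeGenerated", "type": "datetime"},
                {"name": "RawData", "type": "string"},
            ]}
        },
        "destinations": {"logAnalytics": [
            {"workspaceResourceId": WORKSPACE_ID, "name": "myWorkspaceDest"}
        ]},
        "dataFlows": [{
            "streams": ["Custom-MyApp_CL"],
            "destinations": ["myWorkspaceDest"],
            "transformKql": "source",  # or a query like the earlier sketch
            "outputStream": "Custom-MyApp_CL",
        }],
    },
}
resp = requests.put(f"{base}/dataCollectionRules/my-dcr?api-version=2022-06-01",
                    headers=headers, json=dcr_body)
resp.raise_for_status()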

Migrate existing custom tables or create new tables

If you have an existing custom table to which you currently send data using the Data Collector API, you can:

  • Migrate the table to continue ingesting data into the same table using the Log Ingestion API.

  • Maintain the existing table and data and set up a new table into which you ingest data using the Log Ingestion API. You can then delete the old table when you're ready.

    This is the preferred option, especially if you need to make changes to the existing table. Changes to existing data types and multiple schema changes to existing Data Collector API custom tables can lead to errors.

Tip

To identify which tables use the Data Collector API, view table properties. The Type property of tables that use the Data Collector API is set to Custom table (classic). Note that tables that ingest data using the legacy Log Analytics agent (MMA) also have the Type property set to Custom table (classic). Be sure to migrate from the Log Analytics agent to Azure Monitor Agent before converting MMA tables. Otherwise, you'll stop ingesting data into custom fields in these tables after the table conversion.

This table summarizes considerations to keep in mind for each option:

| Consideration | Table migration | Side-by-side implementation |
|---|---|---|
| Table and column naming | Reuse the existing table name. For columns, either use new column names and define a transformation to direct incoming data to the newly named columns, or continue using the old names. | Set the new table name freely. You need to adjust integrations, dashboards, and alerts before switching to the new table. |
| Migration procedure | One-off table migration. It's not possible to roll back a migrated table. | Migration can be done gradually, per table. |
| Post-migration | You can continue to ingest data using the HTTP Data Collector API into existing columns, except custom columns. Ingest data into new columns using the Log Ingestion API only. | Data in the old table remains available until the end of the retention period. |

When you first set up a new table or make schema changes, it can take 10 to 15 minutes for the data changes to start appearing in the destination table.

To convert a table that uses the Data Collector API to data collection rules and the Log Ingestion API, issue this API call against the table:

POST https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}/migrate?api-version=2021-12-01-preview

This call is idempotent, so it has no effect if the table has already been converted.
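
As a minimal sketch, you can issue the call from Python with a bearer token obtained through azure-identity; the subscription, resource group, workspace, and table names below are placeholders:

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Placeholders - substitute your own identifiers and table name.
url = ("https://management.azure.com/subscriptions/<subscription-id>"
       "/resourcegroups/<resource-group>/providers/Microsoft.OperationalInsights"
       "/workspaces/<workspace>/tables/MyTable_CL/migrate"
       "?api-version=2021-12-01-preview")

# Idempotent: re-running against an already converted table has no effect.
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()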

The API call enables all DCR-based custom logs features on the table. The Data Collector API continues to ingest data into existing columns but won't create any new columns, and any previously defined custom fields stop being populated. Another way to migrate an existing table to data collection rules, without necessarily using the Log Ingestion API, is to apply a workspace transformation to the table.

Important

  • Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (_).
  • _ResourceId, id, _SubscriptionId, TenantId, Type, UniqueId, and Title are reserved column names.
  • Custom columns you add to an Azure table must have the suffix _CF.
  • If you update the table schema in your Log Analytics workspace, you must also update the input stream definition in the data collection rule to ingest data into new or modified columns.

Call the Log Ingestion API

The Log Ingestion API lets you send up to 1 MB of compressed or uncompressed data per call. If you need to send more than 1 MB of data, you can send multiple calls in parallel. This is a change from the Data Collector API, which lets you send up to 32 MB of data per call.
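
For instance, here's a minimal sketch using the azure-monitor-ingestion client library for Python, which splits the payload into appropriately sized chunks before uploading. The DCE logs ingestion URI, DCR immutable ID, and stream name are placeholders you take from your own resources:

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Placeholders: the DCE's logs ingestion URI, the DCR's immutable ID, and
# the stream name declared in the DCR.
client = LogsIngestionClient(
    endpoint="https://<dce-logs-ingestion-uri>",
    credential=DefaultAzureCredential(),
)

logs = [
    {"TimeGenerated": "2024-01-01T00:00:00Z", "RawData": "record one"},
    {"TimeGenerated": "2024-01-01T00:00:05Z", "RawData": "record two"},
]

# The client library batches the records to respect the per-call size limit.
client.upload(rule_id="dcr-<immutable-id>", stream_name="Custom-MyApp_CL", logs=logs)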

For information about how to call the Log Ingestion API, see Log Ingestion REST API call.

Modify table schemas and data collection rules based on changes to the source data object

While the Data Collector API automatically adjusts the destination table schema when the source data object schema changes, the Log Ingestion API doesn't. This ensures that you don't collect new data into columns that you didn't intend to create.

When the source data schema changes, you can:

  • Update the destination table schema and the input stream definition in the data collection rule so that data is ingested into the new or modified columns.
  • Adjust the transformation in the data collection rule to map the changed source data to the existing table schema.

Note

You can't reuse a column name with a data type that's different from the original data type defined for the column.
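
For example, adding a column to a DCR-based custom table means updating the table schema first and then the DCR's stream declaration. The following is a minimal sketch against the workspace Tables management API; the names are placeholders, and 2022-10-01 is an api-version known to support table updates:

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

table_url = ("https://management.azure.com/subscriptions/<subscription-id>"
             "/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights"
             "/workspaces/<workspace>/tables/MyTable_CL?api-version=2022-10-01")

# 1. Add the new column to the destination table schema. PUT replaces the
#    schema, so list the existing columns along with the new one.
resp = requests.put(table_url, headers=headers, json={
    "properties": {"schema": {"name": "MyTable_CL", "columns": [
        {"name": "TimeGenerated", "type": "datetime"},
        {"name": "RawData", "type": "string"},
        {"name": "Severity", "type": "string"},  # newly added column
    ]}}
})
resp.raise_for_status()

# 2. Update the DCR so the new column flows in: PUT the DCR again with
#    "Severity" added to its streamDeclarations columns as well.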

Next steps