Migration issue from HTTP Data Collector API to Logs Ingestion API for existing tables

Dhwani Shah 0 Reputation points
2025-11-28T10:42:20.84+00:00

As the HTTP Data Collector API is approaching end of life, we have migrated several apps to the Logs Ingestion API.

However, in the Logs Ingestion API, tables are tightly bound to a DCR (Data Collection Rule) and a DCE (Data Collection Endpoint), which causes a backward-compatibility issue.

When an older version of the app (using the HTTP Data Collector API) is already running and its tables have been created in a workspace, and a newer version of the app (using the Logs Ingestion API) is then deployed to the same workspace with the same tables, we see the following error.

[Screenshot of the error attached in the original post]

In this case, a customer who already has a lot of data ingested in an existing table faces data loss: the newer version either has to be deployed in a different workspace, or the customer has to migrate or delete the tables in the existing workspace. Both options lose data.

Could you please suggest whether there is any alternative way to handle this?

Azure Monitor

1 answer

Sort by: Most helpful
  1. Jerald Felix 9,840 Reputation points
    2025-11-29T07:36:34.55+00:00

    Hello Dhwani Shah,

    Thanks for posting this question in Q&A Forum.

    I understand that you are migrating from the deprecated HTTP Data Collector API to the new Logs Ingestion API, but are facing backward compatibility issues where the new DCR (Data Collection Rule) based ingestion fails when targeting existing custom tables (_CL) created by the old API, potentially leading to data loss or the need for table deletion.

    This is a common challenge during this migration because the new DCR-based method enforces strict schema definition and transformation streams, whereas the old API allowed for dynamic schema inference.

    Here is how you can resolve this without deleting your existing tables or losing data:

    1. The Migration Path: "Migrate to DCR-based Custom Logs"

    You do not need to create new tables. You can "upgrade" your existing custom tables (created by the Data Collector API) to be compatible with DCRs.

    • Action: You must explicitly define the schema of your existing table in the DCR.
    • Unlike the old API, which simply accepted arbitrary JSON, the DCR requires you to map the incoming data stream to the existing table's columns.
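    To illustrate what that explicit schema looks like, here is a minimal sketch of the `streamDeclarations` and `dataFlows` sections of a DCR targeting an existing table. All names here (`MyTable_CL`, its columns, and the destination name) are placeholder assumptions; substitute your real table's schema:

    ```python
    # Sketch: the streamDeclarations block a DCR needs so that incoming JSON
    # is mapped onto the columns of an existing custom table.
    # "MyTable_CL" and its columns are placeholders -- use your own schema.
    table_name = "MyTable_CL"

    stream_declarations = {
        f"Custom-{table_name}": {  # stream name follows the Custom-<TableName>_CL pattern
            "columns": [
                {"name": "TimeGenerated", "type": "datetime"},
                {"name": "Computer", "type": "string"},
                {"name": "EventDetails", "type": "string"},
            ]
        }
    }

    # The DCR's dataFlows section then routes that stream to the table:
    data_flows = [
        {
            "streams": [f"Custom-{table_name}"],
            "destinations": ["myWorkspaceDestination"],  # placeholder destination name
            "transformKql": "source",                    # identity transform: columns already match
            "outputStream": f"Custom-{table_name}",
        }
    ]
    ```

    These two dictionaries correspond to the matching sections of the DCR's JSON body when you create or update it through the ARM API or portal.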

    2. Handling the "Backward Compatibility" Error

    The error you are seeing usually happens because the DCR is trying to create a new table or stream that conflicts with the legacy properties of the existing table.

    • Solution: When defining the Stream Declaration in your DCR, you must ensure the output stream name matches the pattern Custom-<TableName>_CL.
    • Crucial Step: You must manually update the existing table's schema to align with the DCR requirements if it hasn't been migrated yet. You can use the API to update the table's plan or schema.

    3. Parallel Ingestion Strategy

    To support both the old app (HTTP Data Collector) and the new app (Logs Ingestion API) simultaneously during the transition:

    • Do NOT delete the table.
    • Configure the DCR to send data to the same existing table name (MyTable_CL).
    • Ensure the transformation KQL in the DCR maps your new input fields to the exact same column names that exist in the table.
    • Result: The old app continues sending data (until the API hard deprecation in 2026), and the new app sends data via the DCR. Both land in the same table.
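    To make the column mapping concrete, here is a sketch of the kind of `transformKql` that renames the new app's field names onto the columns the legacy table already has, alongside a plain-Python equivalent showing the intent. The field names (`eventTime`, `hostName`, `message`) are invented for illustration:

    ```python
    # Sketch: a transformKql string that renames new input fields to the
    # existing table's column names. Field names are illustrative only.
    transform_kql = (
        "source "
        "| extend TimeGenerated = todatetime(eventTime) "
        "| project-rename Computer = hostName, EventDetails = message "
        "| project-away eventTime"
    )

    # A plain-Python equivalent of that mapping, to show what the DCR
    # transformation does to each incoming record:
    def map_record(record: dict) -> dict:
        """Rename new-app field names onto the existing table's columns."""
        return {
            "TimeGenerated": record["eventTime"],
            "Computer": record["hostName"],
            "EventDetails": record["message"],
        }

    row = map_record({
        "eventTime": "2025-11-28T10:42:20Z",
        "hostName": "web-01",
        "message": "login failed",
    })
    ```

    At send time, the new app would then call the Logs Ingestion API (for example via the `azure-monitor-ingestion` SDK's `LogsIngestionClient.upload`) with the stream name `Custom-MyTable_CL`, and the DCR applies this transform before writing to the table.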

    If you are receiving a specific error like "The table does not support DCR ingestion," it is because the table is still in the "Classic" mode. You may need to run a simple API call to the Tables endpoint to "touch" or update the schema, which effectively migrates the table metadata to support DCRs without deleting rows.

    If this helps, please approve the answer.

    Best Regards,

    Jerald Felix

