Use the ingestion wizard to ingest JSON data from a local file to an existing table in Azure Data Explorer

The ingestion wizard allows you to ingest data in various formats and create mapping structures, as a one-time or continuous ingestion process.

This document describes using the ingestion wizard to ingest JSON data from a local file into an existing table. Use the same process with slight adaptations to cover different use cases.

Note

To enable access between a cluster and a storage account without public access (restricted to private endpoint/service endpoint), see Create a Managed Private Endpoint.

Ingest data

  1. In the left menu of the Azure Data Explorer web UI, select Data.

  2. From the Quick actions section, select Ingest data. Alternatively, from the All actions section, select Ingest data and then Ingest.

    Screenshot for the Azure Data Explorer web UI to select the ingestion wizard for a table.

Select an ingestion type

  1. In the Ingest data window, the Destination tab is selected.

  2. The Cluster and Database fields are auto-populated. You may select a different cluster or database from the drop-down menus.

    1. To add a new connection to a cluster, select Add cluster connection below the auto-populated cluster name.

      Screenshot of the ingest data tab to add a new cluster connection.

    2. In the popup window, enter the Connection URI for the cluster you're connecting.

    3. Enter a Display Name that you want to use to identify this cluster, and select Add.

      Screenshot of the add cluster URI and description to add a new cluster connection in Azure Data Explorer.

  3. If the Table field isn't automatically filled, select an existing table name from the drop-down menu.

  4. Select Next: Source.

Source tab

  1. Under Source type, do the following steps:

    1. Select From file.

    2. Select Browse to locate up to 10 files, or drag the files into the field. Use the blue star to mark the schema-defining file.

    3. Select Next: Schema.

      Screenshot to ingest from file with the ingestion wizard.

Edit the schema

The Schema tab opens.

  • The data format is selected automatically based on the source file name. In this case, the format is JSON.

  • If you select Ignore data format errors, the data is ingested in JSON format. If you leave this check box unselected, the data is ingested in multijson format.

  • When you select JSON, you must also select Nested levels, from 1 to 100. The number of levels determines how deeply nested JSON properties are split into separate table columns.

    Screenshot completing ingestion information for ingesting a JSON file.

  • For tabular formats, you can select Keep current table schema. Tabular data doesn't necessarily include the column names that are used to map source data to the existing columns. When this option is checked, mapping is done by-order, and the table schema remains the same. If this option is unchecked, new columns are created for incoming data, regardless of data structure.

    Screenshot showing the 'keep current table schema' option checked when using tabular data format.
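To illustrate the effect of the Nested levels setting, the following sketch flattens a JSON record to a chosen depth: properties nested deeper than that depth are kept together as a single value rather than becoming separate columns. The record, column naming scheme, and `flatten` helper are hypothetical illustrations, not the wizard's actual implementation.

```python
def flatten(record, max_level, prefix="", level=1):
    """Flatten a nested JSON-like dict into column/value pairs,
    descending at most max_level levels. Anything deeper is kept
    intact as a single value."""
    columns = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict) and level < max_level:
            columns.update(flatten(value, max_level, f"{name}_", level + 1))
        else:
            columns[name] = value
    return columns

event = {"id": 1, "props": {"city": "Seattle", "geo": {"lat": 47.6, "lon": -122.3}}}

# Nested levels = 1: one column per top-level property; 'props' stays nested.
flatten(event, max_level=1)

# Nested levels = 2: 'props' is split into props_city and props_geo.
flatten(event, max_level=2)
```

With a higher level, each leaf property becomes its own column; with a lower level, the deeper structure survives as a single column of nested data.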

Add nested JSON data

To add columns from JSON levels that are different than the main Nested levels, do the following steps:

  1. Select the arrow next to any column name, and select New column.

    Screenshot of options in the schema tab to add a new column using the ingestion wizard for Azure Data Explorer.

  2. Enter a new Column Name and select the Column Type from the dropdown menu.

  3. Under Source, select Create new.

    Screenshot to create a new source for adding nested JSON data in the ingestion process for Azure Data Explorer.

  4. Enter the new source for this column and select OK. This source can come from any JSON level.

    Screenshot showing a window to name the new data source for the added column.

  5. Select Create. Your new column will be added at the end of the table.

    Screenshot to create a new column using the ingestion wizard in Azure Data Explorer.
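The source you enter behaves like a JSON path into each ingested record. The following sketch (the record, path, and `resolve_path` helper are hypothetical, not the wizard's implementation) shows how a path such as `$.props.geo.lat` can pull a column value from any nesting level:

```python
def resolve_path(record, path):
    """Resolve a simple '$.a.b.c'-style JSON path against a dict,
    returning None when a segment is missing."""
    value = record
    for segment in path.lstrip("$").strip(".").split("."):
        if not isinstance(value, dict) or segment not in value:
            return None
        value = value[segment]
    return value

event = {"id": 1, "props": {"geo": {"lat": 47.6}}}
resolve_path(event, "$.props.geo.lat")  # value for the new column
```

Records that lack the referenced property simply produce an empty value for that column; the other columns are unaffected.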

Edit the table

The changes you can make in a table depend on the following parameters:

  • Table type is new or existing
  • Mapping type is new or existing
| Table type | Mapping type | Available adjustments |
|---|---|---|
| New table | New mapping | Change data type, Rename column, New column, Delete column, Update column, Sort ascending, Sort descending |
| Existing table | New mapping | New column (on which you can then change data type, rename, and update), Update column, Sort ascending, Sort descending |
| Existing table | Existing mapping | Sort ascending, Sort descending |

Note

When adding a new column or updating a column, you can change mapping transformations. For more information, see Mapping transformations.

Note

  • For tabular formats, you can't map a column twice. To map to an existing column, first delete the new column.
  • You can't change an existing column type. If you try to map to a column having a different format, you may end up with empty columns.

Command editor

Above the Editor pane, select the v button to open the editor. In the editor, you can view and copy the automatic commands generated from your inputs.

Screenshot of ingestion wizard edit view.
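For orientation, the commands shown in the editor include a table mapping assembled from your column and source choices. The sketch below builds a JSON ingestion mapping command of that general shape; the table name, mapping name, and columns are hypothetical, and the wizard's exact generated output differs.

```python
import json

def json_mapping_command(table, mapping_name, columns):
    """Build a '.create ... ingestion json mapping' management command
    string from (column, path, datatype) triples."""
    mapping = [{"column": c, "path": p, "datatype": t} for c, p, t in columns]
    return (f".create table {table} ingestion json mapping "
            f"'{mapping_name}' '{json.dumps(mapping)}'")

cmd = json_mapping_command(
    "Events", "Events_json_mapping",
    [("id", "$.id", "long"), ("lat", "$.props.geo.lat", "real")],
)
```

Copying the generated commands from the editor lets you rerun or script the same ingestion later without the wizard.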

Start ingestion

Select Next: Start ingestion to begin data ingestion.

Screenshot of ingestion wizard fields completed to start ingestion.

Complete data ingestion

In the Data ingestion completed window, all three steps are marked with green check marks when data ingestion finishes successfully.

Screenshot of ingestion wizard summary when ingestion is completed.

Important

To set up continuous ingestion from a container, see Ingest data from a container or Azure Data Lake Storage into Azure Data Explorer.

Explore quick queries and tools

In the tiles below the ingestion progress, explore Quick queries or Tools:

  • Quick queries include links to the Azure Data Explorer web UI with example queries.

  • Tools includes links to Undo or Delete new data on the web UI, which enable you to troubleshoot issues by running the relevant .drop commands.

    Note

    You might lose data when you use .drop commands. Use them carefully. Drop commands will only revert the changes that were made by this ingestion flow (new extents and columns). Nothing else will be dropped.

Next steps

For another ingestion scenario, see the following article:

To get started querying data, see the following articles: