Get data from Eventstream

In this article, you learn how to get data from an existing eventstream into a new or existing table.

You can ingest data from default or derived streams. A derived stream is created by adding a series of stream operations to the eventstream, such as Filter or Manage Fields. For more information, see Eventstream concepts.

To get data from a new eventstream, see Get data from a new eventstream.

Warning

  • Ingestion from an eventstream using a private link isn't supported.
  • Data preview from an eventstream with large sample events (10 MB or larger) isn't supported in the Get Data wizard. Use small sample events (about 1 MB each) to configure the data connection.
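The roughly 1 MB guidance above can be sanity-checked programmatically before you configure the data connection. The following Python sketch is illustrative only; the function name and the serialized-size check are assumptions, not part of the product:

```python
import json

# ~1 MB guideline for sample events in the Get Data wizard (illustrative threshold)
MAX_SAMPLE_BYTES = 1_000_000

def sample_event_ok(event: dict) -> bool:
    """Return True when a serialized sample event is small enough for data preview."""
    return len(json.dumps(event).encode("utf-8")) <= MAX_SAMPLE_BYTES

print(sample_event_ok({"deviceId": "sensor-01", "temperature": 21.5}))  # True
```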

Prerequisites

  • A workspace with a Microsoft Fabric-enabled capacity
  • A KQL database with editing permissions
  • An eventstream with a data source

Step 1: Source

To get data from an eventstream, select the eventstream as your data source. On the ribbon of the KQL database, do one of the following:

  • From the Get Data option on the ribbon, select the Eventstream tile.

  • From the Get Data dropdown menu, select Eventstream > Existing Eventstream.

  • From the Get Data dropdown menu, select Real-Time data hub, and select an eventstream from the list.

Step 2: Configure

  1. Select a target table. To ingest data into a new table, select + New table and enter a table name.

    Note

    Table names can be up to 1,024 characters and can include spaces, alphanumeric characters, hyphens, and underscores. Special characters aren't supported.

  2. Under Configure the data source, complete the settings using the information in the following table:

    • When you select Eventstream as your source, specify the Workspace, Eventstream, and default or derived Stream.

    Important

    The feature to get data from derived streams is in preview.

    Screenshot of the configure tab with a new table entered and one eventstream selected.

    • When you select Real-Time hub as your source, you choose a default or derived stream from the list. The Workspace, Eventstream, and Stream settings are then automatically populated and don't require configuration.

    Screenshot of configure tab with new table entered and read-only configure data source settings.

    • Workspace: Your eventstream workspace location. Select a workspace from the dropdown.
    • Eventstream: The name of your eventstream. Select an eventstream from the dropdown.
    • Stream: The name of the default or derived stream. Select a stream from the dropdown.
      * For default streams, the stream name format is Eventstream-stream.
      * For derived streams, the name was defined when the stream was created.
    • Process event before ingestion in Eventstream: This option allows you to configure data processing before data is ingested into the destination table. If selected, you continue the data ingestion process in Eventstream. For more information, see Process event before ingestion in Eventstream.
    • Data connection name: The name used to reference and manage your data connection in your workspace. The data connection name is automatically populated, and you can edit the name to simplify managing the data connection in the workspace. The name can contain only alphanumeric, dash, and dot characters, and can be up to 40 characters long.
  3. Select Next to continue.
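The naming rules above (table names: up to 1,024 characters of letters, digits, spaces, hyphens, and underscores; data connection names: up to 40 characters of letters, digits, dashes, and dots) can be expressed as a quick validation sketch. The function names and regular expressions here are assumptions derived from the stated rules, not an official API:

```python
import re

def valid_table_name(name: str) -> bool:
    # Up to 1,024 characters: letters, digits, spaces, hyphens, underscores.
    return bool(re.fullmatch(r"[A-Za-z0-9 _-]{1,1024}", name))

def valid_connection_name(name: str) -> bool:
    # Up to 40 characters: letters, digits, dashes, dots.
    return bool(re.fullmatch(r"[A-Za-z0-9.-]{1,40}", name))

print(valid_table_name("Sensor Readings-2024_v1"))   # True
print(valid_connection_name("eventstream-conn.1"))   # True
```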


Process event before ingestion in Eventstream

The Process event before ingestion in Eventstream option enables you to process the data before it's ingested into the destination table. With this option, the get data process seamlessly continues in Eventstream, with the destination table and data source details automatically populated.

To process events before ingestion in Eventstream:

  1. On the Configure tab, select Process event before ingestion in Eventstream.

  2. In the Process events in Eventstream dialog box, select Continue in Eventstream.

    Important

    Selecting Continue in Eventstream ends the get data process in Real-Time Intelligence and continues in Eventstream with the destination table and data source details automatically populated.

    Screenshot of the Process events in Eventstream dialog box.

  3. In Eventstream, select the KQL Database destination node, and in the KQL Database pane, verify that Event processing before ingestion is selected and that the destination details are correct.

    Screenshot of the Process events in Eventstream page.

  4. Select Open event processor to configure the data processing and then select Save. For more information, see Process event data with event processor editor.

  5. Back in the KQL Database pane, select Add to complete the KQL Database destination node setup.

  6. Verify data is ingested into the destination table.

Note

When you process events before ingestion in Eventstream, the get data process completes in Eventstream, and the remaining steps in this article aren't required.

Step 3: Inspect

The Inspect tab shows a preview of the data.

Select Finish to complete the ingestion process.

Screenshot of the inspect tab.

Optional:

  • Use the file type dropdown to explore Advanced options based on data type.

  • Use the Table_mapping dropdown to define a new mapping.

  • Select </> to open the command viewer to view and copy the automatic commands generated from your inputs. You can also open the commands in a queryset.

  • Select the pencil icon to Edit columns.

Edit columns

Note

  • For tabular formats (CSV, TSV, PSV), you can't map a column twice. To map to an existing column, first delete the new column.
  • You can't change the type of an existing column. If you try to map source data of a different format to an existing column, you may end up with empty columns.

The changes you can make in a table depend on the following parameters:

  • Table type is new or existing
  • Mapping type is new or existing
  • New table + new mapping: Rename column, change data type, change data source, mapping transformation, add column, delete column
  • Existing table + new mapping: Add column (on which you can then change data type, rename, and update)
  • Existing table + existing mapping: None

Screenshot of columns open for editing.

Mapping transformations

Some data format mappings (Parquet, JSON, and Avro) support simple ingest-time transformations. To apply mapping transformations, create or update a column in the Edit columns window.

Mapping transformations can be performed on a column of type string or datetime, with the source having data type int or long. For more information, see the full list of supported mapping transformations.
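As a hedged illustration, the DateTimeFromUnixSeconds mapping transformation maps an int or long source value to a datetime column. Conceptually it behaves like the following Python sketch (the function name is ours, not the product's):

```python
from datetime import datetime, timezone

def datetime_from_unix_seconds(value: int) -> datetime:
    """Approximate the DateTimeFromUnixSeconds transformation:
    an int/long source value becomes a UTC datetime column value."""
    return datetime.fromtimestamp(value, tz=timezone.utc)

print(datetime_from_unix_seconds(0).isoformat())  # 1970-01-01T00:00:00+00:00
```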

Advanced options based on data type

Tabular (CSV, TSV, and PSV):

  • If you're ingesting tabular formats in an existing table, you can select Table_mapping > Use existing mapping. Tabular data doesn't always include the column names used to map source data to the existing columns. When this option is checked, mapping is done by-order, and the table schema remains the same. If this option is unchecked, new columns are created for incoming data, regardless of data structure.
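The by-order behavior described above can be sketched as follows: headerless rows are assigned to the existing table's columns by position. The column names and data here are hypothetical:

```python
import csv
import io

raw = "1,21.5\n2,19.0\n"                  # headerless tabular data
schema = ["deviceId", "temperature"]      # existing table columns, in order

# "Use existing mapping": assign each value to an existing column by position.
rows = [dict(zip(schema, record)) for record in csv.reader(io.StringIO(raw))]
print(rows[0])  # {'deviceId': '1', 'temperature': '21.5'}
```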

JSON:

  • Select Nested levels to determine the column division of JSON data, from 1 to 100.
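Conceptually, the Nested levels setting controls how deep JSON objects are split into separate columns; anything below the chosen level is kept as a whole object in a single column. This Python sketch approximates that behavior and is not the actual implementation:

```python
def flatten(obj: dict, max_level: int, level: int = 1, prefix: str = "") -> dict:
    """Split JSON into columns down to max_level; deeper values stay as raw objects."""
    cols = {}
    for key, val in obj.items():
        name = f"{prefix}{key}"
        if isinstance(val, dict) and level < max_level:
            cols.update(flatten(val, max_level, level + 1, name + "_"))
        else:
            cols[name] = val
    return cols

event = {"device": {"id": "s1", "loc": {"lat": 35.0}}, "temp": 21.5}
print(flatten(event, max_level=1))  # whole objects kept as single columns
print(flatten(event, max_level=2))  # one more level split out into columns
```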

Step 4: Summary

In the Summary window, all the steps are marked as completed when data ingestion finishes successfully. Select a card to explore the data, delete the ingested data, or create a dashboard with key metrics. Select Close to close the window.

Screenshot of the summary page showing successful data ingestion.