Get data from Eventstream

In this article, you learn how to get data from an existing eventstream into either a new or existing table.

To get data from a new eventstream, see Get data from a new eventstream.

Prerequisites

Source

To get data from an eventstream, you need to select the eventstream as your data source. You can select an existing eventstream in the following ways:

On the lower ribbon of your KQL database, do one of the following:

  • From the Get Data dropdown menu, under Continuous, select Eventstream > Existing Eventstream.

  • Select Get Data, and then in the Get data window, select Eventstream.

  • From the Get Data dropdown menu, under Continuous, select Real-Time data hub > Existing Eventstream.

    Screenshot of get data window with source tab selected.

Configure

  1. Select a target table. If you want to ingest data into a new table, select + New table and enter a table name.

    Note

    Table names can be up to 1024 characters, and can include alphanumeric characters, spaces, hyphens, and underscores. Other special characters aren't supported.

  2. Under Configure the data source, fill out the settings using the information in the following table:

    Screenshot of configure tab with new table entered and one sample data file selected.

    Setting | Description
    Workspace | Your eventstream workspace location. Select a workspace from the dropdown.
    Eventstream Name | The name of your eventstream. Select an eventstream from the dropdown.
    Data connection name | The name used to reference and manage your data connection in your workspace. The data connection name is filled in automatically. Optionally, you can enter a new name. The name can contain only alphanumeric, dash, and dot characters, and can be up to 40 characters long.
    Process event before ingestion in Eventstream | This option lets you configure data processing before data is ingested into the destination table. If selected, you continue the data ingestion process in Eventstream. For more information, see Process event before ingestion in Eventstream.
    Advanced filters
    Compression | Data compression of the events, as coming from the event hub. Options are None (default) or gzip compression.
    Event system properties | If there are multiple records per event message, the system properties are added to the first record. For more information, see Event system properties.
    Event retrieval start date | The data connection retrieves existing events created since the Event retrieval start date. It can only retrieve events retained by the event hub, based on its retention period. The time zone is UTC. If no time is specified, the default is the time at which the data connection is created.
  3. Select Next.
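
As a quick illustration of the naming rules above, the checks can be sketched in Python. The helper functions are hypothetical, not part of any Fabric API; they only encode the limits stated in this article (table names: up to 1024 characters, alphanumerics, spaces, hyphens, underscores; data connection names: up to 40 characters, alphanumeric, dash, and dot).

```python
import re

def is_valid_table_name(name: str) -> bool:
    """Table-name rules from this article: up to 1024 characters;
    alphanumerics, spaces, hyphens, and underscores only."""
    return 0 < len(name) <= 1024 and re.fullmatch(r"[A-Za-z0-9 _-]+", name) is not None

def is_valid_connection_name(name: str) -> bool:
    """Data-connection-name rules: up to 40 characters;
    alphanumeric, dash, and dot characters only."""
    return 0 < len(name) <= 40 and re.fullmatch(r"[A-Za-z0-9.-]+", name) is not None

print(is_valid_table_name("sensor-readings_2024"))   # True
print(is_valid_connection_name("my.connection-01"))  # True
print(is_valid_connection_name("a" * 41))            # False: too long
```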

Process event before ingestion in Eventstream

The Process event before ingestion in Eventstream option enables you to process the data before it's ingested into the destination table. With this option, the get data process seamlessly continues in Eventstream, with the destination table and data source details automatically populated.

To process events before ingestion in Eventstream:

  1. On the Configure tab, select Process event before ingestion in Eventstream.

  2. In the Process events in Eventstream dialog box, select Continue in Eventstream.

    Important

    Selecting Continue in Eventstream ends the get data process in Real-Time Intelligence and continues in Eventstream with the destination table and data source details automatically populated.

    Screenshot of the Process events in Eventstream dialog box.

  3. In Eventstream, select the KQL Database destination node, and in the KQL Database pane, verify that Event processing before ingestion is selected and that the destination details are correct.

    Screenshot of the Process events in Eventstream page.

  4. Select Open event processor to configure the data processing and then select Save. For more information, see Process event data with event processor editor.

  5. Back in the KQL Database pane, select Add to complete the KQL Database destination node setup.

  6. Verify data is ingested into the destination table.

Note

Once you process events before ingestion in Eventstream, the get data process is complete, and the remaining steps in this article aren't required.

Inspect

The Inspect tab opens with a preview of the data.

To complete the ingestion process, select Finish.

Screenshot of the inspect tab.

Optionally:

  • Select Command viewer to view and copy the automatic commands generated from your inputs.
  • Change the automatically inferred data format by selecting the desired format from the dropdown. Data is read from the event hub in the form of EventData objects. Supported formats are CSV, JSON, PSV, SCsv, SOHsv, TSV, TXT, and TSVE.
  • Edit columns.
  • Explore Advanced options based on data type.

Edit columns

Note

  • For tabular formats (CSV, TSV, PSV), you can't map a column twice. To map to an existing column, first delete the new column.
  • You can't change an existing column type. If you try to map to a column having a different format, you may end up with empty columns.

The changes you can make in a table depend on the following parameters:

  • Table type is new or existing
  • Mapping type is new or existing
Table type | Mapping type | Available adjustments
New table | New mapping | Rename column, change data type, change data source, mapping transformation, add column, delete column
Existing table | New mapping | Add column (on which you can then change data type, rename, and update)
Existing table | Existing mapping | None

Screenshot of columns open for editing.

Mapping transformations

Some data format mappings (Parquet, JSON, and Avro) support simple ingest-time transformations. To apply mapping transformations, create or update a column in the Edit columns window.

Mapping transformations can be performed on a column of type string or datetime, with the source having data type int or long. Supported mapping transformations are:

  • DateTimeFromUnixSeconds
  • DateTimeFromUnixMilliseconds
  • DateTimeFromUnixMicroseconds
  • DateTimeFromUnixNanoseconds
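
The effect of each transformation can be sketched in Python, assuming the source value is an integer Unix epoch count at the indicated precision (the function names are illustrative; they mirror the transformation names above rather than any real API):

```python
from datetime import datetime, timezone

# Each function converts an integer source value to a UTC datetime,
# matching the corresponding ingest-time mapping transformation.
def from_unix_seconds(value: int) -> datetime:
    return datetime.fromtimestamp(value, tz=timezone.utc)

def from_unix_milliseconds(value: int) -> datetime:
    return datetime.fromtimestamp(value / 1_000, tz=timezone.utc)

def from_unix_microseconds(value: int) -> datetime:
    return datetime.fromtimestamp(value / 1_000_000, tz=timezone.utc)

def from_unix_nanoseconds(value: int) -> datetime:
    return datetime.fromtimestamp(value / 1_000_000_000, tz=timezone.utc)

print(from_unix_seconds(1700000000))  # 2023-11-14 22:13:20+00:00
```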

Advanced options based on data type

Tabular (CSV, TSV, PSV):

Tabular data doesn't necessarily include the column names that are used to map source data to the existing columns. To use the first row as column names, turn on First row is column header.

Screenshot of the First row is column header switch.
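
The difference the switch makes can be sketched with Python's csv module. The sample data and the positional column names (Column1, Column2, ...) are illustrative assumptions, not the names the product actually assigns:

```python
import csv
import io

raw = "Timestamp,DeviceId,Temperature\n2024-01-01T00:00:00Z,dev-1,21.5\n"

# With "First row is column header" on: the first row supplies column names.
with_header = list(csv.DictReader(io.StringIO(raw)))
print(with_header[0]["DeviceId"])  # dev-1

# With the switch off: every row, including the first, is treated as data,
# and columns get positional names (illustrative here).
rows = list(csv.reader(io.StringIO(raw)))
no_header = [
    {f"Column{i + 1}": cell for i, cell in enumerate(row)} for row in rows
]
print(no_header[0]["Column2"])  # DeviceId (the header row becomes data)
```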

JSON:

To determine column division of JSON data, select Advanced > Nested levels, from 1 to 100.

Screenshot of nested levels JSON options.
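
The idea behind the Nested levels setting can be sketched in Python: objects are split into separate columns down to the chosen level, and anything deeper is kept whole in a single column. This is an illustrative sketch of the behavior, with dotted column names as an assumption, not the product's implementation:

```python
def flatten(obj: dict, max_level: int, prefix: str = "", level: int = 1) -> dict:
    """Split a JSON object into columns down to max_level; deeper
    objects are kept whole as a single column value."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict) and level < max_level:
            out.update(flatten(value, max_level, f"{name}.", level + 1))
        else:
            out[name] = value
    return out

event = {"device": {"id": "dev-1", "geo": {"lat": 47.6, "lon": -122.3}}, "temp": 21.5}
print(flatten(event, 1))  # 'device' stays one column holding the whole object
print(flatten(event, 2))  # columns: 'device.id', 'device.geo', 'temp'
```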

Summary

In the Data preparation window, all three steps are marked with green check marks when data ingestion finishes successfully. You can select a card to query, drop the ingested data, or see a dashboard of your ingestion summary. Select Close to close the window.

Screenshot of summary page with successful ingestion completed.