Ingest data from Splunk Universal Forwarder to Azure Data Explorer

Important

This connector can be used in Real-Time Intelligence in Microsoft Fabric. Use the instructions in this article with the following exceptions:

Splunk Universal Forwarder is a lightweight version of the Splunk Enterprise software that allows you to ingest data from many sources simultaneously. It's designed for collecting and forwarding log data and machine data from various sources to a central Splunk Enterprise server or a Splunk Cloud deployment. Splunk Universal Forwarder serves as an agent that simplifies the process of data collection and forwarding, making it an essential component in a Splunk deployment. Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data.

In this article, you learn how to use the Kusto Splunk Universal Forwarder connector to send data to a table in your cluster. You first create a table and data mapping, then configure Splunk to forward data into the table, and finally validate the results.

Prerequisites

  * An Azure Data Explorer cluster and database.
  * A Microsoft Entra application registration (service principal) with its client ID and client secret.
  * Splunk Universal Forwarder.
  * Docker installed on the machine that hosts the Kusto Splunk Universal Forwarder connector.

Create an Azure Data Explorer table

Create a table to receive the data from Splunk Universal Forwarder and then grant the service principal access to this table.

In the following steps, you create a table named SplunkUFLogs with a single column (RawText), because Splunk Universal Forwarder sends data in raw text format by default. You can run the following commands in the web UI query editor. An optional ingestion mapping example follows these steps.

  1. Create a table:

    .create table SplunkUFLogs (RawText: string)
    
  2. Verify that the table SplunkUFLogs was created and is empty:

    SplunkUFLogs
    | count
    
  3. Grant the service principal from the Prerequisites permission to work with the database that contains your table.

    .add database YOUR_DATABASE_NAME admins ('aadapp=YOUR_APP_ID;YOUR_TENANT_ID') 'Entra service principal: Splunk UF'
    

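The connector configuration described later can reference an ingestion mapping by name (the table_mapping_name property). If you want to use one, the following is a minimal sketch that maps the whole raw event (CSV ordinal 0) to the RawText column; the mapping name SplunkUFLogs_csv_mapping is illustrative:

    .create table SplunkUFLogs ingestion csv mapping "SplunkUFLogs_csv_mapping"
    '[{"column":"RawText","Properties":{"Ordinal":"0"}}]'

If you skip this step, omit table_mapping_name from the connector configuration.
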
Configure the Splunk Universal Forwarder

When you install Splunk Universal Forwarder, a setup wizard opens to configure the forwarder.

  1. In the wizard, set the Receiving Indexer to point to the system hosting the Kusto Splunk Universal Forwarder connector. Enter 127.0.0.1 for the Hostname or IP and 9997 for the port. Leave the Destination Indexer blank.

    For more information, see Enable a receiver for Splunk Enterprise.

  2. Go to the folder where Splunk Universal Forwarder is installed and then to the /etc/system/local folder. Create or modify the inputs.conf file to allow the forwarder to read logs:

    [default]
    index = default
    disabled = false
    
    [monitor://C:\Program Files\Splunk\var\log\splunk\modinput_eventgen.log*]
    sourcetype = modinput_eventgen
    

    For more information, see Monitor files and directories with inputs.conf.

  3. Go to the folder where Splunk Universal Forwarder is installed and then to the /etc/system/local folder. Create or modify the outputs.conf file to determine the destination location for the logs, which is the hostname and port of the system hosting Kusto Splunk Universal Forwarder connector:

    [tcpout]
    defaultGroup = default-autolb-group
    sendCookedData = false
    
    [tcpout:default-autolb-group]
    server = 127.0.0.1:9997
    
    [tcpout-server://127.0.0.1:9997]
    

    For more information, see Configure forwarding with outputs.conf.

  4. Restart Splunk Universal Forwarder.
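
    The restart command depends on your platform and installation path. A minimal sketch, assuming a default Windows installation of Splunk Universal Forwarder (on Linux, the equivalent is typically $SPLUNK_HOME/bin/splunk restart):

    "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart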

Configure the Kusto Splunk Universal Forwarder connector

To configure the Kusto Splunk Universal Forwarder connector to send logs to your Azure Data Explorer table:

  1. Download or clone the connector from the GitHub repository.

  2. Go to the base directory of the connector:

    cd .\SplunkADXForwarder\
    
  3. Edit the config.yml to contain the following properties:

    ingest_url: <ingest_url>
    client_id: <ms_entra_app_client_id>
    client_secret: <ms_entra_app_client_secret>
    authority: <ms_entra_authority>
    database_name: <database_name>
    table_name: <table_name>
    table_mapping_name: <table_mapping_name>
    data_format: csv
    
    | Field | Description |
    |---|---|
    | ingest_url | The ingestion URL for your Azure Data Explorer cluster. You can find it in the Azure portal under the Data ingestion URI in the Overview tab of your cluster. It should be in the format https://ingest-<clusterName>.<region>.kusto.windows.net. |
    | client_id | The client ID of your Microsoft Entra application registration created in the Prerequisites section. |
    | client_secret | The client secret of your Microsoft Entra application registration created in the Prerequisites section. |
    | authority | The ID of the tenant that holds your Microsoft Entra application registration created in the Prerequisites section. |
    | database_name | The name of your Azure Data Explorer database. |
    | table_name | The name of your Azure Data Explorer destination table. |
    | table_mapping_name | The name of the ingestion data mapping for your table. If you don't have a mapping, you can omit this property from the configuration file. You can always parse the data into various columns later. |
    | data_format | The expected format of the incoming data. The incoming data is in raw text format, so the recommended format is csv, which maps the raw text to the zero index by default. |
  4. Build the Docker image:

    docker build -t splunk-forwarder-listener .
    
  5. Run the Docker container:

    docker run -p 9997:9997 splunk-forwarder-listener
    

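The command above runs the listener in the foreground, attached to your terminal. If you prefer to run it in the background, the following is a minimal sketch using standard Docker flags (the container name and restart policy are illustrative choices, not part of the original instructions):

    docker run -d --name splunk-forwarder-listener --restart unless-stopped -p 9997:9997 splunk-forwarder-listener
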
Verify that data is ingested into Azure Data Explorer

Once the Docker container is running, data is sent to your Azure Data Explorer table. You can verify that the data is ingested by running a query in the web UI query editor.

  1. Run the following query to verify that data is ingested into the table:

    SplunkUFLogs
    | count
    
  2. Run the following query to view the data:

    SplunkUFLogs
    | take 100
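
Because each event is stored as a single raw text string, you can parse it into structured columns at query time. The following is a minimal sketch that assumes space-delimited events whose first two fields are a timestamp and a level; adjust it to match your actual event format:

    SplunkUFLogs
    | take 10
    | extend Fields = split(RawText, " ")
    | extend Timestamp = tostring(Fields[0]), Level = tostring(Fields[1])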