Ingest data with Fluent Bit into Azure Data Explorer
Fluent Bit is an open-source agent that collects logs, metrics, and traces from various sources. It allows you to filter, modify, and aggregate event data before sending it to storage. Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. This article guides you through the process of using Fluent Bit to send data to Azure Data Explorer.
In this article, you learn how to create a table to store your logs, register a Microsoft Entra app with permissions to ingest data, grant that app access to your database, configure Fluent Bit to send logs to your table, and verify that data has landed.
For a complete list of data connectors, see Data connectors overview.
Prerequisites
- Fluent Bit.
- An Azure Data Explorer cluster and database. Create a cluster and database.
- A query environment. You can use any of the available query tools.
Create a table to store your logs
Fluent Bit forwards logs in JSON format with three properties: log (dynamic), tag (string), and timestamp (datetime).
You can create a table with columns for each of these properties. Alternatively, if you have structured logs, you can create a table with the log properties mapped to custom columns and define a JSON ingestion mapping, as shown in the sketch after the following steps.
To create a table for incoming logs from Fluent Bit:
Browse to your query environment.
Select the database where you'd like to create the table.
Run the following .create table command:

.create table FluentBitLogs (log:dynamic, tag:string, timestamp:datetime)
The incoming JSON properties are automatically mapped to the correct columns.
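If your logs are structured, you can instead create the table with custom columns and define a JSON ingestion mapping. The following is a minimal sketch: the column names (Level, Message), the JSON paths, and the mapping name LogMapping are hypothetical, so adjust them to match your log schema and reference the mapping name later in the Fluent Bit configuration.

.create table FluentBitLogs (Level:string, Message:string, tag:string, timestamp:datetime)

.create table FluentBitLogs ingestion json mapping "LogMapping" '[{"column":"Level","path":"$.log.level","datatype":"string"},{"column":"Message","path":"$.log.message","datatype":"string"},{"column":"tag","path":"$.tag","datatype":"string"},{"column":"timestamp","path":"$.timestamp","datatype":"datetime"}]'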
Register a Microsoft Entra app with permissions to ingest data
The Microsoft Entra service principal can be created through the Azure portal or programmatically, as in the following example.
This service principal is the identity used by the connector to write data to your table in Kusto. You'll later grant permissions for this service principal to access Kusto resources.
Sign in to your Azure subscription via Azure CLI. Then authenticate in the browser.
az login
Choose the subscription to host the principal. This step is needed when you have multiple subscriptions.
az account set --subscription YOUR_SUBSCRIPTION_GUID
Create the service principal. In this example, the service principal is called my-service-principal.

az ad sp create-for-rbac -n "my-service-principal" --role Contributor --scopes /subscriptions/{SubID}
From the returned JSON data, copy the appId, password, and tenant for future use.

{
  "appId": "00001111-aaaa-2222-bbbb-3333cccc4444",
  "displayName": "my-service-principal",
  "name": "my-service-principal",
  "password": "00001111-aaaa-2222-bbbb-3333cccc4444",
  "tenant": "00001111-aaaa-2222-bbbb-3333cccc4444"
}
You've created your Microsoft Entra application and service principal.
Grant permissions to the service principal
Run the following command, replacing <MyDatabase> with the name of the database:

.add database <MyDatabase> ingestors ('aadapp=<Application (client) ID>;<Directory (tenant) ID>')
This command grants the application permissions to ingest data into your table. For more information, see role-based access control.
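Optionally, you can check that the assignment took effect by listing the database principals and confirming that the app appears with the Ingestor role:

.show database <MyDatabase> principals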
Configure Fluent Bit to send logs to your table
To configure Fluent Bit to send logs to your Azure Data Explorer table, create a classic mode or YAML mode configuration file with the following output properties:
Field | Description |
---|---|
Name | azure_kusto |
Match | A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard. |
Tenant_Id | Directory (tenant) ID from Register a Microsoft Entra app with permissions to ingest data. |
Client_Id | Application (client) ID from Register a Microsoft Entra app with permissions to ingest data. |
Client_Secret | The client secret key value from Register a Microsoft Entra app with permissions to ingest data. |
Ingestion_Endpoint | Use the Data Ingestion URI found in the Azure portal under your cluster overview. |
Database_Name | The name of the database that contains your logs table. |
Table_Name | The name of the table from Create a table to store your logs. |
Ingestion_Mapping_Reference | The name of the ingestion mapping from Create a table to store your logs. If you didn't create an ingestion mapping, remove this property from the configuration file. |
The following example shows a classic mode configuration file:
[SERVICE]
    Daemon Off
    Flush 1
    Log_Level trace
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020
    Health_Check On

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    Tag kube.*
    Mem_Buf_Limit 1MB
    Skip_Long_Lines On
    Refresh_Interval 10

[OUTPUT]
    Name azure_kusto
    Match *
    Tenant_Id azure-tenant-id
    Client_Id azure-client-id
    Client_Secret azure-client-secret
    Ingestion_Endpoint azure-data-explorer-ingestion-endpoint
    Database_Name azure-data-explorer-database-name
    Table_Name azure-data-explorer-table-name
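If you prefer YAML mode, a roughly equivalent configuration is sketched below, assuming the same placeholder values as the classic example; check the Fluent Bit documentation for the exact YAML schema supported by your version.

service:
  daemon: off
  flush: 1
  log_level: trace
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  health_check: on
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      tag: kube.*
      mem_buf_limit: 1MB
      skip_long_lines: on
      refresh_interval: 10
  outputs:
    - name: azure_kusto
      match: "*"
      tenant_id: azure-tenant-id
      client_id: azure-client-id
      client_secret: azure-client-secret
      ingestion_endpoint: azure-data-explorer-ingestion-endpoint
      database_name: azure-data-explorer-database-name
      table_name: azure-data-explorer-table-name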
Verify that data has landed in your table
Once the configuration is complete, logs should arrive in your table.
To verify that logs are ingested, run the following query:
FluentBitLogs | count
To view a sample of log data, run the following query:
FluentBitLogs | take 100
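Because the table also stores the tag and timestamp columns created earlier, you can, for example, summarize recent log volume per tag:

FluentBitLogs | where timestamp > ago(1h) | summarize LogCount = count() by tag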