Integrations overview
There are many data connectors, tools, and integrations that work seamlessly with the platform for ingestion, orchestration, output, and data query. This article provides a high-level overview of the available connectors, tools, and integrations. Detailed information is provided for each connector, along with links to its full documentation.
Comparison table
The following table summarizes the available connectors and their capabilities. Each item name is linked to its detailed description.
Name | Ingest | Export | Orchestrate | Query |
---|---|---|---|---|
Apache Kafka | ✔️ | | | |
Apache Flink | ✔️ | | | |
Apache Log4J 2 | ✔️ | | | |
Apache Spark | ✔️ | ✔️ | | ✔️ |
Apache Spark for Azure Synapse Analytics | ✔️ | ✔️ | | ✔️ |
Azure Cosmos DB | ✔️ | | | |
Azure Data Factory | ✔️ | ✔️ | | |
Azure Event Grid | ✔️ | | | |
Azure Event Hubs | ✔️ | | | |
Azure Functions | ✔️ | ✔️ | | |
Azure IoT Hubs | ✔️ | | | |
Azure Stream Analytics | ✔️ | | | |
Cribl Stream | ✔️ | | | |
Fluent Bit | ✔️ | | | |
JDBC | | | | ✔️ |
Logic Apps | ✔️ | ✔️ | ✔️ | |
Logstash | ✔️ | | | |
MATLAB | | | | ✔️ |
NLog | ✔️ | | | |
ODBC | | | | ✔️ |
OpenTelemetry | ✔️ | | | |
Power Apps | ✔️ | ✔️ | | ✔️ |
Power Automate | ✔️ | ✔️ | ✔️ | |
Serilog | ✔️ | | | |
Splunk | ✔️ | | | |
Splunk Universal Forwarder | ✔️ | | | |
Telegraf | ✔️ | | | |
Detailed descriptions
The following sections provide detailed descriptions of the available connectors, tools, and integrations. All available items are summarized in the comparison table above.
Apache Kafka
Apache Kafka is a distributed streaming platform for building real-time streaming data pipelines that reliably move data between systems or applications. Kafka Connect is a tool for scalable and reliable streaming of data between Apache Kafka and other data systems. The Kafka Sink serves as the connector from Kafka and doesn't require writing code. The connector is gold certified by Confluent, meaning it has gone through comprehensive review and testing for quality, feature completeness, compliance with standards, and performance. A producer-side sketch follows the details below.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Logs, Telemetry, Time series
- Underlying SDK: Java
- Repository: Microsoft Azure - https://github.com/Azure/kafka-sink-azure-kusto/
- Documentation: Ingest data from Apache Kafka
- Community Blog: Kafka ingestion into Azure Data Explorer
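The sink itself is configured in Kafka Connect rather than in application code. As a minimal, producer-side sketch, the following Java snippet publishes JSON events to a Kafka topic that a configured Kusto sink connector could then ingest; the broker address, topic name, and event shape are illustrative assumptions, not part of the connector.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TelemetryProducer {
    public static void main(String[] args) {
        // Standard Kafka producer setup; the broker address is a placeholder.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // JSON payload matching a hypothetical target table and ingestion mapping.
            String event = "{\"timestamp\":\"2024-01-01T00:00:00Z\",\"device\":\"sensor-1\",\"temp\":21.5}";
            // "telemetry" is a hypothetical topic that the sink connector subscribes to.
            producer.send(new ProducerRecord<>("telemetry", "sensor-1", event));
        }
    }
}
```

From here, the Kusto sink connector (configured separately in Kafka Connect with the target cluster, database, and table-to-topic mapping) picks up the events and ingests them without further application code.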
Apache Flink
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. The connector implements a data sink for moving data between Azure Data Explorer and Flink clusters. Using Azure Data Explorer and Apache Flink, you can build fast and scalable applications targeting data-driven scenarios, such as machine learning (ML), Extract-Transform-Load (ETL), and Log Analytics.
- Functionality: Ingestion
- Ingestion type supported: Streaming
- Use cases: Telemetry
- Underlying SDK: Java
- Repository: Microsoft Azure - https://github.com/Azure/flink-connector-kusto/
- Documentation: Ingest data from Apache Flink
Apache Log4J 2
Log4J is a popular logging framework for Java applications maintained by the Apache Foundation. Log4J allows developers to control which log statements are output with arbitrary granularity based on the logger's name, logger level, and message pattern. The Apache Log4J 2 sink allows you to stream your log data to your database, where you can analyze and visualize your logs in real time. A usage sketch follows the details below.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Logs
- Underlying SDK: Java
- Repository: Microsoft Azure - https://github.com/Azure/azure-kusto-log4j
- Documentation: Ingest data with the Apache Log4J 2 connector
- Community Blog: Getting started with Apache Log4J and Azure Data Explorer
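Because the sink plugs into standard Log4J 2 configuration, application code uses the ordinary Log4J API. The following is a minimal sketch that assumes the Kusto appender has already been configured in log4j2.xml (configuration not shown); the class name and log messages are illustrative.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class PaymentService {
    // Standard Log4J logger; the sink configured in log4j2.xml picks up
    // these statements and streams them to the database.
    private static final Logger logger = LogManager.getLogger(PaymentService.class);

    public void charge(String accountId, double amount) {
        logger.info("Charging account {} amount {}", accountId, amount);
        try {
            // ... business logic ...
        } catch (Exception e) {
            logger.error("Charge failed for account {}", accountId, e);
        }
    }
}
```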
Apache Spark
Apache Spark is a unified analytics engine for large-scale data processing. The Spark connector is an open source project that can run on any Spark cluster. It implements a data source and data sink for moving data to or from Spark clusters. Using the Apache Spark connector, you can build fast and scalable applications targeting data-driven scenarios, such as machine learning (ML), Extract-Transform-Load (ETL), and Log Analytics. With the connector, your database becomes a valid data store for standard Spark source and sink operations, such as read, write, and writeStream. A write-path sketch follows the details below.
- Functionality: Ingestion, Export
- Ingestion type supported: Batching, Streaming
- Use cases: Telemetry
- Underlying SDK: Java
- Repository: Microsoft Azure - https://github.com/Azure/azure-kusto-spark/
- Documentation: Apache Spark connector
- Community Blog: Data preprocessing for Azure Data Explorer with Apache Spark
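The following is a minimal write-path sketch in Java, assuming the connector library is available on the cluster. The option names follow the connector's documented sink options, while the cluster, database, table, and AAD application values are placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkToKusto {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("kusto-write-demo").getOrCreate();

        // Any DataFrame works here; reading a JSON file is just an example source.
        Dataset<Row> df = spark.read().json("events.json");

        df.write()
          .format("com.microsoft.kusto.spark.datasource") // the connector's data source name
          .option("kustoCluster", "<cluster-name>")        // placeholder values
          .option("kustoDatabase", "<database>")
          .option("kustoTable", "<table>")
          .option("kustoAadAppId", "<app-id>")
          .option("kustoAadAppSecret", "<app-secret>")
          .option("kustoAadAuthorityID", "<tenant-id>")
          .mode("Append")
          .save();
    }
}
```

Reading follows the same pattern with `spark.read().format(...)`, which is what makes the database a standard Spark source as well as a sink.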
Apache Spark for Azure Synapse Analytics
Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big data analytic applications. Apache Spark in Azure Synapse Analytics is one of Microsoft's implementations of Apache Spark in the cloud. You can access a database from Synapse Studio with Apache Spark for Azure Synapse Analytics.
- Functionality: Ingestion, Export
- Ingestion type supported: Batching
- Use cases: Telemetry
- Underlying SDK: Java
- Documentation: Connect to an Azure Synapse workspace
Azure Cosmos DB
The Azure Cosmos DB change feed data connection is an ingestion pipeline that listens to your Cosmos DB change feed and ingests the data into your database.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Change feed
- Documentation: Ingest data from Azure Cosmos DB (Preview)
Azure Data Factory
Azure Data Factory (ADF) is a cloud-based data integration service that allows you to integrate different data stores and perform activities on the data.
- Functionality: Ingestion, Export
- Ingestion type supported: Batching
- Use cases: Data orchestration
- Documentation: Copy data to your database by using Azure Data Factory
Azure Event Grid
Event Grid ingestion is a pipeline that listens to Azure Storage and updates your database to pull information when subscribed events occur. You can configure continuous ingestion from Azure Storage (Blob storage and ADLSv2) with an Azure Event Grid subscription for blob-created or blob-renamed notifications, streaming the notifications to your database via Azure Event Hubs.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Event processing
- Documentation: Event Grid data connection
Azure Event Hubs
Azure Event Hubs is a big data streaming platform and event ingestion service. You can configure continuous ingestion from customer-managed Event Hubs. A sender-side sketch follows the details below.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Documentation: Azure Event Hubs data connection
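The data connection is configured on the database side; producers simply send events to the hub. The following is a minimal sender sketch using the azure-messaging-eventhubs Java SDK; the connection string, hub name, and JSON payload shape are placeholders for illustration.

```java
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

public class SendToEventHub {
    public static void main(String[] args) {
        // Placeholder namespace connection string and event hub name.
        EventHubProducerClient producer = new EventHubClientBuilder()
            .connectionString("<namespace-connection-string>", "<event-hub-name>")
            .buildProducerClient();

        // JSON payload matching a hypothetical target table and ingestion mapping.
        EventDataBatch batch = producer.createBatch();
        batch.tryAdd(new EventData("{\"timestamp\":\"2024-01-01T00:00:00Z\",\"value\":42}"));

        producer.send(batch);
        producer.close();
    }
}
```

Once the data connection exists, events sent this way are ingested continuously with no further code on the sender side.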
Azure Functions
Azure Functions allow you to run serverless code in the cloud on a schedule or in response to an event. With input and output bindings for Azure Functions, you can integrate your database into your workflows to ingest data and run queries against your database.
- Functionality: Ingestion, Export
- Ingestion type supported: Batching
- Use cases: Workflow integrations
- Documentation: Integrating Azure Functions using input and output bindings (preview)
- Community Blog: Azure Data Explorer (Kusto) Bindings for Azure Functions
Azure IoT Hubs
Azure IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can configure continuous ingestion from customer-managed IoT Hubs, using its Event Hubs-compatible built-in endpoint for device-to-cloud messages.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: IoT data
- Documentation: IoT Hub data connection
Azure Stream Analytics
Azure Stream Analytics is a real-time analytics and complex event-processing engine that's designed to process high volumes of fast streaming data from multiple sources simultaneously.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Event processing
- Documentation: Ingest data from Azure Stream Analytics
Cribl Stream
Cribl Stream is a processing engine that securely collects, processes, and streams machine event data from any source. It lets you parse and process that data for any destination for analysis.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Machine data processing including logs, metrics, instrumentation data
- Documentation: Ingest data from Cribl Stream into Azure Data Explorer
Fluent Bit
Fluent Bit is an open-source agent that collects logs, metrics, and traces from various sources. It allows you to filter, modify, and aggregate event data before sending it to storage.
- Functionality: Ingestion
- Ingestion type supported: Batching
- Use cases: Logs, Metrics, Traces
- Repository: fluent-bit Kusto Output Plugin
- Documentation: Ingest data with Fluent Bit into Azure Data Explorer
- Community Blog: Getting started with Fluent bit and Azure Data Explorer
JDBC
Java Database Connectivity (JDBC) is a Java API used to connect to databases and execute queries. You can use JDBC to connect to Azure Data Explorer. A connection sketch follows the details below.
- Functionality: Query, visualization
- Underlying SDK: Java
- Documentation: Connect to Azure Data Explorer with JDBC
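Because the service is compatible with a subset of MS-TDS, the Microsoft JDBC Driver for SQL Server (mssql-jdbc) can connect directly. The following is a minimal sketch; the cluster URI, database, and table name are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class KustoJdbcQuery {
    public static void main(String[] args) throws SQLException {
        // Placeholder cluster and database; interactive sign-in via the driver.
        String url = "jdbc:sqlserver://<cluster>.<region>.kusto.windows.net:1433;"
                + "database=<database>;encrypt=true;"
                + "authentication=ActiveDirectoryInteractive";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // T-SQL works over the TDS endpoint; MyTable is hypothetical.
             ResultSet rs = stmt.executeQuery("SELECT TOP 10 * FROM MyTable")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

The example authenticates interactively; the other authentication modes supported by the driver work the same way through the connection string.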
Logic Apps
The Microsoft Logic Apps connector allows you to run queries and commands automatically as part of a scheduled or triggered task.
- Functionality: Ingestion, Export
- Ingestion type supported: Batching
- Use cases: Data orchestration
- Documentation: Microsoft Logic Apps and Azure Data Explorer
Logstash
The Logstash plugin enables you to process events from Logstash into an Azure Data Explorer database for later analysis.
- Functionality: Ingestion
- Ingestion type supported: Batching
- Use cases: Logs
- Underlying SDK: Java
- Repository: Microsoft Azure - https://github.com/Azure/logstash-output-kusto/
- Documentation: Ingest data from Logstash
- Community Blog: How to migrate from Elasticsearch to Azure Data Explorer
MATLAB
MATLAB is a programming and numeric computing platform used to analyze data, develop algorithms, and create models. You can get an authorization token in MATLAB for querying your data in Azure Data Explorer.
- Functionality: Query
- Documentation: Query data using MATLAB
NLog
NLog is a flexible and free logging platform for various .NET platforms, including .NET Standard. NLog allows you to write to several targets, such as a database, file, or console. With NLog, you can change the logging configuration on the fly. The NLog sink is a target for NLog that allows you to send your log messages to your database. The plugin provides an efficient way to sink your logs to your cluster.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Telemetry, Logs, Metrics
- Underlying SDK: .NET
- Repository: Microsoft Azure - https://github.com/Azure/azure-kusto-nlog-sink
- Documentation: Ingest data with the NLog sink
- Community Blog: Getting started with NLog sink and Azure Data Explorer
ODBC
Open Database Connectivity (ODBC) is a widely accepted application programming interface (API) for database access. Azure Data Explorer is compatible with a subset of the SQL Server communication protocol (MS-TDS). This compatibility enables the use of the ODBC driver for SQL Server with Azure Data Explorer.
- Functionality: Query
- Documentation: Connect to Azure Data Explorer with ODBC
OpenTelemetry
The OpenTelemetry connector supports ingestion of data from many receivers into your database. It works as a bridge between OpenTelemetry and your database, letting you customize the format of the exported data according to your needs.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Traces, Metrics, Logs
- Underlying SDK: Go
- Repository: OpenTelemetry - https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/azuredataexplorerexporter
- Documentation: Ingest data from OpenTelemetry
- Community Blog: Getting started with Open Telemetry and Azure Data Explorer
Power Apps
Power Apps is a suite of apps, services, connectors, and a data platform that provides a rapid application development environment to build custom apps that connect to your business data. The Power Apps connector is useful if you have a large and growing collection of streaming data in Azure Data Explorer and want to build a low-code, highly functional app to make use of this data.
- Functionality: Query, Ingestion, Export
- Ingestion type supported: Batching
- Documentation: Use Power Apps to query data in Azure Data Explorer
Power Automate
Power Automate is an orchestration service used to automate business processes. The Power Automate (previously Microsoft Flow) connector enables you to orchestrate and schedule flows, and send notifications and alerts, as part of a scheduled or triggered task.
- Functionality: Ingestion, Export
- Ingestion type supported: Batching
- Use cases: Data orchestration
- Documentation: Microsoft Power Automate connector
Serilog
Serilog is a popular logging framework for .NET applications. Serilog allows developers to control which log statements are output with arbitrary granularity based on the logger's name, logger level, and message pattern. The Serilog sink, also known as an appender, streams your log data to your database, where you can analyze and visualize your logs in real time.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Logs
- Underlying SDK: .NET
- Repository: Microsoft Azure - https://github.com/Azure/serilog-sinks-azuredataexplorer
- Documentation: Ingest data with the Serilog sink
- Community Blog: Getting started with Serilog sink and Azure Data Explorer
Splunk
Splunk Enterprise is a software platform that allows you to ingest data from many sources simultaneously. The Azure Data Explorer add-on sends data from Splunk to a table in your cluster.
- Functionality: Ingestion
- Ingestion type supported: Batching
- Use cases: Logs
- Underlying SDK: Python
- Repository: Microsoft Azure - https://github.com/Azure/azure-kusto-splunk/tree/main/splunk-adx-alert-addon
- Documentation: Ingest data from Splunk
- Splunk Base: Microsoft Azure Data Explorer Add-On for Splunk
- Community Blog: Getting started with Microsoft Azure Data Explorer Add-On for Splunk
Splunk Universal Forwarder
Splunk Universal Forwarder is a lightweight agent that collects data from a variety of sources and forwards it for ingestion into your database.
- Functionality: Ingestion
- Ingestion type supported: Batching
- Use cases: Logs
- Repository: Microsoft Azure - https://github.com/Azure/azure-kusto-splunk
- Documentation: Ingest data from Splunk Universal Forwarder to Azure Data Explorer
- Community Blog: Ingest data using Splunk Universal forwarder into Azure Data Explorer
Telegraf
Telegraf is an open-source, lightweight agent with a minimal memory footprint for collecting, processing, and writing telemetry data, including logs, metrics, and IoT data. Telegraf supports hundreds of input and output plugins. It's widely used and well supported by the open source community. The output plugin serves as the connector from Telegraf and supports ingestion of data from many types of input plugins into your database.
- Functionality: Ingestion
- Ingestion type supported: Batching, Streaming
- Use cases: Telemetry, Logs, Metrics
- Underlying SDK: Go
- Repository: InfluxData - https://github.com/influxdata/telegraf/tree/master/plugins/outputs/azure_data_explorer
- Documentation: Ingest data from Telegraf
- Community Blog: New Azure Data Explorer output plugin for Telegraf enables SQL monitoring at huge scale