Connect to StreamSets

Important

This feature is in Public Preview.

StreamSets helps you manage and monitor your data flows throughout their lifecycle. The native StreamSets integration with Azure Databricks and Delta Lake lets you pull data from various sources and manage your pipelines easily.

For a general demonstration of StreamSets, watch the 10-minute overview video on YouTube.

Here are the steps for using StreamSets with Azure Databricks.

Step 1: Generate a Databricks personal access token

StreamSets authenticates with Azure Databricks using an Azure Databricks personal access token.

Note

As a security best practice, when you authenticate with automated tools, systems, scripts, and apps, Databricks recommends that you use personal access tokens belonging to service principals instead of workspace users. To create tokens for service principals, see Manage tokens for a service principal.
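
If you prefer to script token creation, you can issue one with the Databricks Token API (POST /api/2.0/token/create). The following is a minimal Python sketch; the environment variable names, comment text, and 90-day lifetime are illustrative assumptions, not requirements of the StreamSets integration:

    import os
    import requests

    # Placeholders read from the environment; DATABRICKS_HOST looks like
    # https://adb-<workspace-id>.<n>.azuredatabricks.net
    host = os.environ["DATABRICKS_HOST"]
    auth_token = os.environ["DATABRICKS_TOKEN"]  # existing credential used to call the API

    # Create a new personal access token for the StreamSets connection.
    resp = requests.post(
        f"{host}/api/2.0/token/create",
        headers={"Authorization": f"Bearer {auth_token}"},
        json={"comment": "StreamSets integration", "lifetime_seconds": 90 * 24 * 60 * 60},
    )
    resp.raise_for_status()

    # The token value is returned only once; store it securely.
    print(resp.json()["token_value"])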

Step 2: Set up a cluster to support integration needs

StreamSets writes data to an Azure Data Lake Storage path, and the Azure Databricks integration cluster reads data from that location. The integration cluster therefore requires secure access to that Azure Data Lake Storage path.

Secure access to an Azure Data Lake Storage path

To secure access to data in Azure Data Lake Storage (ADLS), you can use an Azure storage account access key (recommended) or a Microsoft Entra ID service principal.

Use an Azure storage account access key

You can configure a storage account access key on the integration cluster as part of the Spark configuration. Ensure that the storage account has access to the ADLS container and file system used for staging data and the ADLS container and file system where you want to write the Delta Lake tables. To configure the integration cluster to use the key, follow the steps in Connect to Azure Data Lake Storage Gen2 and Blob Storage.
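
For example, the cluster's Spark configuration entry for an access key might look like the following, where the storage account name, secret scope, and secret key name are placeholders and the key is read from a Databricks secret rather than entered in plain text:

    fs.azure.account.key.<storage-account-name>.dfs.core.windows.net {{secrets/<scope-name>/<storage-account-access-key-name>}}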

Use a Microsoft Entra ID service principal

You can configure a service principal on the Azure Databricks integration cluster as part of the Spark configuration. Ensure that the service principal has access to the ADLS container used for staging data and the ADLS container where you want to write the Delta tables. To configure the integration cluster to use the service principal, follow the steps in Access ADLS Gen2 with service principal.
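
As a sketch, the OAuth entries in the cluster's Spark configuration typically look like the following; the storage account name, application (client) ID, directory (tenant) ID, secret scope, and secret key name are placeholders:

    fs.azure.account.auth.type.<storage-account-name>.dfs.core.windows.net OAuth
    fs.azure.account.oauth.provider.type.<storage-account-name>.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
    fs.azure.account.oauth2.client.id.<storage-account-name>.dfs.core.windows.net <application-id>
    fs.azure.account.oauth2.client.secret.<storage-account-name>.dfs.core.windows.net {{secrets/<scope-name>/<service-credential-key-name>}}
    fs.azure.account.oauth2.client.endpoint.<storage-account-name>.dfs.core.windows.net https://login.microsoftonline.com/<directory-id>/oauth2/token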

Specify the cluster configuration

  1. Set Cluster Mode to Standard.

  2. Set Databricks Runtime Version to 6.3 or above.

  3. Enable optimized writes and auto compaction by adding the following properties to your Spark configuration:

    spark.databricks.delta.optimizeWrite.enabled true
    spark.databricks.delta.autoCompact.enabled true
    
  4. Configure your cluster depending on your integration and scaling needs.

For cluster configuration details, see Compute configuration reference.

Step 3: Obtain JDBC and ODBC connection details to connect to a cluster

To connect an Azure Databricks cluster to StreamSets, you need the following JDBC/ODBC connection properties:

  • JDBC URL
  • HTTP Path

See Get connection details for an Azure Databricks compute resource for the steps to obtain these values.
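
For illustration, a cluster JDBC URL in the legacy Spark driver format typically looks like the following. Every bracketed value is a placeholder; the authoritative URL for your cluster appears on its JDBC/ODBC tab:

    jdbc:spark://adb-<workspace-id>.<random-number>.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/<workspace-id>/<cluster-id>;AuthMech=3;UID=token;PWD=<personal-access-token>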

Step 4: Get StreamSets for Azure Databricks

If you do not already have a StreamSets account, sign up for StreamSets for Databricks. You can get started for free and upgrade when you’re ready; see StreamSets DataOps Platform Pricing.

Step 5: Learn how to use StreamSets to load data into Delta Lake

Start with a sample pipeline or check out StreamSets solutions to learn how to build a pipeline that ingests data into Delta Lake.
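
After a pipeline run completes, you can verify the ingested data from an Azure Databricks notebook. The following is a minimal Python sketch; the table name is a hypothetical example of a target your pipeline might create, and spark is the SparkSession predefined in Databricks notebooks:

    # Verify data written by a StreamSets pipeline.
    # "streamsets_demo.events" is a placeholder table name.
    df = spark.read.table("streamsets_demo.events")
    df.show(5)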
