# Microsoft SQL Server ingestion connector

This page helps you understand the SQL Server ingestion workflow, including the factors that determine your setup approach and the steps involved for different user personas.

## What to know before you start

| Topic | Why it matters |
| --- | --- |
| Azure Databricks user persona | The workflow depends on your Azure Databricks user persona: <br>• Single-user: An admin user configures the source database and creates a Unity Catalog connection, an ingestion gateway, and an ingestion pipeline. <br>• Multi-user: An admin user configures the source database and creates a connection that non-admin users can use to create gateway-pipeline pairs. |
| Database variation | The source database configuration depends on the SQL Server deployment environment. |
| Change tracking method | The source database configuration depends on how you choose to track changes in the source. |
| Authentication method | The steps to create a connection depend on the authentication method you choose. |
| Interface | The steps to create a connection, a gateway, and a pipeline depend on the interface. |
| Ingestion frequency | The pipeline schedule depends on your latency and cost requirements. |
| Common patterns | Depending on your ingestion needs, the pipeline might use configurations like history tracking, column selection, and row filtering. Supported configurations vary by connector. See Feature availability. |
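For context on the change tracking decision above, the two common ways to track changes in a SQL Server source are change tracking and change data capture (CDC), both enabled with T-SQL on the source database. The database, schema, and table names below are placeholders; which method fits depends on your deployment environment and requirements.

```sql
-- Option 1: Change tracking (lightweight; records which rows changed).
-- 'SourceDb' and 'dbo.orders' are placeholder names.
ALTER DATABASE SourceDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 3 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.orders
ENABLE CHANGE_TRACKING;

-- Option 2: Change data capture (heavier; records what the changed values were).
USE SourceDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'orders',
    @role_name     = NULL;
```

Change tracking must be enabled per table, while CDC is enabled once per database and then per table; retention settings determine how long a paused pipeline can fall behind before a full refresh is needed.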

## Start ingesting from SQL Server

The following table provides an overview of the end-to-end SQL Server ingestion workflow, based on user persona:

| User | Steps |
| --- | --- |
| Admin | Configure the source database, and then create a Unity Catalog connection. In single-user workflows, also create an ingestion gateway and an ingestion pipeline. |
| Non-admin | Use any supported interface to create a gateway and a pipeline. See Ingest data from SQL Server. |

## Feature availability

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Supported |
| API-based pipeline authoring | Supported |
| Declarative Automation Bundles | Supported |
| Incremental ingestion | Supported |
| Unity Catalog governance | Supported |
| Orchestration using Databricks Workflows | Supported |
| SCD type 2 | Supported |
| API-based column selection and deselection | Supported |
| API-based row filtering | Not supported |
| Automated schema evolution: New and deleted columns | Supported |
| Automated schema evolution: Data type changes | Not supported |
| Automated schema evolution: Column renames | Not supported. Requires a full refresh. |
| Automated schema evolution: New tables | Supported if you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

## Authentication methods

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Not supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |
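Because username/password is the only supported authentication method, the Unity Catalog connection needs a SQL Server login that the connector can authenticate with. A minimal sketch of creating one with least-privilege grants, assuming the source tables live in the `dbo` schema with change tracking enabled; the login, user, database, and schema names are placeholders:

```sql
-- Placeholder names throughout; adjust to your environment.
CREATE LOGIN ingestion_login WITH PASSWORD = '<strong-password>';

USE SourceDb;
CREATE USER ingestion_user FOR LOGIN ingestion_login;

-- Read access to the source tables, plus visibility into change tracking data.
GRANT SELECT ON SCHEMA::dbo TO ingestion_user;
GRANT VIEW CHANGE TRACKING ON SCHEMA::dbo TO ingestion_user;
```

Granting at the schema level keeps the permission surface small while still covering new tables added to the schema later, which matters if the pipeline ingests the entire schema.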