Important
The PostgreSQL connector is in Public Preview. Contact your Azure Databricks account team to request access.
This page helps you understand the PostgreSQL ingestion workflow, including the factors that determine your setup approach and the steps involved for different user personas.
What to know before you start
| Topic | Why it matters |
|---|---|
| Azure Databricks user persona | The workflow depends on your Azure Databricks user persona. Admins configure the source database and create the connection; non-admins create the gateway and the ingestion pipeline. |
| Deployment environment | The source database configuration depends on the PostgreSQL deployment environment. |
| Interface | The steps to create a connection, a gateway, and a pipeline depend on the interface. |
| Ingestion frequency | The pipeline schedule depends on your latency and cost requirements. |
| Common patterns | Depending on your ingestion needs, the pipeline might use configurations like history tracking, column selection, and row filtering. Supported configurations vary by connector. See Feature availability. |
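Configurations like history tracking, column selection, and row filtering are typically expressed as table-level settings on the ingestion pipeline. The following sketch illustrates the idea only; the field names (`scd_type`, `include_columns`, `row_filter`) mirror other Lakeflow Connect database connectors and are assumptions while the PostgreSQL connector is in preview. Check the connector's feature availability before relying on any of them.

```python
# Illustrative table-level configuration for an ingestion pipeline.
# Field names are assumptions modeled on other Lakeflow Connect
# connectors, not a confirmed PostgreSQL connector contract.
table_configuration = {
    "scd_type": "SCD_TYPE_2",  # history tracking: retain prior row versions
    "include_columns": ["id", "email", "updated_at"],  # column selection
    "row_filter": "updated_at >= '2024-01-01'",        # row filtering
}
```

Because supported configurations vary by connector, treat each of these settings as optional and verify it against the PostgreSQL feature availability matrix before adding it to a pipeline definition.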
Start ingesting from PostgreSQL
The following table provides an overview of the end-to-end PostgreSQL ingestion workflow, based on user type:
| User | Steps |
|---|---|
| Admin | Configure the source database for ingestion, create a connection that stores the database credentials, and then use any supported interface to create a gateway and a pipeline. See Ingest data from PostgreSQL. |
| Non-admin | Use any supported interface to create a gateway and a pipeline. See Ingest data from PostgreSQL. |
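The gateway and pipeline created in the steps above can be sketched as two pipeline specifications. The payload shapes below follow the `gateway_definition`/`ingestion_definition` pattern used by other Lakeflow Connect database connectors and are assumptions for the PostgreSQL preview; names like `pg_connection`, `main`, and `ingest_schema` are placeholders.

```python
# Hypothetical pipeline payloads modeled on other Lakeflow Connect
# database connectors; field names are assumptions for the preview.

# Gateway pipeline: captures change data from the source database
# into staging storage (placeholder catalog/schema names).
gateway_spec = {
    "name": "pg-gateway",
    "gateway_definition": {
        "connection_name": "pg_connection",
        "gateway_storage_catalog": "main",
        "gateway_storage_schema": "ingest_schema",
        "gateway_storage_name": "pg-gateway",
    },
}

# Ingestion pipeline: applies the captured changes to target tables.
ingestion_spec = {
    "name": "pg-ingestion",
    "ingestion_definition": {
        "connection_name": "pg_connection",
        "objects": [
            {
                "table": {
                    "source_schema": "public",
                    "source_table": "orders",
                    "destination_catalog": "main",
                    "destination_schema": "ingest_schema",
                }
            }
        ],
    },
}
```

Whichever interface you use (UI, API, CLI, or SDK) ultimately produces definitions of this general shape, which is why the admin's connection must exist before either pipeline can be created.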