Lakeflow Declarative Pipelines

Lakeflow Declarative Pipelines is a framework for creating batch and streaming data pipelines in SQL and Python. Common use cases include ingesting data from cloud storage (for example, Amazon S3, Azure ADLS Gen2, and Google Cloud Storage) and message buses (for example, Apache Kafka, Amazon Kinesis, Google Pub/Sub, Azure Event Hubs, and Apache Pulsar), and running incremental batch and streaming transformations.

Note

Lakeflow Declarative Pipelines requires the Premium plan. Contact your Databricks account team for more information.
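
For example, a pipeline's SQL source code might look like the following minimal sketch, which defines a streaming table that incrementally ingests JSON files from cloud storage and a materialized view that aggregates them. The bucket path, table names, and column names here are hypothetical.

```sql
-- Streaming table: incrementally ingests new JSON files from a
-- hypothetical cloud storage path as they arrive.
CREATE OR REFRESH STREAMING TABLE raw_orders
AS SELECT * FROM STREAM read_files(
  's3://my-bucket/orders/',  -- hypothetical source path
  format => 'json'
);

-- Materialized view: an incremental batch transformation that is kept
-- up to date as the upstream streaming table changes.
CREATE OR REFRESH MATERIALIZED VIEW daily_order_totals
AS SELECT order_date, SUM(amount) AS total_amount
FROM raw_orders
GROUP BY order_date;
```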

This section provides detailed information about using Lakeflow Declarative Pipelines. The following topics will help you get started.

Lakeflow Declarative Pipelines concepts: Learn about the high-level concepts of Lakeflow Declarative Pipelines, including pipelines, flows, streaming tables, and materialized views.
Tutorials: Follow tutorials to gain hands-on experience with Lakeflow Declarative Pipelines.
Develop pipelines: Learn how to develop and test pipelines that create flows for ingesting and transforming data.
Configure pipelines: Learn how to schedule and configure pipelines.
Monitor pipelines: Learn how to monitor your pipelines and troubleshoot pipeline queries.
Developers: Learn how to use Python and SQL when developing Lakeflow Declarative Pipelines.
Lakeflow Declarative Pipelines in Databricks SQL: Learn about using Lakeflow Declarative Pipelines streaming tables and materialized views in Databricks SQL.