Implement a data engineering solution with Azure Databricks

At a glance

Learn how to harness Apache Spark and managed clusters on the Azure Databricks platform to run large data engineering workloads in the cloud.

Prerequisites

None

Modules in this learning path

You explore features and tools that help you understand and implement incremental processing with Spark Structured Streaming.

You explore features and tools that help you develop architecture patterns with Azure Databricks Delta Live Tables.

Learn how to optimize performance with Spark and Delta Live Tables in Azure Databricks.

Learn how to implement CI/CD workflows in Azure Databricks to automate the integration and delivery of code changes.

Learn how to orchestrate and schedule data workflows with Azure Databricks Jobs. Define and monitor complex pipelines, integrate with tools like Azure Data Factory and Azure DevOps, and reduce manual intervention, leading to improved efficiency, faster insights, and adaptability to business needs.
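To make the orchestration idea concrete, a multi-task job with dependencies and a schedule can be expressed as the JSON payload accepted by the Databricks Jobs API (`/api/2.1/jobs/create`). This is a hedged sketch: the job name, notebook paths, cluster settings, and cron expression are illustrative assumptions, not values from this learning path.

```python
# Illustrative multi-task job definition for the Databricks Jobs 2.1 API.
# The "transform" task runs only after "ingest" succeeds, and the schedule
# triggers the whole job nightly without manual intervention.
job_definition = {
    "name": "nightly-etl",                               # hypothetical job name
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/Repos/etl/ingest"},  # hypothetical
            "job_cluster_key": "etl_cluster",
        },
        {
            "task_key": "transform",
            "depends_on": [{"task_key": "ingest"}],      # dependency edge
            "notebook_task": {"notebook_path": "/Repos/etl/transform"},
            "job_cluster_key": "etl_cluster",
        },
    ],
    "job_clusters": [
        {
            "job_cluster_key": "etl_cluster",
            "new_cluster": {
                "spark_version": "15.4.x-scala2.12",     # illustrative runtime
                "node_type_id": "Standard_DS3_v2",       # illustrative VM size
                "num_workers": 2,
            },
        }
    ],
    # Quartz cron: run at 02:00 UTC every day.
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
}
```

Submitting this payload (for example, with an authenticated POST to the workspace's `/api/2.1/jobs/create` endpoint) registers the pipeline; the same dependency graph can also be defined through the Jobs UI or from Azure Data Factory.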

In this module, you explore features and approaches that help you secure and manage your data within Azure Databricks using tools such as Unity Catalog.

Azure Databricks provides SQL Warehouses that enable data analysts to work with data using familiar relational SQL queries.

Using pipelines in Azure Data Factory to run notebooks in Azure Databricks enables you to automate data engineering processes at cloud scale.