Study guide for Exam DP-203: Data Engineering on Microsoft Azure

Purpose of this document

This study guide should help you understand what to expect on the exam. It includes a summary of the topics the exam might cover and links to additional resources. The information and materials in this document should help you focus your studies as you prepare for the exam.

Useful links

  • Review the skills measured as of November 2, 2023: This list represents the skills measured AFTER the date provided. Study this list if you plan to take the exam AFTER that date.

  • Review the skills measured prior to November 2, 2023: Study this list of skills if you take your exam PRIOR to the date provided.

  • Change log: You can go directly to the change log if you want to see the changes that will be made on the date provided.

  • How to earn the certification: Some certifications only require passing one exam, while others require passing multiple exams.

  • Certification renewal: Microsoft associate, expert, and specialty certifications expire annually. You can renew by passing a free online assessment on Microsoft Learn.

  • Your Microsoft Learn profile: Connecting your certification profile to Microsoft Learn allows you to schedule and renew exams and to share and print certificates.

  • Exam scoring and score reports: A score of 700 or greater is required to pass.

  • Exam sandbox: You can explore the exam environment by visiting our exam sandbox.

  • Request accommodations: If you use assistive devices, require extra time, or need modification to any part of the exam experience, you can request an accommodation.

  • Take a free Practice Assessment: Test your skills with practice questions to help you prepare for the exam.

Updates to the exam

Our exams are updated periodically to reflect skills that are required to perform a role. We have included two versions of the Skills Measured objectives depending on when you are taking the exam.

We always update the English language version of the exam first. Some exams are localized into other languages, and those are updated approximately eight weeks after the English version is updated. While Microsoft makes every effort to update localized versions as noted, there may be times when the localized versions of an exam are not updated on this schedule. Other available languages are listed in the Schedule Exam section of the Exam Details webpage. If the exam isn't available in your preferred language, you can request an additional 30 minutes to complete the exam.

Note

The bullets that follow each of the skills measured are intended to illustrate how we are assessing that skill. Related topics may be covered in the exam.

Note

Most questions cover features that are generally available (GA). The exam may contain questions on Preview features if those features are commonly used.

Skills measured as of November 2, 2023

Audience profile

As a candidate for this exam, you should have subject matter expertise in integrating, transforming, and consolidating data from various structured, unstructured, and streaming data systems into a suitable schema for building analytics solutions.

As an Azure data engineer, you help stakeholders understand the data through exploration, and build and maintain secure and compliant data processing pipelines by using different tools and techniques. You use various Azure data services and frameworks to store and produce cleansed and enhanced datasets for analysis. This data store can be designed with different architecture patterns based on business requirements, including:

  • Modern data warehouse (MDW)

  • Big data

  • Lakehouse architecture

As an Azure data engineer, you also help to ensure that the operationalization of data pipelines and data stores is high-performing, efficient, organized, and reliable, given a set of business requirements and constraints. You help to identify and troubleshoot operational and data quality issues. You also design, implement, monitor, and optimize data platforms to meet the needs of the data pipelines.

As a candidate for this exam, you must have solid knowledge of data processing languages, including:

  • SQL

  • Python

  • Scala

You need to understand parallel processing and data architecture patterns. You should be proficient in using the following to create data processing solutions:

  • Azure Data Factory

  • Azure Synapse Analytics

  • Azure Stream Analytics

  • Azure Event Hubs

  • Azure Data Lake Storage

  • Azure Databricks

Skills at a glance

  • Design and implement data storage (15–20%)

  • Develop data processing (40–45%)

  • Secure, monitor, and optimize data storage and data processing (30–35%)

Design and implement data storage (15–20%)

Implement a partition strategy

  • Implement a partition strategy for files

  • Implement a partition strategy for analytical workloads

  • Implement a partition strategy for streaming workloads

  • Implement a partition strategy for Azure Synapse Analytics

  • Identify when partitioning is needed in Azure Data Lake Storage Gen2
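
To make the file-partitioning skills above concrete, here is a minimal PySpark sketch that writes Parquet files to Azure Data Lake Storage Gen2 using Hive-style folder partitioning. The account, container, path, and column names are illustrative placeholders, not part of the exam outline.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-demo").getOrCreate()

# Placeholder path; assumes the data already carries year and month columns.
df = spark.read.parquet("abfss://raw@<account>.dfs.core.windows.net/sales/")

# Hive-style folders (year=2023/month=11/...) let query engines prune
# partitions instead of scanning the entire dataset.
(df.write
   .mode("overwrite")
   .partitionBy("year", "month")
   .parquet("abfss://curated@<account>.dfs.core.windows.net/sales/"))
```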

Design and implement the data exploration layer

  • Create and execute queries by using a compute solution that leverages serverless SQL and Spark clusters

  • Recommend and implement Azure Synapse Analytics database templates

  • Push new or updated data lineage to Microsoft Purview

  • Browse and search metadata in Microsoft Purview Data Catalog
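
As a study aid for the exploration-layer skills above, here is a minimal sketch of ad hoc exploration on a Spark pool: raw lake files are registered as a temporary view and queried with SQL. Paths and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("explore").getOrCreate()

# Register raw files as a temporary view so they can be explored with SQL.
trips = spark.read.parquet("abfss://raw@<account>.dfs.core.windows.net/trips/")
trips.createOrReplaceTempView("trips")

# Ad hoc profiling query; a serverless SQL pool supports similar exploration
# over the same files with OPENROWSET.
spark.sql("""
    SELECT passenger_count,
           COUNT(*)           AS rides,
           AVG(trip_distance) AS avg_distance
    FROM trips
    GROUP BY passenger_count
    ORDER BY rides DESC
""").show()
```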

Develop data processing (40–45%)

Ingest and transform data

  • Design and implement incremental loads

  • Transform data by using Apache Spark

  • Transform data by using Transact-SQL (T-SQL) in Azure Synapse Analytics

  • Ingest and transform data by using Azure Synapse Pipelines or Azure Data Factory

  • Transform data by using Azure Stream Analytics

  • Cleanse data

  • Handle duplicate data

  • Avoid duplicate data by using Azure Stream Analytics Exactly Once Delivery

  • Handle missing data

  • Handle late-arriving data

  • Split data

  • Shred JSON

  • Encode and decode data

  • Configure error handling for a transformation

  • Normalize and denormalize data

  • Perform data exploratory analysis
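
Several of the bullets above (handling duplicates, handling missing data, shredding JSON) can be illustrated in a few lines of PySpark. This is a hedged sketch only; the column names and the JSON schema are assumptions made for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("transform").getOrCreate()

raw = spark.read.json("abfss://raw@<account>.dfs.core.windows.net/orders/")

# Assumed schema of the embedded JSON payload column.
payload_schema = StructType([
    StructField("sku", StringType()),
    StructField("amount", DoubleType()),
])

cleansed = (
    raw.dropDuplicates(["order_id"])                                        # handle duplicate data
       .withColumn("payload", F.from_json("payload_json", payload_schema))  # shred JSON
       .select("order_id", "payload.sku", "payload.amount")
       .na.fill({"amount": 0.0})                                            # handle missing data
)
```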

Develop a batch processing solution

  • Develop batch processing solutions by using Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, and Azure Data Factory

  • Use PolyBase to load data to a SQL pool

  • Implement Azure Synapse Link and query the replicated data

  • Create data pipelines

  • Scale resources

  • Configure the batch size

  • Create tests for data pipelines

  • Integrate Jupyter or Python notebooks into a data pipeline

  • Upsert data

  • Revert data to a previous state

  • Configure exception handling

  • Configure batch retention

  • Read from and write to a delta lake
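
For the delta lake and upsert bullets above, the sketch below merges a staged batch into an existing Delta table and uses time travel to revert to a previous state. It assumes the delta-spark package (bundled with Azure Databricks and Synapse Spark pools) and that a Delta table already exists at the target path; all names are placeholders.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-upsert").getOrCreate()

updates = spark.read.parquet("abfss://staging@<account>.dfs.core.windows.net/customers/")
target_path = "abfss://curated@<account>.dfs.core.windows.net/customers_delta/"

# Upsert: update matching rows, insert new ones.
target = DeltaTable.forPath(spark, target_path)
(target.alias("t")
   .merge(updates.alias("s"), "t.customer_id = s.customer_id")
   .whenMatchedUpdateAll()
   .whenNotMatchedInsertAll()
   .execute())

# Delta time travel covers "revert data to a previous state".
previous = spark.read.format("delta").option("versionAsOf", 0).load(target_path)
```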

Develop a stream processing solution

  • Create a stream processing solution by using Stream Analytics and Azure Event Hubs

  • Process data by using Spark structured streaming

  • Create windowed aggregates

  • Handle schema drift

  • Process time series data

  • Process data across partitions

  • Process within one partition

  • Configure checkpoints and watermarking during processing

  • Scale resources

  • Create tests for data pipelines

  • Optimize pipelines for analytical or transactional purposes

  • Handle interruptions

  • Configure exception handling

  • Upsert data

  • Replay archived stream data
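
The bullets above on windowed aggregates, watermarking, checkpoints, and handling interruptions fit into one small Spark Structured Streaming job. To keep the sketch self-contained it uses the built-in rate source as a stand-in for Event Hubs; the paths are placeholders, and the Delta sink assumes delta-spark is available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream").getOrCreate()

# The rate source emits (timestamp, value) rows; in practice you would read
# from Event Hubs or Kafka here.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

windowed = (events
    .withWatermark("timestamp", "5 minutes")      # tolerate late-arriving data
    .groupBy(F.window("timestamp", "1 minute"))   # tumbling-window aggregate
    .count())

# The checkpoint lets the query resume after an interruption.
query = (windowed.writeStream
    .outputMode("append")
    .format("delta")
    .option("checkpointLocation", "abfss://chk@<account>.dfs.core.windows.net/telemetry/")
    .start("abfss://curated@<account>.dfs.core.windows.net/telemetry_counts/"))
```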

Manage batches and pipelines

  • Trigger batches

  • Handle failed batch loads

  • Validate batch loads

  • Manage data pipelines in Azure Data Factory or Azure Synapse Pipelines

  • Schedule data pipelines in Data Factory or Azure Synapse Pipelines

  • Implement version control for pipeline artifacts

  • Manage Spark jobs in a pipeline
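
Triggering and monitoring pipeline runs, as listed above, can also be done programmatically. The sketch below assumes the azure-identity and azure-mgmt-datafactory Python packages; the subscription, resource group, factory, pipeline, and parameter names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Trigger a batch: start a pipeline run with optional parameters.
run = adf.pipelines.create_run(
    "<resource-group>", "<factory-name>", "<pipeline-name>",
    parameters={"window_start": "2023-11-01"},
)

# Poll the run to validate the load or react to a failure.
status = adf.pipeline_runs.get("<resource-group>", "<factory-name>", run.run_id)
print(status.status)  # e.g., Queued, InProgress, Succeeded, Failed
```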

Secure, monitor, and optimize data storage and data processing (30–35%)

Implement data security

  • Implement data masking

  • Encrypt data at rest and in motion

  • Implement row-level and column-level security

  • Implement Azure role-based access control (RBAC)

  • Implement POSIX-like access control lists (ACLs) for Data Lake Storage Gen2

  • Implement a data retention policy

  • Implement secure endpoints (private and public)

  • Implement resource tokens in Azure Databricks

  • Load a DataFrame with sensitive information

  • Write encrypted data to tables or Parquet files

  • Manage sensitive information
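
For the bullets above on loading a DataFrame with sensitive information, one simple protection pattern is to mask or hash sensitive columns before persisting, as in this hedged PySpark sketch. Services such as SQL dynamic data masking and encryption at rest cover the other bullets natively; the column names here are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("protect").getOrCreate()

people = spark.read.parquet("abfss://raw@<account>.dfs.core.windows.net/people/")

protected = (people
    # Mask the local part of the email address, keeping only the domain.
    .withColumn("email_masked",
                F.concat(F.lit("xxx@"), F.substring_index("email", "@", -1)))
    # Replace the raw identifier with a one-way hash.
    .withColumn("ssn_hash", F.sha2(F.col("ssn"), 256))
    .drop("email", "ssn"))

protected.write.mode("overwrite").parquet(
    "abfss://curated@<account>.dfs.core.windows.net/people_protected/")
```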

Monitor data storage and data processing

  • Implement logging used by Azure Monitor

  • Configure monitoring services

  • Monitor stream processing

  • Measure performance of data movement

  • Monitor and update statistics about data across a system

  • Monitor data pipeline performance

  • Measure query performance

  • Schedule and monitor pipeline tests

  • Interpret Azure Monitor metrics and logs

  • Implement a pipeline alert strategy
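
Interpreting Azure Monitor metrics and logs, as listed above, often comes down to running a Kusto (KQL) query against a Log Analytics workspace. This sketch assumes the azure-monitor-query and azure-identity packages; the workspace ID is a placeholder, and the table and column names are illustrative of Data Factory diagnostic logs rather than guaranteed for every configuration.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count failed pipeline runs over the last day (illustrative KQL).
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="""AzureDiagnostics
             | where Category == 'PipelineRuns' and status_s == 'Failed'
             | summarize failures = count() by pipelineName_s""",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```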

Optimize and troubleshoot data storage and data processing

  • Compact small files

  • Handle skew in data

  • Handle data spill

  • Optimize resource management

  • Tune queries by using indexers

  • Tune queries by using cache

  • Troubleshoot a failed Spark job

  • Troubleshoot a failed pipeline run, including activities executed in external services
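
Compacting small files, the first bullet above, is one of the most common data lake optimizations. A minimal approach, sketched below with placeholder paths, is to rewrite many small Parquet files into fewer, larger ones by repartitioning; on Delta tables, the OPTIMIZE command serves the same purpose.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact").getOrCreate()

source = "abfss://curated@<account>.dfs.core.windows.net/events/"
df = spark.read.parquet(source)

# Write the compacted copy to a new location and swap it in afterwards;
# overwriting the path you are still reading from in the same job is unsafe.
(df.repartition(16)   # target a small number of larger files
   .write.mode("overwrite")
   .parquet("abfss://curated@<account>.dfs.core.windows.net/events_compacted/"))
```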

Study resources

We recommend that you train and get hands-on experience before you take the exam. We offer self-study options and classroom training as well as links to documentation, community sites, and videos.

  • Get trained: Choose from self-paced learning paths and modules or take an instructor-led course

  • Find documentation: Azure Data Lake Storage, Azure Synapse Analytics, Azure Databricks, Data Factory, Azure Stream Analytics, Event Hubs, Azure Monitor

  • Ask a question: Microsoft Q&A | Microsoft Docs

  • Get community support: Analytics on Azure | TechCommunity; Azure Synapse Analytics | TechCommunity

  • Follow Microsoft Learn: Microsoft Learn - Microsoft Tech Community

  • Find a video: Exam Readiness Zone; Data Exposed; browse other Microsoft Learn shows

Change log

Key to understanding the table: The topic groups (also known as functional groups) are listed first, followed by the objectives within each group. The table compares the two versions of the exam skills measured, and the third column describes the extent of the changes.

Skill area prior to November 2, 2023 | Skill area as of November 2, 2023 | Change
Audience profile | Audience profile | No change
Design and implement data storage | Design and implement data storage | No change
Implement a partition strategy | Implement a partition strategy | No change
Design and implement the data exploration layer | Design and implement the data exploration layer | No change
Develop data processing | Develop data processing | No change
Ingest and transform data | Ingest and transform data | Minor
Develop a batch processing solution | Develop a batch processing solution | No change
Develop a stream processing solution | Develop a stream processing solution | No change
Manage batches and pipelines | Manage batches and pipelines | No change
Secure, monitor, and optimize data storage and data processing | Secure, monitor, and optimize data storage and data processing | No change
Implement data security | Implement data security | No change
Monitor data storage and data processing | Monitor data storage and data processing | No change
Optimize and troubleshoot data storage and data processing | Optimize and troubleshoot data storage and data processing | No change

Skills measured prior to November 2, 2023

Audience profile

Candidates for this exam should have subject matter expertise in integrating, transforming, and consolidating data from various structured, unstructured, and streaming data systems into a suitable schema for building analytics solutions.

Azure data engineers help stakeholders understand the data through exploration, and they build and maintain secure and compliant data processing pipelines by using different tools and techniques. These professionals use various Azure data services and frameworks to store and produce cleansed and enhanced datasets for analysis. This data store can be designed with different architecture patterns based on business requirements, including modern data warehouse (MDW), big data, or lakehouse architecture.

Azure data engineers also help to ensure that the operationalization of data pipelines and data stores is high-performing, efficient, organized, and reliable, given a set of business requirements and constraints. These professionals help to identify and troubleshoot operational and data quality issues. They also design, implement, monitor, and optimize data platforms to meet the needs of the data pipelines.

Candidates for this exam must have solid knowledge of data processing languages, including SQL, Python, and Scala, and they need to understand parallel processing and data architecture patterns. They should be proficient in using Azure Data Factory, Azure Synapse Analytics, Azure Stream Analytics, Azure Event Hubs, Azure Data Lake Storage, and Azure Databricks to create data processing solutions.

Skills at a glance

  • Design and implement data storage (15–20%)

  • Develop data processing (40–45%)

  • Secure, monitor, and optimize data storage and data processing (30–35%)

Design and implement data storage (15–20%)

Implement a partition strategy

  • Implement a partition strategy for files

  • Implement a partition strategy for analytical workloads

  • Implement a partition strategy for streaming workloads

  • Implement a partition strategy for Azure Synapse Analytics

  • Identify when partitioning is needed in Azure Data Lake Storage Gen2

Design and implement the data exploration layer

  • Create and execute queries by using a compute solution that leverages serverless SQL and Spark clusters

  • Recommend and implement Azure Synapse Analytics database templates

  • Push new or updated data lineage to Microsoft Purview

  • Browse and search metadata in Microsoft Purview Data Catalog

Develop data processing (40–45%)

Ingest and transform data

  • Design and implement incremental loads

  • Transform data by using Apache Spark

  • Transform data by using Transact-SQL (T-SQL) in Azure Synapse Analytics

  • Ingest and transform data by using Azure Synapse Pipelines or Azure Data Factory

  • Transform data by using Azure Stream Analytics

  • Cleanse data

  • Handle duplicate data

  • Handle missing data

  • Handle late-arriving data

  • Split data

  • Shred JSON

  • Encode and decode data

  • Configure error handling for a transformation

  • Normalize and denormalize data

  • Perform data exploratory analysis

Develop a batch processing solution

  • Develop batch processing solutions by using Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, and Azure Data Factory

  • Use PolyBase to load data to a SQL pool

  • Implement Azure Synapse Link and query the replicated data

  • Create data pipelines

  • Scale resources

  • Configure the batch size

  • Create tests for data pipelines

  • Integrate Jupyter or Python notebooks into a data pipeline

  • Upsert data

  • Revert data to a previous state

  • Configure exception handling

  • Configure batch retention

  • Read from and write to a delta lake

Develop a stream processing solution

  • Create a stream processing solution by using Stream Analytics and Azure Event Hubs

  • Process data by using Spark structured streaming

  • Create windowed aggregates

  • Handle schema drift

  • Process time series data

  • Process data across partitions

  • Process within one partition

  • Configure checkpoints and watermarking during processing

  • Scale resources

  • Create tests for data pipelines

  • Optimize pipelines for analytical or transactional purposes

  • Handle interruptions

  • Configure exception handling

  • Upsert data

  • Replay archived stream data

Manage batches and pipelines

  • Trigger batches

  • Handle failed batch loads

  • Validate batch loads

  • Manage data pipelines in Azure Data Factory or Azure Synapse Pipelines

  • Schedule data pipelines in Data Factory or Azure Synapse Pipelines

  • Implement version control for pipeline artifacts

  • Manage Spark jobs in a pipeline

Secure, monitor, and optimize data storage and data processing (30–35%)

Implement data security

  • Implement data masking

  • Encrypt data at rest and in motion

  • Implement row-level and column-level security

  • Implement Azure role-based access control (RBAC)

  • Implement POSIX-like access control lists (ACLs) for Data Lake Storage Gen2

  • Implement a data retention policy

  • Implement secure endpoints (private and public)

  • Implement resource tokens in Azure Databricks

  • Load a DataFrame with sensitive information

  • Write encrypted data to tables or Parquet files

  • Manage sensitive information

Monitor data storage and data processing

  • Implement logging used by Azure Monitor

  • Configure monitoring services

  • Monitor stream processing

  • Measure performance of data movement

  • Monitor and update statistics about data across a system

  • Monitor data pipeline performance

  • Measure query performance

  • Schedule and monitor pipeline tests

  • Interpret Azure Monitor metrics and logs

  • Implement a pipeline alert strategy

Optimize and troubleshoot data storage and data processing

  • Compact small files

  • Handle skew in data

  • Handle data spill

  • Optimize resource management

  • Tune queries by using indexers

  • Tune queries by using cache

  • Troubleshoot a failed Spark job

  • Troubleshoot a failed pipeline run, including activities executed in external services