Study Guide for Exam DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
This study guide should help you understand what to expect on the exam. It includes a summary of the topics the exam might cover and links to additional resources. The information and materials in this document should help you focus your studies as you prepare for the exam.
| Useful links | Description |
| --- | --- |
| How to earn the certification | Some certifications only require passing one exam, while others require passing multiple exams. |
| Certification renewal | Microsoft associate, expert, and specialty certifications expire annually. You can renew by passing a free online assessment on Microsoft Learn. |
| Your Microsoft Learn profile | Connecting your certification profile to Microsoft Learn allows you to schedule and renew exams and share and print certificates. |
| Exam scoring and score reports | A score of 700 or greater is required to pass. |
| Exam sandbox | You can explore the exam environment by visiting our exam sandbox. |
| Request accommodations | If you use assistive devices, require extra time, or need modification to any part of the exam experience, you can request an accommodation. |
| Take a free Practice Assessment | Test your skills with practice questions to help you prepare for the exam. |
Our exams are updated periodically to reflect skills that are required to perform a role.
We always update the English language version of the exam first. Some exams are localized into other languages, and those are updated approximately eight weeks after the English version is updated. Other available languages are listed in the Schedule Exam section of the Exam Details webpage. If the exam isn't available in your preferred language, you can request an additional 30 minutes to complete the exam.
The bullets that follow each of the skills measured are intended to illustrate how we are assessing that skill. Related topics may be covered in the exam.
Most questions cover features that are general availability (GA). The exam may contain questions on Preview features if those features are commonly used.
As a candidate for this exam, you should have subject matter expertise designing, implementing, and monitoring cloud-native applications that store and manage data.
Your responsibilities for this role include:
Designing and implementing data models and data distribution.
Loading data into an Azure Cosmos DB database.
Optimizing and maintaining the solution.
As a professional in this role, you integrate the solution with other Azure services. You also design, implement, and monitor solutions that consider security, availability, resilience, and performance requirements.
As a candidate for this exam, you must have solid knowledge and experience with:
Developing apps for Azure.
Working with Azure Cosmos DB database technologies.
Creating server-side objects with JavaScript.
You should be proficient at developing applications that use the Azure Cosmos DB for NoSQL API. You should be able to:
Write efficient SQL queries for the API.
Create appropriate indexing policies.
Interpret JSON.
Read C# or Java code.
Use PowerShell.
Additionally, you should be familiar with provisioning and managing resources in Azure.
Skills at a glance:
Design and implement data models (35–40%)
Design and implement data distribution (5–10%)
Integrate an Azure Cosmos DB solution (5–10%)
Optimize an Azure Cosmos DB solution (15–20%)
Maintain an Azure Cosmos DB solution (25–30%)
Design and implement data models (35–40%)
Develop a design by storing multiple entity types in the same container
Develop a design by storing multiple related entities in the same document
Develop a model that denormalizes data across documents
Develop a design by referencing between documents
Identify primary and unique keys
Identify data and associated access patterns
Specify a default time to live (TTL) on a container for a transactional store
Develop a design for versioning documents
Develop a design for document schema versioning
Choose a partitioning strategy based on a specific workload
Choose a partition key
Plan for transactions when choosing a partition key
Evaluate the cost of using a cross-partition query
Calculate and evaluate data distribution based on partition key selection
Calculate and evaluate throughput distribution based on partition key selection
Construct and implement a synthetic partition key
Design and implement a hierarchical partition key
Design partitioning for workloads that require multiple partition keys
Evaluate the throughput and data storage requirements for a specific workload
Choose between serverless, provisioned, and free models
Choose when to use database-level provisioned throughput
Design for granular scale units and resource governance
Evaluate the cost of the global distribution of data
Configure throughput for Azure Cosmos DB by using the Azure portal
Choose a connectivity mode (gateway versus direct)
Implement a connectivity mode
Create a connection to a database
Enable offline development by using the Azure Cosmos DB emulator
Handle connection errors
Implement a singleton for the client (see the C# sketch at the end of this skill area)
Specify a region for global distribution
Configure client-side threading and parallelism options
Enable SDK logging
Implement queries that use arrays, nested objects, aggregation, and ordering
Implement a correlated subquery
Implement queries that use array and type-checking functions
Implement queries that use mathematical, string, and date functions
Implement queries based on variable data
Choose when to use a point operation versus a query operation
Implement a point operation that creates, updates, and deletes documents
Implement an update by using a patch operation
Manage multi-document transactions using SDK Transactional Batch
Perform a multi-document load using Bulk Support in the SDK
Implement optimistic concurrency control using ETags
Override default consistency by using query request options
Implement session consistency by using session tokens
Implement a query operation that includes pagination
Implement a query operation by using a continuation token
Handle transient errors and 429s
Specify TTL for a document
Retrieve and use query metrics
Write, deploy, and call a stored procedure
Design stored procedures to work with multiple documents transactionally
Implement and call triggers
Implement a user-defined function
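Several of the SDK-focused objectives in this skill area (the client singleton, point operations versus queries, and ETag-based optimistic concurrency) can be pictured together in one small fragment. The following C# sketch assumes the Azure Cosmos DB .NET SDK v3 and uses hypothetical account, database, container, and property names; real code would add configuration, dependency injection, and fuller retry handling.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Order
{
    public string id { get; set; }
    public string customerId { get; set; }   // partition key value (assumed path: /customerId)
    public string status { get; set; }
}

public static class CosmosAccess
{
    // One CosmosClient per application lifetime (singleton), Direct connectivity mode.
    private static readonly CosmosClient client = new CosmosClient(
        Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
        Environment.GetEnvironmentVariable("COSMOS_KEY"),
        new CosmosClientOptions { ConnectionMode = ConnectionMode.Direct });

    public static async Task UpdateOrderStatusAsync(string orderId, string customerId)
    {
        Container container = client.GetContainer("appdb", "orders");

        // Point read: cheaper than a query when both the id and partition key are known.
        ItemResponse<Order> read = await container.ReadItemAsync<Order>(
            orderId, new PartitionKey(customerId));
        Console.WriteLine($"Point read cost: {read.RequestCharge} RU");

        Order order = read.Resource;
        order.status = "shipped";

        try
        {
            // Optimistic concurrency: the replace succeeds only if the ETag is unchanged.
            await container.ReplaceItemAsync(
                order, order.id, new PartitionKey(order.customerId),
                new ItemRequestOptions { IfMatchEtag = read.ETag });
        }
        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            // Another writer updated the document first; reload and retry, or surface a conflict.
            Console.WriteLine("ETag mismatch (HTTP 412): reload the document and retry.");
        }
    }
}
```

A point read of a small document costs roughly 1 RU for a 1-KB item, noticeably less than an equivalent query, which is one reason the objectives distinguish point operations from query operations.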
Design and implement data distribution (5–10%)
Choose when to distribute data
Define automatic failover policies for regional failure for Azure Cosmos DB for NoSQL
Perform manual failovers to move single master write regions
Choose a consistency model
Identify use cases for different consistency models
Evaluate the impact of consistency model choices on availability and associated request unit (RU) cost
Evaluate the impact of consistency model choices on performance and latency
Specify application connections to replicated data (see the C# sketch at the end of this skill area)
Choose when to use multi-region write
Implement multi-region write
Implement a custom conflict resolution policy for Azure Cosmos DB for NoSQL
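Most of the distribution objectives are account-level design choices (consistency model, failover policy, multi-region write) made in the portal, templates, or the CLI, but the application side of connecting to replicated data can be sketched in C#. The region names, endpoint variables, and container below are placeholders, and the per-request consistency override only ever relaxes the account default.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ReplicatedReads
{
    // Connect to replicated data: the SDK prefers regions in this order and fails
    // over to the next entry if a region is unavailable (region names are placeholders).
    private static readonly CosmosClient client = new CosmosClient(
        Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
        Environment.GetEnvironmentVariable("COSMOS_KEY"),
        new CosmosClientOptions
        {
            ApplicationPreferredRegions = new List<string>
            {
                Regions.WestEurope,
                Regions.NorthEurope
            }
        });

    public static async Task<int> CountOpenOrdersAsync()
    {
        Container container = client.GetContainer("appdb", "orders");

        // Per-request consistency can be relaxed (never strengthened) relative to the
        // account default, trading read recency for lower latency on this query.
        var options = new QueryRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual };

        FeedIterator<int> iterator = container.GetItemQueryIterator<int>(
            new QueryDefinition("SELECT VALUE COUNT(1) FROM c WHERE c.status = 'open'"),
            requestOptions: options);

        int count = 0;
        while (iterator.HasMoreResults)
        {
            foreach (int value in await iterator.ReadNextAsync())
            {
                count += value;
            }
        }
        return count;
    }
}
```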
Integrate an Azure Cosmos DB solution (5–10%)
Enable Azure Synapse Link
Choose between Azure Synapse Link and Spark Connector
Enable the analytical store on a container
Implement custom partitioning in Azure Synapse Link
Enable a connection to an analytical store and query from Azure Synapse Spark or Azure Synapse SQL
Perform a query against the transactional store from Spark
Write data back to the transactional store from Spark
Implement Change Data Capture in the Azure Cosmos DB analytical store
Implement time travel in Azure Synapse Link for Azure Cosmos DB
Integrate events with other applications by using Azure Functions and Azure Event Hubs (see the C# sketch at the end of this skill area)
Denormalize data by using Change Feed and Azure Functions
Enforce referential integrity by using Change Feed and Azure Functions
Aggregate data by using Change Feed and Azure Functions, including reporting
Archive data by using Change Feed and Azure Functions
Implement Azure AI Search for an Azure Cosmos DB solution
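The change-feed patterns in this skill area (eventing, denormalization, referential integrity, aggregation, and archiving) all start from code that reacts to a container's change feed. Below is a minimal, hypothetical sketch of an in-process C# Azure Function that uses the Cosmos DB trigger (v4 extension attribute names assumed) with an Event Hubs output binding; the database, container, event hub, lease container, and connection-setting names are placeholders.

```csharp
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Order
{
    public string id { get; set; }
    public string customerId { get; set; }
    public decimal total { get; set; }
}

public static class OrderChangeFeedFunction
{
    // Fires for inserts and updates on the monitored container; the lease container tracks progress.
    [FunctionName("PublishOrderChanges")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "appdb",
            containerName: "orders",
            Connection = "CosmosConnection",
            LeaseContainerName = "leases",
            CreateLeaseContainerIfNotExists = true)] IReadOnlyList<Order> changes,
        [EventHub("order-events", Connection = "EventHubConnection")] IAsyncCollector<string> events,
        ILogger log)
    {
        foreach (Order order in changes)
        {
            // Forward each changed document to Event Hubs for downstream consumers.
            // The same loop is where a denormalizing, aggregating, or archiving variant would write instead.
            await events.AddAsync(JsonSerializer.Serialize(order));
        }
        log.LogInformation("Published {Count} changed documents.", changes.Count);
    }
}
```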
Optimize an Azure Cosmos DB solution (15–20%)
Adjust indexes on the database
Calculate the cost of the query
Retrieve request unit cost of a point operation or query
Implement Azure Cosmos DB integrated cache
Develop an Azure Functions trigger to process a change feed
Consume a change feed from within an application by using the SDK
Manage the number of change feed instances by using the change feed estimator
Implement denormalization by using a change feed
Implement referential enforcement by using a change feed
Implement aggregation persistence by using a change feed
Implement data archiving by using a change feed
Choose when to use a read-heavy versus write-heavy index strategy
Choose an appropriate index type
Configure a custom indexing policy by using the Azure portal
Implement a composite index (see the C# sketch at the end of this skill area)
Optimize index performance
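As a sketch of the composite-index and query-cost objectives, the following C# fragment (Azure Cosmos DB .NET SDK v3 assumed) creates a container whose indexing policy includes a composite index and then sums the request charge reported for a query that orders by both composite paths. The database, container, and property names are hypothetical, and the same policy can be configured in the Azure portal or an ARM template instead.

```csharp
using System;
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class IndexingSketch
{
    public static async Task RunAsync(CosmosClient client)
    {
        Database database = client.GetDatabase("appdb");

        // Composite index supporting ORDER BY c.category ASC, c.price DESC.
        var properties = new ContainerProperties("products", "/category");
        properties.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
        {
            new CompositePath { Path = "/category", Order = CompositePathSortOrder.Ascending },
            new CompositePath { Path = "/price",    Order = CompositePathSortOrder.Descending }
        });

        Container container = await database.CreateContainerIfNotExistsAsync(properties);

        // Retrieve the request unit (RU) cost of the query from each response page.
        FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
            new QueryDefinition(
                "SELECT c.id, c.price FROM c ORDER BY c.category ASC, c.price DESC"));

        double totalCharge = 0;
        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> page = await iterator.ReadNextAsync();
            totalCharge += page.RequestCharge;
        }
        Console.WriteLine($"Query cost: {totalCharge} RU");
    }
}
```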
Maintain an Azure Cosmos DB solution (25–30%)
Evaluate response status code and failure metrics
Monitor metrics for normalized throughput usage by using Azure Monitor
Monitor server-side latency metrics by using Azure Monitor
Monitor data replication in relation to latency and availability
Configure Azure Monitor alerts for Azure Cosmos DB
Implement and query Azure Cosmos DB logs
Monitor throughput across partitions
Monitor distribution of data across partitions
Monitor security by using logging and auditing
Choose between periodic and continuous backup
Configure periodic backup
Configure continuous backup and recovery
Locate a recovery point for a point-in-time recovery
Recover a database or container from a recovery point
Choose between service-managed and customer-managed encryption keys
Configure network-level access control for Azure Cosmos DB
Configure data encryption for Azure Cosmos DB
Manage control plane access to Azure Cosmos DB by using Azure role-based access control (RBAC)
Manage control plane access to Azure Cosmos DB Data Explorer by using Azure role-based access control (RBAC)
Manage data plane access to Azure Cosmos DB by using Microsoft Entra ID
Configure cross-origin resource sharing (CORS) settings
Manage account keys by using Azure Key Vault
Implement customer-managed keys for encryption
Implement Always Encrypted
Choose a data movement strategy
Move data by using client SDK bulk operations (see the C# sketch at the end of this skill area)
Move data by using Azure Data Factory and Azure Synapse pipelines
Move data by using a Kafka connector
Move data by using Azure Stream Analytics
Move data by using the Azure Cosmos DB Spark Connector
Configure Azure Cosmos DB as a custom endpoint for an Azure IoT Hub
Choose when to use declarative versus imperative operations
Provision and manage Azure Cosmos DB resources by using Azure Resource Manager templates
Migrate between standard and autoscale throughput by using PowerShell or Azure CLI
Initiate a regional failover by using PowerShell or Azure CLI
Maintain indexing policies in production by using Azure Resource Manager templates
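Two maintenance objectives that surface directly in application code are data-plane access with Microsoft Entra ID and data movement with client SDK bulk operations. The sketch below combines them: a bulk-enabled CosmosClient authenticated with DefaultAzureCredential (from the Azure.Identity package) fans out concurrent inserts into a target container. The endpoint variable, database, container, and property names, and a suitable Cosmos DB data-plane role assignment for the signed-in identity, are all assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

public class Reading
{
    public string id { get; set; }
    public string deviceId { get; set; }   // partition key value (assumed path: /deviceId)
    public double temperature { get; set; }
}

public static class BulkLoader
{
    // Data-plane access via Microsoft Entra ID instead of account keys, with
    // bulk execution enabled so the SDK batches and parallelizes the writes.
    private static readonly CosmosClient client = new CosmosClient(
        Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
        new DefaultAzureCredential(),
        new CosmosClientOptions { AllowBulkExecution = true });

    public static async Task LoadAsync(IEnumerable<Reading> readings)
    {
        Container container = client.GetContainer("telemetry", "readings");

        // Issue the operations concurrently; the bulk pipeline groups them per partition.
        var tasks = new List<Task>();
        foreach (Reading reading in readings)
        {
            tasks.Add(container.CreateItemAsync(reading, new PartitionKey(reading.deviceId)));
        }
        await Task.WhenAll(tasks);
    }
}
```

For larger or ongoing migrations, the Azure Data Factory, Kafka, Stream Analytics, and Spark options listed above avoid hand-written loaders entirely.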
We recommend that you train and get hands-on experience before you take the exam. We offer self-study options and classroom training as well as links to documentation, community sites, and videos.
| Study resources | Links to learning and documentation |
| --- | --- |
| Get trained | Choose from self-paced learning paths and modules or take an instructor-led course |
| Find documentation | Azure Cosmos DB documentation, Azure documentation |
| Ask a question | Microsoft Q&A \| Microsoft Docs |
| Get community support | Analytics on Azure - Microsoft Tech Community, Azure Data Factory - Microsoft Tech Community, Azure - Microsoft Tech Community |
| Follow Microsoft Learn | Microsoft Learn - Microsoft Tech Community |
| Find a video | Exam Readiness Zone, Data Exposed, Browse other Microsoft Learn shows |
The following table compares the skills measured before and after the January 27, 2025, update. The functional groups are in bold typeface, followed by the objectives within each group, and the third column describes the extent of the changes.
| Skill area prior to January 27, 2025 | Skill area as of January 27, 2025 | Change |
| --- | --- | --- |
| **Maintain an Azure Cosmos DB solution** | **Maintain an Azure Cosmos DB solution** | No change |
| Implement security for an Azure Cosmos DB solution | Implement security for an Azure Cosmos DB solution | Major |