Developing and Debugging Desired State Configuration Scripts for Azure VMs

PowerShell Desired State Configuration (DSC) scripts are a popular way to configure Virtual Machines in Azure. In previous blog posts, I have made extensive use of them to configure SQL Server Always On and Team Foundation Server in Azure, among other scenarios. These deployments use a combination of Azure Resource Manager templates and DSC scripts. Specifically, the DSC scripts are applied to the Azure Virtual Machines after they have been deployed, to install software and make configuration changes.

When developing DSC scripts for template deployments, you will likely have to do a fair amount of debugging, which can be frustrating and time-consuming. In this blog post, I have collected some tips and tricks based on how I usually do this work. It took me some time to make my workflow more efficient, and I hope that some of the practices I have adopted can help others. There are sure to be better ways to do some of this. Please share if you have other useful hints.

I will use some example template and DSC code; you can find much of it in my Infrastructure as Code repository and also in this simplified template for deploying an IIS Web Server in Azure. This is not a blog post on how to write DSC scripts in general; please refer to other tutorials for that.

Authoring Templates and Packaging Modules

Have a look at this template for an example of deploying a VM and configuring it with DSC. There are a few practices that I want to point out. The DSC script itself is added to the VM with the following resource:

"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'), '/configureweb')]",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
"apiVersion": "2016-03-30",
"location": "[resourceGroup().location]",
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.21",
"autoUpgradeMinorVersion": true,
"settings": {
"modulesURL": "[variables('webConfigureModuleURL')]",
"configurationFunction": "[variables('webConfigureFunction')]",
"properties": {

Notice that the module is identified with a URL (modulesURL) and a function name (configurationFunction). In the template, I generate these in the variables section:

"variables": {
"baseUri": "[deployment().properties.templateLink.uri]",
"webConfigureModuleURL": "[uri(variables('baseUri'), 'DSC/')]",
"webConfigureFunction": "SetupWebServer.ps1\\ConfigureWebDsc"


Notice that I use the URL of the template itself as the reference point for all URLs. Some developers prefer to generate the URL from an absolute reference, typically a hard-coded path, but as I have argued in a previous post, this can cause a lot of problems when revising the template or if others clone your repository. I recommend staying away from absolute GitHub URLs in templates. I would also recommend using my Get-GitHubRawPath tool as discussed in that blog post. It makes it easy to generate URLs for your template deployments as you work your way through revisions. You can find that tool in my HansenAzurePS PowerShell module.

As you can see, the template actually references a zip file rather than the PowerShell script itself. This zip file contains not only the PowerShell script but also any dependent modules it loads. As you develop your scripts, you will need to generate such packages, which you can do with a command like:

cd .\DSC
Publish-AzureRmVMDscConfiguration .\SetupWebServer.ps1 -OutputArchivePath .\

The Publish-AzureRmVMDscConfiguration cmdlet will either upload the package to a storage account or generate a local zip archive. Many of the Azure Quickstart Templates use an artifact location parameter, which can point to a storage account, but as I explained above, I prefer to avoid passing paths into the templates and to rely on relative paths instead. This forces you to push the templates and artifacts to GitHub before testing the deployment, but one way or another something has to be uploaded, and I prefer to push to a repo and keep track of the revisions. If you are concerned about making many commits as you iterate, simply make the commits to a separate branch and squash them when you merge.
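If you do want to upload directly to a storage account instead of producing a local archive, the same cmdlet can do that. Here is a sketch; the resource group, storage account, and container names are placeholders you would replace with your own:

```powershell
# Package the configuration and upload it to a blob container.
# All names below are placeholders for your own resources.
Publish-AzureRmVMDscConfiguration .\SetupWebServer.ps1 `
    -ResourceGroupName 'my-resource-group' `
    -StorageAccountName 'mystorageaccount' `
    -ContainerName 'windows-powershell-dsc' `
    -Force
```

The cmdlet outputs the URL of the uploaded package, which you could then feed into an artifact location parameter, but as noted, I prefer the relative-path approach.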

Debugging Scripts

Even the best developers will need to iterate several times to iron out problems and bugs in their DSC scripts. Frequently, you will get an error during the deployment of the DSC resource; now what? It is tempting to try to locate the bug, save the script and a new zip archive, and simply repeat the deployment. While this can work in simple cases, it is not very efficient if you have to do several iterations. Instead, I recommend logging into the VM, locating the DSC script, and running it locally as you debug.

If you log into the Windows VM, you will find the script under C:\Packages\Plugins\Microsoft.Powershell.DSC\<version>\DSCWork. The exact version number will vary, of course. Right-click the script and open it in PowerShell ISE. Here you can edit it (fix the bug) and simply run it again. Running the script is a three-step process: 1. load the script, 2. generate the Managed Object Format (MOF) file, 3. start the DSC configuration. In the IIS Web Server example, one would run the script with:

# Load
. .\SetupWebServer.ps1

# Generate MOF file
ConfigureWebDsc

# Start it
Start-DscConfiguration .\ConfigureWebDsc -Wait -Force -Verbose

If the script requires any parameters, you can add them as command-line arguments when you generate the MOF file. You can find the parameters that were passed to the DSC extension during template deployment in C:\Packages\Plugins\Microsoft.Powershell.DSC\<version>\RuntimeSettings.
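For example, if the configuration declares parameters, you pass them when you invoke it to generate the MOF file. Here is a hypothetical sketch, where -SomeParameter stands in for whatever parameters your configuration actually declares:

```powershell
# Load the configuration into the session
. .\SetupWebServer.ps1

# Generate the MOF file, supplying the parameter values you found
# in RuntimeSettings (the parameter name here is a placeholder)
ConfigureWebDsc -SomeParameter 'value-from-runtime-settings'

# Apply it
Start-DscConfiguration .\ConfigureWebDsc -Wait -Force -Verbose
```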

Some scripts take in passwords, credentials, and other secrets. If you simply pass the credentials on the command line while debugging, you may get errors about passwords in plain text or about passing and using domain credentials in the scripts. You can avoid these errors during debugging by temporarily allowing plain text passwords and/or domain credentials through the configuration data. Here is an example of what that workflow might look like:

# Set configuration data
$cd = @{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            PSDscAllowDomainUser        = $true
            PSDscAllowPlainTextPassword = $true
        }
    )
}

# Ask for credentials
$creds = Get-Credential

# Load
. .\ScriptFileName.ps1

# Generate MOF
ConfigScriptDsc -DomainName 'contoso.com' -AdminCreds $creds -ConfigurationData $cd

# Run it
Start-DscConfiguration .\ConfigScriptDsc -Wait -Force -Verbose

This lets you bypass the security restrictions during development; once you have finished debugging and run the deployment again, the credentials will be handled securely.

Using Script Resources

DSC scripts are generally made up of a set of resources that each achieve some part of the desired state. For example, the xWebAdministration module used in the example web server script includes the xWebSite resource, which is used to set up a specific web site. There are situations where you need to make configurations that are not supported by any resource you can find in the community. In such cases, you can use a Script resource.

Here is an example of a script resource used to download the TFS server installer:

Script DownloadTFS
{
    GetScript = {
        return @{ 'Result' = $true }
    }
    SetScript = {
        Write-Host ("Downloading TFS: " + $using:currentDownloadLink)
        Invoke-WebRequest -Uri $using:currentDownloadLink -OutFile $using:installerDownload
    }
    TestScript = {
        Test-Path $using:installerDownload
    }
    DependsOn = "[xDisk]ADDataDisk"
}

I want to point out two hints on using these. Firstly, the Script resource consists of three script blocks: Get, Test, and Set. They may look like regular PowerShell blocks, but they are actually stored as text strings that are not executed until after the MOF file is created, so script parameters cannot be used directly inside them. To use an input parameter (e.g., currentDownloadLink in the snippet above), you must use the syntax $using:currentDownloadLink; referencing $currentDownloadLink directly will not work.

Secondly, I recommend taking the time to write a good Test script. The Set script is executed only if the Test script evaluates to false. In the example above, it is not the end of the world if the download runs twice, but there are other scripts that should only run if they have not yet been executed. It is common for developers to let the Test script always return false, because these DSC scripts for VMs typically only get executed once (if successful), but if you write your Test sections correctly, you can rerun the scripts without any problems, which is an important part of debugging. Imagine trying to debug a script with several Script resources and not being able to rerun it because a previously applied resource would be applied twice; that makes debugging very hard. It is not always easy, but it is often worth the time to write a good Test script.
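To illustrate, here is a sketch of what a more careful Test script could look like for a hypothetical install step; the resource name, installer arguments, and install folder are assumptions you would replace with real evidence that the step completed:

```powershell
Script InstallProduct
{
    GetScript = {
        return @{ 'Result' = $true }
    }
    SetScript = {
        # Run the installer downloaded by the previous resource
        Start-Process -FilePath $using:installerDownload -ArgumentList '/quiet' -Wait
    }
    TestScript = {
        # Only run SetScript if the product is not already installed;
        # the install folder below is a placeholder check
        Test-Path 'C:\Program Files\SomeProduct'
    }
    DependsOn = "[Script]DownloadTFS"
}
```

With a Test script like this, rerunning the whole configuration during debugging simply skips the steps that have already succeeded.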


In this post, I reviewed a few of the techniques I use when developing DSC scripts for Azure VMs. The take-home messages are:

  1. Avoid absolute URL references (e.g., raw GitHub URLs) in templates.
  2. Always push revisions to GitHub and use the Get-GitHubRawPath tool to generate paths for specific revisions.
  3. If a script fails to deploy, log into the VM and do the debugging locally to allow many fast iterations.
  4. If your script requires credentials, disable requirements regarding plain text passwords and domain credentials using ConfigurationData.
  5. If you use Script resources, remember the $using: syntax for script parameters.
  6. Remember to write good Test scripts for Script resources to enable efficient debugging.

I hope some of these tips and tricks can help you be more efficient when developing DSC scripts for Azure VMs. Please share your experience if you have other good tips. And as always, let me know if you have questions/comments/concerns.