Mount a highly available Service Fabric Reliable Disk-based volume in a Service Fabric Mesh application


The preview of Azure Service Fabric Mesh has been retired. New deployments will no longer be permitted through the Service Fabric Mesh API. Support for existing deployments will continue through April 28, 2021.

For details, see Azure Service Fabric Mesh Preview Retirement.

The common method of persisting state with container apps is to use remote storage, such as Azure File Storage, or a database, such as Azure Cosmos DB. However, this incurs significant network latency on reads and writes to the remote store.

This article shows how to store state in a highly available Service Fabric Reliable Disk by mounting a volume inside the container of a Service Fabric Mesh application. Service Fabric Reliable Disk provides volumes whose reads are served locally and whose writes are replicated within the Service Fabric cluster for high availability. This removes network calls for reads and reduces network latency for writes. If the container restarts or moves to another node, the new container instance sees the same volume as the old one, so the approach is both efficient and highly available.

In this example, the Counter application has an ASP.NET Core service with a web page that shows the counter value in a browser.

The counterService periodically reads a counter value from a file, increments it, and writes it back. The file is stored in a folder that is mounted on the volume backed by Service Fabric Reliable Disk.
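The read-increment-write cycle the service performs can be sketched in shell terms. This is only an illustration of the logic (the real service is ASP.NET Core), and the `DATA_DIR` path here is a local stand-in for the container folder where the volume is mounted:

```shell
#!/bin/sh
# Illustration only: mimic the counterService's read-increment-write cycle
# against a file in a mounted folder. DATA_DIR stands in for the path where
# the Reliable Disk volume would be mounted inside the container.
DATA_DIR="${DATA_DIR:-/tmp/counter-demo}"
COUNTER_FILE="$DATA_DIR/counter.txt"

mkdir -p "$DATA_DIR"

# Read the current value (defaulting to 0 on the first run), increment it,
# and write it back to the file.
value=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)
value=$((value + 1))
echo "$value" > "$COUNTER_FILE"
echo "counter is now $value"
```

Because the file lives on the replicated volume rather than in the container's writable layer, the value survives container restarts and moves.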


You can use the Azure Cloud Shell or a local installation of the Azure CLI to complete this task. To use the Azure CLI with this article, ensure that az --version returns at least azure-cli (2.0.43). Install (or update) the Azure Service Fabric Mesh CLI extension module by following these instructions.

Sign in to Azure

Sign in to Azure and set your subscription.

az login
az account set --subscription "<subscriptionID>"

Create a resource group

Create a resource group to deploy the application to. The following command creates a resource group named myResourceGroup in a location in the eastern United States. If you change the resource group name in the command below, remember to change it in all the commands that follow.
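To avoid editing every later command when you pick a different name, you can put the group name and location in shell variables first. The variable names here are my own choice, not part of the article's templates:

```shell
# Convenience variables (names are my own choice) so a different resource
# group name only has to be changed in one place.
RG=myResourceGroup
LOC=eastus

# The same create call, expressed with the variables; echoed here for
# illustration since running it requires a live subscription.
CMD="az group create --name $RG --location $LOC"
echo "$CMD"
```

Later commands can then reference `$RG` instead of a hard-coded name.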

az group create --name myResourceGroup --location eastus

Deploy the template


Effective November 2, 2020, download rate limits apply to anonymous and authenticated requests to Docker Hub from Docker Free plan accounts and are enforced by IP address.

This template makes use of public images from Docker Hub. Please note that you may be rate limited. For more details, see Authenticate with Docker Hub.

The following command deploys a Linux application using the counter.sfreliablevolume.linux.json template. To deploy a Windows application, use the corresponding Windows template. Be aware that larger container images may take longer to deploy.

az mesh deployment create --resource-group myResourceGroup --template-uri <template-URL>

You can also see the state of the deployment with the following command:

az deployment group show --name counter.sfreliablevolume.linux --resource-group myResourceGroup

Note the name of the gateway resource, which has the resource type Microsoft.ServiceFabricMesh/gateways. You will use this name to get the public IP address of the app.

Open the application

Once the application successfully deploys, get the ipAddress of the gateway resource for the app. Use the gateway name you noted in the previous section.

az mesh gateway show --resource-group myResourceGroup --name counterGateway

The output should include an ipAddress property, which is the public IP address of the service endpoint. Open this address in a browser; it displays a web page with the counter value updating every second.
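If you only need the address itself, the CLI's global --query and --output flags should return it directly, for example `az mesh gateway show --resource-group myResourceGroup --name counterGateway --query ipAddress --output tsv`. As a local sketch of pulling the property out of the JSON output (the sample JSON below is invented for illustration, not real command output):

```shell
#!/bin/sh
# Sample JSON standing in for `az mesh gateway show` output; the address
# is from the documentation IP range and is invented for illustration.
json='{ "name": "counterGateway", "ipAddress": "203.0.113.10" }'

# Extract the ipAddress property with sed.
ip=$(printf '%s' "$json" | sed -n 's/.*"ipAddress": *"\([^"]*\)".*/\1/p')
echo "$ip"
```

In practice, prefer the `--query` flag over post-processing JSON by hand.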

Verify that the application is able to use the volume

The application creates a file named counter.txt in the counter/counterService folder inside the volume. The content of this file is the counter value displayed on the web page.

Delete the resources

Regularly delete the resources you are no longer using in Azure. To delete the resources related to this example, delete the resource group in which they were deployed (which deletes everything associated with the resource group) with the following command:

az group delete --name myResourceGroup
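In automation you typically want to skip the confirmation prompt and return without waiting for the deletion to finish; `az group delete` supports `--yes` and `--no-wait` for this. A sketch that only assembles the command string (running it requires a live subscription):

```shell
# Build the non-interactive delete command: --yes skips the confirmation
# prompt and --no-wait returns before the deletion completes.
RG=myResourceGroup
CMD="az group delete --name $RG --yes --no-wait"
echo "$CMD"
```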

Next steps