Planning for an Azure File Sync deployment


Azure File Sync is a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM.

This article introduces you to Azure File Sync concepts and features. Once you're familiar with Azure File Sync, consider following the Azure File Sync deployment guide to try out this service.

The files will be stored in the cloud in Azure file shares. Azure file shares can be used in two ways: by directly mounting these serverless Azure file shares (SMB) or by caching Azure file shares on-premises using Azure File Sync. Which deployment option you choose changes the aspects you need to consider as you plan for your deployment.

  • Direct mount of an Azure file share: Because Azure Files provides SMB access, you can mount Azure file shares on-premises or in the cloud using the standard SMB client available in Windows, macOS, and Linux. Because Azure file shares are serverless, deploying for production scenarios doesn't require managing a file server or NAS device. This means you don't have to apply software patches or swap out physical disks.

  • Cache Azure file share on-premises with Azure File Sync: Azure File Sync enables you to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud) Windows Server into a quick cache of your Azure file share.

Management concepts

An Azure File Sync deployment has three fundamental management objects:

  • Azure file share: An Azure file share is a serverless cloud file share, which provides the cloud endpoint of an Azure File Sync sync relationship. Files in an Azure file share can be accessed directly with SMB or the FileREST protocol, although we encourage you to primarily access the files through the Windows Server cache when the Azure file share is being used with Azure File Sync. This is because Azure Files today lacks an efficient change detection mechanism like Windows Server has, so changes to the Azure file share directly will take time to propagate back to the server endpoints.
  • Server endpoint: The path on the Windows Server that is being synced to an Azure file share. This can be a specific folder on a volume or the root of the volume. Multiple server endpoints can exist on the same volume if their namespaces don't overlap.
  • Sync group: The object that defines the sync relationship between a cloud endpoint, or Azure file share, and a server endpoint. Endpoints within a sync group are kept in sync with each other. If, for example, you have two distinct sets of files that you want to manage with Azure File Sync, you would create two sync groups and add different endpoints to each sync group.

Azure file share management concepts

Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares, as well as other storage resources such as blob containers, queues, or tables. All storage resources that are deployed into a storage account share the limits that apply to that storage account. For current storage account limits, see Azure Files scalability and performance targets.

There are two main types of storage accounts you will use for Azure Files deployments:

  • General purpose version 2 (GPv2) storage accounts: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables.
  • FileStorage storage accounts: FileStorage storage accounts allow you to deploy Azure file shares on premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage account. Only FileStorage accounts can deploy both SMB and NFS file shares.

There are several other storage account types you may come across in the Azure portal, PowerShell, or CLI. Two storage account types, BlockBlobStorage and BlobStorage storage accounts, cannot contain Azure file shares. The other two storage account types you may see are general purpose version 1 (GPv1) and classic storage accounts, both of which can contain Azure file shares. Although GPv1 and classic storage accounts may contain Azure file shares, most new features of Azure Files are available only in GPv2 and FileStorage storage accounts. We therefore recommend using only GPv2 and FileStorage storage accounts for new deployments, and upgrading GPv1 and classic storage accounts if they already exist in your environment.
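
As a concrete illustration, the following is a minimal PowerShell sketch (Az.Storage module; resource names, region, and quota are placeholders) showing how the two recommended account kinds are created and how a file share is deployed into one of them:

# GPv2 account for standard (HDD-based) Azure file shares; it can also hold blobs, queues, and tables.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystandardstorage" -Location "westus2" -Kind StorageV2 -SkuName Standard_LRS

# FileStorage account for premium (SSD-based) Azure file shares; it can hold file shares only.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mypremiumstorage" -Location "westus2" -Kind FileStorage -SkuName Premium_LRS

# Deploy an Azure file share into the standard account.
New-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystandardstorage" -Name "myshare" -QuotaGiB 1024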

Azure File Sync management concepts

Sync groups are deployed into Storage Sync Services, which are top-level objects that register servers for use with Azure File Sync and contain the sync group relationships. The Storage Sync Service resource is a peer of the storage account resource, and can similarly be deployed to Azure resource groups. A Storage Sync Service can create sync groups that contain Azure file shares across multiple storage accounts and multiple registered Windows Servers.

Before you can create a sync group in a Storage Sync Service, you must first register a Windows Server with the Storage Sync Service. This creates a registered server object, which represents a trust relationship between your server or cluster and the Storage Sync Service. To register a server with a Storage Sync Service, you must first install the Azure File Sync agent on that server. An individual server or cluster can be registered with only one Storage Sync Service at a time.

A sync group contains one cloud endpoint, or Azure file share, and at least one server endpoint. The server endpoint object contains the settings that configure the cloud tiering capability, which provides the caching capability of Azure File Sync. In order to sync with an Azure file share, the storage account containing the Azure file share must be in the same Azure region as the Storage Sync Service.
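
To make these objects concrete, here's a minimal PowerShell sketch of how they fit together, assuming the Az.StorageSync module and placeholder resource names (the Azure File Sync agent must already be installed on the server for the registration step):

# Create the top-level Storage Sync Service and a sync group inside it.
New-AzStorageSyncService -ResourceGroupName "myResourceGroup" -Name "myStorageSyncService" -Location "westus2"
New-AzStorageSyncGroup -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService" -Name "mySyncGroup"

# Run on the Windows Server after installing the agent; this creates the registered server object.
Register-AzStorageSyncServer -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService"

# Add the cloud endpoint (an Azure file share) and a server endpoint (a path on the registered server).
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystandardstorage"
$registeredServer = Get-AzStorageSyncServer -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService"
New-AzStorageSyncCloudEndpoint -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "mySyncGroup" -Name "myCloudEndpoint" -StorageAccountResourceId $storageAccount.Id -AzureFileShareName "myshare"
New-AzStorageSyncServerEndpoint -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "mySyncGroup" -Name "myServerEndpoint" -ServerResourceId $registeredServer.ResourceId -ServerLocalPath "D:\Data"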

Important

You can make changes to the namespace of any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see Azure Files frequently asked questions.

Consider the count of Storage Sync Services needed

A previous section discusses the core resource to configure for Azure File Sync: a Storage Sync Service. A Windows Server can be registered to only one Storage Sync Service, so it's often best to deploy a single Storage Sync Service and register all servers with it.

Create multiple Storage Sync Services only if you have:

  • distinct sets of servers that must never exchange data with one another. In this case, you want to design the system to exclude certain sets of servers from syncing with an Azure file share that's already in use as a cloud endpoint in a sync group in a different Storage Sync Service. Another way to look at this is that Windows Servers registered to different Storage Sync Services can't sync with the same Azure file share.
  • a need to have more registered servers or sync groups than a single Storage Sync Service can support. Review the Azure File Sync scale targets for more details.

Plan for balanced sync topologies

Before you deploy any resources, it's important to plan out what you will sync on a local server and with which Azure file share. Making a plan will help you determine how many storage accounts, Azure file shares, and sync resources you'll need. These considerations are still relevant, even if your data doesn't currently reside on a Windows Server or the server you want to use long term. The migration section can help determine appropriate migration paths for your situation.

In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or cluster) can sync up to 30 Azure file shares.

You might have more folders on your volumes that you currently share out locally as SMB shares to your users and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we recommend a 1:1 mapping.

If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary. Consider the following options.

Share grouping

For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises shares.
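
As a sketch of that regrouping (the D:\HR common root, share names, and the security group are hypothetical; robocopy and New-SmbShare are standard Windows tooling, not Azure File Sync features):

# Create the common root that will become the single server endpoint.
New-Item -ItemType Directory -Path "D:\HR" | Out-Null

# Move the contents of each existing HR share under the common root (repeat for all 15 shares).
robocopy "D:\Payroll" "D:\HR\Payroll" /E /COPYALL /MOVE

# Re-create the local SMB share so users keep the same share name, now pointing under D:\HR.
New-SmbShare -Name "Payroll" -Path "D:\HR\Payroll" -FullAccess "CONTOSO\HR-Admins"

# D:\HR is then synced as one server endpoint to a single Azure file share.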

Volume sync

Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all subfolders and files will go to the same Azure file share.

Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't beneficial only for file sync. A lower number of items also benefits scenarios like these:

  • Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to appear on a server enabled for Azure File Sync.
  • Cloud-side restore from an Azure file share snapshot will be faster.
  • Disaster recovery of an on-premises server can speed up significantly.
  • Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

Tip

If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
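
If you'd rather not install extra tooling, a rough count can also be taken with PowerShell. This is a sketch with a placeholder path, and it can be slow on large namespaces:

(Get-ChildItem -Path "D:\Data" -Recurse -Force | Measure-Object).Count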

A structured approach to a deployment map

Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync connection.

To decide how many Azure file shares you need, review the following limits and best practices. Doing so will help you optimize your map.

  • A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.

  • An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale target for performance numbers like IOPS and throughput.

    Pay attention to a storage account's IOPS limitations when deploying Azure file shares. Ideally, you should map file shares 1:1 with storage accounts. However, this might not always be possible due to various limits and restrictions, both from your organization and from Azure. When it's not possible to have only one file share deployed in one storage account, consider which shares will be highly active and which shares will be less active to ensure that the hottest file shares don't get put in the same storage account together.

    If you plan to lift an app to Azure that will use the Azure file share natively, you might need more performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to create a single standard Azure file share in its own storage account.

  • There's a limit of 250 storage accounts per subscription per Azure region.

Tip

Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.

This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a common root. Nothing else changes.

Important

The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced. Review the Azure File Sync scale targets for more details.

It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders) per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will provide you with room to grow.

It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync to two Azure file shares instead of one. You can use this approach to keep the number of files and folders per file share balanced across the server. You can also split your on-premises shares and sync across more on-premises servers, adding the ability to sync with 30 more Azure file shares per extra server.

Common file sync scenarios and considerations

Scenario 1: File server with multiple disks/volumes and multiple shares to the same target Azure file share (consolidation)
  • Supported: No
  • Considerations (or limitations): A target Azure file share (cloud endpoint) only supports syncing with one sync group. A sync group only supports one server endpoint per registered server.
  • Solution (or workaround):
    1) Start by syncing one disk (its root volume) to the target Azure file share. Starting with the largest disk/volume will help with storage requirements on-premises. Configure cloud tiering to tier all data to the cloud, thereby freeing up space on the file server disk. Move data from other volumes/shares into the volume that is currently syncing. Continue step by step until all data is tiered to the cloud/migrated.
    2) Target one root volume (disk) at a time. Use cloud tiering to tier all data to the target Azure file share. Remove the server endpoint from the sync group, re-create the endpoint with the next root volume/disk, sync, and repeat the process. Note: Agent re-install might be required.
    3) Recommended: use multiple target Azure file shares (same or different storage account, based on performance requirements).

Scenario 2: File server with a single volume and multiple shares to the same target Azure file share (consolidation)
  • Supported: Yes
  • Considerations (or limitations): Can't have multiple server endpoints per registered server syncing to the same target Azure file share (same as above).
  • Solution (or workaround): Sync the root of the volume holding the multiple shares or top-level folders. Refer to the Share grouping concept and Volume sync for more information.

Scenario 3: File server with multiple shares and/or volumes to multiple Azure file shares under a single storage account (1:1 share mapping)
  • Supported: Yes
  • Considerations (or limitations): A single Windows Server instance (or cluster) can sync up to 30 Azure file shares. A storage account is a scale target for performance; IOPS and throughput are shared across file shares. Keep the number of items per sync group within 100 million items (files and folders) per share; ideally stay below 20 or 30 million per share.
  • Solution (or workaround):
    1) Use multiple sync groups (number of sync groups = number of Azure file shares to sync to).
    2) Only 30 shares can be synced in this scenario at a time. If you have more than 30 shares on that file server, use the Share grouping concept and Volume sync to reduce the number of root or top-level folders at the source.
    3) Use additional File Sync servers on-premises and split/move data to these servers to work around limitations on the source Windows Server.

Scenario 4: File server with multiple shares and/or volumes to multiple Azure file shares under different storage accounts (1:1 share mapping)
  • Supported: Yes
  • Considerations (or limitations): A single Windows Server instance (or cluster) can sync up to 30 Azure file shares (same or different storage account). Keep the number of items per sync group within 100 million items (files and folders) per share; ideally stay below 20 or 30 million per share.
  • Solution (or workaround): Same approach as above.

Scenario 5: Multiple file servers with a single source (root volume or share) to the same target Azure file share (consolidation)
  • Supported: No
  • Considerations (or limitations): A sync group can't use a cloud endpoint (Azure file share) that is already configured in another sync group. Although a sync group can have server endpoints on different file servers, the files can't be distinct.
  • Solution (or workaround): Follow the guidance in Scenario 1 above, with the additional consideration of targeting one file server at a time.

Create a mapping table


Use the previous information to determine how many Azure file shares you need and which parts of your existing data will end up in which Azure file share.

Create a table that records your thoughts so you can refer to it when you need to. Staying organized is important because it can be easy to lose details of your mapping plan when you're provisioning many Azure resources at once. Download the following Excel file to use as a template to help create your mapping.


Download a namespace-mapping template.

Windows file server considerations

To enable the sync capability on Windows Server, you must install the Azure File Sync downloadable agent. The Azure File Sync agent provides two main components: FileSyncSvc.exe, the background Windows service that's responsible for monitoring changes on the server endpoints and initiating sync sessions, and StorageSync.sys, a file system filter that enables cloud tiering and fast disaster recovery.

Operating system requirements

Azure File Sync is supported with the following versions of Windows Server:

Version Supported SKUs Supported deployment options
Windows Server 2025 Azure, Datacenter, Essentials, Standard, and IoT Full and Core
Windows Server 2022 Azure, Datacenter, Essentials, Standard, and IoT Full and Core
Windows Server 2019 Datacenter, Essentials, Standard, and IoT Full and Core
Windows Server 2016 Datacenter, Essentials, Standard, and Storage Server Full and Core
Windows Server 2012 R2* Datacenter, Essentials, Standard, and Storage Server Full and Core

*Requires downloading and installing Windows Management Framework (WMF) 5.1. The appropriate package to download and install for Windows Server 2012 R2 is Win8.1AndW2K12R2-KB*******-x64.msu.

Future versions of Windows Server will be added as they are released.

Important

We recommend keeping all servers that you use with Azure File Sync up to date with the latest updates from Windows Update.

Minimum system resources

Azure File Sync requires a server, either physical or virtual, with at least one CPU, a minimum of 2 GiB of memory, and a locally attached volume formatted with the NTFS file system.

Important

If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum of 2048 MiB of memory.

For most production workloads, we don't recommend configuring an Azure File Sync server with only the minimum requirements. See Recommended system resources for more information.

Just like any server feature or application, the system resource requirements for Azure File Sync are determined by the scale of the deployment; larger deployments on a server require greater system resources. For Azure File Sync, scale is determined by the number of objects across the server endpoints and the churn on the dataset. A single server can have server endpoints in multiple sync groups and the number of objects listed in the following table accounts for the full namespace that a server is attached to.

For example, server endpoint A with 10 million objects + server endpoint B with 10 million objects = 20 million objects. For that example deployment, we would recommend 8 CPUs, 16 GiB of memory for steady state, and (if possible) 48 GiB of memory for the initial migration.

Namespace data is stored in memory for performance reasons. Because of that, bigger namespaces require more memory to maintain good performance, and more churn requires more CPU to process.

In the following table, we've provided both the size of the namespace as well as a conversion to capacity for typical general purpose file shares, where the average file size is 512 KiB. If your file sizes are smaller, consider adding additional memory for the same amount of capacity. Base your memory configuration on the size of the namespace.

Namespace size - files & directories (millions) Typical capacity (TiB) CPU Cores Recommended memory (GiB)
3 1.4 2 8 (initial sync)/ 2 (typical churn)
5 2.3 2 16 (initial sync)/ 4 (typical churn)
10 4.7 4 32 (initial sync)/ 8 (typical churn)
30 14.0 8 48 (initial sync)/ 16 (typical churn)
50 23.3 16 64 (initial sync)/ 32 (typical churn)
100* 46.6 32 128 (initial sync)/ 32 (typical churn)

*Syncing more than 100 million files & directories isn't recommended. This is a soft limit based on our tested thresholds. For more information, see Azure File Sync scale targets.

Tip

Initial synchronization of a namespace is an intensive operation, and we recommend allocating more memory until initial synchronization is complete. This isn't required but might speed up initial sync.

Typical churn is 0.5% of the namespace changing per day. For higher levels of churn, consider adding more CPU.

Evaluation cmdlet

Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation cmdlet. This cmdlet checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported operating system version. These checks cover most but not all of the features mentioned below; we recommend you read through the rest of this section carefully to ensure your deployment goes smoothly.

To get the evaluation cmdlet, install the Az PowerShell module by following the instructions here: Install and configure Azure PowerShell.
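
For example, one way to install the module from the PowerShell Gallery (installing only the Az.StorageSync submodule should also be sufficient for this cmdlet, but treat that as an assumption):

Install-Module -Name Az -Repository PSGallery -Force
# Or install just the Storage Sync module:
Install-Module -Name Az.StorageSync -Repository PSGallery -Force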

Usage

You can invoke the evaluation tool in a few different ways: you can perform the system checks, the dataset checks, or both. To perform both the system and dataset checks:

Invoke-AzStorageSyncCompatibilityCheck -Path <path>

To test only your dataset:

Invoke-AzStorageSyncCompatibilityCheck -Path <path> -SkipSystemChecks

To test system requirements only:

Invoke-AzStorageSyncCompatibilityCheck -ComputerName <computer name> -SkipNamespaceChecks

To display the results in CSV:

$validation = Invoke-AzStorageSyncCompatibilityCheck C:\DATA
$validation.Results | Select-Object -Property Type, Path, Level, Description, Result | Export-Csv -Path C:\results.csv -Encoding utf8

File system compatibility

Azure File Sync is only supported on directly attached, NTFS volumes. Direct attached storage, or DAS, on Windows Server means that the Windows Server operating system owns the file system. DAS can be provided through physically attaching disks to the file server, attaching virtual disks to a file server VM (such as a VM hosted by Hyper-V), or even through iSCSI.

Only NTFS volumes are supported; ReFS, FAT, FAT32, and other file systems aren't supported.

The following table shows the interop state of NTFS file system features:

Feature Support status Notes
Access control lists (ACLs) Fully supported Windows-style discretionary access control lists are preserved by Azure File Sync, and are enforced by Windows Server on server endpoints. ACLs can also be enforced when directly mounting the Azure file share, however this requires additional configuration. See the Identity section for more information.
Hard links Skipped
Symbolic links Skipped
Mount points Partially supported Mount points might be the root of a server endpoint, but they are skipped if they are contained in a server endpoint's namespace.
Junctions Skipped For example, Distributed File System DfrsrPrivate and DFSRoots folders.
Reparse points Skipped
NTFS compression Partially supported Azure File Sync doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed.
Sparse files Fully supported Sparse files sync (are not blocked), but they sync to the cloud as a full file. If the file contents change in the cloud (or on another server), the file is no longer sparse when the change is downloaded.
Alternate Data Streams (ADS) Preserved, but not synced For example, classification tags created by the File Classification Infrastructure aren't synced. Existing classification tags on files on each of the server endpoints are left untouched.

Azure File Sync will also skip certain temporary files and system folders:

File/folder Note
pagefile.sys File specific to system
Desktop.ini File specific to system
thumbs.db Temporary file for thumbnails
ehthumbs.db Temporary file for media thumbnails
~$*.* Office temporary file
*.tmp Temporary file
*.laccdb Access DB locking file
635D02A9D91C401B97884B82B3BCDAEA.* Internal sync file
\System Volume Information Folder specific to volume
$RECYCLE.BIN Folder
\SyncShareState Folder for sync
.SystemShareInformation Folder for sync in Azure file share

Note

While Azure File Sync supports syncing database files, databases aren't a good workload for sync solutions (including Azure File Sync) because the log files and databases need to be synced together, and they can get out of sync for various reasons which could lead to database corruption.

Consider how much free space you need on your local disk

When planning to use Azure File Sync, consider how much free space you need on the local disk you plan to have a server endpoint on.

With Azure File Sync, you will need to account for the following taking up space on your local disk:

  • With cloud tiering enabled:

    • Reparse points for tiered files
    • Azure File Sync metadata database
    • Azure File Sync heatstore
    • Fully downloaded files in your hot cache (if any)
    • Volume free space policy requirements
  • With cloud tiering disabled:

    • Fully downloaded files
    • Azure File Sync heatstore
    • Azure File Sync metadata database

We'll use an example to illustrate how to estimate the amount of free space you would need on your local disk. Let's say you installed your Azure File Sync agent on your Azure Windows VM and plan to create a server endpoint on disk F. You have 1 million files, all of which you'd like to tier, 100,000 directories, and a disk cluster size of 4 KiB. The disk size is 1000 GiB. You want to enable cloud tiering and set your volume free space policy to 20%.

  1. NTFS allocates a cluster size for each of the tiered files. 1 million files * 4 KiB cluster size = 4,000,000 KiB (4 GiB)

    Note

    To fully benefit from cloud tiering, it's recommended to use smaller NTFS cluster sizes (less than 64 KiB), since each tiered file occupies a cluster. Also, the space occupied by tiered files is allocated by NTFS, so it won't show up in any UI.

  2. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KiB cluster size = 4,400,000 KiB (4.4 GiB)
  3. Azure File Sync heatstore occupies 1.1 KiB per file. 1 million files * 1.1 KiB = 1,100,000 KiB (1.1 GiB)
  4. Volume free space policy is 20%. 1000 GiB * 0.2 = 200 GiB

In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of space for this namespace. Add this amount to any additional free space that is desired in order to figure out how much free space is required for this disk.
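
The same arithmetic expressed as a small PowerShell sketch (the inputs are the example's assumptions, and it follows the article's rounded 1 GiB = 1,000,000 KiB convention):

$files        = 1000000      # files to tier
$directories  = 100000       # directories
$clusterKiB   = 4            # NTFS cluster size in KiB
$diskGiB      = 1000         # volume size in GiB
$freeSpacePct = 0.20         # volume free space policy

$tieredFilesKiB = $files * $clusterKiB                     # 4,000,000 KiB
$metadataKiB    = ($files + $directories) * $clusterKiB    # 4,400,000 KiB
$heatstoreKiB   = $files * 1.1                             # 1,100,000 KiB
$policyKiB      = $diskGiB * $freeSpacePct * 1000000       # 200,000,000 KiB

$totalKiB = $tieredFilesKiB + $metadataKiB + $heatstoreKiB + $policyKiB
"{0:N0} KiB (~{1:N1} GiB)" -f $totalKiB, ($totalKiB / 1000000)   # about 209,500,000 KiB (209.5 GiB)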

Failover Clustering

  1. Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment option. For more information on how to configure the "File Server for general use" role on a Failover Cluster, see Deploying a two-node clustered file server.
  2. The only scenario supported by Azure File Sync is a Windows Server Failover Cluster with clustered disks.
  3. Failover Clustering isn't supported on "Scale-Out File Server for application data" (SOFS), on Cluster Shared Volumes (CSVs), or on local disks.

Note

The Azure File Sync agent must be installed on every node in a Failover Cluster for sync to work correctly.

Data Deduplication

Windows Server 2025, Windows Server 2022, Windows Server 2019, and Windows Server 2016
Data Deduplication is supported on Windows Server 2016, Windows Server 2019, Windows Server 2022, and Windows Server 2025, irrespective of whether cloud tiering is enabled or disabled on one or more server endpoints on the volume. Enabling Data Deduplication on a volume with cloud tiering enabled lets you cache more files on-premises without provisioning more storage.

When Data Deduplication is enabled on a volume with cloud tiering enabled, Dedup optimized files within the server endpoint location will be tiered similar to a normal file based on the cloud tiering policy settings. Once the Dedup optimized files have been tiered, the Data Deduplication garbage collection job will run automatically to reclaim disk space by removing unnecessary chunks that are no longer referenced by other files on the volume.

Note the volume savings only apply to the server; your data in the Azure file share won't be deduped.

Note

To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update KB4520062 - October 2019 or a later monthly rollup update must be installed.

Windows Server 2012 R2
Azure File Sync doesn't support Data Deduplication and cloud tiering on the same volume on Windows Server 2012 R2. If Data Deduplication is enabled on a volume, cloud tiering must be disabled.

Notes

  • If Data Deduplication is installed prior to installing the Azure File Sync agent, a restart is required to support Data Deduplication and cloud tiering on the same volume.

  • If Data Deduplication is enabled on a volume after cloud tiering is enabled, the initial Deduplication optimization job will optimize files on the volume that are not already tiered and will have the following impact on cloud tiering:

    • Free space policy will continue to tier files as per the free space on the volume by using the heatmap.
    • Date policy will skip tiering of files that may have been otherwise eligible for tiering due to the Deduplication optimization job accessing the files.
  • For ongoing Deduplication optimization jobs, cloud tiering with date policy will get delayed by the Data Deduplication MinimumFileAgeDays setting, if the file isn't already tiered.

    • Example: If the MinimumFileAgeDays setting is seven days and cloud tiering date policy is 30 days, the date policy will tier files after 37 days.
    • Note: Once a file is tiered by Azure File Sync, the Deduplication optimization job will skip the file.
  • If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016, Windows Server 2019, Windows Server 2022, or Windows Server 2025, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:

    • Uninstall the Azure File Sync agent for Windows Server 2012 R2 and restart the server.
    • Download the Azure File Sync agent for the new server operating system version (Windows Server 2016, Windows Server 2019, Windows Server 2022, or Windows Server 2025).
    • Install the Azure File Sync agent and restart the server.

    Note: The Azure File Sync configuration settings on the server are retained when the agent is uninstalled and reinstalled.

Distributed File System (DFS)

Azure File Sync supports interop with DFS Namespaces (DFS-N) and DFS Replication (DFS-R).

DFS Namespaces (DFS-N): Azure File Sync is fully supported with DFS-N implementation. You can install the Azure File Sync agent on one or more file servers to sync data between the server endpoints and the cloud endpoint, and then use DFS-N to provide namespace service. For more information, see DFS Namespaces overview and DFS Namespaces with Azure Files.

DFS Replication (DFS-R): Since DFS-R and Azure File Sync are both replication solutions, in most cases, we recommend replacing DFS-R with Azure File Sync. There are however several scenarios where you would want to use DFS-R and Azure File Sync together:

  • You're migrating from a DFS-R deployment to an Azure File Sync deployment. For more information, see Migrate a DFS Replication (DFS-R) deployment to Azure File Sync.
  • Not every on-premises server that needs a copy of your file data can be connected directly to the internet.
  • Branch servers consolidate data onto a single hub server, for which you would like to use Azure File Sync.

For Azure File Sync and DFS-R to work side by side:

  1. Azure File Sync cloud tiering must be disabled on volumes with DFS-R replicated folders.
  2. Server endpoints shouldn't be configured on DFS-R read-only replication folders.
  3. Only a single server endpoint can overlap with a DFS-R location. Multiple server endpoints overlapping with other active DFS-R locations might lead to conflicts.

For more information, see DFS Replication overview.

Sysprep

Using sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. Agent installation and server registration should occur after deploying the server image and completing sysprep mini-setup.

Windows Search

If cloud tiering is enabled on a server endpoint, files that are tiered are skipped and aren't indexed by Windows Search. Non-tiered files are indexed properly.

Note

Windows clients will cause recalls when searching the file share if the Always search file names and contents setting is enabled on the client machine. This setting is disabled by default.

Other Hierarchical Storage Management (HSM) solutions

No other HSM solutions should be used with Azure File Sync.

Performance and Scalability

Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution are better measured in the number of objects (files and directories) processed per second.

Changes made to the Azure file share by using the Azure portal or SMB aren't immediately detected and replicated like changes to the server endpoint. Azure Files doesn't have change notifications or journaling, so there's no way to automatically initiate a sync session when files are changed. On Windows Server, Azure File Sync uses Windows USN journaling to automatically initiate a sync session when files change.

To detect changes to the Azure file share, Azure File Sync has a scheduled job called a change detection job. A change detection job enumerates every file in the file share, and then compares it to the sync version for that file. When the change detection job determines that files have changed, Azure File Sync initiates a sync session. The change detection job is initiated every 24 hours. Because the change detection job works by enumerating every file in the Azure file share, change detection takes longer in larger namespaces than in smaller namespaces. For large namespaces, it might take longer than once every 24 hours to determine which files have changed.
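
If you need a change made directly in the Azure file share to be picked up sooner than the scheduled job, the Az.StorageSync PowerShell module includes an Invoke-AzStorageSyncChangeDetection cmdlet that can scan a scoped path on demand. The following is a hedged sketch; the resource names are placeholders and the cloud endpoint name is the GUID shown in the portal:

Invoke-AzStorageSyncChangeDetection -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "mySyncGroup" -CloudEndpointName "b38fc242-8100-4807-89d0-399cef5863bf" -Path "Data","Reporting\Templates"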

For more information, see Azure File Sync performance metrics and Azure File Sync scale targets.

Identity

Azure File Sync works with your standard AD-based identity without any special setup beyond setting up sync. When you're using Azure File Sync, the general expectation is that most accesses go through the Azure File Sync caching servers, rather than through the Azure file share. Since the server endpoints are located on Windows Server, and Windows Server has supported AD and Windows-style ACLs for a long time, nothing is needed beyond ensuring the Windows file servers registered with the Storage Sync Service are domain joined. Azure File Sync will store ACLs on the files in the Azure file share, and will replicate them to all server endpoints.

Even though changes made directly to the Azure file share will take longer to sync to the server endpoints in the sync group, you might also want to ensure that you can enforce your AD permissions on your file share directly in the cloud as well. To do this, you must domain join your storage account to your on-premises AD, just like how your Windows file servers are domain joined. To learn more about domain joining your storage account to a customer-owned Active Directory, see Azure Files Active Directory overview.
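
A hedged sketch of that domain-join step, assuming the separately downloaded AzFilesHybrid PowerShell module and placeholder names (run from a domain-joined machine with permissions to create the computer account in AD):

Import-Module AzFilesHybrid
Join-AzStorageAccount -ResourceGroupName "myResourceGroup" -StorageAccountName "mystandardstorage" -DomainAccountType "ComputerAccount" -OrganizationalUnitDistinguishedName "OU=FileServers,DC=contoso,DC=com"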

Important

Domain joining your storage account to Active Directory isn't required to successfully deploy Azure File Sync. This is a strictly optional step that allows the Azure file share to enforce on-premises ACLs when users mount the Azure file share directly.

Networking

The Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. SMB is never used to upload or download data between your Windows Server and the Azure file share. Because most organizations allow HTTPS traffic over port 443, as a requirement for visiting most websites, special networking configuration is usually not required to deploy Azure File Sync.

Important

Azure File Sync doesn't support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.

Based on your organization's policy or unique regulatory requirements, you might require more restrictive communication with Azure, and therefore Azure File Sync provides several mechanisms for you to configure networking. Based on your requirements, you can:

  • Tunnel sync and file upload/download traffic over your ExpressRoute or Azure VPN.
  • Make use of Azure Files and Azure Networking features such as service endpoints and private endpoints.
  • Configure Azure File Sync to support your proxy in your environment.
  • Throttle network activity from Azure File Sync.
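
As one example of the last option, the server cmdlets installed with the Azure File Sync agent include cmdlets for scheduling bandwidth limits. This is a hedged sketch; the agent installation path, days, hours, and limit are placeholders:

Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
New-StorageSyncNetworkLimit -Day Monday, Tuesday, Wednesday, Thursday, Friday -StartHour 9 -EndHour 17 -LimitKbps 20000
Get-StorageSyncNetworkLimit   # review the configured limits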

Tip

If you want to communicate with your Azure file share over SMB but port 445 is blocked, consider using SMB over QUIC, which offers zero-config "SMB VPN" for SMB access to your Azure file shares using the QUIC transport protocol over port 443. Although Azure Files doesn't directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see SMB over QUIC with Azure File Sync.

To learn more about Azure File Sync and networking, see Azure File Sync networking considerations.

Encryption

When using Azure File Sync, there are three different layers of encryption to consider: encryption at rest on Windows Server, encryption in transit between the Azure File Sync agent and Azure, and encryption at rest of your data in the Azure file share.

Windows Server encryption at rest

There are two strategies for encrypting data on Windows Server that work generally with Azure File Sync: encryption beneath the file system such that the file system and all of the data written to it is encrypted, and encryption within the file format itself. These methods aren't mutually exclusive; they can be used together if desired because the purpose of encryption is different.

To provide encryption beneath the file system, Windows Server provides BitLocker inbox. BitLocker is fully transparent to Azure File Sync. The primary reason to use an encryption mechanism like BitLocker is to prevent physical exfiltration of data from your on-premises datacenter by someone stealing the disks, and to prevent sideloading an unauthorized OS to perform unauthorized reads/writes to your data. To learn more about BitLocker, see BitLocker overview.

Third-party products that work similarly to BitLocker, in that they sit beneath the NTFS volume, should similarly work fully transparently with Azure File Sync.

The other main method for encrypting data is to encrypt the file's data stream when the application saves the file. Some applications might do this natively, however this usually isn't the case. An example of a method for encrypting the file's data stream is Azure Information Protection (AIP)/Azure Rights Management Services (Azure RMS)/Active Directory RMS. The primary reason to use an encryption mechanism like AIP/RMS is to prevent data exfiltration of data from your file share by people copying it to alternate locations, like to a flash drive, or emailing it to an unauthorized person. When a file's data stream is encrypted as part of the file format, this file will continue to be encrypted on the Azure file share.

Azure File Sync doesn't interoperate with NTFS Encrypted File System (NTFS EFS) or third-party encryption solutions that sit above the file system but below the file's data stream.

Encryption in transit

Note

Azure File Sync service removed support for TLS 1.0 and 1.1 on August 1st, 2020. All supported Azure File Sync agent versions already use TLS 1.2 by default. Using an earlier version of TLS could occur if TLS 1.2 was disabled on your server or a proxy is used. If you are using a proxy, we recommend you check the proxy configuration. Azure File Sync service regions added after 5/1/2020 only support TLS 1.2. For more information, see the troubleshooting guide.

The Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. Azure File Sync doesn't send unencrypted requests over HTTP.

Azure storage accounts contain a switch for requiring encryption in transit, which is enabled by default. Even if the switch at the storage account level is disabled, meaning that unencrypted connections to your Azure file shares are possible, Azure File Sync will still only use encrypted channels to access your file share.

The primary reason to disable encryption in transit for the storage account is to support a legacy application that must be run on an older operating system, such as Windows Server 2008 R2 or an older Linux distribution, that talks to an Azure file share directly. If the legacy application talks to the Windows Server cache of the file share, toggling this setting will have no effect.

We strongly recommend ensuring encryption of data in-transit is enabled.
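
To confirm or re-enable the setting, a minimal sketch using the Az.Storage module and placeholder names:

Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystandardstorage" | Select-Object StorageAccountName, EnableHttpsTrafficOnly
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystandardstorage" -EnableHttpsTrafficOnly $true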

For more information about encryption in transit, see requiring secure transfer in Azure storage.

Azure file share encryption at rest

All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the SMB and NFS protocols.

By default, data stored in Azure Files is encrypted with Microsoft-managed keys. With Microsoft-managed keys, Microsoft holds the keys to encrypt/decrypt the data, and is responsible for rotating them on a regular basis. You can also choose to manage your own keys, which gives you control over the rotation process. If you choose to encrypt your file shares with customer-managed keys, Azure Files is authorized to access your keys to fulfill read and write requests from your clients. With customer-managed keys, you can revoke this authorization at any time, but this means that your Azure file share will no longer be accessible via SMB or the FileREST API.

Azure Files uses the same encryption scheme as the other Azure storage services such as Azure Blob storage. To learn more about Azure storage service encryption (SSE), see Azure storage encryption for data at rest.

Storage tiers

Azure Files offers two different media tiers of storage, SSD (solid-state disks) and HDD (hard disk drives), which allow you to tailor your shares to the performance and price requirements of your scenario:

  • SSD (premium): SSD file shares provide consistent high performance and low latency, within single-digit milliseconds for most IO operations, for IO-intensive workloads. SSD file shares are suitable for a wide variety of workloads like databases, web site hosting, and development environments. SSD file shares can be used with both Server Message Block (SMB) and Network File System (NFS) protocols. SSD file shares are available in the provisioned v1 billing model. SSD file shares offer a higher availability SLA than HDD file shares (see "Azure Files Premium Tier").

  • HDD (standard): HDD file shares provide a cost-effective storage option for general purpose file shares. HDD file shares are available with the provisioned v2 and pay-as-you-go billing models, although we recommend the provisioned v2 model for new file share deployments. For information about the SLA, see the Azure service-level agreements page (see "Storage Accounts").

When selecting a media tier for your workload, consider your performance and usage requirements. If your workload requires single-digit millisecond latency, or you're using SSD storage media on-premises, SSD file shares are probably the best fit. If low latency isn't as much of a concern, for example with team shares mounted on-premises from Azure or cached on-premises using Azure File Sync, HDD file shares may be a better fit from a cost perspective.

Once you've created a file share in a storage account, you can't directly move it to a different media tier. For example, to move an HDD file share to the SSD media tier, you must create a new SSD file share and copy the data from your original share to a new file share in the FileStorage account. We recommend using AzCopy to copy data between Azure file shares, but you may also use tools like robocopy on Windows or rsync for macOS and Linux.

See Understanding Azure Files billing for more information.

Azure File Sync region availability

For regional availability, see Products available by region.

The following regions require you to request access to Azure Storage before you can use Azure File Sync with them:

  • France South
  • South Africa West
  • UAE Central

To request access for these regions, follow the process in this document.

Redundancy

To protect the data in your Azure file shares against data loss or corruption, Azure Files stores multiple copies of each file as they are written. Depending on your requirements, you can select different degrees of redundancy. Azure Files currently supports the following data redundancy options:

  • Locally-redundant storage (LRS): With LRS, every file is stored three times within an Azure storage cluster. This protects against data loss due to hardware faults, such as a bad disk drive. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS might be lost or unrecoverable.
  • Zone-redundant storage (ZRS): With ZRS, three copies of each file are stored. However, these copies are physically isolated in three distinct storage clusters in different Azure availability zones. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. A write to storage isn't accepted until it's written to the storage clusters in all three availability zones.
  • Geo-redundant storage (GRS): With GRS, you have two regions, a primary region and a secondary region. Files are stored three times within an Azure storage cluster in the primary region. Writes are asynchronously replicated to a Microsoft-defined secondary region. GRS provides six copies of your data spread between two Azure regions. In the event of a major disaster such as the permanent loss of an Azure region due to a natural disaster or other similar event, Microsoft will perform a failover. In this case, the secondary becomes the primary, serving all operations. Because the replication between the primary and secondary regions is asynchronous, in the event of a major disaster, data not yet replicated to the secondary region will be lost. You can also perform a manual failover of a geo-redundant storage account.
  • Geo-zone-redundant storage (GZRS): You can think of GZRS as ZRS, but with geo-redundancy. With GZRS, files are stored three times across three distinct storage clusters in the primary region. All writes are then asynchronously replicated to a Microsoft-defined secondary region. The failover process for GZRS works the same as GRS.

Standard Azure file shares up to 5 TiB support all four redundancy types. Standard file shares larger than 5 TiB only support LRS and ZRS. Premium Azure file shares only support LRS and ZRS.

General purpose version 2 (GPv2) storage accounts provide two other redundancy options that Azure Files doesn't support: read accessible geo-redundant storage (RA-GRS) and read accessible geo-zone-redundant storage (RA-GZRS). You can provision Azure file shares in storage accounts with these options set, however Azure Files doesn't support reading from the secondary region. Azure file shares deployed into RA-GRS or RA-GZRS storage accounts are billed as GRS or GZRS, respectively.

Important

Geo-redundant and geo-zone-redundant storage have the capability to manually fail over storage to the secondary region. We recommend that you don't do this outside of a disaster when you're using Azure File Sync, because of the increased likelihood of data loss. In the event of a disaster where you would like to initiate a manual failover of storage, you'll need to open a support case with Microsoft to get Azure File Sync to resume sync with the secondary endpoint.

Migration

If you have an existing Windows Server 2012 R2 or newer file server, Azure File Sync can be installed directly in place, without the need to move data over to a new server. If you're planning to migrate to a new Windows file server as a part of adopting Azure File Sync, or if your data is currently located on Network Attached Storage (NAS), there are several possible migration approaches to use Azure File Sync with this data. Which migration approach you should choose depends on where your data currently resides.

See the Azure File Sync and Azure file share migration overview article for detailed guidance.

Antivirus

Because antivirus works by scanning files for known malicious code, an antivirus product might cause the recall of tiered files, resulting in high egress charges. Tiered files have the secure Windows attribute FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS set, and we recommend consulting with your software vendor to learn how to configure their solution to skip reading files with this attribute set (many do it automatically).
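
If you want to verify which files are currently tiered (for example, to confirm that your antivirus exclusions are working), the following sketch tests the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS bit directly; the path is a placeholder:

$RECALL_ON_DATA_ACCESS = 0x00400000   # FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS
Get-ChildItem -Path "D:\Data" -Recurse -File -Force |
    Where-Object { ([int]$_.Attributes -band $RECALL_ON_DATA_ACCESS) -ne 0 } |
    Select-Object FullName, Length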

Microsoft's in-house antivirus solutions, Windows Defender and System Center Endpoint Protection (SCEP), both automatically skip reading files that have this attribute set. We have tested them and identified one minor issue: when you add a server to an existing sync group, files smaller than 800 bytes are recalled (downloaded) on the new server. These files will remain on the new server and won't be tiered, because they don't meet the tiering size requirement (greater than 64 KiB).

Note

Antivirus vendors can check compatibility between their product and Azure File Sync using the Azure File Sync Antivirus Compatibility Test Suite, which is available for download on the Microsoft Download Center.

Backup

If cloud tiering is enabled, solutions that directly back up the server endpoint or a VM on which the server endpoint is located shouldn't be used. Cloud tiering causes only a subset of your data to be stored on the server endpoint, with the full dataset residing in your Azure file share. Depending on the backup solution used, tiered files will either be skipped and not backed up (because they have the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS attribute set), or they will be recalled to disk, resulting in high egress charges. We recommend using a cloud backup solution to back up the Azure file share directly. For more information, see About Azure file share backup or contact your backup provider to see if they support backing up Azure file shares.

If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. Before performing a backup, make sure there are no tiered files. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group, and existing files will be replaced with the version restored from backup. Volume-level restores won't replace newer file versions in the Azure file share or other server endpoints.

Note

Bare-metal (BMR) restore, VM restore, system restore (Windows built-in OS restore), and file-level restore with its tiered version (this happens when backup software backs up a tiered file instead of a full file) can cause unexpected results and aren't currently supported when cloud tiering is enabled. VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. Learn how.

Data Classification

If you have data classification software installed, enabling cloud tiering might result in increased cost for two reasons:

  1. With cloud tiering enabled, your hottest files are cached locally, and your coolest files are tiered to the Azure file share in the cloud. If your data classification software regularly scans all files in the file share, the files tiered to the cloud must be recalled whenever they're scanned.

  2. If the data classification software uses the metadata in the data stream of a file, the file must be fully recalled in order for the software to see the classification.

These increases in both the number of recalls and the amount of data being recalled can increase costs.

Azure File Sync agent update policy

The Azure File Sync agent is updated on a regular basis to add new functionality and to address issues. We recommend updating the Azure File Sync agent as new versions are available.

Major vs. minor agent versions

  • Major agent versions often contain new features and have an increasing number as the first part of the version number. For example: 17.0.0.0
  • Minor agent versions are also called "patches" and are released more frequently than major versions. They often contain bug fixes and smaller improvements but no new features. For example: 17.2.0.0

Upgrade paths

There are five approved and tested ways to install the Azure File Sync agent updates.

  1. Use Azure File Sync agent auto-upgrade feature to install agent updates. The Azure File Sync agent will auto-upgrade. You can select to install the latest agent version when available or update when the currently installed agent is near expiration. To learn more, see Automatic agent lifecycle management.
  2. Configure Microsoft Update to automatically download and install agent updates. We recommend installing every Azure File Sync update to ensure you have access to the latest fixes for the server agent. Microsoft Update makes this process seamless by automatically downloading and installing updates for you.
  3. Use AfsUpdater.exe to download and install agent updates. The AfsUpdater.exe is located in the agent installation directory. Double-click the executable to download and install agent updates. Depending on the release version, you might need to restart the server.
  4. Patch an existing Azure File Sync agent by using a Microsoft Update patch file, or a .msp executable. The latest Azure File Sync update package can be downloaded from the Microsoft Update Catalog. Running an .msp executable will upgrade your Azure File Sync installation with the same method used automatically by Microsoft Update. Applying a Microsoft Update patch will perform an in-place upgrade of an Azure File Sync installation.
  5. Download the newest Azure File Sync agent installer from the Microsoft Download Center. To upgrade an existing Azure File Sync agent installation, uninstall the older version and then install the latest version from the downloaded installer. The server registration, sync groups, and any other settings are maintained by the Azure File Sync installer.

Note

The downgrade of the Azure File Sync agent isn't supported. New versions often include breaking changes compared to older versions, which makes downgrading unsupported. If you encounter any problems with your current agent version, reach out to support or upgrade to the latest available release.

Automatic agent lifecycle management

The Azure File Sync agent will auto-upgrade. You can select either of two modes and specify a maintenance window in which the upgrade will be attempted on the server. This feature is designed to help you with agent lifecycle management, either by providing a guardrail that prevents your agent from expiring or by allowing for a no-hassle, stay-current setting.

  1. The default setting will attempt to prevent the agent from expiring. Within 21 days of an agent's posted expiration date, the agent will attempt to self-upgrade once a week during the selected maintenance window. This option doesn't eliminate the need for taking regular Microsoft Update patches.
  2. Optionally, you can select that the agent will automatically upgrade itself as soon as a new agent version becomes available (currently not applicable to clustered servers). This update will occur during the selected maintenance window and allow your server to benefit from new features and improvements as soon as they become generally available. This is the recommended, worry-free setting that will provide major agent versions as well as regular update patches to your server. Every agent released is at GA quality. If you select this option, Microsoft will flight the newest agent version to you. Clustered servers are excluded. Once flighting is complete, the agent will also become available on Microsoft Update and Microsoft Download Center.

Changing the auto-upgrade setting

The following instructions describe how to change the settings after you've completed the installer, if you need to make changes.

Open a PowerShell console and navigate to the directory where you installed the sync agent, then import the server cmdlets. By default this would look something like this:

cd 'C:\Program Files\Azure\StorageSyncAgent'
Import-Module -Name .\StorageSync.Management.ServerCmdlets.dll

You can run Get-StorageSyncAgentAutoUpdatePolicy to check the current policy setting and determine if you want to change it.

To change the current policy setting to the delayed update track, you can use:

Set-StorageSyncAgentAutoUpdatePolicy -PolicyMode UpdateBeforeExpiration

To change the current policy setting to the immediate update track, you can use:

Set-StorageSyncAgentAutoUpdatePolicy -PolicyMode InstallLatest -Day <day> -Hour <hour>

Note

If flighting has already completed for the latest agent version and the agent auto update policy is changed to InstallLatest, the agent will not auto-upgrade until the next agent version is flighted. To update to an agent version that has completed flighting, use Microsoft Update or AfsUpdater.exe. To check if an agent version is currently flighting, check the supported versions section in the release notes.

Agent lifecycle and change management guarantees

Azure File Sync is a cloud service which continuously introduces new features and improvements. This means that a specific Azure File Sync agent version can only be supported for a limited time. To facilitate your deployment, the following rules guarantee you have enough time and notification to accommodate agent updates/upgrades in your change management process:

  • Major agent versions are supported for at least six months from the date of initial release.
  • We guarantee there is an overlap of at least three months between the support of major agent versions.
  • Warnings are issued for registered servers using a soon-to-be expired agent at least three months prior to expiration. You can check if a registered server is using an older version of the agent under the registered servers section of a Storage Sync Service.
  • The lifetime of a minor agent version is bound to the associated major version. For example, when agent version 17.0.0.0 is set to expire, agent versions 17.*.*.* will all be set to expire together.

Note

Installing an agent version with an expiration warning will display a warning but succeed. Attempting to install or connect with an expired agent version isn't supported and will be blocked.

Next steps