Quota's "peak usage" based on "Size on disk" with Azure File Sync cached folders

Miclaud 1 Reputation point

I have an on-premises Windows Server 2012 R2 machine where I'm caching Azure file shares using Azure File Sync.
On the Azure sync service I enabled cloud tiering with a 1-day policy (files are kept in the local cache only if they were edited within the last 24 hours).

The problem is that if I enable FSRM quotas on the cache directory of my on-premises machine, the Peak Usage doesn't correspond to the logical size of the folder but to the actual size on disk.
As you can see, my F:\ARCHIVI\Backup folder has a hard limit of 23 GB, but I was able to "upload" 325 GB of data by copying less than 23 GB at a time into the local volume and waiting for tiering to free up local space.
Is there any way to set up quotas based on the "Size" attribute instead of "Size on disk"? My goal is to prevent wild, uncontrolled growth of the Azure file shares by my employees.
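For context on the distinction the question hinges on: a file's logical "Size" and its "Size on disk" diverge whenever the filesystem stores less than the full byte range, which is exactly what a tiered file does. A minimal POSIX Python sketch illustrating the two numbers with an ordinary sparse file (an assumption for illustration, not actual Azure File Sync tiering; `st_blocks` is not available on Windows):

```python
import os
import tempfile

def sizes(path):
    """Return (logical_size, size_on_disk) in bytes for a file.

    st_size is the logical length; st_blocks is the number of
    512-byte blocks actually allocated on disk (POSIX only).
    """
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# Create a sparse file: 100 MiB logical size, almost nothing allocated.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(100 * 1024 * 1024 - 1)  # seek past a 100 MiB hole
    f.write(b"\0")                 # write a single byte at the end
    path = f.name

logical, on_disk = sizes(path)
print(f"logical={logical} on_disk={on_disk}")
os.remove(path)
```

On a filesystem that supports sparse files, `on_disk` is a few KiB while `logical` is 100 MiB, which is the same gap FSRM's disk-based accounting misses for tiered files.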

Here’s my quotas report:

As you can see, my "Backup" folder shows 325 GB of Size and only 344 KB of "Size on disk". I'm curious why the quota reports a Peak Usage of 1.54 MB instead, but it probably counts some system files I'm not aware of.


Thank you in advance for any help you can provide me.


2 answers

  1. Sumarigo-MSFT 44,001 Reputation points Microsoft Employee

@Miclaud Welcome to the Microsoft Q&A forum, and thank you for posting your query here!

FSRM has nothing to do with AFS. If you are asking specifically whether FSRM can be changed to enforce quotas on "Size", that is for the Windows on-premises team to look at.

Additional information: there are a couple of AFS documents on cloud tiering with FSRM, e.g.
https://learn.microsoft.com/en-us/azure/storage/file-sync/file-sync-choose-cloud-tiering-policies. From that document:

    You can still enable cloud tiering if you have a volume-level FSRM quota. Once an FSRM quota is set, the free space query APIs that get called automatically report the free space on the volume as per the quota setting.

    • When an FSRM quota is configured, it looks at the logical size of the file to determine whether the quota is met.
    o E.g., if a 1 GB file is tiered, FSRM still counts it as a 1 GB file instead of 0.
    • When an FSRM quota is configured, the volume free space policy looks at the free space reported by the FSRM quota. It does not look at the actual volume free space percentage.
    o E.g., if a 5 GB quota is set on a 10 GB volume, tiering sees 5 GB reported.
    • Cloud tiering does not recall files to meet the policies.
    o E.g., if the cloud tiering policy was set to 20% free space and the volume is later doubled, files are not recalled to meet the 20%.
    • Tiering can tier files only after they are synced.
    • Tiering runs every hour and tends to tier approximately 5% more than needed.
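The interaction described in the bullets above can be sketched as a toy model (hypothetical helper names for illustration only, not actual FSRM or AFS code):

```python
def fsrm_reported_free_gb(quota_limit_gb, logical_usage_gb):
    """FSRM counts logical size, so a tiered 1 GB file
    still consumes 1 GB of quota even if it is ~0 on disk."""
    return max(quota_limit_gb - logical_usage_gb, 0)

def tiering_sees_free_gb(quota_free_gb, actual_volume_free_gb,
                         quota_configured=True):
    """With a volume-level FSRM quota, the tiering free-space query
    returns the quota's notion of free space, not the volume's."""
    return quota_free_gb if quota_configured else actual_volume_free_gb

# E.g. a 10 GB volume with a 5 GB quota and 1 GB of logical usage:
# tiering sees 4 GB free, regardless of the actual volume free space.
free = tiering_sees_free_gb(fsrm_reported_free_gb(5, 1), 9)
print(free)
```

This is only a model of the documented behavior; the real free-space query happens inside the filter driver when the volume free space policy is evaluated.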

    For example:
    • Volume: 10 GB
    • Quota configured on the root of the volume, which is also the root of the server endpoint
    • Quota: 20 GB
    Note: by default the UI allows a maximum of 10 GB, but the cmdlet allows setting 20 GB.
    • Free space percent: 20%

    If you have any additional questions or need further clarification, please let me know.


    Please do not forget to "Accept the answer" and "up-vote" wherever the information provided helps you, as this can be beneficial to other community members.

  2. Miclaud 1 Reputation point

    I've finally found a solution: with the Azure File Sync service configured on a newer Windows Server release like 2019, the quota's "Peak Usage" refers to the logical space used by the folder, as it should.
    It's a shame that the still-supported 2012 R2 release hasn't received a fix for this, but at least we have a solution to the problem.

    Thank you everybody for your help.
