@A to the E, thank you for your time and patience!
NFSv3 has a write scheduler that merges and batches client writes into fewer, larger write calls to the Blob backend.
It merges contiguous client IOs into larger IOs up to a maximum block size (100 MB today) and can batch multiple such PB (Put Block) calls to reduce the number of PBL (Put Block List) calls.
In short, say the client sends 500 x 1 MB requests, some of which are sequential (by offset) and some of which are not. We try to merge the sequential ones into larger PB calls, issue multiple of these PB calls in parallel (up to 64), and then issue a single PBL call to commit the blocks uploaded by all of those parallel PB calls.
To get the best performance, the server should receive chunks of the file in sequence (e.g. the cp command on Unix systems sends a file sequentially).
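To make the merge-and-commit flow concrete, here is a minimal sketch in Python. It is not the actual NFSv3 server code; the `put_block` and `put_block_list` callables are hypothetical stand-ins for the Put Block and Put Block List REST operations, and only the coalescing/parallelism behaviour described above is modeled.

```python
from dataclasses import dataclass
import concurrent.futures

MAX_BLOCK_SIZE = 100 * 1024 * 1024   # 100 MB merge limit per Put Block
MAX_PARALLEL_PB = 64                 # up to 64 Put Block calls in flight

@dataclass
class WriteIO:
    offset: int
    data: bytes

def coalesce(ios):
    """Merge offset-contiguous writes into chunks no larger than MAX_BLOCK_SIZE."""
    merged = []
    for io in sorted(ios, key=lambda w: w.offset):
        if (merged
                and merged[-1].offset + len(merged[-1].data) == io.offset
                and len(merged[-1].data) + len(io.data) <= MAX_BLOCK_SIZE):
            # Contiguous with the previous chunk and still under the limit: extend it.
            merged[-1] = WriteIO(merged[-1].offset, merged[-1].data + io.data)
        else:
            merged.append(WriteIO(io.offset, io.data))
    return merged

def flush(ios, put_block, put_block_list):
    """Upload merged chunks as parallel Put Blocks, then commit with one Put Block List."""
    chunks = coalesce(ios)
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_PARALLEL_PB) as pool:
        block_ids = list(pool.map(lambda c: put_block(c.offset, c.data), chunks))
    put_block_list(block_ids)   # single commit for every block uploaded above
```

With the 500 x 1 MB example above, the offset-sequential writes collapse into a handful of up-to-100 MB Put Blocks, and the whole batch is committed with one Put Block List.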
| Writing cost for hot tier | Best case |
|---|---|
| Avg. file size (in MB) | 36000 |
| Max size per write in a Put Block (in MB) | 100 |
| No. of write operations per file | 360 |
| Cost of write operations per 10k (Hot tier, US East 2, LRS) | $0.065 |
| Write cost per file | $0.002 |
| Files written per month | 1710 |
| Write cost for 1710 files (~63 TB) | $4.001 |
Note: This cost goes up as the "Max size per write in a Put Block (in MB)" value decreases. For example, if it were reduced to 50 MB, the number of write operations would double and the cost would double to roughly $8.
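If you want to reproduce the hot-tier figures, the arithmetic behind the table is just the following (prices as quoted above; a small illustrative Python snippet):

```python
# Hot-tier write cost, reproducing the table above.
avg_file_size_mb  = 36_000        # average file size in MB
max_put_block_mb  = 100           # max merged size per Put Block
price_per_10k_ops = 0.065         # hot tier, US East 2, LRS, per 10,000 write operations
files_per_month   = 1_710

ops_per_file  = avg_file_size_mb / max_put_block_mb          # 360 write operations
cost_per_file = ops_per_file / 10_000 * price_per_10k_ops    # ~$0.002
monthly_cost  = cost_per_file * files_per_month              # ~$4.00 for ~63 TB

# Halving the max Put Block size to 50 MB doubles the operation count, and so the cost.
print(f"${monthly_cost:.3f} per month, ${monthly_cost * 2:.3f} at a 50 MB max block size")
```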
| Moving from hot tier to archive tier | |
|---|---|
| Cost of write operations per 10k (Archive tier, US East 2, LRS) | $0.130 |
| No. of archive write operations (moving 1 blob from hot -> archive = 1 operation) | 1710 |
| Total | $0.022 |
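The archive-move total follows the same pattern; a short sketch reproducing it:

```python
# Hot -> archive move cost, reproducing the table above.
archive_price_per_10k = 0.130   # archive tier, US East 2, LRS, per 10,000 write operations
blobs_moved           = 1_710   # one hot -> archive move counts as one write operation

total = blobs_moved / 10_000 * archive_price_per_10k   # ~$0.022
print(f"${total:.3f}")
```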
Please let us know if you have any further queries; I'm happy to assist.
Please do not forget to "Accept the answer" and "up-vote" wherever the information provided helps you, as this can be beneficial to other community members.