1. Which file-sharing protocol is Azure HPC Cache designed to accelerate?
   - NFS
   - NTFS
   - SFTP
2. Ideal workloads for Azure HPC Cache are heavy in which operation?
   - Directory operations (listing folder content)
   - Read operations (requesting file content)
   - Write operations (creating or modifying content)
3. Azure HPC Cache is designed to perform best with data of what size?
   - Kilobyte files in a megabyte working set
   - Megabyte files in a gigabyte working set
   - Gigabyte files in a terabyte working set
4. Which of the following best explains why Azure HPC Cache isn't a good fit for accelerating users' personal data files?
   - Personal data files aren't shared with others because they're private.
   - A single client accesses personal data files.
   - Personal data files aren't sensitive and don't need to be encrypted.
5. Suppose a genetic research company is considering Azure HPC Cache for a workload that compares gigabyte-sized DNA files against the same reference DNA file. The company uses hundreds of cloud clients running an NFS workload. What would make this already good fit even better?
   - Installing a load balancer to distribute the workload
   - Installing a proxy server to handle network traffic
   - Installing Azure ExpressRoute to maximize bandwidth