Azure Well-Architected Framework perspective on Azure NetApp Files
Azure NetApp Files is an Azure-native, enterprise-class file storage service for running file-based workloads in the cloud. Use this fully managed service to:
- Choose service and performance levels.
- Manage data protection.
- Create Azure NetApp Files accounts.
- Create capacity pools.
- Create volumes.
- Create and manage file shares that have high performance, low latency, high availability, and scalability.
Use the familiar protocols and tools that you use for on-premises enterprise applications. Azure NetApp Files supports Server Message Block (SMB), Network File System (NFS), and dual-protocol volumes. You can use Azure NetApp Files for file sharing, high-performance computing, home directories, and databases.
This article assumes that as an architect, you reviewed the file storage options and chose Azure NetApp Files as the service for your workloads. The guidance in this article provides architectural recommendations that are mapped to the principles of the Well-Architected Framework pillars.
Important: How to use this guide
Each section has a design checklist that presents architectural areas of concern along with design strategies localized to the technology scope.
Also included are recommendations for the technology capabilities that can help materialize those strategies. The recommendations don't represent an exhaustive list of all configurations that are available for Azure NetApp Files and its dependencies. Instead, they list the key recommendations mapped to the design perspectives. Use the recommendations to build your proof-of-concept or to optimize your existing environments.
Foundational architecture that demonstrates the key recommendations: Moodle deployment with Azure NetApp Files.
Technology scope
This review focuses on the interrelated decisions for the following Azure resource:
- Azure NetApp Files
Reliability
The purpose of the Reliability pillar is to provide continued functionality by building enough resilience and the ability to recover quickly from failures.
Reliability design principles provide a high-level design strategy applied to individual components, system flows, and the system as a whole.
Design checklist
Start your design strategy based on the design review checklist for Reliability. Determine its relevance to your business requirements while keeping in mind the scalability and performance of your workloads. Extend the strategy to include more approaches as needed.
Design your workload to align with business objectives and avoid unnecessary complexity or overhead. Use a practical and balanced approach to make decisions that reflect the needs of your workload. Your Azure NetApp Files deployment choices can affect other components. For example, the subnet size for your Azure NetApp Files deployment determines the number of available IP addresses. And your network features setting determines the available networks and features.
Identify user flows and system flows. Understand customer needs and business requirements so that you can effectively plan for reliability and optimize security. To limit access to and from necessary networks, use the principle of least privilege when you assign permissions. To authorize access to Azure NetApp Files data, use features or services such as network security groups (NSGs), Microsoft Entra ID hybrid identities, Microsoft Entra Domain Services, Active Directory Domain Services (AD DS), and Lightweight Directory Access Protocol (LDAP). To restrict default access to Azure NetApp Files volumes, use features such as file locking.
Define reliability targets and recovery targets. Visualize recovery targets and drive actions to achieve reliability goals and recoverability goals for your workload. To improve reliability and recovery, define these targets and develop an understanding of Azure NetApp Files solutions. Targets help optimize snapshots, high availability between availability zones, cross-zone and cross-region replication, and SMB continuous availability for supported applications.
Build redundancy. Deploy your workload across availability zones and regions to build redundancy in the workload and supporting infrastructure. This approach ensures that you can quickly recover from failures. Active-passive deployments can handle production loads only in the primary region, but the deployment fails over to the secondary passive region when necessary.
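For example, availability zone volume placement can be automated through the management API. The following is a minimal sketch that uses the azure-mgmt-netapp Python SDK; the subscription, resource names, subnet ID, and zone value are placeholders, and property names can vary by SDK version.

```python
# Minimal sketch: pin an Azure NetApp Files volume to an availability zone.
# Assumes the azure-identity and azure-mgmt-netapp packages are installed and
# that the NetApp account, capacity pool, and delegated subnet already exist.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Volume

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

volume = Volume(
    location="westeurope",
    creation_token="vol-app1",              # export path (file path) of the volume
    usage_threshold=100 * 1024**3,          # 100 GiB quota, in bytes
    subnet_id="/subscriptions/.../subnets/anf-delegated",  # delegated subnet resource ID
    service_level="Premium",
    protocol_types=["NFSv4.1"],
    zones=["1"],                            # place the volume in availability zone 1
)

poller = client.volumes.begin_create_or_update(
    "rg-anf-demo", "anf-account", "pool1", "vol-app1", volume
)
print(poller.result().provisioning_state)
```

Deploy the compute that mounts the volume in the same zone so that the application and its storage share a failure domain.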
Recommendations
Recommendation | Benefit |
---|---|
Use future-proof sizing to appropriately determine your delegated subnet size. | Appropriately size your subnet to ensure the long-term resiliency of your design and the scalability of your applications and workloads. |
Enable standard networking features on your delegated subnet for enhanced resiliency and extra connectivity patterns. | Select standard network features so you can have high IP limits and standard virtual network features, including extra connectivity patterns and NSGs and user-defined routes on delegated subnets. |
Use built-in Azure NetApp Files features to build redundancy and ensure the reliability of recovery targets. Use snapshots to protect business-critical data and enhance recoverability. Use the Azure NetApp Files availability zone volume placement feature to deploy volumes in a specific availability zone so that they align with Azure compute and other services in the same zone. Use cross-zone or cross-region replication to design for local or remote disaster recovery. To protect your data from zonal or regional failures, use snapshot technology so you can replicate your Azure NetApp Files instances across specific Azure availability zones or regions. | Snapshots add stability, scalability, and fast recoverability without affecting performance. They provide the foundation for other redundancy solutions, including backup, cross-region replication, and cross-zone replication. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures. To design resilient solutions, use Azure services that support availability zones. Cross-zone and cross-region replication minimizes data transfer, which reduces cost, and replicates only changed blocks across regions, which creates a low recovery point objective (RPO). |
Ensure application resilience during service maintenance events, such as platform, service, or software upgrades. For more information, see Address application disruptions. | Some applications might require tuning to handle input/output pauses of up to 30-45 seconds. To maintain application performance and continuity, familiarize yourself with the application's resiliency settings so you can handle storage service maintenance events. |
For SMB or dual-protocol workloads, enable continuous availability for SMB to ensure availability during maintenance events. | If you create an SMB or dual-protocol volume, use continuous availability so that SMB transparent failover for maintenance doesn't interrupt the server applications that store their data on the volume. Some workloads don't support this feature. |
Use the right performance tier, volume sizes, and quality of service (QoS) settings to ensure proper performance. For volumes that have automatic QoS settings, the quota that's assigned to the volume and the selected service level determine the throughput limit. For volumes that have manual QoS settings, you can define the throughput limit individually. Understand how your workload's needs compare to Azure NetApp Files offerings. For more information, see Performance considerations. | Choose the correct performance tier to avoid unnecessary complexity and overhead and to help manage the total cost of ownership (TCO). |
Configure application filtering properly for your firewalls. To ensure that you use the correct configuration, see the following resources: - Understand network-attached storage (NAS) protocols - Guidelines for network planning - Understand guidelines for AD DS site design and planning | Correct application filtering prevents problems such as dropped connections when you search directories or create items. These problems commonly affect Windows Server Active Directory, LDAP, and SMB traffic. Properly configured application filtering ensures seamless connectivity and efficient network operations. |
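The standard network features and SMB continuous availability recommendations in the preceding table correspond to volume properties in the management API. The following is a minimal sketch that uses the azure-mgmt-netapp Python SDK; the names and IDs are placeholders, property names can vary by SDK version, and continuous availability applies only to supported SMB workloads.

```python
# Minimal sketch: create an SMB volume with standard network features and
# SMB continuous availability enabled. Names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Volume

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

smb_volume = Volume(
    location="westeurope",
    creation_token="vol-smb-app",
    usage_threshold=500 * 1024**3,          # 500 GiB quota, in bytes
    subnet_id="/subscriptions/.../subnets/anf-delegated",
    service_level="Premium",
    protocol_types=["CIFS"],                # SMB volumes use the CIFS protocol type
    network_features="Standard",            # standard network features: higher IP limits, NSGs, UDRs
    smb_continuously_available=True,        # transparent failover during maintenance events
)

client.volumes.begin_create_or_update(
    "rg-anf-demo", "anf-account", "pool1", "vol-smb-app", smb_volume
).result()
```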
Security
The purpose of the Security pillar is to provide confidentiality, integrity, and availability guarantees to the workload.
The Security design principles provide a high-level design strategy for achieving those goals by applying approaches to the technical design of your file storage configuration.
Design checklist
Start your design strategy based on the design review checklist for Security. Identify vulnerabilities and controls to improve the security posture. Extend the strategy to include more approaches as needed.
Isolate, filter, and control network traffic across both ingress and egress flows. Control network traffic to mitigate potential security events. You can filter traffic between Azure resources with an NSG. Or use user-defined routes on the Azure NetApp Files delegated subnet to limit traffic.
Implement strict, conditional, and auditable identity and access management. Strict access to resources provides a first line of defense. To protect your data resources, use custom role-based access control (RBAC) roles, protocol-based locking tools, and built-in storage policies.
Establish a security baseline that aligns with compliance requirements, industry standards, and platform recommendations. Storage policies and key management can help you establish a baseline of security with industry-standard tools. Azure NetApp Files also supports many of the built-in security recommendations for NFS, SMB, and dual-protocol volumes.
Use modern, industry-standard methods to encrypt data. Ensure that your data remains secure. Azure NetApp Files provides solutions and features to ensure data security, including double encryption at rest. For Windows Server Active Directory users, Azure NetApp Files supports Advanced Encryption Standard (AES) encryption.
Develop a security strategy. Azure NetApp Files supports NFS, SMB, and dual-protocol volumes. If you use dual-protocol volumes, understand the security needs of those volumes and determine which security style suits your workload's needs. If you need a Windows Server Active Directory connection for SMB, NFSv4.1 Kerberos, or LDAP lookups, you should align that security baseline to compliance requirements, industry standards, and platform recommendations. Azure NetApp Files uses standard CryptoMod to generate AES-256 encryption keys, which you can apply to your SMB server. To develop an appropriate security strategy, enable LDAP over TLS, encrypt SMB connections to the domain controller, assign administrator privileged users for SMB volumes, and ensure that you have a backup policy and security protocols.
Set appropriate access and ownership configurations. Set user privileges and restricted roles to help mitigate mistakes and improper actions. To maintain security, set appropriate share-access permissions, ownership roles, and the ownership mode for shares, files, and folders. To achieve optimal security and mitigate mistakes, identify and understand the various access management solutions for NFS, SMB, and dual-protocol shares.
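To illustrate part of the security strategy described in this checklist, the following is a minimal sketch of a Windows Server Active Directory connection that enables AES encryption, LDAP over TLS, and encrypted domain controller connections by using the azure-mgmt-netapp Python SDK. The domain details, credentials, and group names are placeholders, and some property names are assumptions that can vary by SDK version.

```python
# Minimal sketch: attach an Active Directory connection to a NetApp account
# with AES encryption, LDAP over TLS, and encrypted DC connections enabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import NetAppAccount, ActiveDirectory

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

ad_connection = ActiveDirectory(
    domain="contoso.local",                 # placeholder AD DS domain
    dns="10.0.0.4,10.0.0.5",                # DNS servers that resolve the domain
    smb_server_name="ANFSMB",               # NetBIOS prefix for the SMB machine account
    username="anf-join-user",               # account allowed to join computers to the domain
    password="<secret-from-key-vault>",     # don't hard-code secrets in real deployments
    aes_encryption=True,                    # AES authentication for the AD connection
    ldap_over_tls=True,                     # secure LDAP lookups
    encrypt_dc_connections=True,            # SMB3-encrypted domain controller traffic (assumed property)
    backup_operators=["backup-svc"],        # optional privileged users (assumed property)
)

account = NetAppAccount(location="westeurope", active_directories=[ad_connection])
client.accounts.begin_create_or_update("rg-anf-demo", "anf-account", account).result()
```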
Recommendations
Recommendation | Benefit |
---|---|
Implement access management. Use a custom RBAC role for Azure NetApp Files to limit access for operations. For more information, see Azure custom roles. Review the list of Azure NetApp Files resource provider operations. Restrict default access to Azure NetApp Files volumes. Lock Azure subscriptions, resource groups, and resources to protect them from accidental user deletions and modifications. | Lock resources so that users don't accidentally create or delete volumes. You can also prevent users from creating snapshots or backups. |
Set appropriate protocol-specific access controls. Enable access control lists (ACLs) on NFSv4.1 volumes if you create an NFSv4.1 or dual-protocol volume. Restrict mount path change permissions. For more information, see Configure Unix permissions and change the ownership mode for NFS and dual-protocol volumes. For SMB or dual-protocol volumes, use ACLs and SMB to control mounting and access via SMB to the shares. For more information, see Modify share-level ACLs with Microsoft Management Console and Manage SMB share ACLs. Use the New Technology File System (NTFS) security style with SMB shares. For more information, see Access control overview. | ACLs provide granular file security in NFSv4.1 and SMB. With NTFS, you can use NTFS ACLs. Use ACLs to limit access to necessary users. |
Set appropriate share-access permissions, ownership, and the ownership mode for NFS shares. NFS uses an export policy to control volume access permissions via IP addresses. For more information, see the following resources: - Understand NAS share permissions - Configure an export policy for NFS or dual-protocol volumes - Configure Unix permissions and change the ownership mode The owner and ownership mode dictate who owns and who can change the ownership of the volume. Set appropriate file access permissions for files and folders within NFS shares. For more information, see Understand NAS file permissions and Understand mode bits. | Share-access permissions limit who can mount an NFS volume, the volume permissions, and who can change the volume owner. The file permissions control access to specific files and folders in a file system and are more granular than share permissions. |
Use a built-in or custom storage policy to restrict insecure volumes or snapshots. For more information, see Azure Policy definitions. | Restrict insecure volumes and use a policy to create snapshots to maintain the reliability and integrity of your applications. |
Configure standard network features and an NSG on the Azure NetApp Files delegated subnet to filter network traffic to or from the Azure NetApp Files endpoint. Consider limiting the routes and deploying user-defined routes on the Azure NetApp Files delegated subnet or network to allow only necessary traffic. | An NSG and user-defined routes prevent unwanted or potentially malicious traffic from accessing Azure NetApp Files. User-defined routes help direct traffic only where you need it and drop unnecessary traffic. |
Adopt an effective strategy for your account keys. To mitigate risk, rotate the account keys periodically. To manage your own keys for volumes, use customer-managed keys. | Rotate account keys to help deter access from malicious actors. Use customer-managed keys if you have a security requirement or regulation for self-managed keys. |
Reduce the visibility of resources. If you configure a snapshot policy or take snapshots, hide the snapshot path for NFSv3 and SMB volumes. By default, the snapshot directory path is hidden from NFSv4.1 clients. For NFSv4.1 and dual-protocol volumes, disable the showmount functionality. For SMB, enable access-based enumeration and nonbrowsable SMB shares. | Limit the visibility of resources to limit access. When you hide the snapshot path, you hide the directory path from NFSv3 and SMB clients, which adds an extra layer of protection. The directory remains accessible. The showmount functionality on NFS clients enables users to see exported file systems on an NFS server, and security probes can flag this capability as a vulnerability. Some applications require the showmount functionality, so disable it only after you confirm that they don't. In SMB deployments, access-based enumeration creates a perimeter on your data, which prevents unnecessary access to the share. Nonbrowsable SMB shares prevent the Windows client from browsing the SMB share. The share doesn't show up in the Windows file browser or in the list of shares when you run the net view \\server /all command. |
Decide which security style to implement if you have dual-protocol volumes. SMB and NFS use different permission models for user access and group access. With dual-protocol volumes, configure Azure NetApp Files to use either the UNIX or the NTFS security style. Only one security style can be active. | Understand the needs of your dual-protocol workloads to help optimize security. |
Use double encryption for sensitive data at rest. By default, Azure NetApp Files encrypts data at rest. To configure double encryption, set the encryption type to double when you create a capacity pool. | Double encryption encrypts data at the hardware level (encrypted SSD drives) and at the software level on each volume. |
Enable AES encryption if you establish a connection to a Windows Server Active Directory server. This option enables AES encryption authentication support for the admin account of the Windows Server Active Directory connection. For more information, see Create and manage Windows Server Active Directory connections. | AES encryption is a native and industry-standard platform encryption method that helps protect your data. |
Enable LDAP over TLS if you establish a connection to a Windows Server Active Directory server and you use LDAP. For more information, including restrictions about when you can use LDAP, see the following resources: - LDAP encryption - Create and manage Windows Server Active Directory connections - Configure AD DS LDAP over TLS | LDAP over TLS secures communication between an Azure NetApp Files volume and the Windows Server Active Directory LDAP server. |
Encrypt SMB connections to the domain controller. For more information, see Create and manage Windows Server Active Directory connections. | Encrypted domain controller connections only use the SMB3 protocol. |
Determine and configure backup policy users, security privilege users, and administrators. If you establish a connection to a Windows Server Active Directory server, add the backup policy users, security privilege users, and administrators. For more information, see Create and manage Windows Server Active Directory connections. | This option grants extra security privileges to AD DS domain users or groups. |
Enable NFSv4.1 Kerberos for NFSv4.1 and dual-protocol volumes. | Depending on your encryption level, Kerberos offers initial authentication, integrity checking, and privacy via a Kerberos ticket exchange. For more information, see Understand data-in-transit encryption. |
Enable SMB3 protocol encryption for in-flight SMB3 data. For more information, see the following resources: - SMB shares - Understand data-in-transit encryption - SMB encryption | SMB clients that don't use SMB3 encryption can't access a volume that has this setting enabled. Data at rest is encrypted regardless of this setting. |
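Several of the protocol-level recommendations in the preceding table map to volume properties. The following is a minimal sketch of an NFSv4.1 volume with Kerberos enabled, a restrictive export policy rule, and a hidden snapshot path, based on the azure-mgmt-netapp Python SDK. The client range and names are placeholders, and the export policy fields are assumptions that can vary by SDK version.

```python
# Minimal sketch: NFSv4.1 volume with Kerberos enabled, a restrictive export
# policy, and the snapshot path hidden from clients. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import (
    ExportPolicyRule,
    Volume,
    VolumePropertiesExportPolicy,
)

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

export_policy = VolumePropertiesExportPolicy(
    rules=[
        ExportPolicyRule(
            rule_index=1,
            allowed_clients="10.1.0.0/24",  # only this subnet can mount the volume
            nfsv41=True,
            nfsv3=False,
            unix_read_write=True,
            has_root_access=False,          # squash root on clients
            # The rule also exposes kerberos5/5i/5p read and write flags to
            # require Kerberos-only access; exact names vary by SDK version.
        )
    ]
)

secure_volume = Volume(
    location="westeurope",
    creation_token="vol-secure",
    usage_threshold=200 * 1024**3,
    subnet_id="/subscriptions/.../subnets/anf-delegated",
    service_level="Premium",
    protocol_types=["NFSv4.1"],
    kerberos_enabled=True,                  # NFSv4.1 Kerberos
    snapshot_directory_visible=False,       # hide the .snapshot path from clients
    export_policy=export_policy,
)

client.volumes.begin_create_or_update(
    "rg-anf-demo", "anf-account", "pool1", "vol-secure", secure_volume
).result()
```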
Cost optimization
Cost Optimization focuses on detecting spend patterns, prioritizing investments in critical areas, and optimizing in others to meet the organization's budget while meeting business requirements.
The Cost Optimization design principles provide a high-level design strategy for achieving those goals and making tradeoffs as necessary in the technical design related to Azure NetApp Files and its environment.
Design checklist
Start your design strategy based on the design review checklist for Cost Optimization for investments. Fine-tune the design so that the workload is aligned with the budget that's allocated for the workload. Your design should use the right Azure capabilities, monitor investments, and find opportunities to optimize over time.
Optimize component costs. Determine the specifics of your storage account and design your Azure NetApp Files capacity pools accordingly to optimize costs and lower your TCO. To improve performance and prevent unnecessary costs, appropriately size your capacity pools to your needs and consolidate volumes into larger capacity pools with the appropriate QoS settings and performance tier. Use the Azure NetApp Files cool access feature to tier infrequently accessed data to lower-cost Azure storage, as shown in the sketch after this checklist.
Optimize environment costs. Production workloads require proper data protection and disaster recovery. Azure NetApp Files offers several built-in options, including snapshots that provide space-optimized restore points, backups that efficiently protect data on lower-cost Azure storage tiers, and cross-region and cross-zone replication. To help optimize the cost of your deployment and ensure disaster preparedness, understand how each option suits your workload.
Understand and calculate pricing. Understand your business needs so that you understand pricing. Azure NetApp Files offers several tools, including the Azure NetApp Files performance calculator, to help you accurately estimate pricing.
Optimize scalability costs. Use Azure NetApp Files to meet your shifting business needs and respond to changes in workloads. For example, you might move volumes to improve performance and lower costs. You can use dynamic volume sizing to efficiently scale your Azure NetApp Files volumes to meet performance and capacity requirements on demand.
Use partner solutions to optimize costs. Azure NetApp Files integrates with products like Azure VMware Solution and Microsoft SQL Server and is optimized for Oracle and SAP workloads. Understand Azure NetApp Files features and benefits to optimize your deployments and reduce the TCO. For example, you can use Azure NetApp Files data stores to increase storage capacity in Azure VMware Solution without using extra Azure VMware Solution nodes. You can also use the TCO estimator to calculate estimated costs for Azure VMware Solution and Azure NetApp Files.
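As an example of the cool access approach in this checklist, the following is a minimal sketch that enables cool access on a capacity pool and a volume by using the azure-mgmt-netapp Python SDK. The names, sizes, and coolness period are placeholders, and the property names are assumptions that can vary by SDK version.

```python
# Minimal sketch: enable cool access on a capacity pool, then tier volume data
# that has been inactive for 31 days to lower-cost Azure storage.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import CapacityPool, Volume

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, ACCOUNT, POOL = "rg-anf-demo", "anf-account", "pool-cool"

pool = CapacityPool(
    location="westeurope",
    service_level="Standard",
    size=4 * 1024**4,                       # 4 TiB pool, in bytes
    cool_access=True,                       # the pool must allow cool access
)
client.pools.begin_create_or_update(RG, ACCOUNT, POOL, pool).result()

volume = Volume(
    location="westeurope",
    creation_token="vol-archive",
    usage_threshold=2 * 1024**4,            # 2 TiB quota
    subnet_id="/subscriptions/.../subnets/anf-delegated",
    service_level="Standard",
    protocol_types=["NFSv3"],
    cool_access=True,                       # tier cold blocks to the cool tier
    coolness_period=31,                     # days of inactivity before tiering (assumed property)
)
client.volumes.begin_create_or_update(RG, ACCOUNT, POOL, "vol-archive", volume).result()
```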
Recommendations
Consider the following recommendations to optimize cost when you configure your Azure NetApp Files account.
Recommendation | Benefit |
---|---|
Use the right performance tier, volume sizes, and QoS settings to improve performance and prevent unnecessary costs. For volumes that have manual QoS settings, you can define the throughput limit individually. To plan for performance in Azure NetApp Files, understand performance tiers and QoS settings. | The quota that's assigned to the volume and the selected service level determine the throughput limit for a volume that has automatic QoS settings. Ensure that you use the appropriate performance tier and QoS settings to help maintain performance. |
Use cool access to transparently tier infrequently accessed data to lower-cost Azure storage. Configure cool access so that inactive data moves from Azure NetApp Files storage to an Azure storage account. | Cool access uses a lower-cost storage tier, which reduces the TCO. The transition to the cool tier is seamless for users and applications. |
Size capacity pools appropriately without introducing unnecessary overhead or free space. You don't need to maintain free space in a capacity pool. Only increase capacity pool sizes when you need to increase a volume's size or create a new volume. Reduce capacity pool sizes when you delete volumes or reduce a volume size. | Appropriately size capacity pools to ensure that you only pay for the storage that you need. |
Consolidate as many volumes as possible within a few large capacity pools to share resources. Take advantage of manual QoS to rightsize performance and capacity. Use Azure NetApp Files to share a single capacity pool between multiple volumes from disparate workloads. Set capacity pools to manual QoS so that you can configure each volume with a custom size and throughput. For more information, see Throughput limit examples of volumes in a manual QoS capacity pool. | Combine as many volumes as possible within the same capacity pool to optimize resources and costs. |
Use native snapshots to create space-optimized restore points. | Use Azure NetApp Files snapshots to create space-efficient and cost-efficient restore points. |
Use Azure NetApp Files backup to efficiently protect data on low-cost Azure storage. Azure NetApp Files supports a fully managed backup solution for long-term recovery, archives, and compliance. | Backups are incremental and fully managed to reduce the TCO. |
Take advantage of cross-region or cross-zone replication as a cost-optimized disaster recovery solution. | Azure NetApp Files provides a fully managed and cost-optimized disaster recovery solution that has efficient replication technology. You can replicate data between zones or regions, without needing to stand up infrastructure. |
Implement dynamic volume shaping to provide on-demand performance and capacity only when needed. For a comparison of static volume shaping and dynamic volume shaping cost models, see Cost models. To determine whether Azure NetApp Files data stores can reduce your TCO when you use them with Azure VMware Solution, use the Azure NetApp Files for Azure VMware Solution TCO estimator. | Dynamic volume shaping provides nondisruptive volume resizing and service-level changes to meet changing performance requirements in real time. Ensure that you only pay for performance and capacity when you need it. |
Use Azure NetApp Files data stores when you deploy Azure VMware Solution. | Use Azure VMware Solution so that you can add Azure NetApp Files volumes to the data store capacity, which expands the capacity beyond what the virtual storage area network (vSAN) provides. You can increase storage capacity without extra Azure VMware Solution nodes. |
Use Azure NetApp Files to deploy small virtual machines (VMs) and take advantage of high network throughput limits when you deploy SQL Server. You can use Azure NetApp Files volumes as an SMB share to host SQL Server databases. For more information, see Benefits of using Azure NetApp Files for SQL Server deployments. | Azure VMs typically have higher network throughput than disk throughput. With Azure NetApp Files, you can use small Azure VM sizes, which reduces the TCO. |
Use Azure NetApp Files to deploy small VMs and take advantage of high network throughput limits when you deploy SAP workloads. This approach reduces the TCO. For more information, see SAP on Azure NetApp Files sizing best practices. | Use Azure NetApp Files volumes with various SAP workloads. Azure VMs typically have higher network throughput than disk throughput. With Azure NetApp Files, you can use small Azure VM sizes, which reduces the TCO. |
Use Azure NetApp Files to deploy small VMs and take advantage of higher network throughput limits when you deploy Oracle database workloads. This approach reduces the TCO. For more information, see Run demanding Oracle workloads in Azure. | Use Azure NetApp Files volumes with various Oracle database workloads. Azure VMs typically have higher network throughput than disk throughput. With Azure NetApp Files, you can use small Azure VM sizes, which reduces the TCO. |
Use Azure NetApp Files when you deploy Siemens Teamcenter product lifecycle management (PLM) software. This approach reduces the TCO. | Deploy Azure NetApp Files as a storage solution for Siemens Teamcenter PLM. You can use several cost optimization features to lower the TCO. |
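The dynamic volume shaping recommendation in the preceding table can be scripted. The following is a minimal sketch of on-demand resizing with the azure-mgmt-netapp Python SDK; the names are placeholders, and if the pool is already fully allocated you also need to grow the capacity pool.

```python
# Minimal sketch: grow a volume's quota on demand before a peak period and
# shrink it again afterward. Manual QoS pools let you adjust throughput the
# same way through the throughput_mibps property.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import VolumePatch

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, ACCOUNT, POOL, VOLUME = "rg-anf-demo", "anf-account", "pool1", "vol-app1"

def resize_volume(gib: int) -> None:
    """Set the volume quota (usage threshold) to the given size in GiB."""
    patch = VolumePatch(usage_threshold=gib * 1024**3)
    client.volumes.begin_update(RG, ACCOUNT, POOL, VOLUME, patch).result()

resize_volume(4096)   # scale up to 4 TiB ahead of a month-end batch run
# ... run the peak workload ...
resize_volume(1024)   # scale back down to 1 TiB afterward
```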
Operational excellence
Operational Excellence primarily focuses on procedures for development practices, observability, and release management.
The Operational Excellence design principles provide a high-level design strategy for achieving those goals for the operational requirements of the workload.
Design checklist
Start your design strategy based on the design review checklist for Operational Excellence for defining processes for observability, testing, and deployment related to your file storage configuration.
Develop the proper architecture and design for your workloads to optimize performance and security. Understand LDAP, application filtering, and the Windows Server Active Directory connector to successfully deploy Azure NetApp Files. To ensure that you protect your data, understand other features such as snapshots.
Deploy with confidence. Understand and manage limits to help optimize your architecture for Azure NetApp Files. Solutions such as Azure Resource Manager templates (ARM templates) can help you automate your deployment. And you can use test environments that you clone from existing workloads to do dry runs that use real data and scenarios. Use Azure NetApp Files built-in tools to increase your confidence in your deployment.
Monitor your routine operations to help optimize performance and better understand various workloads. Azure NetApp Files provides performance and capacity management tools, Azure-native monitoring features, and tools to manage your regional quotas and resource limits. Regularly test your production environment to adjust performance targets and optimize performance. Use snapshot-based cloning and other features in Azure NetApp Files to simulate your production workloads and environments and optimize your performance.
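For example, the following is a minimal sketch that queries volume metrics by using the azure-monitor-query Python package. The volume resource ID is a placeholder, and the metric names are assumptions; verify them against the Azure NetApp Files metrics that Azure Monitor exposes in your environment.

```python
# Minimal sketch: query recent IOPS and read latency for a volume by using
# Azure Monitor metrics. The resource ID and metric names are assumptions.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

VOLUME_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-anf-demo/providers/"
    "Microsoft.NetApp/netAppAccounts/anf-account/capacityPools/pool1/volumes/vol-app1"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    VOLUME_ID,
    metric_names=["ReadIops", "WriteIops", "AverageReadLatency"],  # assumed metric names
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
)

for metric in response.metrics:
    for point in metric.timeseries[0].data:
        print(metric.name, point.timestamp, point.average)
```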
Recommendations
Consider the following recommendations to optimize operational excellence when you configure your Azure NetApp Files account.
Recommendation | Benefit |
---|---|
Understand the Windows Server Active Directory connector, its options, and what the options do. Solution architectures that use Azure NetApp Files volumes require proper AD DS design and planning. | Azure NetApp Files features such as SMB volumes, dual-protocol volumes, and NFSv4.1 Kerberos volumes integrate with AD DS. Understand AD DS deployment to help optimize your Azure NetApp Files experience. |
Understand LDAP. LDAP provides a general-purpose, network-based directory service that you can use across diverse platforms to locate network objects. | LDAP is robust, scalable, and secure. An LDAP directory can contain millions of user objects and group objects. With Windows Server Active Directory, you can use multiple servers that replicate across multiple sites to improve performance and resiliency. The only LDAP server that Azure NetApp Files supports is Windows Server Active Directory, which includes both AD DS and Microsoft Entra Domain Services. |
Use native monitoring features to monitor the performance and health of application volumes, file shares, and databases. | Use monitoring features to set up alerts and notifications for critical events such as file system capacity and performance problems. Use alerts to take corrective actions proactively. |
Manage performance. Azure NetApp Files offers nondisruptive on-demand capacity scaling and service-level changes. | Use these features to scale up or scale down quickly when applications, file shares, and databases require adjustments. To help manage the performance and availability of your application, combine these capabilities with volume resizing or manual QoS settings. |
Use advanced snapshot data-protection and data-management capabilities. These features provide the foundation for data-protection solutions, including single-file restores, volume restores, clones, cross-region replication, and long-term retention. | Azure NetApp Files snapshot technology provides stability, scalability, and fast recoverability without affecting performance. |
Automate infrastructure deployments. Use ARM templates to define and deploy the resources that support your application, such as file shares, storage accounts, VMs, and databases. Also consider using infrastructure as code via Terraform to configure and deploy cloud infrastructure. For more information, see Deploy Azure NetApp Files by using Terraform and Terraform Azure Resource Manager and Azure NetApp Files account integration. | Use Azure NetApp Files to automate the deployment of your application infrastructure, file shares, and databases, which improves the deployment's efficiency and supports a lift-and-shift approach. |
Test deployments. Use cloning via snapshot restore to quickly create a new volume for testing. | Azure NetApp Files provides a reliable and scalable platform to deploy your application code, files, and databases. Use cloning to ensure that your application remains up to date and stable and to test new releases or upgrades easily. |
Test environments. You can use Azure NetApp Files to create and manage test environments quickly. Use snapshot restore to clone your production data to a new volume. | Test activities remain separate from production environments. Testing adds guardrails; for example, you can verify the effectiveness of a disaster recovery solution. Use test environments to test new releases or upgrades in a nondisruptive manner. Combine test environments with cross-zone or cross-region replication for further optimization. You can repurpose data that you typically replicate for disaster recovery only. |
Manage your regional quotas and resource limits. Review those levels regularly as your workload evolves. To increase your regional quota, request a regional capacity quota increase. To increase resource limits, request a limit increase. | Quotas affect cost and performance. Plan the amount of resources that your workload requires. Review that level regularly as the workload evolves. Understand these requirements to maintain performance and capacity as needs change. |
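The test deployment and test environment recommendations in the preceding table rely on snapshot-based cloning. The following is a minimal sketch that snapshots a production volume and creates a test volume from that snapshot by using the azure-mgmt-netapp Python SDK; the names are placeholders, and the clone must be at least as large as the source volume.

```python
# Minimal sketch: clone a production volume into a test volume by creating a
# snapshot and then creating a new volume from that snapshot's resource ID.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Snapshot, Volume

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, ACCOUNT, POOL = "rg-anf-demo", "anf-account", "pool1"

# 1. Snapshot the production volume.
snapshot = client.snapshots.begin_create(
    RG, ACCOUNT, POOL, "vol-prod", "pre-release-test", Snapshot(location="westeurope")
).result()

# 2. Create a new volume from the snapshot (a space-efficient clone).
test_volume = Volume(
    location="westeurope",
    creation_token="vol-prod-test",
    usage_threshold=1 * 1024**4,            # match or exceed the source volume size
    subnet_id="/subscriptions/.../subnets/anf-delegated",
    service_level="Premium",
    protocol_types=["NFSv4.1"],
    snapshot_id=snapshot.id,                # source snapshot resource ID
)
client.volumes.begin_create_or_update(RG, ACCOUNT, POOL, "vol-prod-test", test_volume).result()
```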
Performance efficiency
Performance Efficiency is about maintaining user experience even when there's an increase in load by managing capacity. The strategy includes scaling resources, identifying and optimizing potential bottlenecks, and optimizing for peak performance.
The Performance Efficiency design principles provide a high-level design strategy for achieving those capacity goals against the expected usage.
Design checklist
Start your design strategy based on the design review checklist for Performance Efficiency. Define a baseline that's based on key performance indicators for Azure NetApp Files.
Define performance targets. Understand the demands of your workload and assign numerical values to your performance targets. Azure NetApp Files offers tools and resources to quantify these demands, including calculators and formulas to convert throughput to input/output operations per second (IOPS). You should also understand how Azure NetApp Files service levels and performance tiers affect your deployment and meet the needs of your workload.
Conduct capacity planning. Understand the capacity requirements of your datasets so that you can plan for and optimize performance. Before you deploy your application, understand the nature of your workload and understand the resource limits of Azure NetApp Files. Ensure that Azure NetApp Files capabilities can handle your specific needs to effectively plan for your performance requirements. Make configuration choices that meet your performance and capacity needs.
Select the right service. When you define the needs of your Azure NetApp Files deployment, understand the different performance, capacity, data protection, and disaster recovery requirements. Based on your requirements, calibrate Azure NetApp Files to meet your specific throughput and general performance needs. In some cases, you can reduce storage costs.
Continually optimize performance. Monitor your volume performance to understand the shifting demands of your production workloads. Use these monitoring insights to optimize and tune your performance.
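To make the performance targets in this checklist concrete, the following worked example converts a throughput target to IOPS for a given I/O size and estimates the automatic QoS throughput limit of a volume. It assumes the published per-TiB throughput of each service level: 16 MiB/s for Standard, 64 MiB/s for Premium, and 128 MiB/s for Ultra.

```python
# Worked example: convert a throughput target to IOPS for a given I/O size,
# and estimate the automatic QoS throughput limit for a volume quota.

def throughput_to_iops(throughput_mibps: float, io_size_kib: float) -> float:
    """IOPS = (MiB/s * 1024 KiB per MiB) / KiB per operation."""
    return throughput_mibps * 1024 / io_size_kib

# Per-TiB throughput of each service level (MiB/s per TiB of volume quota).
SERVICE_LEVEL_MIBPS_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def auto_qos_limit_mibps(quota_tib: float, service_level: str) -> float:
    """Throughput limit of an automatic QoS volume, in MiB/s."""
    return quota_tib * SERVICE_LEVEL_MIBPS_PER_TIB[service_level]

# A database that does 8 KiB I/O and needs 320 MiB/s requires about 40,960 IOPS.
print(throughput_to_iops(320, 8))          # 40960.0
# A 5 TiB Premium volume with automatic QoS is capped at 320 MiB/s.
print(auto_qos_limit_mibps(5, "Premium"))  # 320
```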
Recommendations
Recommendation | Benefit |
---|---|
Review and understand the Azure NetApp Files service levels, which provide performance ceilings for Azure NetApp Files volumes. | When performance needs for a workload change, you can dynamically and seamlessly change the service levels of the volume to reduce cost. |
Understand resource limits of Azure NetApp Files. | Understand the resource limits of Azure NetApp Files to reduce the risk of overprovisioning or overtaxing your volume. |
Understand the nature of your workload. Convert throughput to IOPS so that you can get insights into performance. For more information, see Performance benchmark test recommendations. Use the Azure NetApp Files performance calculator to understand how your needs match with Azure NetApp Files. | Understand the IOPS, throughput, and latency requirements of your workload to determine what service level and volume capacity you require. Use the Azure NetApp Files performance calculator to choose the correct capacity and service level so that you can size your volume to maximize your performance and cost efficiency. |
Understand the capacity requirements of the dataset, and determine the appropriate QoS type. | Volume quotas and QoS settings affect performance and volume capacity. If you have a small dataset but high performance requirements, you can use manual QoS to overprovision the volume's performance. |
Understand the difference between regular volumes and large volumes. | Large volumes can provide more capacity and performance but come with minimum requirements that might not fit your workload. In the right context, large volumes can streamline your operation. |
Configure your capacity pool for your workload's capacity, throughput, and QoS policy. | Azure NetApp Files volumes use automatic QoS policies by default and are tied to the volume capacity. You can use manual QoS to configure the capacity pool and provide high performance for volumes. Manual QoS uses the capacity pool size for its performance levels. Understand how capacity pools work so that you can tune Azure NetApp Files to deliver the optimal performance for your business needs. |
Optimize and tune Azure NetApp Files performance. Review the performance FAQ for Azure NetApp Files. | Optimize performance through network and VM configurations. |
Consider data protection and disaster recovery options. | Data protection and disaster recovery can promote consistent performance because they provide a secondary site that can properly accommodate your workload's performance needs if outages occur. |
Monitor your Azure NetApp Files volume performance. | Monitor the performance of your Azure NetApp Files volumes to get insights into your workload's performance. Determine whether you need to adjust service levels or QoS policies for greater or lower performance. |
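The manual QoS guidance in the preceding table can be expressed as a short sketch: the pool's service level and size define the total throughput budget, and each volume receives its own share. The following uses the azure-mgmt-netapp Python SDK; the names, sizes, and throughput values are placeholders, and property names can vary by SDK version.

```python
# Minimal sketch: a manual QoS capacity pool whose throughput budget is split
# unevenly between a small, hot volume and a large, cold volume.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import CapacityPool, Volume

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, ACCOUNT, POOL = "rg-anf-demo", "anf-account", "pool-manual"

# A 10 TiB Premium pool provides roughly 10 * 64 = 640 MiB/s to distribute.
pool = CapacityPool(
    location="westeurope",
    service_level="Premium",
    size=10 * 1024**4,
    qos_type="Manual",
)
client.pools.begin_create_or_update(RG, ACCOUNT, POOL, pool).result()

def add_volume(name: str, quota_gib: int, throughput_mibps: float) -> None:
    """Create a volume with an explicit throughput assignment (manual QoS)."""
    volume = Volume(
        location="westeurope",
        creation_token=name,
        usage_threshold=quota_gib * 1024**3,
        subnet_id="/subscriptions/.../subnets/anf-delegated",
        service_level="Premium",
        protocol_types=["NFSv4.1"],
        throughput_mibps=throughput_mibps,
    )
    client.volumes.begin_create_or_update(RG, ACCOUNT, POOL, name, volume).result()

add_volume("vol-db-logs", quota_gib=512, throughput_mibps=500)   # small but hot
add_volume("vol-archive", quota_gib=8192, throughput_mibps=100)  # large but cold
```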
Azure policies
Azure provides an extensive set of built-in policies related to Azure NetApp Files. Some of the preceding recommendations can be audited through Azure policies. Consider the following policies that are related to security:
- Azure NetApp Files SMB volumes should use SMB3 encryption
- Azure NetApp Files NFSv4.1 volumes should use Kerberos data encryption
- Azure NetApp Files NFSv4.1 volumes should use Kerberos data integrity or data privacy
- Azure NetApp Files volumes shouldn't use the NFSv3 protocol type
Azure Advisor recommendations
Advisor is a personalized cloud consultant that helps you follow best practices to optimize your Azure NetApp Files deployments. Here are some recommendations that can help you improve the reliability, security, cost effectiveness, performance, and operational excellence of Azure NetApp Files.
Consider the following Advisor recommendation for cost effectiveness:
- Save on-demand costs with reserved capacity.
Consider the following Advisor recommendations for reliability:
- Implement disaster recovery strategies for your Azure NetApp Files resources.
- Enable continuous availability for SMB volumes.
- Review SAP configurations for timeout values that you use with Azure NetApp Files.