Application resilience FAQs for Azure NetApp Files

This article answers frequently asked questions (FAQs) about Azure NetApp Files application resilience.

What do you recommend for handling potential application disruptions due to storage service maintenance events?

Azure NetApp Files might undergo occasional planned maintenance (for example, platform updates, or service and software upgrades). From a file protocol (NFS/SMB) perspective, these maintenance operations are nondisruptive, as long as the application can handle the I/O pauses that might briefly occur during these events. The I/O pauses are typically short, ranging from a few seconds up to 30 seconds. The NFS protocol is especially robust, and client-server file operations continue normally. Some applications might require tuning to handle I/O pauses of as long as 30-45 seconds. As such, ensure that you're aware of the application's resiliency settings for coping with storage service maintenance events. For human-interactive applications that use the SMB protocol, the standard protocol settings are usually sufficient.

Important

To ensure a resilient architecture, it's crucial to recognize that the cloud operates under a shared responsibility model. This model encompasses the Azure cloud platform, its infrastructure services, the OS layer, and application vendors. Each of these components plays a vital role in gracefully handling potential application disruptions that might arise during storage service maintenance events.
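
Where tuning is needed, the resilience logic typically lives in the application layer. The following is a minimal Python sketch, not an Azure NetApp Files recommendation, of application-side retry logic that can ride out a brief I/O pause; the mount path and timing values are hypothetical and should be adapted to your own tolerance for delayed writes.

```python
import time
from pathlib import Path

# Illustrative values only: tune to your application's tolerance for delayed I/O.
DATA_FILE = Path("/mnt/anf-volume/app/data.log")  # hypothetical NFS mount point
MAX_WAIT_SECONDS = 45          # covers the 30-45 second pauses discussed above
RETRY_INTERVAL_SECONDS = 2

def resilient_append(payload: str) -> None:
    """Append a record, retrying transient I/O errors until MAX_WAIT_SECONDS elapses."""
    deadline = time.monotonic() + MAX_WAIT_SECONDS
    while True:
        try:
            with DATA_FILE.open("a") as f:
                f.write(payload + "\n")
            return
        except OSError:
            if time.monotonic() >= deadline:
                raise  # the pause lasted longer than the application can tolerate
            time.sleep(RETRY_INTERVAL_SECONDS)

if __name__ == "__main__":
    resilient_append("order-12345 processed")
```

Note that with a hard NFS mount the client usually blocks and retries transparently, so a loop like this mainly matters for applications that enforce their own I/O timeouts.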

Do I need to take special precautions for SMB-based applications?

Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover for specific applications, Azure NetApp Files supports the SMB Continuous Availability shares option. SMB Continuous Availability is only supported for workloads on:

Caution

Custom applications are not supported with SMB Continuous Availability and cannot be used with volumes that have SMB Continuous Availability enabled.

I'm running IBM MQ on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the NFS protocol?

If you're running the IBM MQ application in a shared files configuration, where the IBM MQ data and logs are stored on an Azure NetApp Files volume, the following considerations are recommended to improve resilience during storage service maintenance events:

Note

The number of messages that each MQ multi-instance pair should process is highly dependent on your specific environment. You need to decide how many MQ multi-instance pairs would be needed, or what the scale-up or scale-down rules would be.

The scale-out architecture comprises multiple IBM MQ multi-instance pairs deployed behind an Azure Load Balancer. Applications that communicate with IBM MQ would then be configured to reach the IBM MQ instances through the Azure Load Balancer. For support related to IBM MQ on shared NFS volumes, obtain vendor support from IBM.
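
To illustrate why NFS locking behavior matters here: each IBM MQ multi-instance pair relies on an exclusive lock held on the shared volume to decide which instance is active. The following Python sketch is not IBM MQ code; it only shows the active/standby file-lock pattern conceptually, using a hypothetical lock path on an Azure NetApp Files NFS volume.

```python
import fcntl  # POSIX byte-range locking; maps to NFSv4.1 locking on the mounted volume
import time

# Hypothetical lock file on the shared Azure NetApp Files NFS volume.
LOCK_PATH = "/mnt/anf-volume/mq/master.lock"

def run_as_active_or_standby(instance_name: str) -> None:
    """Block until this instance can take the exclusive lock, then act as the active instance."""
    with open(LOCK_PATH, "w") as lock_file:
        print(f"{instance_name}: waiting for the shared lock (standby)...")
        fcntl.lockf(lock_file, fcntl.LOCK_EX)   # blocks until the active instance releases or fails
        print(f"{instance_name}: lock acquired, now the active instance")
        try:
            while True:
                time.sleep(5)                    # placeholder for real message processing
        finally:
            fcntl.lockf(lock_file, fcntl.LOCK_UN)

if __name__ == "__main__":
    run_as_active_or_standby("instance-a")
```

During a maintenance event, the standby simply keeps waiting on the lock; failover only occurs if the active instance actually loses it.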

I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the NFS protocol?

If you're running Apache ActiveMQ, it's recommended that you deploy ActiveMQ high availability with pluggable storage lockers.

ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network, so that the LevelDB or KahaDB store is available to both the active and the passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directory, called "lock". There are some problems with this ActiveMQ HA model. It can lead to a "no-master" situation, where the replica isn't aware that it can lock the file. It can also lead to a "master-master" configuration that results in index or journal corruption and, ultimately, message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover.

Because most problems with this HA solution stem from inaccurate OS-level file locking, the ActiveMQ community introduced the concept of a pluggable storage locker in version 5.7 of the broker. This approach lets a user obtain the shared lock by a different means, using a row-level JDBC database lock instead of an OS-level filesystem lock. For support or consultancy on ActiveMQ HA architectures and deployments, you should contact OpenLogic by Perforce.
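
To make the row-level lock idea concrete, here's a minimal Python sketch of a database lease lock. It is not ActiveMQ's implementation (ActiveMQ configures its JDBC locker in the broker's XML configuration); the table name, broker ID, and lease duration are hypothetical, and SQLite stands in for the shared JDBC database purely for illustration.

```python
import sqlite3
import time

LEASE_SECONDS = 10          # illustrative lease duration
BROKER_ID = "broker-1"      # hypothetical broker identity

def try_acquire_lease(conn: sqlite3.Connection) -> bool:
    """Claim the single lock row if it is free, already ours, or its lease has expired."""
    now = time.time()
    conn.execute(
        "CREATE TABLE IF NOT EXISTS broker_lock "
        "(id INTEGER PRIMARY KEY CHECK (id = 1), owner TEXT, expires REAL)"
    )
    conn.execute("INSERT OR IGNORE INTO broker_lock (id, owner, expires) VALUES (1, NULL, 0)")
    cur = conn.execute(
        "UPDATE broker_lock SET owner = ?, expires = ? "
        "WHERE id = 1 AND (owner IS NULL OR owner = ? OR expires < ?)",
        (BROKER_ID, now + LEASE_SECONDS, BROKER_ID, now),
    )
    conn.commit()
    return cur.rowcount == 1   # exactly one broker wins; the loser stays passive and retries

if __name__ == "__main__":
    with sqlite3.connect("broker_lock.db") as db:
        while True:
            role = "active" if try_acquire_lease(db) else "passive"
            print(f"{BROKER_ID} is {role}; renewing or retrying shortly")
            time.sleep(LEASE_SECONDS / 2)
```

Because the database arbitrates ownership through an atomic row update and an expiring lease, a stale NFS lock can no longer produce a "master-master" split or a "no-master" stall.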

I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the SMB protocol?

The general industry recommendation is not to run your KahaDB shared storage on CIFS (Common Internet File System)/SMB. If you're having trouble maintaining an accurate lock state, check out the JDBC Pluggable Storage Locker, which can provide a more reliable locking mechanism. For support or consultancy on ActiveMQ HA architectures and deployments, you should contact OpenLogic by Perforce.

I’m running Boomi on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events?

If you're running Boomi, it's recommended that you follow the Boomi Best Practices for Run Time High Availability and Disaster Recovery.

Boomi recommends using Boomi Molecule to implement high availability for Boomi Atom. The Boomi Molecule system requirements state that either NFS with NFS locking enabled (NLM support) or SMB file shares can be used. In the context of Azure NetApp Files, NFSv4.1 volumes have NLM support.
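
If you want to confirm that byte-range locking is functional on the volume before placing Boomi Molecule data on it, a quick check like the following Python sketch can help; the mount path is hypothetical, and this is a generic NFS locking test rather than anything Boomi-specific.

```python
import fcntl
import sys

# Hypothetical mount point for the Azure NetApp Files NFSv4.1 volume.
TEST_FILE = "/mnt/anf-boomi/.lock-test"

def locking_works(path: str) -> bool:
    """Return True if an exclusive byte-range lock can be taken and released on the share."""
    try:
        with open(path, "w") as f:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            fcntl.lockf(f, fcntl.LOCK_UN)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    ok = locking_works(TEST_FILE)
    print("byte-range locking available" if ok else "locking failed: check NFS version and mount options")
    sys.exit(0 if ok else 1)
```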

Boomi recommends that an SMB file share be used with Windows VMs; for NFS, Boomi recommends Linux VMs.

Note

Azure NetApp Files Continuous Availability Shares are not supported with Boomi.

Next steps