Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux
This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with a standby node on Azure Red Hat Enterprise Linux virtual machines (VMs), by using Azure NetApp Files for the shared storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system ID is HN1. The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.
Note
This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article.
Before you begin, refer to the following SAP notes and papers:
- Azure NetApp Files documentation
- SAP Note 1928533 includes:
- A list of Azure VM sizes that are supported for the deployment of SAP software
- Important capacity information for Azure VM sizes
- Supported SAP software, and operating system (OS) and database combinations
- The required SAP kernel version for Windows and Linux on Microsoft Azure
- SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
- SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
- SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
- SAP Note 3108302 has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x
- SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
- SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
- SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
- SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP
- SAP Note 1900823: Contains information about SAP HANA storage requirements
- SAP Community Wiki: Contains all required SAP notes for Linux
- Azure Virtual Machines planning and implementation for SAP on Linux
- Azure Virtual Machines deployment for SAP on Linux
- Azure Virtual Machines DBMS deployment for SAP on Linux
- General RHEL documentation
- Azure-specific RHEL documentation:
- NFS v4.1 volumes on Azure NetApp Files for SAP HANA
Overview
One method for achieving HANA high availability is by configuring host auto failover. To configure host auto failover, you add one or more virtual machines to the HANA system and configure them as standby nodes. When an active node fails, a standby node automatically takes over. In the presented configuration with Azure virtual machines, you achieve auto failover by using NFS on Azure NetApp Files.
Note
The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The improved file lease-based locking mechanism in the NFSv4 protocol is used for I/O fencing.
Important
To build the supported configuration, you must deploy the HANA data and log volumes as NFSv4.1 volumes and mount them by using the NFSv4.1 protocol. The HANA host auto-failover configuration with standby node is not supported with NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented within one Azure virtual network:
- For client communication
- For communication with the storage system
- For internal HANA inter-node communication
The Azure NetApp Files volumes are in a separate subnet that's delegated to Azure NetApp Files.
For this example configuration, the subnets are:
- client: 10.9.1.0/26
- storage: 10.9.3.0/26
- hana: 10.9.2.0/26
- anf: 10.9.0.0/26 (delegated subnet to Azure NetApp Files)
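If you prefer to script the subnet creation instead of using the portal, the following Azure CLI sketch matches the example address ranges. The resource group and virtual network names (MyResourceGroup, MyVNet) are placeholders for this illustration, not values from this deployment.
# Example only - replace MyResourceGroup and MyVNet with your own names
az network vnet subnet create -g MyResourceGroup --vnet-name MyVNet -n client --address-prefixes 10.9.1.0/26
az network vnet subnet create -g MyResourceGroup --vnet-name MyVNet -n storage --address-prefixes 10.9.3.0/26
az network vnet subnet create -g MyResourceGroup --vnet-name MyVNet -n hana --address-prefixes 10.9.2.0/26
# The anf subnet must be delegated to Azure NetApp Files (Microsoft.NetApp/volumes)
az network vnet subnet create -g MyResourceGroup --vnet-name MyVNet -n anf --address-prefixes 10.9.0.0/26 --delegations "Microsoft.NetApp/volumes"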
Set up the Azure NetApp Files infrastructure
Before you proceed with the setup for Azure NetApp Files infrastructure, familiarize yourself with the Azure NetApp Files documentation.
Azure NetApp Files is available in several Azure regions. Check to see whether your selected Azure region offers Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure NetApp Files Availability by Azure Region.
Important considerations
As you're creating your Azure NetApp Files volumes for the SAP HANA scale-out with standby node scenario, be aware of the important considerations documented in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Sizing for HANA database on Azure NetApp Files
The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in Service levels for Azure NetApp Files.
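For example, at the Ultra service level, which provides 128 MiB/s of throughput per 1 TiB of provisioned volume quota, a 4-TiB volume yields roughly 4 x 128 = 512 MiB/s.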
While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
The configuration in this article is presented with simple Azure NetApp Files Volumes.
Important
For production systems, where performance is key, we recommend that you evaluate and consider using Azure NetApp Files application volume group for SAP HANA.
Deploy Azure NetApp Files resources
The following instructions assume that you've already deployed your Azure virtual network. The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same Azure virtual network or in peered Azure virtual networks.
Create a NetApp account in your selected Azure region by following the instructions in Create a NetApp account.
Set up an Azure NetApp Files capacity pool by following the instructions in Set up an Azure NetApp Files capacity pool.
The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the Ultra service level. For HANA workloads on Azure, we recommend using the Azure NetApp Files Ultra or Premium service level.
Delegate a subnet to Azure NetApp Files, as described in the instructions in Delegate a subnet to Azure NetApp Files.
Deploy Azure NetApp Files volumes by following the instructions in Create an NFS volume for Azure NetApp Files.
As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, HN1-data-mnt00001, HN1-log-mnt00001, and so on, are the volume names and nfs://10.9.0.4/HN1-data-mnt00001, nfs://10.9.0.4/HN1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes.
- volume HN1-data-mnt00001 (nfs://10.9.0.4/HN1-data-mnt00001)
- volume HN1-data-mnt00002 (nfs://10.9.0.4/HN1-data-mnt00002)
- volume HN1-log-mnt00001 (nfs://10.9.0.4/HN1-log-mnt00001)
- volume HN1-log-mnt00002 (nfs://10.9.0.4/HN1-log-mnt00002)
- volume HN1-shared (nfs://10.9.0.4/HN1-shared)
In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. For a more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data mounts on a single volume and all log mounts on a different single volume.
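If you deploy the volumes with a script rather than through the portal, the commands look similar to the following Azure CLI sketch. The resource group, location, NetApp account, capacity pool, and quota values are placeholders for this illustration; only the volume name, file path, protocol version, and the anf subnet correspond to the example configuration.
# Example only - repeat for each data, log, and shared volume with your own names and sizing
az netappfiles volume create -g MyResourceGroup -l westus2 --account-name MyNetAppAccount --pool-name MyCapacityPool --name HN1-data-mnt00001 --file-path HN1-data-mnt00001 --vnet MyVNet --subnet anf --protocol-types NFSv4.1 --service-level Ultra --usage-threshold 500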
Deploy Linux virtual machines via the Azure portal
First you need to create the Azure NetApp Files volumes. Then do the following steps:
Create the Azure virtual network subnets in your Azure virtual network.
Deploy the VMs.
Create the additional network interfaces, and attach the network interfaces to the corresponding VMs.
Each virtual machine has three network interfaces, which correspond to the three Azure virtual network subnets (client, storage, and hana). For more information, see Create a Linux virtual machine in Azure with multiple network interface cards.
Important
For SAP HANA workloads, low latency is critical. To achieve low latency, work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity. When you're onboarding a new SAP HANA system that uses SAP HANA Azure NetApp Files, submit the necessary information.
The next instructions assume that you've already created the resource group, the Azure virtual network, and the three Azure virtual network subnets: client, storage, and hana. When you deploy the VMs, select the client subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway.
Important
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
Create an availability set for SAP HANA. Make sure to set the max update domain.
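The availability set can also be created with the Azure CLI. A minimal sketch, assuming a resource group named MyResourceGroup and an availability set named hana-avset (both placeholders), looks like this:
# Example only - adjust names; 20 is the maximum update domain count
az vm availability-set create -g MyResourceGroup -n hana-avset --platform-update-domain-count 20 --platform-fault-domain-count 2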
Create three virtual machines (hanadb1, hanadb2, hanadb3) by doing the following steps:
a. Use a Red Hat Enterprise Linux image in the Azure gallery that's supported for SAP HANA. We used a RHEL-SAP-HA 7.6 image in this example.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated networking.
When you deploy the virtual machines, the network interface name is automatically generated. In these instructions, for simplicity, we'll refer to the automatically generated network interfaces, which are attached to the client Azure virtual network subnet, as hanadb1-client, hanadb2-client, and hanadb3-client.
Create three network interfaces, one for each virtual machine, for the storage virtual network subnet (in this example, hanadb1-storage, hanadb2-storage, and hanadb3-storage).
Create three network interfaces, one for each virtual machine, for the hana virtual network subnet (in this example, hanadb1-hana, hanadb2-hana, and hanadb3-hana).
Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the following steps:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for example, hanadb1), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach network interface drop-down list, select the already created network interfaces for the storage and hana subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example, hanadb2 and hanadb3).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all newly attached network interfaces.
Enable accelerated networking for the additional network interfaces for the storage and hana subnets by doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces, which are attached to the storage and hana subnets.
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-hana --accelerated-networking true
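To confirm that the setting took effect, you can query the enableAcceleratedNetworking property of each network interface, for example:
# Example only - repeat for each additional network interface
az network nic show --ids /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --query enableAcceleratedNetworking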
Start the virtual machines by doing the following steps:
a. In the left pane, select Virtual Machines. Filter on the virtual machine name (for example, hanadb1), and then select it.
b. In the Overview pane, select Start.
Operating system configuration and preparation
The instructions in the next sections are prefixed with one of the following:
- [A]: Applicable to all nodes
- [1]: Applicable only to node 1
- [2]: Applicable only to node 2
- [3]: Applicable only to node 3
Configure and prepare your OS by doing the following steps:
[A] Maintain the host files on the virtual machines. Include entries for all subnets. The following entries were added to /etc/hosts for this example.
# Storage
10.9.3.4 hanadb1-storage
10.9.3.5 hanadb2-storage
10.9.3.6 hanadb3-storage
# Client
10.9.1.5 hanadb1
10.9.1.6 hanadb2
10.9.1.7 hanadb3
# Hana
10.9.2.4 hanadb1-hana
10.9.2.5 hanadb2-hana
10.9.2.6 hanadb3-hana
[A] Add a network route, so that the communication to the Azure NetApp Files goes via the storage network interface.
In this example, we'll use NetworkManager to configure the additional network route. The following instructions assume that the storage network interface is eth1.
First, determine the connection name for device eth1. In this example, the connection name for device eth1 is Wired connection 1.
# Execute as root
nmcli connection
# Result
#NAME UUID TYPE DEVICE
#System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet eth0
#Wired connection 1 4b0789d1-6146-32eb-83a1-94d61f8d60a7 ethernet eth1
Then configure an additional route to the Azure NetApp Files delegated network via eth1.
# Add the following route
# ANFDelegatedSubnet/cidr via StorageSubnetGW dev StorageNetworkInterfaceDevice
nmcli connection modify "Wired connection 1" +ipv4.routes "10.9.0.0/26 10.9.3.1"
Reboot the VM to activate the changes.
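After the reboot, you can confirm that the route to the Azure NetApp Files delegated subnet is in place and points to the storage interface, for example:
# Verify the route to the Azure NetApp Files delegated subnet
ip route show 10.9.0.0/26
# Expected output similar to: 10.9.0.0/26 via 10.9.3.1 dev eth1 proto static metric 100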
[A] Install the NFS client package.
yum install nfs-utils
[A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS. Create configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration settings.
vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
[A] Create configuration file /etc/sysctl.d/ms-az.conf with additional optimization settings.
vi /etc/sysctl.d/ms-az.conf
# Add the following entries in the configuration file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
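The settings in both files are applied automatically at the next boot. If you want to load them immediately, without a reboot, one way is to reload the sysctl configuration, for example:
# Reload all sysctl configuration files, including the two files created above
sudo sysctl --system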
Tip
Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note 2382421.
[A] Adjust the sunrpc settings, as recommended in SAP note 3024346 - Linux Kernel Settings for NetApp NFS.
vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128
[A] Red Hat for HANA configuration.
Configure RHEL as described in SAP Notes 2292690, 2455582, and 2593824, and in Red Hat note 2447641.
Note
If you're installing HANA 2.0 SP04, you must install the package compat-sap-c++-7, as described in SAP Note 2593824, before you can install SAP HANA.
Mount the Azure NetApp Files volumes
[A] Create mount points for the HANA database volumes.
mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1
[1] Create node-specific directories for /usr/sap on HN1-shared.
# Create a temporary directory to mount HN1-shared
mkdir /mnt/tmp
# if using NFSv3 for this volume, mount with the following command
mount 10.9.0.4:/HN1-shared /mnt/tmp
# if using NFSv4.1 for this volume, mount with the following command
mount -t nfs -o sec=sys,nfsvers=4.1 10.9.0.4:/HN1-shared /mnt/tmp
cd /mnt/tmp
mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3
# unmount /hana/shared
cd
umount /mnt/tmp
[A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, defaultv4iddomain.com, and that the mapping is set to nobody.
Important
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on Azure NetApp Files: defaultv4iddomain.com. If there's a mismatch between the domain configuration on the NFS client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions for files on Azure NetApp Files volumes that are mounted on the VMs will be displayed as nobody.
sudo cat /etc/idmapd.conf
# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
[A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory structure where nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
For more details on how to change the nfs4_disable_idmapping parameter, see https://access.redhat.com/solutions/1749883.
[A] Mount the shared Azure NetApp Files volumes.
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount all volumes
sudo mount -a
For workloads that require higher throughput, consider using the nconnect mount option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA. Check whether nconnect is supported by Azure NetApp Files on your Linux release.
[1] Mount the node-specific volumes on hanadb1.
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
[2] Mount the node-specific volumes on hanadb2.
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
[3] Mount the node-specific volumes on hanadb3.
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
[A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.9.0.4:/HN1-data-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00002 from 10.9.0.4:/HN1-log-mnt00002
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/data/HN1/mnt00002 from 10.9.0.4:/HN1-data-mnt00002
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00001 from 10.9.0.4:/HN1-log-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/usr/sap/HN1 from 10.9.0.4:/HN1-shared/usr-sap-hanadb1
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/shared from 10.9.0.4:/HN1-shared/shared
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
Installation
In this example for deploying SAP HANA in a scale-out configuration with standby node on Azure, we've used HANA 2.0 SP4.
Prepare for HANA installation
[A] Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Execute the command passwd as root.
[1] Verify that you can log in via SSH to hanadb2 and hanadb3, without being prompted for a password.
ssh root@hanadb2
ssh root@hanadb3
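If password-less root SSH between the nodes isn't set up yet, one common approach is to distribute an SSH key from hanadb1. The following sketch assumes that root login over SSH is permitted on all nodes.
# Execute as root on hanadb1. Generate a key pair if one doesn't exist yet
ssh-keygen -t rsa -b 4096
# Copy the public key to the other nodes; you're prompted for the root password once per node
ssh-copy-id root@hanadb2
ssh-copy-id root@hanadb3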
[A] Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note 2593824.
yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1
[2], [3] Change ownership of the SAP HANA data and log directories to hn1adm.
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1
[A] Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-enable it after the HANA installation is done.
# Execute as root
systemctl stop firewalld
systemctl disable firewalld
HANA installation
[1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation and Update guide. In this example, we install SAP HANA scale-out with master, one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use the internal_network parameter and pass the address space for the subnet that's used for the internal HANA inter-node communication.
./hdblcm --internal_network=10.9.2.0/26
b. At the prompt, enter the following values:
- For Choose an action: enter 1 (for install)
- For Additional components for installation: enter 2, 3
- For installation path: press Enter (defaults to /hana/shared)
- For Local Host Name: press Enter to accept the default
- Under Do you want to add hosts to the system?: enter y
- For comma-separated host names to add: enter hanadb2, hanadb3
- For Root User Name [root]: press Enter to accept the default
- For roles for host hanadb2: enter 1 (for worker)
- For Host Failover Group for host hanadb2 [default]: press Enter to accept the default
- For Storage Partition Number for host hanadb2 [<<assign automatically>>]: press Enter to accept the default
- For Worker Group for host hanadb2 [default]: press Enter to accept the default
- For Select roles for host hanadb3: enter 2 (for standby)
- For Host Failover Group for host hanadb3 [default]: press Enter to accept the default
- For Worker Group for host hanadb3 [default]: press Enter to accept the default
- For SAP HANA System ID: enter HN1
- For Instance number [00]: enter 03
- For Local Host Worker Group [default]: press Enter to accept the default
- For Select System Usage / Enter index [4]: enter 4 (for custom)
- For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the default
- For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the default
- For Restrict maximum memory allocation? [n]: enter n
- For Certificate Host Name For Host hanadb1 [hanadb1]: press Enter to accept the default
- For Certificate Host Name For Host hanadb2 [hanadb2]: press Enter to accept the default
- For Certificate Host Name For Host hanadb3 [hanadb3]: press Enter to accept the default
- For System Administrator (hn1adm) Password: enter the password
- For System Database User (system) Password: enter the system's password
- For Confirm System Database User (system) Password: enter system's password
- For Restart system after machine reboot? [n]: enter n
- For Do you want to continue (y/n): validate the summary and if everything looks good, enter y
[1] Verify global.ini
Display global.ini, and ensure that the configuration for the internal SAP HANA inter-node communication is in place. Verify the communication section. It should have the address space for the hana subnet, and listeninterface should be set to .internal. Verify the internal_hostname_resolution section. It should have the IP addresses for the HANA virtual machines that belong to the hana subnet.
sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
# Example
#global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve
[communication]
internal_network = 10.9.2.0/26
listeninterface = .internal
[internal_hostname_resolution]
10.9.2.4 = hanadb1
10.9.2.5 = hanadb2
10.9.2.6 = hanadb3
[1] Add host mapping to ensure that the client IP addresses are used for client communication. Add the section public_hostname_resolution, and add the corresponding IP addresses from the client subnet.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
# Add the section
[public_hostname_resolution]
map_hanadb1 = 10.9.1.5
map_hanadb2 = 10.9.1.6
map_hanadb3 = 10.9.1.7
[1] Restart SAP HANA to activate the changes.
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB
[1] Verify that the client interface will be using the IP addresses from the client subnet for communication.
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.9.1.7"
"hanadb2","net_publicname","10.9.1.6"
"hanadb1","net_publicname","10.9.1.5"
For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP HANA internal network.
[A] Re-enable the firewall.
Stop HANA
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB
Re-enable the firewall
# Execute as root
systemctl start firewalld
systemctl enable firewalld
Open the necessary firewall ports
Important
Create firewall rules to allow HANA inter-node communication and client traffic. The required ports are listed on TCP/IP Ports of All SAP Products. The following commands are just an example. In this scenario, we use system number 03.
# Execute as root
sudo firewall-cmd --zone=public --add-port={30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp
Start HANA
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB
To optimize SAP HANA for the underlying Azure NetApp Files storage, set the following SAP HANA parameters:
- max_parallel_io_requests: 128
- async_read_submit: on
- async_write_submit_active: on
- async_write_submit_blocks: all
For more information, see I/O stack configuration for SAP HANA.
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini. For more information, see SAP Note 1999930.
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set during the installation, as described in SAP Note 2267798.
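As a sketch for an SAP HANA 2.0 system, the values listed above would go into the fileio section of global.ini; verify the exact section and parameter names against SAP Note 1999930 before applying them.
# Sketch only - confirm against SAP Note 1999930
[fileio]
max_parallel_io_requests = 128
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all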
The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA isn't implicitly aware of the storage limitation, and it won't automatically create a new data file when the file size limit of 16 TB is reached. If SAP HANA attempts to grow a file beyond 16 TB, the attempt will result in errors and, eventually, in an index server crash.
Important
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of the storage subsystem, set the following parameters in global.ini.
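A minimal sketch of such a configuration, assuming data volume striping is used to keep each data file below the limit, is shown here; treat the parameter names and the size threshold as assumptions and confirm them in the SAP HANA documentation before using them.
# Assumed example - verify parameter names and threshold in the SAP HANA documentation
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000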
Test SAP HANA failover
Simulate a node crash on an SAP HANA worker node. Do the following:
a. Before you simulate the node crash, run the following commands as hn1adm to capture the status of the environment:
# Check the landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | hanadb3 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | # Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
b. To simulate a node crash, run the following command as root on the worker node, which is hanadb2 in this case:
echo b > /proc/sysrq-trigger
c. Monitor the system for failover completion. When the failover has been completed, capture the status, which should look like the following:
# Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN # Check the landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | hanadb2 | no | info | | | 2 | 0 | default | default | master 2 | slave | worker | standby | worker | standby | default | - | | hanadb3 | yes | info | | | 0 | 2 | default | default | master 3 | slave | standby | slave | standby | worker | default | default |
Important
When a node experiences kernel panic, avoid delays with SAP HANA failover by setting kernel.panic to 20 seconds on all HANA virtual machines. The configuration is done in /etc/sysctl. Reboot the virtual machines to activate the change. If this change isn't performed, failover can take 10 or more minutes when a node is experiencing kernel panic.
Kill the name server by doing the following:
a. Prior to the test, check the status of the environment by running the following commands as hn1adm:
#Landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | hanadb3 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | # Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
b. Run the following commands as hn1adm on the active master node, which is hanadb1 in this case:
hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill
The standby node hanadb3 will take over as master node. Here is the resource state after the failover test is completed:
# Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY # Check the landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - | | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |
c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine, where the name server was killed). The hanadb1 node will rejoin the environment and will keep its standby role.
hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start
After SAP HANA has started on hanadb1, expect the following status:
# Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN # Check the landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - | | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |
d. Again, kill the name server on the currently active master node (that is, on node hanadb3).
hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill
Node hanadb1 will resume the role of master node. After the failover test has been completed, the status will look like this:
# Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN # Check the landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
e. Start SAP HANA on hanadb3, which will be ready to serve as a standby node.
hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start
After SAP HANA has started on hanadb3, the status looks like the following:
# Check the instance status sapcontrol -nr 03 -function GetSystemInstanceList & python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN # Check the landscape status python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
Next steps
- Azure Virtual Machines planning and implementation for SAP
- Azure Virtual Machines deployment for SAP
- Azure Virtual Machines DBMS deployment for SAP
- NFS v4.1 volumes on Azure NetApp Files for SAP HANA
- To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines (VMs).