Assigning the Linux Fence Agent Role to the related RHEL VMs allows you to use the fence_azure_arm fence agent to list the nodes and reboot the VMs.
$ fence_azure_arm --msi -o list
r9p2clazpn1,
r9p2clazpn2,
Applying this to the cluster configuration, however, is another issue.
pcs stonith create vmfence1 fence_azure_arm msi=true resourceGroup="ime-rg" subscriptionId="211b22e3-0480-4e9b-8284-48dc65ea9d39" pcmk_host_list=r9p2clazpn1 pcmk_host_map=r9p2clazpn1:211b22e3-0480-4e9b-8284-48dc65ea9d39 power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 op monitor interval=3600
[azureuser@r9p2clazpn1 ~]$ sudo pcs status
Cluster name: r9p2clazp
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: r9p2clazpn1 (version 2.1.6-9.el9-6fdc9deea29) - partition with quorum
* Last updated: Thu Dec 7 16:21:58 2023 on r9p2clazpn1
* Last change: Thu Dec 7 16:21:31 2023 by root via cibadmin on r9p2clazpn1
* 2 nodes configured
* 5 resource instances configured
Node List:
* Online: [ r9p2clazpn1 r9p2clazpn2 ]
Full List of Resources:
* Clone Set: locking-clone [locking]:
* Started: [ r9p2clazpn1 r9p2clazpn2 ]
* vmfence1 (stonith:fence_azure_arm): Starting r9p2clazpn2
Failed Resource Actions:
* vmfence1 start on r9p2clazpn1 returned 'error' at Thu Dec 7 16:21:31 2023 after 23.227s
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
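For reference, the same parameters could also be exercised by running the agent manually outside Pacemaker (a sketch, untested here; the --resourceGroup and --subscriptionId long options are assumed from the agent's metadata, matching the resourceGroup and subscriptionId values passed to the stonith resource above):

```shell
# Query the power status of one node via managed identity (MSI),
# using the same resource group and subscription as the stonith resource.
fence_azure_arm --msi -o status -n r9p2clazpn1 \
    --resourceGroup="ime-rg" \
    --subscriptionId="211b22e3-0480-4e9b-8284-48dc65ea9d39"
```

If a manual call like this succeeds while the stonith resource still fails to start, that would point at the pcs resource parameters rather than the Azure role assignment.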
Any suggestions?