Hello Rasitha Mudugama Hewage,

Thank you for reaching out on the Microsoft Q&A platform.
Cluster workloads should be placed on a logical network that is separate from the management network. Using the management vSwitch IP pool for Kubernetes nodes increases the risk of IP conflicts and can complicate troubleshooting and firewall configuration.
Troubleshoot network validation errors - https://learn.microsoft.com/en-us/azure/aks/aksarc/network-validation-errors
Create logical networks for Kubernetes clusters on Azure Local - https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-networks?tabs=azurecli
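As a rough sketch of the second article above, a dedicated logical network for AKS nodes can be created with the `az stack-hci-vm network lnet create` command. All names, addresses, and the VLAN below are placeholder assumptions; substitute the values for your own environment, and note that the exact parameters may differ depending on your CLI extension version.

```shell
# Sketch: create a static-IP logical network on Azure Local for AKS Arc nodes.
# All resource names, IP ranges, and the VLAN ID are example placeholders;
# the address space is deliberately separate from the management network.
az stack-hci-vm network lnet create \
  --subscription "<subscription-id>" \
  --resource-group "myResourceGroup" \
  --custom-location "myCustomLocation" \
  --name "aks-lnet" \
  --vm-switch-name "ComputeSwitch" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.32.0/24" \
  --gateway "10.220.32.1" \
  --dns-servers "10.220.32.4" \
  --ip-pool-start "10.220.32.20" \
  --ip-pool-end "10.220.32.200" \
  --vlan 7
```

Sizing the IP pool generously here (the range above leaves room beyond the initial node count) avoids having to recreate the logical network later when the cluster scales.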
Also, choose an IP range with enough addresses for node scaling, pods, and future expansion, and do not overlap IP pools between the management and cluster networks.
AKS enabled by Azure Arc network requirements - https://learn.microsoft.com/en-us/azure/aks/aksarc/network-system-requirements
Logical networks on Azure Stack HCI do not automatically create a classic Azure Virtual Network. Instead, they are mapped to Hyper-V or SDN vSwitches and require manual configuration (e.g., address space, gateways, VLANs).
For AKS clusters in Azure, by contrast, the underlying Azure Virtual Network is created only if one is specified during deployment; you can also reuse or peer with a pre-existing network.
When defining the logical network, choose static or dynamic IP pool settings, assign a gateway, and configure DNS; with that in place, you can create an AKS cluster and run a service application on it.
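Once the logical network exists, the step above can be sketched with `az aksarc create`, attaching the cluster to that network via its ARM ID. The resource names, control-plane IP, and group object ID below are placeholder assumptions for illustration only.

```shell
# Sketch: look up the logical network's ARM ID, then create an AKS Arc
# cluster attached to it. All names and IPs are example placeholders.
lnetId=$(az stack-hci-vm network lnet show \
  --resource-group "myResourceGroup" \
  --name "aks-lnet" \
  --query id -o tsv)

az aksarc create \
  --resource-group "myResourceGroup" \
  --name "myAksCluster" \
  --custom-location "myCustomLocation" \
  --vnet-ids "$lnetId" \
  --control-plane-ip "10.220.32.18" \
  --aad-admin-group-object-ids "<entra-group-object-id>" \
  --generate-ssh-keys
```

The control-plane IP should come from the logical network's address space but sit outside the node IP pool so it never collides with a node address.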
Please refer to the documentation for details: Create Kubernetes clusters using Azure CLI - AKS enabled by Azure Arc | Microsoft Learn, and let us know if this helps.
Regards
Himanshu