Azure VMware Solution networking and interconnectivity concepts
Azure VMware Solution offers a private cloud environment accessible from on-premises and Azure-based resources. Connectivity is delivered through services such as Azure ExpressRoute, VPN connections, or Azure Virtual WAN. These services require specific network address ranges and firewall ports to be opened.
When you deploy a private cloud, private networks for management, provisioning, and vMotion are created. You use these private networks to access vCenter Server and NSX-T Manager, and for virtual machine vMotion or deployment.
ExpressRoute Global Reach is used to connect private clouds to on-premises environments. It connects circuits directly at the Microsoft Enterprise Edge (MSEE) level. The connection requires a virtual network (vNet) with an ExpressRoute circuit to on-premises in your subscription. The reason is that vNet gateways (ExpressRoute gateways) can't transit traffic: you can attach two circuits to the same gateway, but the gateway won't send traffic from one circuit to the other.
Each Azure VMware Solution environment is its own ExpressRoute region (its own virtual MSEE device), which lets you connect Global Reach to the 'local' peering location. This design also lets you connect multiple Azure VMware Solution instances in one region to the same peering location.
For locations where ExpressRoute Global Reach isn't enabled, for example, because of local regulations, you have to build a routing solution using Azure IaaS VMs. For some examples, see Azure Cloud Adoption Framework - Network topology and connectivity for Azure VMware Solution.
Virtual machines deployed on the private cloud are accessible to the internet through the Azure Virtual WAN public IP functionality. For new private clouds, internet access is disabled by default.
There are two ways to establish interconnectivity in the Azure VMware Solution private cloud:
Basic Azure-only interconnectivity lets you manage and use your private cloud with only a single virtual network in Azure. This implementation is best suited for Azure VMware Solution evaluations or implementations that don't require access from on-premises environments.
Full on-premises to private cloud interconnectivity extends the basic Azure-only implementation to include interconnectivity between on-premises and Azure VMware Solution private clouds.
This article covers the key concepts that establish networking and interconnectivity, including requirements and limitations. In addition, this article provides you with the information you need to know to work with Azure VMware Solution to configure your networking.
Azure VMware Solution private cloud use cases
The use cases for Azure VMware Solution private clouds include:
- New VMware vSphere VM workloads in the cloud
- VM workload bursting to the cloud (on-premises to Azure VMware Solution only)
- VM workload migration to the cloud (on-premises to Azure VMware Solution only)
- Disaster recovery (Azure VMware Solution to Azure VMware Solution or on-premises to Azure VMware Solution)
- Consumption of Azure services
All use cases for the Azure VMware Solution service are enabled with on-premises to private cloud connectivity.
Azure virtual network interconnectivity
You can interconnect your Azure virtual network with the Azure VMware Solution private cloud implementation. You can manage your Azure VMware Solution private cloud, consume workloads in your private cloud, and access other Azure services.
The diagram below shows the basic network interconnectivity established at the time of a private cloud deployment. It shows the logical networking between a virtual network in Azure and a private cloud. This connectivity is established via a backend ExpressRoute that is part of the Azure VMware Solution service. The interconnectivity fulfills the following primary use cases:
- Inbound access to vCenter Server and NSX-T Manager from VMs in your Azure subscription.
- Outbound access from VMs on the private cloud to Azure services.
- Inbound access of workloads running in the private cloud.
In the fully interconnected scenario, you can access the Azure VMware Solution from your Azure virtual network(s) and on-premises. This implementation is an extension of the basic implementation described in the previous section. An ExpressRoute circuit is required to connect from on-premises to your Azure VMware Solution private cloud in Azure.
The diagram below shows the on-premises to private cloud interconnectivity, which enables the following use cases:
- Hot/Cold vSphere vMotion between on-premises and Azure VMware Solution.
- On-Premises to Azure VMware Solution private cloud management access.
For full interconnectivity to your private cloud, you need to enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments and your private cloud. For more information on the procedures, see the tutorial for creating an ExpressRoute Global Reach peering to a private cloud.
The following table describes the maximum limits for Azure VMware Solution.
| Resource | Limit |
| --- | --- |
| vSphere clusters per private cloud | 12 |
| Minimum number of ESXi hosts per cluster | 3 |
| Maximum number of ESXi hosts per cluster | 16 |
| Maximum number of ESXi hosts per private cloud | 96 |
| Maximum number of vCenter Servers per private cloud | 1 |
| Maximum number of HCX site pairings | 25 (any edition) |
| Maximum number of linked private clouds per Azure VMware Solution ExpressRoute circuit | 4. The virtual network gateway used determines the actual maximum number of linked private clouds. For more details, see About ExpressRoute virtual network gateways. |
| Maximum Azure VMware Solution ExpressRoute port speed | 10 Gbps. The virtual network gateway used determines the actual bandwidth. For more details, see About ExpressRoute virtual network gateways. |
| Maximum number of Azure Public IPv4 addresses assigned to NSX-T Data Center | 2,000 |
| vSAN capacity limits | 75% of total usable (keep 25% available for SLA) |
| VMware Site Recovery Manager - Maximum number of protected virtual machines | 3,000 |
| VMware Site Recovery Manager - Maximum number of virtual machines per recovery plan | 2,000 |
| VMware Site Recovery Manager - Maximum number of protection groups per recovery plan | 250 |
| VMware Site Recovery Manager - RPO values | 5 min or higher * |
| VMware Site Recovery Manager - Maximum number of virtual machines per protection group | 500 |
| VMware Site Recovery Manager - Maximum number of recovery plans | 250 |
* For information about Recovery Point Objective (RPO) lower than 15 minutes, see How the 5 Minute Recovery Point Objective Works in the vSphere Replication Administration guide.
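The host and vSAN limits in the table above lend themselves to a quick capacity sanity check. The sketch below is illustrative only: the function names and sample numbers are not part of any Azure SDK, and it encodes just the per-cluster (3-16 hosts), per-cloud (12 clusters, 96 hosts), and vSAN 75% utilization figures from the table.

```python
# Sketch of the Azure VMware Solution limits above; names and numbers used
# for the examples are illustrative, not from any Azure SDK.

SLA_SLACK = 0.25  # keep 25% of usable vSAN capacity free for the SLA

def max_consumable_tb(total_usable_tb: float) -> float:
    """Capacity you can consume while keeping 25% free for the SLA."""
    return total_usable_tb * (1 - SLA_SLACK)

def within_cluster_limits(hosts_per_cluster: int, clusters: int) -> bool:
    """Check a layout against the per-cluster and per-cloud host limits."""
    hosts_ok = 3 <= hosts_per_cluster <= 16          # 3-16 hosts per cluster
    cloud_ok = clusters <= 12 and hosts_per_cluster * clusters <= 96
    return hosts_ok and cloud_ok

print(max_consumable_tb(100.0))         # 75.0 TB consumable out of 100 TB usable
print(within_cluster_limits(16, 6))     # True: 96 hosts total, at the limit
print(within_cluster_limits(16, 7))     # False: 112 hosts exceeds the 96-host cap
```

Checks like these are useful when sizing a private cloud before deployment, since the limits are enforced by the service rather than configurable.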
For other VMware-specific limits, use the VMware configuration maximum tool.
Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about: