Having trouble communicating with VMs over VPN

Arwen 21 Reputation points
2020-11-08T00:12:07.08+00:00

I am testing out Azure with a virtual Cisco Meraki vMX. I have set it up in Azure and configured a VPN between my on-prem network and Azure. I have created two virtual networks in Azure: one for the Meraki and one for the servers. I created peerings between the two networks and also created routes in each network back to my home network. I can ping the IP of my vMX, but not my server. Any help would be appreciated.

Azure Virtual Network
An Azure networking service that is used to provision private networks and optionally to connect to on-premises datacenters.

2 answers

  1. Andreas Baumgarten 109.9K Reputation points MVP
    2020-11-08T10:51:04.33+00:00

    What kind of server did you deploy? If it's a Windows Server, the OS firewall blocks incoming ICMP (used for ping) by default.
    Maybe this will help.


    (If the reply was helpful please don't forget to upvote and/or accept as answer, thank you)

    Regards
    Andreas Baumgarten


  2. GitaraniSharma-MSFT 49,606 Reputation points Microsoft Employee
    2020-11-09T02:54:16.667+00:00

    Hello @Arwen ,

    Welcome to Microsoft Q&A Platform. Thanks for posting your query.

    Below are a few things that you need to validate/fix to get your setup working:

    1. The VNet peering between the Hub VNet (where the NVA is deployed) and the Spoke VNet (where the servers are deployed) should have the below options enabled:
      Traffic to remote virtual network - Allow
      Traffic forwarded from remote virtual network - Allow
      Please refer : https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?toc=%2fazure%2fvirtual-network%2ftoc.json#spoke-connectivity
      [Attached screenshot: vnet-peering-options.jpg]
    2. A UDR (User Defined Route) should be added to all subnets of the Spoke VNet (where the servers are deployed) with a return route as below:
      Address prefix - Address range of your on-premises network
      Next hop type - Virtual appliance
      Next hop address - IP address of your NVA, which handles the traffic.
      Please refer : https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview#user-defined
    3. Make sure that the NVA has been enabled for IP forwarding.
      NOTE : If the traffic sent to the NVA is destined for a different IP address, IP forwarding sends the traffic to the correct location.
      Please refer : https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface#enable-or-disable-ip-forwarding
    4. Since you mentioned you are not even able to ping the other subnet from the Meraki portal, it could be that NSGs are blocking access. Check the NSGs applied on the subnets and VMs to make sure traffic is allowed between subnets and no NSG is blocking access to the VMs.
      Please refer : https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview
    5. Enable ICMP through the Windows Firewall on your servers, or disable the Windows Firewall for a few minutes for testing. If it is a Linux server, try disabling the firewall and test again.
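    Step 1's peering options can also be set from the Azure CLI. This is a minimal sketch, assuming hypothetical resource group and VNet names (myRG, hub-vnet, spoke-vnet); substitute your own:

    ```shell
    # Hypothetical names: replace the resource group and VNet names with yours.
    # Hub -> spoke: allow VNet access and forwarded traffic.
    az network vnet peering create \
      --resource-group myRG \
      --name hub-to-spoke \
      --vnet-name hub-vnet \
      --remote-vnet spoke-vnet \
      --allow-vnet-access \
      --allow-forwarded-traffic

    # Spoke -> hub: the spoke must also accept traffic forwarded through the NVA,
    # otherwise return packets from on-premises are dropped at the peering.
    az network vnet peering create \
      --resource-group myRG \
      --name spoke-to-hub \
      --vnet-name spoke-vnet \
      --remote-vnet hub-vnet \
      --allow-vnet-access \
      --allow-forwarded-traffic
    ```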
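    Step 2's UDR can be sketched the same way. The names, the on-premises range (192.168.1.0/24), and the vMX's inside IP (10.0.1.4) below are placeholders for illustration only:

    ```shell
    # Hypothetical names and addresses: adjust to your environment.
    # Create a route table for the spoke (servers) VNet.
    az network route-table create \
      --resource-group myRG \
      --name spoke-rt

    # Add a return route: traffic for the on-premises range goes via the vMX.
    az network route-table route create \
      --resource-group myRG \
      --route-table-name spoke-rt \
      --name to-onprem \
      --address-prefix 192.168.1.0/24 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.0.1.4

    # Associate the route table with the servers' subnet.
    az network vnet subnet update \
      --resource-group myRG \
      --vnet-name spoke-vnet \
      --name servers-subnet \
      --route-table spoke-rt
    ```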
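    For step 3, IP forwarding is a property of the NVA's network interface in Azure (separate from any forwarding setting inside the guest OS). A sketch, assuming the vMX's NIC is named vmx-nic0:

    ```shell
    # Hypothetical NIC name: use the NIC attached to your vMX.
    # Without this, Azure drops packets whose destination IP is not
    # the NIC's own address, so the NVA never sees transit traffic.
    az network nic update \
      --resource-group myRG \
      --name vmx-nic0 \
      --ip-forwarding true
    ```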
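    For step 4, if an NSG turns out to be the blocker, a test rule permitting ICMP from on-premises could look like the below (NSG name and source prefix are placeholders; the Icmp protocol value requires a reasonably recent Azure CLI):

    ```shell
    # Hypothetical NSG name and on-premises prefix: substitute your own.
    # Allow inbound ICMP from the on-premises range for ping testing.
    az network nsg rule create \
      --resource-group myRG \
      --nsg-name servers-nsg \
      --name Allow-ICMP-OnPrem \
      --priority 100 \
      --direction Inbound \
      --access Allow \
      --protocol Icmp \
      --source-address-prefixes 192.168.1.0/24 \
      --destination-address-prefixes '*'
    ```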
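    For step 5 on Windows, rather than disabling the firewall entirely, you can open just ICMPv4 echo requests from an elevated command prompt inside the VM (built-in netsh syntax; ICMP type 8 is echo request):

    ```shell
    netsh advfirewall firewall add rule name="Allow ICMPv4-In" protocol=icmpv4:8,any dir=in action=allow
    ```

    Remember to remove or tighten this rule once testing is done.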

    You can use the below article as an overall reference for your setup (excluding the hub-spoke model):
    https://learn.microsoft.com/en-us/azure/virtual-network/tutorial-create-route-table-portal

    Hope this helps!

    Kindly let us know if the above helps or you need further assistance on this issue.

    ----------------------------------------------------------------------------------------------------------------

    Please "Accept the answer" if the information helped you. This will help us and others in the community as well.

