Hey,
I noticed some strange behaviour (tested in North Europe and Central US) that goes against what the docs say, namely that a VM with no NSG on either the NIC or the subnet should have open network access.
My situation:
- Created a VNet
- Created a subnet without an NSG
- Created a Linux VM in that subnet, also without an NSG. Verified that connecting to SSH (22) and HTTP (80) works from another VM in the same network.
- Two approaches tested:
a) Created a public IP for that VM and tried to connect
b) Created a load balancer with 2 VMs, balancing on port 80
- The results indicate that VMs without any NSG are unreachable from outside:
a) A single VM with a single public IP is unreachable on port 22. After adding an NSG allowing port 22 it starts to work; after disassociating the NSG it stops working again (see the first CLI sketch after this list).
b) The 2 VMs behind the load balancer work flawlessly only when they have NSGs assigned that allow port 22 (tested with the NSG on the NIC, none on the subnet). After disassociating the NSG from one VM's NIC, that particular VM stops being reachable. The backend pool is still reported healthy (I checked the metrics, and also when hitting F5 in the browser it randomly hangs, suggesting it tries to hit the VM without an NSG; the VM with an NSG works whenever it gets picked). A rough CLI version of this setup follows in the second sketch below.
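
For reference, here is a rough Azure CLI version of case (a). This is a minimal sketch, not my exact steps: names like MyGroup, MyVnet, MyVm are placeholders, and I'm assuming a Standard-SKU public IP here:

```
# VNet + subnet, no NSG anywhere
az group create --name MyGroup --location northeurope
az network vnet create --resource-group MyGroup --name MyVnet \
    --address-prefix 10.0.0.0/16 --subnet-name MySubnet --subnet-prefix 10.0.0.0/24

# Linux VM with a public IP but explicitly no NSG
# (--nsg "" tells az vm create to skip the per-VM NSG it would otherwise create)
az vm create --resource-group MyGroup --name MyVm --image Ubuntu2204 \
    --vnet-name MyVnet --subnet MySubnet --nsg "" \
    --public-ip-sku Standard --admin-username azureuser --generate-ssh-keys

# -> at this point, ssh to the public IP times out

# Attach an NSG allowing 22 to the NIC -> ssh starts working
# (az vm create names the NIC {vm}VMNic by default, hence MyVmVMNic)
az network nsg create --resource-group MyGroup --name MyNsg
az network nsg rule create --resource-group MyGroup --nsg-name MyNsg \
    --name AllowSSH --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 22
az network nic update --resource-group MyGroup --name MyVmVMNic \
    --network-security-group MyNsg

# Detach it again -> ssh stops working
az network nic update --resource-group MyGroup --name MyVmVMNic \
    --remove networkSecurityGroup
```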
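
And a similar sketch for case (b), again with placeholder names and assuming a Standard-SKU load balancer, probe and rule both on port 80, and two VMs created like MyVm above (with --nsg ""):

```
# Public load balancer with a health probe and balancing rule on port 80
az network public-ip create --resource-group MyGroup --name MyLbIp --sku Standard
az network lb create --resource-group MyGroup --name MyLb --sku Standard \
    --public-ip-address MyLbIp --frontend-ip-name MyFrontend --backend-pool-name MyPool
az network lb probe create --resource-group MyGroup --lb-name MyLb \
    --name MyProbe --protocol Tcp --port 80
az network lb rule create --resource-group MyGroup --lb-name MyLb \
    --name MyRule --protocol Tcp --frontend-port 80 --backend-port 80 \
    --frontend-ip-name MyFrontend --backend-pool-name MyPool --probe-name MyProbe

# Put each VM's NIC ipconfig into the backend pool
# (az vm create names the ipconfig ipconfig{vm} by default)
az network nic ip-config address-pool add --resource-group MyGroup \
    --nic-name MyVm1VMNic --ip-config-name ipconfigMyVm1 \
    --lb-name MyLb --address-pool MyPool
az network nic ip-config address-pool add --resource-group MyGroup \
    --nic-name MyVm2VMNic --ip-config-name ipconfigMyVm2 \
    --lb-name MyLb --address-pool MyPool
```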
Overall it seems like a default NSG is applied underneath when there is no NSG in the configuration (because somehow only the load balancer health checks get through).
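
For what it's worth, this is how one can confirm that neither the NIC nor the subnet actually has an NSG attached, and ask the platform what rules it is effectively applying (placeholder names as above; list-effective-nsg needs the VM to be running, and if I recall correctly it errors out when no NSG is associated at all, which is itself telling):

```
# Both of these should return null when no NSG is associated
az network nic show --resource-group MyGroup --name MyVmVMNic \
    --query networkSecurityGroup
az network vnet subnet show --resource-group MyGroup --vnet-name MyVnet \
    --name MySubnet --query networkSecurityGroup

# The effective security rules the platform applies to the NIC
az network nic list-effective-nsg --resource-group MyGroup --name MyVmVMNic
```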
Is this some new change in Azure that has not yet been reflected in the docs?