Volume 32 Number 4
The New Azure App Service Environment
By Christina Compy | April 2017
The Azure App Service Environment (ASE) is a Premium feature offering of the Azure App Service. It provides a single-tenant instance of the Azure App Service that runs right in your own Azure virtual network (VNet), providing network isolation and improved scaling capabilities. While the original feature gave customers what they were looking for in terms of network control and isolation, it was not as "Platform as a Service (PaaS)-like" as the normal App Service. This caused confusion among customers, who had some trouble managing the system. With the newly relaunched ASE, however, things now work the same as in the multi-tenant App Service.
The Azure App Service is a multi-tenant application hosting service. If you want to run your HTTP-listening applications in a PaaS service, the App Service is a very quick and easy way to go and has many developer-supporting features. You can do things like integrate with continuous integration (CI) systems, scale your apps out instantly with a flick of the mouse and much more. The service has limits, though, that blocked certain use cases.
The use cases that couldn’t be met in the multi-tenant App Service largely centered around scale and app isolation. While you can scale your apps easily in the multi-tenant App Service, there are limits based on the price plan. The greatest number of instances you can scale an app to in the multi-tenant App Service is 20.
With respect to isolation, there’s no way to lock down access to your apps in the multi-tenant App Service at a network level. The App Service has two features to access resources in other networks, Azure Virtual Network (VNet) Integration and Hybrid Connections, but has nothing that can lock apps down at a network level and no way to host completely Internet-isolated apps in the App Service. This means you couldn’t host a line-of-business (LOB) application that you wanted available only on a private IP address on the multi-tenant App Service.
To resolve the scaling and isolation limitations, we provided the Premium ASE feature in 2015. It’s an instance of the Azure App Service that runs in a customer’s VNet, running the same code as the multi-tenant App Service but with some changes to deployment to use fewer resources.
With the first version of the ASE you could scale up to 50 instances and use larger dedicated workers. The ASE is capable of hosting Web apps, mobile apps, API apps and Functions. Because the ASE runs in a subnet in the customer’s VNet, the apps in the ASE have easy access to resources that are available in the VNet itself or across ExpressRoute or site-to-site VPN connections. Also, as shown in Figure 1, because the ASE is in the customer’s subnet, it can restrict access to its apps at a network level using network security groups (NSGs).
Figure 1 App Service Environment High-Level Networking Model
Among the benefits of this deployment model is a static IP address that can be used for both the inbound and outbound IP address for the apps in the ASE. The nature of the multi-tenant app service is that the inbound and outbound addresses are shared by multiple tenants. While it is possible to set up IP SSL for an app and get an IP address assigned to that app, there is no way to lock down the outbound address.
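Because the ASE lives in a subnet you control, the network-level lockdown described earlier is accomplished by attaching an NSG to that subnet. The following Resource Manager template fragment is a minimal sketch of the idea; the resource name, address range and apiVersion are hypothetical, and a production ASE subnet also needs additional rules permitting the App Service platform's own management traffic, which are omitted here:

```json
{
  "type": "Microsoft.Network/networkSecurityGroups",
  "apiVersion": "2016-09-01",
  "name": "ase-subnet-nsg",
  "location": "[resourceGroup().location]",
  "properties": {
    "securityRules": [
      {
        "name": "allow-corp-https",
        "properties": {
          "description": "Allow HTTPS only from the corporate address range",
          "protocol": "Tcp",
          "sourcePortRange": "*",
          "destinationPortRange": "443",
          "sourceAddressPrefix": "10.0.0.0/8",
          "destinationAddressPrefix": "*",
          "access": "Allow",
          "priority": 100,
          "direction": "Inbound"
        }
      },
      {
        "name": "deny-all-other-inbound",
        "properties": {
          "description": "Deny all other inbound traffic to the ASE subnet",
          "protocol": "*",
          "sourcePortRange": "*",
          "destinationPortRange": "*",
          "sourceAddressPrefix": "*",
          "destinationAddressPrefix": "*",
          "access": "Deny",
          "priority": 4096,
          "direction": "Inbound"
        }
      }
    ]
  }
}
```

Once this NSG is associated with the ASE's subnet, every app hosted in the ASE is reachable only from the allowed address range, with no per-app configuration required.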
Hosting the ASE in a VNet was a great first step, but when the ASE was first released it still didn't completely solve the isolation problem. The ASE still needed a public virtual IP (VIP) for HTTP/S and publishing access. It also deployed only into classic VNets, which was a problem for many customers. To solve those problems, support was added in June 2016 for Resource Manager VNets and for internal load balancers (ILBs), as shown in Figure 2.
Figure 2 App Service Environment High-Level Networking with an Internal Load Balancer
The addition of ILB support meant that customers could now host intranet sites in the cloud. You could take an LOB application that you didn’t want to be Internet-accessible and deploy it into your ILB-enabled ASE. The ILB sits on one of the VNet IP addresses, so it’s accessible only from within the VNet or from hosts that have access to the VNet over a VPN.
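An ILB-enabled ASE is declared by setting the internal load balancing mode on the hosting environment resource. The fragment below is a hedged sketch, not a complete deployment; the ASE name, DNS suffix, VNet and subnet names are hypothetical, and the exact apiVersion and surrounding required properties may vary:

```json
{
  "type": "Microsoft.Web/hostingEnvironments",
  "apiVersion": "2016-09-01",
  "name": "contoso-ilb-ase",
  "location": "[resourceGroup().location]",
  "properties": {
    "internalLoadBalancingMode": "Web, Publishing",
    "dnsSuffix": "internal.contoso.com",
    "virtualNetwork": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks', 'contoso-vnet')]",
      "subnet": "ase-subnet"
    }
  }
}
```

With "Web, Publishing" both HTTP/S traffic and publishing traffic arrive on the ILB's private VNet address, so the apps never receive an Internet-facing endpoint; because the public App Service DNS can't resolve these apps, you supply your own DNS suffix for the environment.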
The ILB-enabled ASE opened the door to other possibilities, such as Web application firewall (WAF)-fronted applications and two-tier applications. For WAF-fronted ASE applications, a customer could use a WAF virtual device to act as the Internet endpoint for its ILB ASE-hosted apps, which adds an additional security layer for Internet-accessible apps. In a two-tier application, the Web-accessible app could be hosted in either the multi-tenant App Service or another ASE, and the back-end-secured API apps could then be hosted in the ILB ASE. If you used the multi-tenant App Service for such a purpose, you'd then use the VNet Integration feature to securely access your API apps.
When the ASE was originally designed and planned, the assumption was that it would cater to IT professionals who'd want to control this private deployment of the App Service as if it were a system they ran in their own datacenters. With that in mind, the ASE was designed to be flexible. An ASE has two role types to manage: the front ends that act as the HTTP endpoints for applications and the workers that host the apps. You can scale out the quantity of either, as well as change the size of the virtual machine (VM) used for that role type.
This thinking had consequences: the ASE roles were treated as resources that system administrators would independently manage, but it turned out that customers didn't want to be, or to employ, system administrators for their cloud services. They wanted the ASE to remain as easy to use as the multi-tenant App Service. Having to manage both the resource pools and their apps was too confusing and hurt feature adoption.
The New ASE
After the initial version of the ASE (which I’ll refer to as ASEv1) was released, there was substantial feedback from customers who tried it out and found that it didn’t fit their business needs for one reason or another. The primary reasons they gave concerned:
- The complexity of managing the ASEv1 roles, as well as their apps, was aggravating and non-intuitive.
- Adding more capacity to the ASE took too long. Because ASEv1 was built to be run by system administrators on behalf of their tenants, provisioning speed for the roles had not been a priority. In reality, the person who scaled out the ASE roles and the person who deployed the app were typically one and the same, and the delay was a problem.
- The system management model forced customers to be far more aware of the ASE architecture and behavior than they wanted to be.
This brings us to the new version of the ASE, which I'll call ASEv2. The team took that feedback to heart, and for ASEv2 the focus was on making the user experience the same as in the multi-tenant App Service, without losing the benefits that ASEv1 provided.
Creating an App Service Plan The App Service plan (ASP) is the scaling container that holds your apps: every app lives in an ASP, and when you scale the ASP, you scale all of the apps in it. This is true both for the multi-tenant App Service and for the ASE, which means that to create an app you must either choose an existing ASP or create a new one. To create an ASP in ASEv1 you needed to pick an ASE as your location and then select a worker pool, as shown in Figure 3. If the worker pool you wanted to deploy into didn't have enough capacity, you had to add more workers to it before you could create your ASP.
Figure 3 Creating an App Service Plan in ASEv1
With ASEv2, when you create an ASP you still select the ASE as your location, but instead of picking a worker pool, you use the pricing cards just as you do outside of the ASE. There are no more worker pools to manage. When you create or scale your ASP, the necessary workers are automatically added.
ASEv2 includes upgraded VMs that are used to host your apps. The workers for ASEv2 are built on the Dv2-series VMs and outperform the workers used in the multi-tenant app service. To distinguish between ASPs that are in an ASE and those in the multi-tenant service, a new pricing SKU was created. The name of this SKU is Isolated, as shown in Figure 4. When you pick an Isolated SKU it means you want the associated ASP to be created in an ASEv2.
Figure 4 Creating an App Service Plan in ASEv2
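In a Resource Manager template, targeting an ASEv2 comes down to combining an Isolated SKU with a hosting environment profile on the plan. This is a minimal sketch assuming hypothetical plan and ASE names; the apiVersion and SKU naming (I1 for the single-core Isolated size) reflect the schema at the time of writing and may differ:

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2016-09-01",
  "name": "contoso-isolated-plan",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "I1",
    "tier": "Isolated",
    "capacity": 2
  },
  "properties": {
    "hostingEnvironmentProfile": {
      "id": "[resourceId('Microsoft.Web/hostingEnvironments', 'contoso-ase')]"
    }
  }
}
```

Note that there's no worker-pool reference anywhere in the plan definition; setting capacity to 2 is enough, and the ASE provisions the two underlying workers itself.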
Creating an ASE One of the other issues that hindered ASE adoption was its lack of visibility. Many customers didn’t even know that the ASE feature existed. To create an ASE, you had to look for the ASE creation flow, which was completely separate from app creation. In ASEv1 customers had to add workers to their worker pools in order to create ASPs. Now that workers are added automatically when ASPs are created or scaled, the ASEv2 creation experience can be placed squarely in the ASP creation flow, as shown in Figure 5.
Figure 5 Creating an App Service Environment from the App Service Plan Creation Flow
To create a new ASEv2 during the ASP creation experience, you simply select a region (rather than an existing ASE) as your location and then select one of the new Isolated SKU cards. When you do this, the ASE creation UI is displayed, which enables you to create a brand new ASEv2 in either a new or pre-existing VNet.
Time to Scale The new ASP creation flow became possible only because the process for provisioning new workers was accelerated. ASEv2 automatically provisions new workers when you create or scale an ASP, so the only way to make this a reasonable customer experience was to reduce the time required to create and scale out. To make this work, as much as possible is preloaded onto the VHDs used to provision the role instances, minimizing the number of additional reboots required. Moving to the Dv2 workers also helped, as they have faster cores and use SSDs. Both of those changes make installs and reboots faster.
System Management In ASEv1 the customer had to manage the front ends, workers and the update domain workers. The front-end roles handle HTTP traffic and send traffic to the workers. The workers are the VMs that host your apps. The update domain workers act as standby hosts in case of upgrades or worker failures. With ASEv1 the customer had to know how these components all worked together and scale the resource pools appropriately. When workers had to be scaled out to handle more ASP instances, users had to add more front ends and update domain workers.
ASEv2, in contrast, hides away the infrastructure. Now users simply scale out their App Service plans and the infrastructure is added as needed. When an ASP needs more workers, the workers are added. Front ends and update domain workers are added automatically as the quantity of workers is scaled out. If customers have unusual needs that require more aggressive front-end scaling, they can change the rate at which front ends are added to their ASE.
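For the rare case where the default front-end scaling is too conservative, the rate is exposed as a property on the ASE resource itself. The fragment below is only a sketch: the ASE name is hypothetical, and the frontEndScaleFactor property name, its default of one front end per 15 ASP instances, and the apiVersion are assumptions based on the Resource Manager schema and should be verified against current documentation:

```json
{
  "type": "Microsoft.Web/hostingEnvironments",
  "apiVersion": "2016-09-01",
  "name": "contoso-ase",
  "location": "[resourceGroup().location]",
  "properties": {
    "frontEndScaleFactor": 10
  }
}
```

Lowering the factor from its default means a front end is added for every 10 ASP instances instead of every 15, giving HTTP-heavy workloads more front-end capacity without any manual role management.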
As you can see in the ASEv2 portal page in Figure 6, things are far simpler now. There’s no longer a need for the worker pool or front-end UI pages. With all scaling now automatic, there are no more scale or autoscale controls. And because the IP addresses used by the ASE are pretty important to know about, the UI consolidates that information.
Figure 6 App Service Environment Version 2 Portal Page
The release of ASEv2 is by no means the end of the ASE feature development efforts. There will continue to be a steady stream of improvements, but they will not impact the UX to the extent the changes made with ASEv2 did.
Additional Benefits Due to the changes made to the system architecture, ASEv2 has a few additional benefits over ASEv1. With ASEv1, the maximum default scale was 50 workers. There were a number of system-architecture reasons why that limit was set, but these issues were addressed in creating the new ASEv2 experience. With ASEv2 the maximum default scale is now 100, which means you can have up to 100 ASP instances hosted in ASEv2. This can be anything from a single ASP with 100 instances to 100 single-instance ASPs, or anything in between.
Moreover, ASEv2 now uses Dv2-based dedicated workers. These new dedicated workers are much faster than the A-series VMs on which ASEv1 depended. They have faster CPUs, which improves throughput, and SSDs, which improve file-access performance. As in the multi-tenant App Service, the choices for dedicated workers when creating an ASP are single core, dual core or quad core. The new ASE dedicated workers, however, have double the memory of their multi-tenant counterparts and come with 3.5GB, 7GB or 14GB of RAM, respectively.
ASEv1 was a great first step toward enabling customers to have network isolation for their App Service-hosted applications. ASEv2 builds on that experience to deliver a far more PaaS-like capability that's not just easier to use, but also much more powerful.
All of the changes that have been noted here for the ASE have been vetted by a large number of MVPs and customers. Even before development started, the team wanted to validate its approach with people who had already tried using an ASE. As a result of this input-heavy approach, we are confident that the new ASE experience will be considered a substantial improvement and look forward to its success in the field.
Christina Compy began her career as an aerospace engineer working on the Hubble Space Telescope and has been in the software industry for more than 20 years. She has been a program manager at Microsoft since 2013 and works on enterprise-focused capabilities.
Thanks to the following Microsoft technical expert for reviewing this article: Stefan Shackow
Stefan is a program manager on the Azure App Services team who has worked on the web app cloud offering since its earliest days. In Azure, Stefan leads a team of program managers who work on the development and deployment of Azure App Service, as well as the development of Microsoft's on-premises/cloud hybrid products.