Compute Partitioning Guidance


When deploying an application to the cloud, it may be desirable to allocate the services and components it uses in a way that minimizes running costs while maintaining the scalability, performance, availability, and security of the application.

Overview of Microsoft Azure Compute Options

Azure provides three distinct solutions for hosting applications in the cloud. Azure Web Sites is a simple website hosting technology designed to help you quickly build a website or migrate an existing website to the cloud. Azure Cloud Services is a comprehensive hosting technology aimed at more complex web applications, and at applications that must be highly scalable or globally available. Azure Virtual Machines allows you to deploy virtual web servers and other services in the cloud.

The key differences between the hosting solutions are the level of control, the methods used to deploy applications, the options for scaling and elasticity, and the use of durable storage. For information about choosing a hosting technology, see Web Sites, Cloud Services, and Virtual Machines (VMs) on MSDN and the section “Evaluating Cloud Hosting Opportunities” in Chapter 1, The Adatum Scenario, of the patterns & practices guide Moving Applications to the Cloud.

Each technology offers a range of sizes for the hosting server, including the number of CPU cores, amount of memory, and bandwidth usage limits. For information about choosing the appropriate size, see Real World: Considerations When Choosing a Web Role Instance Size: How small should I go? on MSDN.
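
As a simple illustration of matching a component's needs to a host size, the following Python sketch chooses the smallest size whose CPU and memory satisfy a component's estimated requirements. The size names and figures are placeholders rather than current Azure SKUs; consult the Azure documentation for the sizes and limits that apply to your chosen hosting technology.

    # Hypothetical host sizes (name, CPU cores, memory in GB). The figures are
    # placeholders only -- consult the Azure documentation for real values.
    HOST_SIZES = [
        ("Small", 1, 1.75),
        ("Medium", 2, 3.5),
        ("Large", 4, 7.0),
        ("ExtraLarge", 8, 14.0),
    ]

    def smallest_suitable_size(required_cores, required_memory_gb):
        """Return the first (smallest) size that satisfies both requirements."""
        for name, cores, memory_gb in HOST_SIZES:
            if cores >= required_cores and memory_gb >= required_memory_gb:
                return name
        raise ValueError("No single host size is large enough; consider scaling out.")

    # Example: a lightweight background component versus a more intensive task.
    print(smallest_suitable_size(required_cores=1, required_memory_gb=1.0))  # Small
    print(smallest_suitable_size(required_cores=3, required_memory_gb=6.0))  # Large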

Guidelines for Designing the Compute Boundary

The following sections describe the steps for designing the compute boundary for an application, and the primary factors that require consideration at each stage.

Decompose Applications into Logical Components

Applications you deploy in Azure can be decomposed into multiple components. For example, you might choose to decompose a complex application into separate logical compute instances that implement the website UI, API, administration site, background processing, caches, and more.

When considering decomposing applications, the primary design decision is to define the boundary between the separate parts of the application. Many applications have natural boundaries. For example, it is common to separate the UI from the background processing tasks and offload work to these tasks to maintain performance and responsiveness of the UI. Where an application contains distinct and separate UI sections, such as public and restricted areas, these may also be candidates for decomposition. Even a simple website UI can be decomposed into multiple components, for example by separating the pages that require high throughput from the remainder of the site.

If an application uses services that expose an API, these services are likely to be implemented as separate components or roles in order to manage their scale independently from the website UI. It may be appropriate to separate the tasks in these components or roles as part of the decomposition process.

You should also take into account the workload of each separate part of the application. Workload decomposition refers to decomposing an application into parts based on functional workloads that may have different scale, security, and management requirements. This may help you define the decomposition boundaries of the application.
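
To make workload decomposition concrete, the following sketch uses a hypothetical Python model (not part of any Azure SDK) to tag each logical component with its workload characteristics; components whose profiles differ are candidates for separate compute boundaries.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class WorkloadProfile:
        """Non-functional characteristics that may justify a separate boundary."""
        expected_load: str    # for example "high", "variable", or "low"
        security_level: str   # for example "public" or "restricted"
        update_cycle: str     # for example "weekly" or "quarterly"

    # A hypothetical application decomposed into logical components.
    components = {
        "website_ui":       WorkloadProfile("high",     "public",     "weekly"),
        "public_api":       WorkloadProfile("variable", "public",     "weekly"),
        "admin_site":       WorkloadProfile("low",      "restricted", "quarterly"),
        "background_tasks": WorkloadProfile("variable", "restricted", "quarterly"),
    }

    for name, profile in components.items():
        print(f"{name}: {profile}")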

Figure 1 shows an example in which an application that has evolved to contain a range of different types of component is decomposed into multiple separate compute host instances, depending on the requirements of each component.

Figure 1 - Decomposing an application into multiple separate compute host instances

In Figure 1, the components of the application fall into three partitions. The actual types of component in each partition have similar requirements in terms of scalability, availability, and security. Components of the same type can be hosted together. Where the requirements differ, hosting in different compute instances allows the parameters of that instance to be fine-tuned to match the requirements.
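
The grouping step itself can be sketched in a few lines of Python. In this hypothetical example, components that share the same requirements profile (expressed here as a simple tuple of load, security, and update-cycle values) are placed in the same partition, mirroring the way Figure 1 gathers similar components into one compute host.

    from collections import defaultdict

    # Illustrative requirement profiles: (expected load, security level, update cycle).
    components = {
        "website_ui":       ("high",     "public",     "weekly"),
        "public_api":       ("high",     "public",     "weekly"),
        "admin_site":       ("low",      "restricted", "quarterly"),
        "background_tasks": ("variable", "restricted", "quarterly"),
    }

    # Components with identical profiles are grouped into the same partition.
    partitions = defaultdict(list)
    for name, profile in components.items():
        partitions[profile].append(name)

    for index, (profile, members) in enumerate(partitions.items(), start=1):
        print(f"Partition {index}: {members}  requirements={profile}")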

The physical deployment of each component also depends on the hosting technology you choose. For example, when using Azure Virtual Machines you can separate the components by installing them on separate virtual machines. When using Azure Cloud Services, you can separate the components by using web and worker roles.
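
The mapping from partitions to hosting targets can also be recorded explicitly, for example as part of a deployment plan. The following sketch is purely illustrative; the names are hypothetical and the strings simply document the decision rather than drive any Azure tooling.

    # Hypothetical mapping of partitions to hosting targets.
    deployment_plan = {
        "web_partition":        "Azure Cloud Services web role",
        "background_partition": "Azure Cloud Services worker role",
        "legacy_component":     "Azure Virtual Machine (needs custom OS configuration)",
    }

    for partition, target in deployment_plan.items():
        print(f"{partition} -> {target}")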

Identify Requirements

To identify the groups and plan the physical deployment, you must determine the non-functional requirements for each logical component. The following requirements must be identified for each component:

  • Performance and scalability. Performance and throughput are often the main considerations when assembling the required compute instances, components, and services for an application and, to achieve them, the application may make use of multiple instances. Decomposing applications allows you to more closely control the number of instances of each part you deploy, to ensure that your application can meet peaks in demand. You can scale out as demand increases, and scale in when the application is not busy. You can configure automatic scaling for Azure hosted applications. For more information, see the Autoscaling Guidance elsewhere in this guide.
  • Availability. Business and commercial applications typically need to meet strict service level agreements (SLAs) and other organizational requirements in terms of availability, responsiveness, and minimum downtime. Decomposing compute instances, components, and services that have differing requirements can improve availability because you can host additional instances of the vital ones and fewer of those that have lower availability requirements.
  • Deployment and updating. Applications will need to be deployed to the hosting environment, and updated as new features are added or bugs are fixed. However, each component may have a different update and deployment cycle. Grouping components that have the same update cycle will simplify management.
  • Security. It is vital to consider how partitioning affects the security boundaries within the application. Decomposing applications may be necessary to maximize security. For example, you may want to implement the Gatekeeper pattern that helps to protect applications from intrusive attacks, or isolate components and services for the tenants in a multi-tenant application by deploying some tasks in separate components or compute instances.
  • Resource utilization. Different component parts of an application may have differing requirements for memory, bandwidth, CPU, and more. Decomposing parts of an application allows you to match the requirements for each part with the size of the hosting instance. For example, a small instance may be sufficient for background processing tasks that run occasionally and have low demand for memory and CPU power, whereas other more intensive tasks may require large or even extra-large compute hosts to manage the demand. However, if the demand fluctuates to a large degree, consider using a smaller host size and deploying multiple instances through autoscaling.
  • Hosting environment. Components may have specific demands or limitations that affect the choice of hosting environment. For example, third-party components that require special configuration of the operating system will probably need to be hosted in a virtual machine.
  • Background tasks. If the application performs background processing, these tasks are often good candidates for decomposition. The types of processing that usually work well as background tasks are those that perform a large amount of I/O or network activity, and those that run asynchronously. For example, a long-running workflow that includes external service calls or batch operations that periodically process large volumes of data could be decomposed into worker roles as background tasks.
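
To illustrate the background-tasks point above, the following is a minimal sketch of a worker-style loop that drains work items from a queue and processes them. It uses an in-process Python queue and thread as stand-ins for a durable cloud queue and a separate worker instance; the task names and processing logic are hypothetical.

    import queue
    import threading
    import time

    # In-process stand-in for a durable cloud queue; in a real deployment the web
    # tier and the worker would be separate compute instances sharing that queue.
    work_queue = queue.Queue()

    def process(item):
        """Hypothetical long-running, I/O-heavy background work."""
        time.sleep(0.1)            # simulate external service calls or batch I/O
        print(f"processed {item}")

    def worker_loop():
        """Poll the queue and process items until a sentinel value arrives."""
        while True:
            item = work_queue.get()
            if item is None:       # sentinel used to stop this demonstration
                break
            process(item)
            work_queue.task_done()

    worker = threading.Thread(target=worker_loop)
    worker.start()

    for i in range(5):             # the web tier would enqueue items like this
        work_queue.put(f"task-{i}")
    work_queue.put(None)           # stop the worker for this demonstration
    worker.join()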

Allocate Components to Compute Instances

Decisions about how the components of an application are grouped into a single or multiple compute hosts must be based on the requirements of the individual logical components. Components that have similar requirements can be grouped into the same partition. However, the requirements of the application as a whole must be considered. When allocating components to compute resources, consider:

  • Management and maintenance. The cost and effort of managing, monitoring, and maintaining applications (and the services, components, and tasks each one requires) depend to some extent on the range of different items that are deployed. Decomposing applications will increase management, monitoring, and maintenance overhead, although this is not a linear relationship because you will typically be able to extend existing tools and systems to include the additional deployments.
  • Runtime cost. You are billed for every hosted compute instance you deploy to the cloud environment, and so decomposing applications is likely to increase runtime costs. However, implementing autoscaling can minimize the runtime cost for items that are subject to variable demand or load, while maintaining availability. For more information, see the Autoscaling Guidance. A simple cost comparison illustrating this trade-off is sketched after this list.
  • Dependencies. Some components may have dependencies that prevent them from being separated. It may also be advantageous to minimize the requirements for inter-process communication between components by hosting them in the same compute instance, for example, to minimize latency or reduce deployment complexity.
  • Inter-process communication. Tasks may need to communicate with components in other compute instances, perhaps by using shared memory, private HTTP or TCP endpoints, asynchronous messaging, named pipes, data stores, or a global cache. When this is the case, consider how it will affect the design. Extremely chatty components, or components that are heavily dependent upon each other, could be hosted in the same instance to reduce the communication overhead. For more information about implementing communication between the component parts using queues, see the Asynchronous Messaging Guidance, Queue-based Load Leveling pattern, and Priority Queue pattern.
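
As a simple illustration of the runtime cost trade-off noted above, the following sketch compares consolidating three low-traffic components onto a single host with giving each component its own instance. The hourly rate is a placeholder, not an Azure price.

    # Placeholder hourly rate for one compute instance; not an Azure price.
    HOURLY_RATE = 0.10
    HOURS_PER_MONTH = 730

    def monthly_cost(instance_count, rate=HOURLY_RATE):
        return instance_count * rate * HOURS_PER_MONTH

    consolidated = monthly_cost(instance_count=1)  # three components share one host
    decomposed = monthly_cost(instance_count=3)    # one host per component

    print(f"Consolidated: ${consolidated:.2f} per month")
    print(f"Decomposed:   ${decomposed:.2f} per month")
    print(f"Extra cost of decomposing: ${decomposed - consolidated:.2f} per month")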

Related Patterns and Guidance

The following patterns and guidance may also be relevant to your scenario when consolidating or decomposing application and service instances:

  • Autoscaling Guidance. Autoscaling can be used to automatically scale the individual components and services in a partitioned application to meet capacity and cost targets and to maintain availability, instead of relying on the labor-intensive process of constantly monitoring performance and scaling manually.
  • Competing Consumers Pattern. Components in a partitioned application may need to retrieve messages from the same source and process multiple messages concurrently in order to optimize throughput, to improve scalability and availability, and to balance the workload. The Competing Consumers pattern demonstrates how this can be achieved.
  • Compute Resource Consolidation Pattern. In some cases, it may be appropriate to consolidate multiple tasks or operations into a single computational unit to increase compute resource utilization, and reduce the costs and management overhead associated with performing compute processing in cloud-hosted applications. The Compute Resource Consolidation pattern describes this approach.
  • Gatekeeper Pattern. This pattern can add protection to a partitioned application by using a dedicated host instance that acts as a broker between clients and the application, validates and sanitizes requests, and passes requests and data between them.
  • Leader Election Pattern. Components in a partitioned application may execute a collection of collaborating task instances, with one task coordinating the actions being performed by the others. The Leader Election pattern shows how one task can be elected as the leader, and can assume responsibility for managing the other instances.

More Information

The pages Web Sites, Cloud Services, and Virtual Machines (VMs) on MSDN.

The article Real World: Considerations When Choosing a Web Role Instance Size: How small should I go? on MSDN.

For information about designing applications for scalability, see the patterns & practices guide Developing Multi-tenant Applications for the Cloud on MSDN.
