The Trey Research Scenario
This guide focuses on the ways that you can use the services exposed by the Microsoft Azure™ technology platform, together with some other useful frameworks and components, to integrate applications with components running in the cloud and so build hybrid solutions. A hybrid application is one that uses a range of components, resources, and services that may be separated across datacenter, organizational, network, or trust boundaries. Some of these components, resources, and services may be hosted in the cloud, though this is not mandatory. However, this guide concentrates on applications that have components running in Azure.
The guide is based on the scenario of a fictitious company named Trey Research that wants to adapt an existing application to take advantage of the opportunities offered by Azure. It explores the challenges that Trey Research needed to address and the architectural decisions Trey Research made.
Integrating with the Cloud
Using the cloud can help to minimize running costs by reducing the need for on-premises infrastructure, provide reliability and global reach, and simplify administration. It is often the ideal solution for applications where some form of elasticity or scalability is required.
It's easy to think of the cloud as somewhere you can put your applications without requiring any infrastructure of your own other than an Internet connection and a hosting account; in much the same way as you might decide to run your ASP.NET or PHP website at a web hosting company. Many companies already do just this. Applications that are self-contained, so that all of the resources and components can be hosted remotely, are typical candidates for the cloud.
But what happens if you cannot relocate all of the resources for your application to the cloud? It may be that your application accesses data held in your own datacenter where legal or contractual issues limit the physical location of that data, or the data is so sensitive that you must apply special security policies. It could be that your application makes use of services exposed by other organizations, which may or may not run in the cloud. Perhaps there are vital management tools that integrate with your application, but these tools run on desktop machines within your own organization.
In fact there are many reasons why companies and individuals may find themselves in the situation where some parts of an application are prime targets for cloud hosting, while other parts stubbornly defy all justification for relocating to the cloud. In this situation, to take advantage of the benefits of the cloud, you can implement a hybrid solution by running some parts in the cloud while other parts are deployed on-premises or in the datacenters of your business partners.
The Challenges of Hybrid Application Integration
When planning to move parts of an existing application from on-premises to the cloud, it is likely that you will have concerns centered on issues such as communication and connectivity. For example, how will cloud-based applications call on-premises services, or send messages to on-premises applications? How will cloud-based applications access data in on-premises data stores? How can you ensure that all instances of the application running in cloud datacenters have data that is up-to-date?
In addition, moving parts of an application to the cloud prompts questions about performance, availability, management, authentication, and security. When elements of your application are now running in a remote location, and are accessible only over the Internet, can they still work successfully as part of the overall application?
It is often helpful to divide the challenges presented by hybrid applications into distinct categories that focus attention on the fundamental areas of concern. Dividing the challenges in this way makes it easier to identify them accurately, and to discover the solutions that are available to resolve them. These areas of concern typically include the following:
- Deploying functionality and data to the cloud. It is likely that you will need to modify the code in your existing on-premises applications to some extent before it, and the data it uses, can be deployed to the cloud. At a minimum you will need to modify the configuration, and you may also need to refactor the code so that it runs in the appropriate combination of Azure web and worker roles. You must also consider how you will deploy data to the cloud; and handle applications that, for a variety of reasons, may not be suitable for deploying to Azure web and worker roles.
- Authenticating users and authorizing requests. Most applications will need to authenticate and authorize visitors, customers, or partners at some stage of the process. Traditionally, authentication was carried out against a local application-specific store of user details, but increasingly users expect applications to allow them to use more universal credentials; for example, existing accounts with social network identity providers such as Windows Live® ID, Google, Facebook, and Open ID. Alternatively, the application may need to authenticate using accounts defined within the corporate domain to allow single sign on or to support federated identity with partners.
- Cross-boundary communication and service access. Many operations performed in hybrid applications must cross the boundary between on-premises applications, partner organizations, and applications hosted in Azure. Service calls and messages must be able to pass through firewalls and Network Address Translation (NAT) routers without compromising on-premises security. The communication mechanisms must work well over the Internet and compensate for lower bandwidth, higher latency, and less reliable connectivity. They must also protect the contents of messages, authenticate senders, and protect the services and endpoints from Denial of Service (DoS) attacks.
- Business logic and message routing. Many hybrid applications must process business rules or workflows that contain conditional tests, and which result in different actions based on the results of evaluating these rules. For example, an application may need to update a database, send the order to the appropriate transport and warehouse partner, perform auditing operations on the content of the order (such as checking the customer's credit limit), and store the order in another database for accounting purposes. These operations may involve services and resources located both in the cloud and on-premises.
- Data synchronization. Hybrid applications that run partly on-premises and partly in the cloud, run in the cloud and use on-premises data, or run wholly in the cloud but in more than one datacenter, must synchronize and replicate data between locations and across network boundaries. This may involve synchronizing only some rows and columns, and you may also want to perform translations on the data.
- Scalability, performance, and availability. While cloud platforms provide scalability and reliability, the division of parts of the application across the cloud/on-premises boundary may cause performance issues. Bandwidth limitations, the use of chatty interfaces, and the possibility of throttling in Azure may necessitate caching data at appropriate locations, deploying additional instances of the cloud-based parts of the application to handle varying load and to protect against transient network problems, and providing instances that are close to the users to minimize response times.
- Monitoring and management. Companies must be able to effectively manage their remote cloud-hosted applications, monitor the day-to-day operation of these applications, and have access to logging and auditing data. They must also be able to configure, upgrade, and administer the applications, just as they would if the applications were running in an on-premises datacenter. Companies also need to obtain relevant and timely business information from their applications to ensure that they are meeting current requirements such as Service Level Agreements (SLAs), and to plan for the future.
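To make the business logic and message routing concern more concrete, the following sketch simulates the kind of rule-based routing that a publish/subscribe mechanism such as Service Bus topic subscriptions performs. The subscription names, filter predicates, and order fields here are hypothetical illustrations, not Trey Research's actual schema.

```python
# Minimal sketch of rule-based message routing, loosely modeled on
# topic subscriptions with filters. A message is delivered to every
# subscription whose filter predicate accepts it. All names are
# illustrative.

def route(message, subscriptions):
    """Return the names of subscriptions whose filter accepts the message."""
    return [name for name, predicate in subscriptions.items()
            if predicate(message)]

subscriptions = {
    "audit":     lambda m: m["total"] > 10000,          # notable orders only
    "transport": lambda m: True,                        # every order ships
    "accounts":  lambda m: m["status"] == "confirmed",  # billing copy
}

order = {"orderId": 42, "total": 12500, "status": "confirmed"}
matches = route(order, subscriptions)
# 'matches' contains "audit", "transport", and "accounts" for this order
```

The same order can thus fan out to the database update, the transport partner, and the auditing process without the sender knowing about each recipient.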
To help you meet these challenges, Azure provides a comprehensive package of cloud-based services, management tools, and development tools that make it easier to build integrated and hybrid applications. You can also use many of these services when the entire application is located within Azure, and has no on-premises components.
The Trey Research Company
Trey Research is a medium-sized organization of 600 employees, and its main business is manufacturing specialist bespoke hardware and electronic components for sale to research organizations, laboratories, and equipment manufacturers. It sells these products over the Internet through its Orders application. As an Internet-focused organization, Trey Research aims to minimize all non-central activities and concentrate on providing the best online service and environment without being distracted by physical issues such as transport and delivery. For this reason, Trey Research has partnered with external companies that provide these services. Trey Research simply needs to advise a transport partner when an order is received into manufacturing, and specify a date for collection from Trey Research's factory. The transport partner may also advise Trey Research when delivery to the customer has been made.
The Orders application is just one of the many applications that Trey Research uses to run its business. Other on-premises applications are used to manage invoicing, raw materials, supplier orders, production planning, and more. However, this guide is concerned only with the Orders application and how it integrates with other on-premises systems such as the main management and monitoring applications.
The developers at Trey Research are knowledgeable about various Microsoft products and technologies, including the .NET Framework, ASP.NET MVC, SQL Server®, and the Microsoft Visual Studio® development system. The developers are also familiar with Azure, and aim to use any of the available features of Azure that can help to simplify their development tasks.
Trey Research's Strategy
Trey Research was an early adopter of cloud-based computing and Azure; it has confirmed this as the platform for new applications and for extended functionality in existing applications. Trey Research hopes to minimize on-premises datacenter costs, and is well placed to exploit new technologies and the business opportunities offered by the cloud.
Although they are aware of the need to maintain the quality and availability of existing services to support an already large customer base, the managers at Trey Research are willing to invest in the development of new services and the modification of existing services to extend their usefulness and to improve the profitability of the company. This includes planning ahead for issues such as increased demand for their services, providing better reporting and business information capabilities, improving application performance and availability, and handling additional complexity such as adding external partners.
The Orders Application
Trey Research's Orders application enables visitors to place orders for products. It is a web application that has evolved over time to take advantage of the benefits of cloud-based deployment in multiple datacenters in different geographical locations, while maintaining some essential services and applications within the on-premises corporate infrastructure. This is a common scenario for many organizations, and it means that solutions must be found to a variety of challenges. For example, how will the application connect cloud-based services with on-premises applications in order to perform tasks that would normally communicate over a corporate datacenter network, but must now communicate over the Internet?
In Trey Research's case, some vital functions connected with the application are not located in the cloud. Trey Research's management and operations applications and some databases are located on-premises in their own datacenter. The transport and delivery functions are performed by separate transport partners affiliated to Trey Research. These transport partners may themselves use cloud-hosted services, but this has no impact on Trey Research's own application design and implementation.
The Original On-Premises Orders Application
When Trey Research originally created the Orders application it ran entirely within their own datacenter, with the exception of the partner services for transport and delivery. The application was created as two separate components: the Orders application itself (the website and the associated business logic), and the suite of management and reporting applications.
In addition, the public Orders web application would need to be able to scale to accommodate the expected growth in demand over time, whereas the management and reporting applications would not need to scale to anything like the same extent. Trey Research proposed to scale the management and reporting applications as demand increases by adding additional servers to an on-premises web farm in their datacenter. Figure 1 shows the application running on-premises.
Figure 1
High-level overview of the Trey Research Orders application running on-premises
As you can see in Figure 1, the Orders application accesses several databases. It uses ASP.NET Forms authentication to identify customers and looks up their details in the Customers table using a unique user ID. It obtains a list of the products that Trey Research offers from the Products table in the database, and stores customer orders in the Orders table. The Audit Log table in the on-premises database holds a range of information including runtime and diagnostic information, together with details of notable orders such as those over a specific total value. Managers can obtain business information from the Orders table by using SQL Server Reporting Services.
The Orders application sends a message to the appropriate transport partner when a customer places an order. Currently, Trey Research has two transport partners: one for local deliveries in neighboring states and one for deliveries outside of the area. This message indicates the anticipated delivery date and packaging information for the order (such as the weight and number of packages). The transport partner may send a message back to the Orders application after the delivery is completed so that the Orders database table can be updated.
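A minimal sketch of how such an order advice message might be composed, and the transport partner chosen, is shown below. The neighboring-state list, partner names, and message fields are assumptions for illustration only.

```python
# Sketch: choose a transport partner and build the order advice message.
# The neighboring-state set and field names are illustrative placeholders.

NEIGHBORING_STATES = {"WA", "OR", "ID"}  # hypothetical local-delivery area

def choose_partner(delivery_state):
    """Local partner for neighboring states, distant partner otherwise."""
    return "LocalTransport" if delivery_state in NEIGHBORING_STATES else "DistantTransport"

def build_advice(order_id, delivery_state, weight_kg, packages, collection_date):
    """Compose the advice message sent to the chosen transport partner."""
    return {
        "orderId": order_id,
        "partner": choose_partner(delivery_state),
        "collectionDate": collection_date,  # when to collect from the factory
        "weightKg": weight_kg,
        "packages": packages,
    }

advice = build_advice(1001, "OR", 4.2, 2, "2012-06-11")
# advice["partner"] is "LocalTransport" because "OR" is in the local area
```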
Due to the nature of the products Trey Research manufactures, it must also ensure that it meets legal requirements for the distribution of certain items, particularly for export to other countries and regions. These requirements include keeping detailed records of the sales of certain electronic components that may be part of Trey Research's products, and hardware items that could be used in the manufacture of munitions. Analyzing the contents of orders is a complex and strictly controlled process accomplished by a legal compliance application from a third party supplier, and it runs on a specially configured server.
Finally, Trey Research uses separate applications to monitor the Orders application, manage the data it uses, and perform general administrative tasks. These monitoring and management applications interact with Trey Research's corporate systems for performing tasks such as invoicing and managing raw materials stock, but these interactions are not relevant to the topics and scenarios of this guide.
The Azure Hybrid Application
With the availability of affordable and reliable cloud hosting services, Trey Research decided to investigate the possibility of moving the application to Azure.
Applications that run across the cloud and on-premises boundary may use web, worker, and virtual machine roles hosted in one or more Azure data centers; SQL Azure™ technology platform databases in the same or different data centers; third-party remote services built using Windows or other technologies; and on-premises resources such as databases, services, and file shares. Integrating and communicating between these resources and services is not a trivial task, especially when there are firewalls and routers between them.
One of the most immediate concerns when evolving applications to the cloud is how you will expose internal services and data stores to your cloud-based applications and services.
In addition, applications should be designed and deployed in such a way as to be scalable to meet varying loads, robust so that they are available at all times, secure so that you have full control over who can access them, and easy to manage and monitor.
Figure 2 shows a high-level view of the architecture Trey Research implemented for their hybrid application. Although Figure 2 may seem complicated, the Orders application works in much the same way as when it ran entirely on-premises. You will see more details about the design decisions and implementation of each part of the application in subsequent chapters of this guide.
Figure 2
High-level overview of the Trey Research Orders application running in the cloud
Here is a brief summary of the features shown in Figure 2:
Customer requests all pass through Azure Traffic Manager, which redirects the customer to the instance of the Orders application running in the closest datacenter, based on response time and availability.
Instead of using ASP.NET Forms authentication, customers authenticate using a social identity provider such as Windows Live ID, Yahoo!, or Google. Azure Access Control Service (ACS) manages this process, and returns a token containing a unique user ID to the Orders application. The Orders application uses this token to look up the customer details in the Customers and Products tables of the database running in a local SQL Azure datacenter.
New customers can register with Trey Research and obtain an account for using the Orders application. (Registration is performed as an out-of-band operation by the Head Office accounting team, and this process is not depicted in Figure 2.) When a customer has been provisioned within Trey Research's on-premises customer management system, the account details are synchronized between the Customers table held in the on-premises database and SQL Azure in all the datacenters. This enables customers to access the application in any of the global datacenters Trey Research uses.
Note
After the initial deployment, Trey Research decided to allow customers to edit some of their details, such as their name, billing address, and password (but not critical data such as the user's social identity information) using the application running in the cloud. These changes are made to the local SQL Azure database, and subsequently synchronized with the on-premises data and SQL Azure in the other datacenters. You will see how this is done in Chapter 2, "Deploying the Orders Application and Data in the Cloud." However, the example application provided with this guide works in a different way. It allows you to register only by using the cloud application. This is done primarily to avoid the need to configure SQL Data Sync before being able to use the example application.
The Orders application displays a list of products stored in the Products table. The Products data is kept up to date by synchronizing it from the master database located in the head office datacenter.
When a customer places an order, the Orders application:
- Stores the order details in the Orders table of the database in the local SQL Azure datacenter. All orders are synchronized across all Azure datacenters so that the order status information is available to customers irrespective of the datacenter to which they are routed by Traffic Manager.
- Sends an order message to the appropriate transport partner. The transport company chosen depends on the type of product and delivery location.
- Sends any required audit information, such as orders over a specific total value, to the on-premises management and monitoring application, which will store this information in the Audit Log table of the database located in the head office datacenter.
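The three steps above can be sketched as a single function. The threshold value, the in-memory stand-ins for the SQL Azure table and Service Bus queues, and the field names are placeholders, not the actual Trey Research implementation.

```python
# Sketch of the order-placement workflow: store the order, notify the
# transport partner, and audit orders over a value threshold. Plain
# lists stand in for the SQL Azure table and the messaging channels.

AUDIT_THRESHOLD = 10000  # hypothetical "notable order" value

def place_order(order, orders_table, transport_queue, audit_queue):
    orders_table.append(order)            # 1. store locally, then sync
    transport_queue.append({              # 2. advise the transport partner
        "orderId": order["orderId"],
        "partner": order["partner"],
    })
    if order["total"] > AUDIT_THRESHOLD:  # 3. audit only notable orders
        audit_queue.append({"orderId": order["orderId"],
                            "total": order["total"]})

orders, transport, audit = [], [], []
place_order({"orderId": 7, "partner": "LocalTransport", "total": 15000},
            orders, transport, audit)
# 'audit' now holds one entry because the total exceeds the threshold
```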
The third-party compliance application running in a virtual machine role in the cloud continually validates the orders in the Orders table for conformance with legal restrictions and sets a flag in the database table on those that require attention by managers. It also generates a daily report that it stores on a server located in the head office datacenter.
When transport partners deliver the order to the customer they send a message to the Orders application (running in the datacenter that originally sent the order advice message) so that it can update the Orders table in the database.
To obtain management information, the on-premises Reporting application uses the Business Intelligence features of the SQL Azure Reporting service running in the cloud to generate reports from the Orders table. These reports can be combined with data obtained from the Data Market section of Azure Marketplace to compare the results with global or local trends. The reports are accessible by specific external users, such as remote partners and employees.
Keep in mind that, for simplicity, some of the features and processes described here are not fully implemented in the example we provide for this guide, or may work in a slightly different way. This is done to make it easier for you to install and configure the example, without requiring you to obtain and configure Azure accounts in multiple data centers, and for services such as SQL Azure Data Sync and SQL Reporting.
How Trey Research Tackled the Integration Challenges
This guide shows in detail how the designers and developers at Trey Research evolved the Orders application from entirely on-premises architecture to a hybrid cloud-hosted architecture. To help you understand how Trey Research uses some of the technologies available in Azure and SQL Azure, Figure 3 shows them overlaid onto the architectural diagram you saw earlier in this chapter.
Figure 3
Technology map of the Trey Research Orders application running in the cloud
The information in this guide about Azure, SQL Azure, and the services they expose is up to date at the time of writing. However, Azure is constantly evolving and adding new capabilities and features. For the latest information about Azure, see "What's New in Azure" on MSDN.
Staged Migration to the Cloud
When converting an existing solution into a hybrid application, you may consider whether to carry out a staged approach by moving applications and services one at a time to the cloud. While this seems to be an attractive option that allows you to confirm the correct operation of the system at each of the intermediate stages, it is not always the best approach.
For example, the developers at Trey Research considered moving the web applications into Azure web roles and using a connectivity solution such as the Azure Connect service to allow the applications to access on-premises database servers. This approach introduces latency that will have an impact on the web application responsiveness, and it will require some kind of caching solution in the cloud to overcome this effect. It also leaves the application open to problems if connectivity should be disrupted.
Another typical design Trey Research considered was using Azure Service Bus Relay to enable cloud-based applications to access on-premises services that have not yet moved to the cloud. As with the Azure Connect service, Azure Service Bus Relay depends on durable connectivity; application performance may suffer from the increased latency and transient connection failures that are typical on the Internet.
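Transient connection failures of this kind are typically handled with a retry policy; in Trey Research's solution the Enterprise Library Transient Fault Handling Application Block plays this role. The following is a generic sketch of the underlying retry-with-backoff idea, not that block's actual API.

```python
# Sketch of retry with exponential backoff for operations that may
# fail transiently (e.g., calls across the Internet to a relayed
# service). The delays and attempt count are illustrative.

import time

def with_retries(operation, max_attempts=4, base_delay=0.5):
    """Invoke operation, retrying transient failures with backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                                    # give up at the end
            time.sleep(base_delay * (2 ** attempt))      # 0.5s, 1s, 2s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

result = with_retries(flaky, base_delay=0.1)
# result is "ok" after two transient failures and one successful attempt
```

A real policy would also distinguish transient errors from permanent ones, so that a genuine fault is reported immediately rather than retried.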
However, applications that are already designed around a Service Oriented Architecture (SOA) are likely to be easier to migrate in stages than monolithic or closely-coupled applications. Migrating such an application may not require you to completely redesign its connectivity and communication features for a hybrid environment, though some effort may still be needed to make these features work well over the Internet if they were originally designed for use over a high-speed and reliable corporate network.
Technology Map of the Guide
The following chapters of this guide discuss the design and implementation of the Trey Research's hybrid Orders application in detail, based on a series of scenarios related to the application. The table below shows these scenarios, the integration challenges associated with each one, and the technologies that Trey Research used to resolve these challenges.
Chapter | Challenge | Technologies
---|---|---
Chapter 2, "Deploying the Orders Application and Data in the Cloud" | Deploying functionality and data to the cloud. Data synchronization. | SQL Azure; SQL Azure Data Sync; SQL Azure Reporting Service; Azure Marketplace (Data Market); Service Bus Relay
Chapter 3, "Authenticating Users in the Orders Application" | Authenticating users and authorizing requests in the cloud. | Azure Access Control Service; Windows Identity Foundation; Enterprise Library Transient Fault Handling Application Block
Chapter 4, "Implementing Reliable Messaging and Communications with the Cloud" | Cross-boundary communication and service access. | Azure Connect service; Service Bus Queues; Service Bus Topics and Rules
Chapter 5, "Processing Orders in the Trey Research Solution" | Business logic and message routing. | Service Bus Queues; Service Bus Topics and Rules
Chapter 6, "Maximizing Scalability, Availability, and Performance in the Orders Application" | Scalability, performance, and availability. | Azure Caching service; Azure Traffic Manager; Enterprise Library Autoscaling Application Block
Chapter 7, "Monitoring and Managing the Orders Application" | Monitoring and management. | Azure Diagnostics; Azure Management REST APIs; Azure Management Cmdlets
Note
Some of the features and services listed here (such as Azure virtual machine role, Azure Connect service, and Azure Traffic Manager) were still prerelease or beta versions at the time of writing. For up to date information, see the Microsoft Azure home page at https://www.microsoft.com/windowsazure/. In addition, this guide does not cover ACS in detail. ACS is discussed in more depth in "Claims Based Identity & Access Control Guide" (see https://claimsid.codeplex.com/), which is part of this series of guides on Azure.
Summary
This chapter introduced you to hybrid applications that take advantage of the benefits available from hosting in the cloud. Cloud services provide a range of opportunities for Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) deployment of applications, together with a range of built-in features that can help to resolve challenges you may encounter when evolving an existing application to the cloud or when building new hybrid applications that run partially on-premises and partially in the cloud.
This chapter also introduced you to Trey Research's online Orders application, and provided an overview of how Trey Research evolved it from an entirely on-premises application into a hybrid application where some parts run in the cloud, while maintaining other parts in their on-premises datacenter. Finally, this chapter explored the final architecture of the Orders application so that you are familiar with the result.
The subsequent chapters of this guide drill down into the application in more detail, and provide a great deal more information about choosing the appropriate technology, how Trey Research implemented solutions to the various challenges faced, and how these solutions could be extended or adapted to suit other situations.
You'll see how Trey Research modified its application to work seamlessly across on-premises and cloud locations, and to integrate with external partner companies (whose applications may also be running on-premises or in the cloud), using services exposed by Azure and SQL Azure.
More Information
All links in this book are accessible from the book's online bibliography available at: https://msdn.microsoft.com/en-us/library/hh871440.aspx.
- For the latest information about Azure, see "What's New in Azure" at https://msdn.microsoft.com/en-us/library/windowsazure/gg441573.
- The website for this series of guides at https://wag.codeplex.com/ provides links to online resources, sample code, Hands-on-Labs, feedback, and more.
- The portal with information about Microsoft Azure is at https://azure.microsoft.com. It has links to white papers, tools, and many other resources. You can also sign up for an Azure account here.
- Find answers to your questions on the Azure Forum at https://social.msdn.microsoft.com/Forums/en-US/category/windowsazureplatform.
- Eugenio Pace, a principal program manager in the Microsoft patterns & practices group, is creating a series of guides on Azure, to which this documentation belongs. To learn more about the series, see his blog at https://blogs.msdn.com/eugeniop.
- Masashi Narumoto is a program manager in the Microsoft patterns & practices group, working on guidance for Azure. His blog is at https://blogs.msdn.com/masashi_narumoto.
- Scott Densmore, lead developer in the Microsoft patterns & practices group, writes about developing applications for Azure on his blog at https://scottdensmore.typepad.com/.
- Steve Marx's blog at http://blog.smarx.com/ is a great source of news and information on Azure.
- Code and documentation for the patterns & practices Azure Guidance project is available on the Codeplex Azure Guidance site at https://wag.codeplex.com/.
- Comprehensive guidance and examples on Azure Access Control Service is available in the patterns & practices book "A Guide to Claims–based Identity and Access Control," also available online at https://claimsid.codeplex.com/ and on MSDN at https://msdn.microsoft.com/en-us/library/ff423674.aspx.
Last built: June 4, 2012