Using Patterns to Design the Baseline Architecture

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

patterns & practices Developer Center

Integration Patterns

May 2004

Summary: This chapter uses the language of patterns to explore the decisions and tradeoffs that the Global Bank architecture team made while designing and implementing their bill payment system.

Contents

Meeting the Requirements of Global Bank

Designing the Global Bank Baseline Architecture

Next Chapter

"It's all talk until the code runs."—Ward Cunningham

The last chapter introduced a banking scenario that posed many technical integration challenges. It also presented patterns as an effective means of communicating design decisions. This chapter walks through the application of integration patterns in the context of the Global Bank scenario.

Although the scenario is a fictitious story for conveying design decisions, it is important to note that the authors actually built and performance tested this design in the patterns & practices lab at Microsoft. The design team consisted of field-experienced practitioners with access to the latest product builds. The decision points in the story correspond to real decision points in the lab, although this description shapes them into a more readable story than the actual effort. Also, the actual development process was more iterative than the story might suggest; some portions of the system evolved incrementally. Later in this guide, you will find that the implementation patterns contain extracts of the code used to run the Global Bank baseline architecture. You will also find more detailed explanations of the patterns mentioned in this chapter.

Meeting the Requirements of Global Bank

At the end of the last chapter, the Global Bank architecture team applied a pattern-based design approach to its bill payment system's requirements and arrived at an initial technical architecture blueprint. At this point, the team felt fairly satisfied about the initial design, at least on paper. The members of the team knew the value of creating these design models, but they also knew that they would learn other things only from the running code. They were anxious to validate their models with executable bits.

To validate their thinking, the members of the Global Bank team built a baseline architecture in the test lab and implemented five of the most architecturally significant use cases. They chose these particular use cases to help define and validate the most important mechanisms in the design. They did not intend the use cases to be functionally complete—that would come later when the functional requirements firmed up. At this point, they wanted to refine the riskiest parts of their design down to the level of executable code, the most concrete form of design.

As they implemented this baseline architecture, members of the team also performance tested many parts of the system to validate their design assumptions and tradeoffs. This helped them to further understand the overall scalability of their solution as they considered the impact of additional users over time. All of this implementation and testing contributed to their overall confidence in the patterns they had selected.

The bulk of this chapter explores the decisions and tradeoffs that the Global Bank architecture team made during the design and implementation process, and it takes a closer look at the implemented system. The discussion uses the language of patterns to convey these decisions and tradeoffs, as well as the intentions behind them, as discrete and comprehensible decision points. Throughout the discussion, pattern names appear in title capitalization and italic (for example, Portal Integration). This treatment of pattern names emphasizes the building of a pattern vocabulary and signals that the concepts are explained as patterns later in this guide.

Using Patterns to Communicate Design Decisions

Each pattern clearly limits the scope of its problem and solution to a discrete and comprehensible, or "mind-sized," decision point. By considering relatively small atomic design decisions one at a time, you are better prepared to manage the overall complexity of the system. As you build a complex system, you aggregate these small design decisions together to eventually form a larger hierarchy, or frame, of decisions.

Of course, changes at the top of the hierarchy may affect the elements below, and it is unrealistic to expect your first design to be just right. Most likely, you will need to iterate. However, having a set of discrete decision points makes it easier to iterate when you need to.

Remember, in complex environments, there is often no single right answer for a given problem. For any set of requirements, each group of designers may arrive at different, yet equally valid, designs. Usually, the difference reflects a different set of tradeoffs and priorities. What is most important to understand about the design process is that:

  • A series of technical decisions must be made.
  • Each design decision involves tradeoffs—both advantages and disadvantages.
  • Tradeoffs made at one level constrain the decisions at other levels.
  • The sum of these design decisions must result in an architecture that meets both the functional and nonfunctional requirements of the system.

With these guidelines in mind, the architecture team set out to build a baseline architecture.

The Role of a Baseline Architecture

A baseline architecture is a thin executable slice through the overall system that is designed to implement the most architecturally significant use cases. As you implement these key use cases, you want to define the key mechanisms and components in the system and retire the most serious technical risks early. If you do this well, the baseline architecture does not become a prototype to be thrown away. Instead, it becomes the skeletal structure of the system. The baseline provides sufficient stability so that subsequent iterations can smoothly evolve the running system into one that meets all of the functional and nonfunctional requirements. The baseline architecture is intentionally incomplete.

How do you actually design a baseline architecture? To answer this question, let's trace the architecture team's steps during the design sessions.

Designing the Global Bank Baseline Architecture

As the team dissected the high-level requirements of the chief technology officer (CTO), the members of the team arrived at the following use cases:

  • Schedule Payments
  • View Scheduled Payments
  • Execute Scheduled Payment
  • Receive Payment Response
  • Add Payee

The first use case they discussed was the View Scheduled Payments use case. This use case involved a portal that allowed users to see their account information, including their current account balance and a list of scheduled payments. To build this portal, the team would need to connect to multiple back-end systems and to aggregate the results in a single view. Implementing this use case would require the team to resolve several key technical issues. Let's look now at the use case in more detail and understand the team's thinking as they approached the problem.

View Scheduled Payments Use Case

To implement View Scheduled Payments, the portal would have to display the following information:

  • Account information from the mainframe
  • Profile information such as name and address from the Customer Relationship Management (CRM) system
  • Scheduled payment information from a payment system

Optionally, the portal would have to display any other loans the customer might have with newly acquired banks so that the customer could submit electronic payments toward these loans.

Initially, members of the team had slightly different opinions of what a portal was. However, they eventually agreed that a portal is a single view into many back-end systems that are integrated "at the glass," or, in other words, at the user presentation level. Thus, Portal Integration is a type of integration that looks like Figure 1.

Figure 1. Portal integration to multiple back-end systems

The members of the Global Bank team needed to make individual connections to many different kinds of systems to make Portal Integration work. They considered each connection individually to determine exactly how they were going to connect to the system.

Note   At this point in the story, the payment system does not exist. It is, however, included in Figure 1 as a placeholder to use for planning purposes.

System Connections

As the members of the team thought more about this problem, they debated the kinds of connections they could make. The discussion was full of overloaded terms and individual biases toward the methods that each member was most familiar with. To make matters worse, the team did not share a common design vocabulary because some members of the team had never worked together before.

Finally, the members of the team realized they had to narrow the discussion to a few practical choices for each system. To do so, they would have to tighten their frame of reference when they compared their connection options. They finally agreed to approach the problem from the perspective of integrating by using a Three-Layered Services Application [Trowbridge03]. As shown in Figure 2, a Three-Layered Services Application defines three distinct logical layers: data, business logic (functional), and presentation.

Figure 2. Three-Layered Services Application

They also agreed that although not every system was designed as a Three-Layered Services Application, using these three logical layers would give them a common way to reason about other systems. Using these layers to shape their discussion, they began to discuss the relative tradeoffs between each connection alternative.
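
To make the shared frame of reference concrete, here is a minimal C# sketch of the three layers. The class and method names are illustrative only; they are not taken from the Global Bank implementation.

  using System;
  using System.Data;

  // Data layer: owns persistence details for scheduled payments.
  public class PaymentData
  {
      public DataTable GetScheduledPayments(string accountId)
      {
          // Query the payment database here (for example, with ADO.NET).
          return new DataTable();
      }
  }

  // Business logic (functional) layer: applies rules and hides the data layer.
  public class PaymentService
  {
      private PaymentData data = new PaymentData();

      public DataTable GetPaymentsDue(string accountId, DateTime cutoff)
      {
          DataTable payments = data.GetScheduledPayments(accountId);
          // Filter and validate the rows according to business rules here.
          return payments;
      }
  }

  // Presentation layer: formats the results for the portal page.
  public class PaymentPresenter
  {
      public string RenderScheduledPayments(string accountId)
      {
          DataTable payments = new PaymentService().GetPaymentsDue(accountId, DateTime.Today);
          return "<table>...</table>"; // build HTML (or bind a grid) from the rows
      }
  }

Each layer depends only on the layer beneath it, which is what lets the team reason about Data, Functional, and Presentation Integration as connections into a specific layer of another system.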

Note   To present an overview of the design, this chapter discusses tradeoffs between design alternatives at a high level only. For more detailed information regarding these tradeoffs, see the pattern chapters (Chapters 3 through 6). If you want to see a visual model of all of these patterns and their relationships, see Chapter 7, "Project Notebook."

First, they could use Data Integration to connect at the logical level of data, making the same data available to more than one application. This approach worked well when there was very little business logic to reuse across applications. For other applications, they knew that raw data was not enough; they wanted to reuse the functionality of a given application. This functionality often contained business logic, process, or calculated values. In this case, they would need to use Functional Integration to connect at the business logic layer. And although they preferred to connect to systems directly to share either function or data, they acknowledged that sometimes the only practical way to integrate with a system was through Presentation Integration, also known as screen scraping. Moving away from a pure systems perspective, they also discussed human integration as a means to integrate with a system. However, because they were focused on building a baseline architecture, they considered human integration to be out of scope—at least for the moment.

Now that they agreed on an approach to the alternatives before them, the members of the team returned to the set of individual connection decisions they had to make. The first system to connect to was the payment system.

Connecting to the Payment System

The members of the team knew they would need a system to hold all the scheduled payments along with related information. They decided the simplest thing to do was to build a payment system that persisted this information in a database with Web-based administrator screens to manage the data. They decided to use Data Integration to connect the portal to the payment system because no additional system functionality or behavior seemed important to share.

Connecting to the CRM System

The next system to connect to was the existing CRM system. The members of the team analyzed the system architecture and realized there was only one practical choice: Functional Integration. That is because the software vendor used a highly abstracted schema within a relational database to store information and recommended against Data Integration. Instead, the vendor provided a functional Web services interface to encapsulate access to the data. This was the same kind of encapsulation at a system level that good object-oriented designers perform at a class level when they create private instance variables and public accessor methods.
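
As a quick illustration of that analogy, the hypothetical class below hides its state behind public accessors, just as the CRM system hides its schema behind a Web services interface. Neither the class nor its members come from the CRM product.

  // Class-level encapsulation: callers use the public accessors,
  // never the private instance variable.
  public class CustomerProfile
  {
      private string postalAddress;   // private instance variable

      public string PostalAddress     // public accessor
      {
          get { return postalAddress; }
          set { postalAddress = value; }
      }
  }

  // The CRM vendor applies the same idea at the system level: callers use
  // the published Web services interface rather than querying the vendor's
  // highly abstracted relational schema directly.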

Although encapsulation is generally a good thing, in this case the members of the team marked it as a technical risk because the vendor's implementation was effectively a "black box," unknown to the Global Bank team. The members of the team also knew from experience how difficult it is to build high-performance abstract interfaces. Furthermore, because profile information from the CRM system was required with each View Scheduled Payments request, performance was critical. They decided to mark this interface as a key test point and to stress test it early to discover the point where additional load would compromise system performance. They needed this information soon so they could consider compensating design alternatives, if necessary.

Connecting to the Mainframe

Integrating with the mainframe was critical because it was the system of record for all account information. Over the years, the organization had invested significantly to develop solid Customer Information Control System (CICS) transactions. Any integration with the account system would need to use this functionality; therefore, the team chose Functional Integration but deferred the connection details until later.

The team created the diagram in Figure 3 to record the design decisions made so far. The team used squares to represent design elements, circles to represent patterns, and lines to indicate relationships between the patterns and other design elements.

Figure 3. Connecting the payment, CRM, and mainframe systems to a portal

Connecting to Loan Systems

The final connections to consider were the connections to the acquired bank systems that were located in a remote data center. This optional part of the use case involved finding all loans a customer might have with these banks so that the customer could schedule payments toward them. This requirement presented many challenges. First, Data Integration would be complex because the many different data formats would require multiple transformations. Next, because more acquisitions were likely, the team wanted to minimize the cost of integrating additional systems into this consolidated loan information request. The team decided to use Functional Integration in the form of request and response messages and to expect each system involved in this collaboration to provide the appropriate response. This decentralized approach would make it easier to integrate new systems in the future.

As the members of the team thought more about the connections to the remote data center, they realized there was another complication. All of the previously considered connections ran over reliable links within the same enterprise (near links). The connection to the remote data center spanned multiple enterprises and could not be considered reliably connected (a far link). Based on previous experience, they preferred to use a message queue, or message-oriented middleware, to buffer connections across far links and improve reliability. They also knew that there were more issues with far links than reliability alone. With this in mind, they decided to consider their growing network of connection points more carefully.

Integration Topology

Although the team was making progress toward determining the best way to connect to each system, choosing the right topology to link these connection points seemed less clear. As the members of the team discussed alternatives, they arrived at three possible ways to connect three or more systems together: Point-to-Point Connection, Message Broker, and Message Bus.

The easiest way to connect the systems was to use the Point-to-Point Connection pattern, as shown in Figure 4.

Figure 4. Connecting four systems through point-to-point connections

Point-to-Point Connection is effective and simple for a small number of systems. A liability of this approach, however, is that each system must have information about each endpoint that it connects to. The members of the team knew that as they added more systems to their integration architecture, it would become more and more complex to add each additional system, making it expensive to extend and manage.

The team considered inserting a Message Broker to act as an intermediary between senders and receivers, as shown in Figure 5.

Figure 5. Connecting four systems by using a message broker

The advantage of using a Message Broker is that it decouples the receiver from the sender. Instead of sending the message to a specific endpoint, the sender can send messages to the broker. The broker then routes the message to the proper recipients. In addition, the broker often transforms the messages from one format to another to resolve the incompatible message formats between endpoints.
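
Conceptually, the broker owns the routing knowledge that would otherwise be duplicated in every sender. The following hypothetical C# sketch shows only that core idea; an integration server product adds transformation, correlation, and durable queuing on top of it.

  using System;
  using System.Collections;

  // A recipient endpoint that the broker can deliver messages to.
  public interface IEndpoint
  {
      void Deliver(string message);
  }

  // Minimal broker: senders address the broker, and the broker routes each
  // message to the recipients registered for that message type.
  public class MessageBroker
  {
      private Hashtable routes = new Hashtable();   // message type -> ArrayList of IEndpoint

      public void Subscribe(string messageType, IEndpoint recipient)
      {
          ArrayList recipients = (ArrayList)routes[messageType];
          if (recipients == null)
          {
              recipients = new ArrayList();
              routes[messageType] = recipients;
          }
          recipients.Add(recipient);
      }

      public void Send(string messageType, string message)
      {
          ArrayList recipients = (ArrayList)routes[messageType];
          if (recipients == null)
          {
              return;   // no registered recipients for this message type
          }
          foreach (IEndpoint recipient in recipients)
          {
              // A real broker would transform the message into each
              // recipient's expected format before delivering it.
              recipient.Deliver(message);
          }
      }
  }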

Finally, the team considered connecting multiple systems by using a Message Bus. A Message Bus (see Figure 6) requires each system on the bus to share a common data format, a common set of command messages, and a common infrastructure. A system sends a message to the Message Bus, and the Message Bus then transports the message to the other systems by way of the common infrastructure.

The members of the team liked the fact that after a Message Bus is built, adding another system to the bus imposes a negligible cost on the existing systems. As they thought further about implementation, they discussed different ways the common infrastructure might be built and soon found themselves in a heated debate over such issues as broadcast and Publish/Subscribe (Pub/Sub). They agreed to postpone further discussion of these issues unless and until they decided to incorporate a Message Bus into the actual design.

Figure 6. Four systems connected with a message bus

Now that the members of the team had brainstormed alternative integration topologies, they brought their attention back to the View Scheduled Payments use case. They knew there were many kinds of systems providing loan information to this use case. They also knew it was likely that the bank would acquire even more financial services companies in the future. These potential acquisitions represented even more sources of loan information to be integrated. They wanted the system to be flexible in its ability to handle these kinds of changes.

Adding a Message Broker for the Loan Systems

They decided to employ a Message Broker between Global Bank's data center and the remote data center housing the other loan systems. They intended to send a loan information request message to the broker, and the broker would then forward it to other systems interested in this type of message. As these systems responded with loan information, the broker would pull the information together and make it available as a consolidated whole.

By using a message queue to implement Message Broker, they would also create the kind of buffer they wanted between their data center and the far link that connected it to the remote data center.

Figure 7 shows how the members of the team modified their original diagram to include the message broker connecting the portal to the remote data center.

Figure 7. Connecting the portal to the remote data center

To show the dynamic nature of the system and to document how the system would realize the View Scheduled Payments use case, the team drew the collaboration diagram that is shown in Figure 8.

Figure 8. View Scheduled Payments collaboration diagram

The following is the flow of the use case that is shown in Figure 8:

  1. A customer browses to the online bill payment application.
  2. The Web server prompts the customer for a user name and password.
  3. The Web server authenticates the customer by using information retrieved from the directory server.
  4. The Web server sends an asynchronous request to the integration server asking for related loans.
  5. The Web server retrieves customer profile information from the CRM server.
  6. The Web server retrieves the customer's mainframe account number from the payment server.
  7. The Web server retrieves account balance information from the mainframe.
  8. The Web server retrieves a list of scheduled payments from the payment server.
  9. The Web server checks the integration server to see whether any loan information has been retrieved.
  10. The Web server builds the presentation, which displays account balance, scheduled payments, and customer profile information.
  11. If loan information is available, it appends this optional information to the presentation.
  12. The Web server returns the presentation code back to the browser.
  13. The browser renders the view.

So far, the members of the team had a pattern-based design model and a collaboration diagram that showed how the system would realize the View Scheduled Payments use case. They wanted one more model that showed the static nature of their system with well-defined high-level boundaries. To portray this view, they used a port-and-wire model, as shown in Figure 9. The outgoing ports are depicted as black squares, and the incoming ports are depicted as white squares.

Figure 9. View Scheduled Payments message flow

Although all the details were certainly not worked out for this use case, the members of the team felt that the use case was at a sufficient level of detail to proceed to the next use case. They would return to refine the design later, after exploring whether parts of this design would realize other use cases as well.

Execute Scheduled Payment Use Case

The next use case they considered was the Execute Scheduled Payment use case. To implement this use case, the system would:

  • Start up at a system-defined interval.
  • Retrieve the set of payments to be made on or before the current date.
  • For each payment, verify that there were sufficient funds in the payment account, and then debit the account for the payment amount.
  • Send the payment to an appropriate payment channel.

There were four kinds of payment channels currently in scope: domestic payments through a clearing house, electronic payment gateways using Society for Worldwide Interbank Financial Telecommunication (SWIFT) transactions, electronic payments to a manual fulfillment house, and account-to-account transfers within the bank's internal system.

Focusing on the Baseline Architecture

As the members of the team talked through this use case, they tried to avoid discussing domain-specific details that had more to do with business logic than technical architecture. Although they knew these details were important, they also realized that the purpose of the baseline architecture was to mitigate technical risk, not to fully refine the business requirements. They knew the requirements team was on track to do exactly that job, so they focused on the items that worried them the most from a technical perspective. They also deemphasized some of the use case areas that did not represent top technical challenges.

Payment Channels

One area of concern was the SWIFT payment Gateway.

Note   The Gateway pattern abstracts access to an external resource by presenting a single interface to the integrated applications while hiding the external resource interface. For more information, see "Gateway" later in this chapter.

The members of the team knew the requirements would include making international transactions to support their wealthiest clients, and for this they would use SWIFT transactions. They also knew there would be requirements for domestic payments, and for those payments they would use the existing system. It would be too expensive to pay a SWIFT transaction fee for domestic payments, especially when they already had an existing payment system.

The existing payment system was technically straightforward. It used a leased secure line for Point-to-Point Connection with a clearing house and secure file transfer. The bank and the clearing house exchanged files that contained both outgoing and incoming data records. This was a simple form of Data Integration that the bank had used for years. The team would use this system for domestic transfers. Because they understood it well, there was little reason to build and test this system early, so these details were omitted from the initial use case.

However, the SWIFT payment Gateway was a very different story. They would need to package the transaction securely in an XML message and use Web services to send it to the payment Gateway over the Internet. Because this part of the use case was new to the team and presented many technical risks, it was one of the top priorities for the baseline architecture. They wanted to build it early and test it.

Using Domain Knowledge to Guide Design Decisions

Because many members of the team had been in banking for years, they naturally brought their business knowledge into the design sessions. Although this business knowledge was invaluable, the team had to sort out what was relevant for the baseline architecture and what was not. This sorting was, of course, a judgment call, but it was necessary for the team to stay focused on mitigating the most important technical risks first.

For example, the members of the team knew that any time the bank initiated a payment through an external party such as a clearing house or a payment Gateway, the confirmation would be delayed. The rules of double-entry accounting would not allow funds to be in limbo during this period. Therefore, a holding account would have to be credited at payment initiation and debited upon payment confirmation. This would keep the system in balance at all times.

Although implementing a holding account was critical to the final use case, it was not critical for the early baseline architecture. The team was proficient at enlisting debits and credits in the same transactions across most of the systems in the bank. They did not consider this to be a technical risk. Therefore, the team decided to defer the implementation of this logic until after the requirements team defined the specific holding accounts to use.

Using SWIFT Gateway for the Baseline Architecture

The rules to determine the right payment channel were straightforward. When a customer scheduled a payment, the customer could select either a domestic or an international payment. If the payment were domestic, the customer would provide an American Bankers Association (ABA) routing number for the intended payee. If this field were left blank, the system would send an electronic payment to a company that specialized in paper check writing and mailing services (a manual fulfillment house). If the field were not blank, the system would check the routing number against a list of internal banks. If the numbers matched, the system would make the payment by transferring money internally from one account to another. If the routing number were valid but did not match the internal banks, the standard domestic payment system would make the payment by secure file transfer. Finally, payments marked as international would use the SWIFT payment Gateway. Whichever channel the system sent a payment to, it would receive a system-based acknowledgment that the message had been received.
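
The following C# sketch captures those channel-selection rules. The type names, and the way the list of internal routing numbers is supplied, are illustrative assumptions rather than details of the Global Bank code.

  using System;

  public enum PaymentChannel
  {
      ManualFulfillmentHouse,   // no ABA routing number supplied
      InternalTransfer,         // routing number matches one of the bank's own banks
      DomesticClearingHouse,    // valid routing number for an external domestic bank
      SwiftGateway              // payment marked as international
  }

  public class PaymentRouter
  {
      private string[] internalRoutingNumbers;

      public PaymentRouter(string[] internalRoutingNumbers)
      {
          this.internalRoutingNumbers = internalRoutingNumbers;
      }

      public PaymentChannel SelectChannel(bool isInternational, string abaRoutingNumber)
      {
          if (isInternational)
          {
              return PaymentChannel.SwiftGateway;
          }
          if (abaRoutingNumber == null || abaRoutingNumber.Trim().Length == 0)
          {
              return PaymentChannel.ManualFulfillmentHouse;
          }
          if (Array.IndexOf(internalRoutingNumbers, abaRoutingNumber) >= 0)
          {
              return PaymentChannel.InternalTransfer;
          }
          return PaymentChannel.DomesticClearingHouse;
      }
  }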

To simplify the initial use case, the members of the team omitted any routing to their domestic payment system and instead routed these payments through the SWIFT Gateway for test purposes. This exercised the SWIFT Gateway by using test data based on domestic accounts rather than international accounts. It would be easy to add international routing and test data later, but they wanted to pressure test the Gateway payment mechanisms early.

As they continued to walk through the use case flow, the members of the team realized that a key element was missing. Currently, the system would receive an acknowledgment that the payment message was sent, but how would the system know whether the payment was received by the intended payee? What would happen if the payment Gateway or manual fulfillment house could not pay the payee? These questions led them to the Receive Payment Response use case.

Designing for Execute Scheduled Payment and Receive Payment Response

The Receive Payment Response use case described the behavior of the payment Gateway and the manual fulfillment house after they processed the payment request. In this use case, these payment channels returned the result of their processing to Global Bank's system. If the payment was successful, the payment status and transaction ID were updated in the payment system. If the payment failed, a compensating transaction to credit the account was first issued to the mainframe and then status and ID fields were updated accordingly in the payment system.

Because of the close relationship between Execute Scheduled Payment and Receive Payment Response, the team decided to evaluate Process Integration for both use cases.

Process Integration

Process Integration adds a layer of software on top of other applications and services to coordinate the execution of a long-running business function, as shown in Figure 10.

Figure 10. Process Integration, a coordinating layer above other applications and services

The members of the team knew they would need a layer like this to coordinate the two use cases, and they discussed the best way to design it. Some members of the team suggested an integration server, because integration servers often include orchestration tools for this purpose. Other members thought an integration server was excessive and wanted to build a custom coordinating layer by encapsulating process and activity components. After some debate, the team chose the integration server approach. They reasoned that the bank was likely to continue adding financial services and external partners, and that these services and partners would also need Process Integration capabilities. Although an integration server might initially cost them some time for installation and training, the cost would be more than repaid through reduced development time and the overall flexibility of the system.

The members of the team updated their design model to incorporate Process Integration, as shown in Figure 11. Notice that Process Integration needs to communicate with the message broker and the payment systems, but it does not need to connect directly to the portal.

Figure 11. Incorporating Process Integration into the baseline architecture

Message Broker for Payment Channels

Although Process Integration would handle the orchestration needs of long-running transactions, the members of the team knew that each payment channel was likely to need a different message format, thus requiring transformation code. They would need a SWIFT-compliant XML schema in the case of the payment Gateway and a more generic XML schema in the case of the manual fulfillment house. Worse, they anticipated that the bank would add more external partners who would use more message formats in the future. To avoid duplicating this transformation logic across the system and to take advantage of transformation tools, they decided to use a Message Broker as an intermediary between Global Bank's system and their trading partners' systems.

Like the other Message Broker in this design, the queue-based message broker implementation would buffer the somewhat unreliable connections between systems.

Message Broker Using Functional Integration with SOI

Even though they decided to use a Message Broker to communicate with trading partners, they still had to decide how to connect the Message Broker to the target systems. Message Brokers can use Data Integration to connect at the logical data layer, for example by exchanging files through File Transfer Protocol (FTP). Or, Message Brokers can use Functional Integration to connect at the business logic layer, for example by using Web services.

The members of the team knew there were many ways to share functionality. The three most common methods are distributed objects (.NET Framework remoting, COM+, Common Object Request Broker Architecture [CORBA], and Remote Method Invocation [RMI]); proprietary message-oriented middleware; and Web services. Some of the team members came from large enterprises where they had built logical services on top of proprietary message-oriented middleware. This approach had worked well for them in the past. However, all the members of the team were intrigued by the possibility of using Web services because of the potential interoperability between platforms and the support of major platform vendors. Not surprisingly, they decided to connect with partners by using a kind of Functional Integration based on Web services: Service-Oriented Integration (SOI).

To record their design decisions, the members of the team modified their design model to include an additional message broker and the use of Service-Oriented Integration, as shown in Figure 12. They also rationalized the communication lines with a common bus to make the model more readable.

Figure 12. Incorporating Message Broker and Service-Oriented Integration for connections with trading partners

Models for Execute Scheduled Payment and Receive Payment Response

In addition to the pattern-based design model, the team decided to create a collaboration diagram for the Execute Scheduled Payment use case, as shown in Figure 13.

Figure 13. Execute Scheduled Payment collaboration diagram

The following is the flow of the use case that is shown in Figure 13; a brief code sketch of these steps follows the list:

  1. A system scheduler in the integration server initiates this use case and begins to execute the payment.
  2. The integration server requests the list of payments to make from the payment system.
  3. For each payment, the integration server checks the account balance in the mainframe. The integration server debits the account if sufficient funds exist.
  4. The integration server retrieves the appropriate security credentials for the message exchange.
  5. The integration server sets the routing information, transforms the message to the format understood by the recipient, and then sends the message.
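
A C# sketch of these steps appears below. In the actual design this logic lives in the integration server's orchestration rather than in hand-written code, and the interfaces shown are illustrative placeholders for the payment system, the mainframe Gateway, and the channel-facing Message Broker.

  using System;
  using System.Collections;

  public class ScheduledPayment
  {
      public string AccountNumber;
      public decimal Amount;
  }

  public interface IPaymentStore
  {
      ArrayList GetPaymentsDueOnOrBefore(DateTime cutoff);   // ScheduledPayment items
  }

  public interface IAccountSystem
  {
      bool TryDebit(string accountNumber, decimal amount);   // false if insufficient funds
  }

  public interface IPaymentChannelBroker
  {
      void Send(ScheduledPayment payment);   // attaches credentials, sets routing, transforms, sends
  }

  public class ScheduledPaymentProcessor
  {
      private IPaymentStore payments;
      private IAccountSystem accounts;
      private IPaymentChannelBroker broker;

      public ScheduledPaymentProcessor(IPaymentStore payments,
                                       IAccountSystem accounts,
                                       IPaymentChannelBroker broker)
      {
          this.payments = payments;
          this.accounts = accounts;
          this.broker = broker;
      }

      // Invoked by the system scheduler at the configured interval (step 1).
      public void Run(DateTime asOf)
      {
          foreach (ScheduledPayment payment in payments.GetPaymentsDueOnOrBefore(asOf))   // step 2
          {
              if (!accounts.TryDebit(payment.AccountNumber, payment.Amount))              // step 3
              {
                  continue;   // insufficient funds; error handling omitted from the sketch
              }
              broker.Send(payment);                                                       // steps 4 and 5
          }
      }
  }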

To show a static view of the system with boundaries, they created a port-and-wire drawing, as shown in Figure 14.

Figure 14. Execute Scheduled Payment use case realization

Because the Receive Payment Response use case was closely related to the Execute Scheduled Payment use case, the team created a collaboration diagram for this use case as well, as shown in Figure 15.

Figure 15. Receive Payment Response collaboration diagram

The following is the flow of the use case that is shown in Figure 15; a brief code sketch of these steps follows the list:

  1. The precondition for this use case is that a payment message has been sent to one of the payment recipients: the SWIFT payment gateway, the manual fulfillment partner, or an acquired bank.
  2. After processing the payment request, the payment recipient sends a payment response to the integration server.
  3. The integration server correlates the response to the originating request.
  4. If the payment failed, the integration server credits the customer account on the mainframe.
  5. The integration server updates the payment record in the payment system with status and transaction ID.
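
The sketch below mirrors these steps in C#. The types are again illustrative placeholders; in the running system the integration server performs the correlation and reaches the mainframe and the payment system through the connections described earlier.

  using System;

  public class PaymentResponse
  {
      public string CorrelationId;   // ties the response back to the original request
      public bool Succeeded;
      public string TransactionId;
  }

  public class OriginalPayment
  {
      public string AccountNumber;
      public decimal Amount;
  }

  public interface IPaymentRecordStore
  {
      OriginalPayment FindByCorrelationId(string correlationId);
      void UpdateStatus(string correlationId, string status, string transactionId);
  }

  public interface IAccountGateway
  {
      void Credit(string accountNumber, decimal amount);   // compensating transaction
  }

  public class PaymentResponseHandler
  {
      private IPaymentRecordStore records;
      private IAccountGateway accounts;

      public PaymentResponseHandler(IPaymentRecordStore records, IAccountGateway accounts)
      {
          this.records = records;
          this.accounts = accounts;
      }

      public void Handle(PaymentResponse response)
      {
          OriginalPayment original = records.FindByCorrelationId(response.CorrelationId);  // step 3

          if (!response.Succeeded)
          {
              // Step 4: compensate by crediting back the amount debited earlier.
              accounts.Credit(original.AccountNumber, original.Amount);
          }

          // Step 5: record the outcome and the channel's transaction ID.
          records.UpdateStatus(response.CorrelationId,
                               response.Succeeded ? "Paid" : "Failed",
                               response.TransactionId);
      }
  }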

Just as they did for the previous use case, the members of the team also produced a port-and-wire diagram for Receive Payment Response, as shown in Figure 16.

Figure 16. Receive Payment Response use case realization

Accessing Account Services on the Mainframe

As the members of the team reviewed the Receive Payment Response use case, they realized there was still a key issue to resolve. Both this use case and the View Scheduled Payments use case needed to access the mainframe by using Functional Integration, although exactly how that would be done was still unclear. There were clear differences in application programming models and potential differences in network protocols that had to be resolved, not to mention security and transactions. How would the team manage this complexity without letting it overcomplicate the design? One team member suggested a Gateway.

Gateway

A Gateway is a design element that encapsulates outbound access to an external system. There are Gateways at the application level that are usually implemented as classes. For more information, see Enterprise Solution Patterns Using Microsoft .NET [Trowbridge03] or Martin Fowler's Patterns of Enterprise Application Architecture [Fowler03]. There are also Gateways at the integration level that are usually implemented as processes or subsystems. Given the CTO's constraints, the members of the team knew that the solution platform would be built on Microsoft technology, while the mainframe ran an IBM CICS system. They decided to employ a Gateway to bridge the communication and programming-model differences between these technologies.

Although the team decided to use a Gateway to bridge technologies, the team still needed to decide how to connect the application to the Gateway. One method was to connect the Web server directly to the mainframe Gateway, which would require the fewest network hops but would place a mainframe connection in the perimeter network (also known as DMZ, demilitarized zone, or screened subnet). Although this direct connection was likely to be fast, it would require deploying the connection to every Web server used for this purpose. It also made the team nervous that a hacked Web server could be used to gain mainframe access.

Another choice was to wrap the mainframe Gateway with a Service Interface [Trowbridge03] by using Web services and then to have the ASP.NET pages on the Web servers call the Service Interface. The Service Interface would then access the mainframe through the Gateway. The additional network hops and serialization would have performance implications, but this approach would also have the advantage of exposing the mainframe functionality (Functional Integration) to other applications in the enterprise by using a platform-independent connection (Service-Oriented Integration). To secure this connection, the members of the team considered a Web Services Security (WS-Security) implementation, but they realized there would be a performance tradeoff for the necessary encryption and decryption routines.
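
A minimal sketch of the second option follows, assuming an ASP.NET (ASMX) Web service for the Service Interface. The service and gateway names, the namespace URI, and the hard-coded return values are illustrative only.

  using System;
  using System.Web.Services;

  // Service Interface: exposes account operations as a platform-independent
  // Web service that any application in the enterprise can call.
  [WebService(Namespace = "http://globalbank.example.com/account")]
  public class AccountServiceInterface : WebService
  {
      [WebMethod]
      public decimal GetAccountBalance(string accountNumber)
      {
          return new MainframeAccountGateway().GetBalance(accountNumber);
      }

      [WebMethod]
      public bool DebitAccount(string accountNumber, decimal amount)
      {
          return new MainframeAccountGateway().Debit(accountNumber, amount);
      }
  }

  // Subsystem-level Gateway: hides the connection, protocol, and
  // programming-model differences between .NET and the CICS host.
  public class MainframeAccountGateway
  {
      public decimal GetBalance(string accountNumber)
      {
          // Invoke the CICS inquiry transaction here (for example, through a
          // host-integration component); a fixed value stands in for the result.
          return 0m;
      }

      public bool Debit(string accountNumber, decimal amount)
      {
          // Invoke the CICS debit transaction here; return false when the
          // account has insufficient funds.
          return true;
      }
  }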

Capturing the design decisions made so far, the members of the team modified their design model to reflect the Gateway and Service Interface patterns shown in Figure 17.

Figure 17. Incorporating the Gateway and Service Interface to communicate with the mainframe

Performance vs. Extensibility Tradeoff

The team knew that the account system on the mainframe was a key system to the enterprise and that many other systems and services would need to use it. Because so many systems depended on it, they also knew that performance was important.

Based on these considerations, the members of the team created test request and response messages with realistic payloads, and they created three test points that were designed to measure relevant performance.

The first test point ran from the Web server directly to the Gateway system. The second test point ran from the Web server to a Web services–based Service Interface; the Service Interface then used a Gateway system to call the mainframe. Finally, the last test point used the same Web services–based Service Interface but added WS-Security. The team stressed the system to find out where transaction rates, concurrent user counts, and response times flattened. They would need this information to compare against the operational requirements being captured by the project team. Comparing the actual performance against the requirements would help them determine how best to meet the requirement for the system to handle up to three times the anticipated load. Ultimately, it would help them make the necessary tradeoffs between performance and flexibility.

At this point, the team felt they had worked out most of the necessary resource connections and communication mechanisms to meet the current use cases. They now turned their attention to the portal Web application itself.

The Portal Web Application

To refine the portal application, the team needed to decide the identification, authentication, and authorization mechanisms the Web application would use to identify a customer and authorize access. They decided to use Intercepting Filter and the Active Directory® directory service.

Following the steps of the View Scheduled Payments use case, a customer uses a Web browser to go to the Global Bank Web site, where an Intercepting Filter intercepts the Web request and prompts the user for a user name and password. The Web server requests the user's credentials from the directory server; the directory server authenticates the user, associates the user with a role, and then returns a set of credentials to the Web server.
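
A hypothetical sketch of the Intercepting Filter as an ASP.NET HTTP module is shown below. The module name and the redirect target are assumptions, and the directory lookup itself is indicated only by a comment.

  using System;
  using System.Web;

  // Intercepting Filter: sees every request before the requested page runs.
  public class AuthenticationFilterModule : IHttpModule
  {
      public void Init(HttpApplication application)
      {
          application.AuthenticateRequest += new EventHandler(OnAuthenticateRequest);
      }

      public void Dispose()
      {
      }

      private void OnAuthenticateRequest(object sender, EventArgs e)
      {
          HttpApplication application = (HttpApplication)sender;

          if (application.Context.User == null ||
              !application.Context.User.Identity.IsAuthenticated)
          {
              // Prompt for a user name and password, validate them against the
              // directory server (for example, through System.DirectoryServices),
              // and associate the user with a role before letting the request through.
              application.Context.Response.Redirect("Login.aspx");
          }
      }
  }

Such a module would be registered in the application's Web.config file under the httpModules element so that it runs for every request.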

Figure 18 shows how the members of the team modified their pattern-based design model to document these decisions.

Figure 18. Adding Intercepting Filter and Directory Services to the design model

Refining the portal application required the use of other related patterns that solve other problems beyond integration. These problems appear almost any time you want to build a Web-based application.

Application vs. Integration Patterns

The Intercepting Filter pattern just introduced into the Global Bank design is not part of this Integration Patterns guide. Neither are the Service Interface pattern and class-level gateways, such as Service Gateway, which also become part of the design later in this chapter. These other patterns come from Enterprise Solution Patterns Using Microsoft .NET, which is a recommended prerequisite to this guide. Additionally, the Data Integration pattern later in this guide refers to Data Patterns, another previously released patterns guide. Figure 19 shows the relationship between these three guides.

Figure 19. Relationship between Integration Patterns and other Microsoft patterns guides

For more information about previously released patterns, refer to the guides and their references. Now let's return to the Global Bank scenario.

Global Bank Portal Application

In the View Scheduled Payments use case, after the Web server obtains and validates security credentials, the Web server uses these credentials to issue an asynchronous request for loan information from within a synchronous method. This approach is the Half Synch/Half Asynch pattern. Next, the Web server uses a Gateway object to access the mainframe's Gateway subsystem and retrieve an account balance.

Note   The Gateway in this case is a class-level or object-level Gateway, whereas the Gateway that translates network and programming model calls to the mainframe is a subsystem-level Gateway. For information about class-level Gateways, see Service Gateway in Enterprise Solution Patterns Using Microsoft .NET and Table Data Gateway in Martin Fowler's Patterns of Enterprise Application Architecture. Subsystem-level Gateways are described later in this guide.

The Web server then uses a Gateway object to call the CRM system, which has encapsulated its functionality with a Web services–based Service Interface. Finally, the Web server checks to see whether the asynchronous request has returned a loan information response. After all this data is retrieved, the Web server builds a presentation that displays all the requested information. If the loan system request returned any entries, the presentation appends them to the display as optional information.
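
A hypothetical sketch of this page logic follows. The gateway interfaces stand in for the connections described above (the integration server, the CRM Service Interface, the payment system, and the mainframe Gateway) and are not part of the actual Global Bank code.

  using System;
  using System.Web.UI;

  // Illustrative gateway abstractions for the systems the page aggregates.
  public interface ILoanInfoGateway
  {
      string BeginLoanRequest(string customerId);          // asynchronous; returns a correlation ID
      string[] TryGetLoanResponse(string correlationId);   // null if no response has arrived yet
  }

  public interface ICrmGateway     { string GetProfileSummary(string customerId); }
  public interface IPaymentGateway { string[] GetScheduledPayments(string customerId); }
  public interface IAccountGateway { decimal GetBalance(string customerId); }

  public class ViewScheduledPaymentsPage : Page
  {
      // In practice these would be supplied by configuration or a factory.
      public ILoanInfoGateway Loans;
      public ICrmGateway Crm;
      public IPaymentGateway Payments;
      public IAccountGateway Accounts;

      protected override void OnLoad(EventArgs e)
      {
          base.OnLoad(e);
          string customerId = User.Identity.Name;

          // Half asynch: fire the loan information request first...
          string correlationId = Loans.BeginLoanRequest(customerId);

          // ...half synch: make the blocking calls while that request is in flight.
          string profile = Crm.GetProfileSummary(customerId);
          decimal balance = Accounts.GetBalance(customerId);
          string[] scheduled = Payments.GetScheduledPayments(customerId);

          // The loan information is optional; include it only if it has arrived.
          string[] loans = Loans.TryGetLoanResponse(correlationId);

          // Build the presentation from profile, balance, scheduled, and loans here.
      }
  }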

Figure 20 shows how the members of the team updated the pattern-based design model to document these decisions.

Figure 20. Adding Gateways to the design model

Implementing the Global Bank Scenario

If you consider all of the patterns in Figure 20 as high-level design elements (unbound) and walk through the use cases presented so far, you can see how these elements collaborate at an abstract level to realize these use cases. To refine this design further, you must map these patterns to an implementation platform. Doing so usually requires additional iterations across the design because the chosen implementation may constrain or enable certain pattern-based design decisions.

For example, some of the implementation decisions will be constrained by decisions that the enterprise has already made. To understand these constraints, you need to understand the current technical architecture within your enterprise and make an intelligent initial allocation of functionality to server types. The members of the Global Bank team did this for their scenario and arrived at the model of their technical architecture that is shown in Figure 21.

Figure 21. Initial Global Bank network diagram with server types

To refine this model further, the members of the team needed to decide which platform or platforms to build their system on. Given the CTO's constraints, some of the platform infrastructure decisions were easy. For example, the account information of record would reside on the mainframe, which ran the IBM OS/390 operating system. The acquired banks ran systems based on WebSphere Java 2 Enterprise Edition (J2EE). The rest of the systems would be based on the Microsoft platform, as shown in Figure 22.

Figure 22. Mapping the baseline architecture and patterns to technologies

Notice the mix of Microsoft and third-party technologies, as well as the question mark associated with the Microsoft CRM system. The question mark indicates that the Global Bank team is still testing whether this system meets its requirements. The team will compare the performance test results with the actual operational requirements obtained from the project team to see if this implementation meets its performance needs.

Later in this guide, you will find these patterns described in detail, along with implementation patterns that map them to Microsoft technologies.

Note   The preview release of this guide does not document all the patterns that appear in Figure 22. For example, only some architecture and design patterns have matching implementation patterns. However, some patterns have multiple implementation patterns, such as Implementing Service-Oriented Integration with ASP.NET and Implementing Service-Oriented Integration with BizTalk Server 2004.

Next Chapter

This chapter showed how the Global Bank team applied patterns to design their baseline architecture. The description moved quickly over a wide range of patterns and technical challenges. The next chapter is the first of three pattern cluster chapters that describe these patterns in greater detail. Chapter 7, "A Language of Patterns," uses a visual model to tie these patterns and their relationships together and to explain how the Global Bank team used this visual model to guide them through their design choices.



© Microsoft Corporation. All rights reserved.