Value-Driven Architecture: Linking Product Strategy with Architecture
Charlie Alfred, Technical Director
June 2005
Applies to:
Enterprise architecture
Value modeling
Summary: Use value models to create effective high-level architecture strategies with information provided in this article. (15 printed pages)
Contents
Introduction
Requirements Are Ineffective Compasses
Value Model Overview
Utility Curves
Architecture Challenges
Architecture Strategy
Integration with Existing Development Methods
Conclusions
References
About Foliage Software Systems
Footnotes
Introduction
Systems exist to generate value for their stakeholders. Unfortunately, this ideal is often met only to a limited degree. Current development methods, such as waterfall, spiral, and agile, often provide incomplete and inadequate direction to stakeholders, architects, and developers.
This paper introduces two essential concepts: value models and architecture strategy, which are missing from many development processes. The creation of well-defined value models provides direction that improves the quality of trade-off decisions, especially in systems that are deployed to many users in various settings. The existence of a clearly stated architecture strategy provides a coherent high-level direction for the system, in the same way that the United States Constitution does for its nation. Finally, this paper will show how these two concepts can be integrated effectively with waterfall, spiral, or agile methods.
Requirements Are Ineffective Compasses
Our current ways of building complex software-intensive systems are ineffective. This is not the same as saying that they are inadequate. Many systems built using waterfall, spiral, or agile methods are deployed successfully and are able to satisfy their stakeholders. However, many are not, and for reasons that are correctable.
Traditional processes for building software-intensive systems, like waterfall and spiral methods, rely on requirements to provide direction. A common misconception is that requirements are statements that describe the problem. According to Greenfield and Short [1], they aren't. They define the solution from the perspective of the users and system sponsor.
Requirements have some notable shortcomings:
- Requirements typically use a binary structure. They function like pass/fail grades in a college course, and provide little if any help in making trade-off decisions. Of course, these trade-off decisions must be made at some point in the process. Often they are made implicitly, and without full consideration of the implications.
- Requirements are frequently used as the basis for specifying testable acceptance criteria for a system. In the process of making them specific, important design decisions are made implicitly, without full consideration of the implications. Eventually, these decisions must either be reversed at significant cost, or they end up limiting the potential of the system.
- Requirements tend to treat all individuals of a given user-type the same. For example, use case scenarios for a medical system might refer to physicians and nurses, while those for a real estate system might refer to buyer, seller, agent, and lender. The problem is two physicians aren't the same, and they aren't necessarily satisfied by the same things. There is a good reason why popular restaurants have many entrees on the menu.
- The information needed to make effective software architecture decisions is often left unstated. All systems are deployed in environments that place significant obstacles in their path. Overcoming these obstacles is the responsibility of every system, and succeeding in spite of them is the mark of an effective system. However, unless developers have an extremely deep understanding of the problem domain, they lack the acumen to make good judgments about these obstacles. At the same time, influential users and system sponsors usually do have this experience, but often lack the technology or systems expertise to know when it is needed.
Figure 1. System Agility
Agile methods like Extreme Programming (XP) and Scrum take a slightly different approach. These methods emphasize some useful changes, such as close collaboration between stakeholders and developers, and very short project iterations to get continual feedback. The theory is that continuous interaction between stakeholders and developers is a more reliable mechanism for project navigation than a big up-front investment in written requirements.
In addition, agile methods tend to favor more organic, reactive approaches (refactoring) over more prescriptive guidance (architecture). Proponents of agile methods speak of allowing the architecture of a system to evolve. In some situations, this approach can be effective. One example is when user needs or competitive conditions change rapidly. However, there are many cases where this approach can be risky. One in particular is when a product must be developed to run in many different environments and/or satisfy stakeholders with different needs and priorities.
The main issue with waterfall, spiral, and agile approaches is that software development often proceeds without some very critical information, and without the tools needed to gather it. A seaworthy boat, a working radio, and a complete set of sails are all necessary, but not necessarily sufficient. An experienced sailor wouldn't think of leaving port without a good set of nautical maps, a long-range weather forecast, and a reliable way of tracking the boat's location.
This paper will discuss two processes: value modeling and architecture strategy. It will show how effective use of these techniques will:
- Capture essential information about the problem domain that enables users and developers to make effective tradeoffs.
- Permit significant obstacles to success to be identified and prioritized.
- Enable the architecture strategy to be expressed in a clear, concise way that can be understood by all stakeholders.
Value Model Overview
Purposeful systems are developed to create value for their stakeholders. In most cases, this value is perceived to be beneficial because these stakeholders play important roles in other systems. In turn, these other systems exist to create value for their stakeholders. This recursive quality of systems is one key in the analysis and understanding of value flows. The next section (Discovering Value Models) discusses this point in more depth.
Three traits are vital to a purposeful system, and they are at the heart of a value model. In order to identify and work with them more easily, we need to reduce each one down to an elemental form:
Value Expectation expresses a need for a particular feature, including what is provided (capabilities), how well they are provided (quality attributes), and how beneficial various levels of quality are (utility function). For example, a car driver might have a value expectation for how quickly and safely the vehicle can stop from a speed of 60 miles per hour.
Opposing Force represents some natural or imposed force in the environment where a system is deployed that makes satisfying a value expectation well more difficult. For example, how effectively a car can stop from 60 MPH depends on the type of surface (pavement versus gravel), slope (up or downhill), conditions (dry, wet, ice), and the weight of the vehicle.
Change Catalyst represents some force or event in the environment that causes value expectations to shift, or limiting factors to have a different impact. For example, decreases in memory chip costs and increases in storage density became a catalyst for digital photography.
For the remainder of this paper, we shall refer to opposing forces and change catalysts as limiting factors, and we'll refer to all three collectively as value drivers.
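To make these definitions concrete, the following minimal Python sketch treats the three value drivers as plain data structures. The class names, fields, and the braking-distance utility curve are illustrative assumptions introduced here, not notation from the paper.

```python
# Illustrative sketch only: the names, fields, and the assumed braking utility
# curve below are hypothetical, not definitions taken from the paper.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ValueExpectation:
    capability: str                    # what is provided
    quality_attribute: str             # how well it is provided (measured result)
    utility: Callable[[float], float]  # how beneficial each level of the result is
    priority: float = 1.0              # relative importance to the stakeholder

@dataclass
class OpposingForce:                   # a limiting factor
    description: str                   # e.g., "wet or icy road surface"
    affects: List[str] = field(default_factory=list)

@dataclass
class ChangeCatalyst:                  # also treated as a limiting factor
    description: str                   # e.g., "falling memory chip costs"
    shifts: List[str] = field(default_factory=list)

# Example value expectation: stopping from 60 MPH, valued on a 0.0-1.0 scale.
braking = ValueExpectation(
    capability="stop the vehicle from 60 MPH",
    quality_attribute="stopping distance (feet)",
    utility=lambda feet: max(0.0, min(1.0, (180.0 - feet) / 60.0)),  # assumed curve
    priority=0.9,
)
```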
If a system is to be effective at satisfying the value models of its stakeholders, those models first need to be identified and analyzed. Traditional approaches, like use case scenarios or business/marketing requirements, start by focusing on the types of actors with which the system interacts. This approach has several major limitations:
- It focuses more on what things the actors do, and less on why they do them.
- It tends to stereotype actors into categories, where all individuals of a type are essentially the same (traders, portfolio managers, or system administrators, for example).
- It tends to ignore differences in limiting factors (for example: Is an equity trader in New York the same as one in London? Is trading at market open the same as trading during the day?).
- It is based on binary outcomes: the requirement is met or it isn't. The use case completes successfully or it doesn't.
There is a very logical, practical reason why this approach is popular. It uses sequential and classification-based reasoning, so it is easy to teach and explain, and it can produce a set of objectives that are easy to verify. Of course, if simplicity were the only goal that counted, we'd all still be walking or riding horses to get from one place to another.
Discovering Value Models
In his book Competitive Advantage, Michael Porter [7] discusses the concept of value chains in the context of corporate strategic planning:
"Although value activities are the building blocks of competitive advantage, the value chain is not a collection of independent activities, but a system of interdependent activities. Linkages are relationships between the way that one value activity is performed, and the cost or performance of another.
"Linkages exist not only within a firm's value chain (horizontal linkages), but between a firm's value chain and the value chains of suppliers and channels (vertical linkages). The way that supplier or channel activities are performed affects the cost or performance of a firm's activities (and vice versa)."
If one thinks of a firm (or a supply chain) as a system, and each major value activity (procurement, receiving, manufacturing, and so on) as a subsystem, then we can generalize the notion of value chains and linkages:
- Each entity (value activity) has its own value model to represent its value expectations and limiting factors.
- Each linkage describes how the value model of one entity dovetails with the value model of the entity with which it is linked.
- Each linkage between two entities in the same system is what Porter refers to as a horizontal linkage. Each linkage between entities in different systems is a vertical linkage.
Porter also refers to the concept of differentiation, where two entities performing the same set of value activities behave differently. A simple example might be a taxi versus a municipal bus. While both provide ground transportation, these two contexts have different features. The bus is relatively inexpensive and follows a predetermined route and schedule. The taxi is available on demand (except for when you really need one), operates point-to-point, is more expensive, and holds a limited number of passengers. When it is raining, the extra cost of a taxi might not matter as much.
For the rest of this paper, we will use the term value cluster to refer to an abstract entity that performs a general type of value activity. Value context will be used to refer to a specialized form of a value cluster that has significant differences in value expectations, opposing forces, or change catalysts from other contexts in the same cluster.
Both value clusters and value contexts have their own value models. The value model of a cluster represents the common aspects of all contexts that specialize that cluster. Each value context specializes the value model of its cluster. The set of value models for all contexts in a cluster provide important insights into the differences between what each one expects, and how it is affected by its environment.
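As a rough illustration of this specialization relationship, the sketch below treats a value model as nothing more than a mapping from value expectations to priorities; each context starts from its cluster's model and overrides or adds expectations where it differs. The ground-transportation names and numbers are hypothetical, echoing the taxi-versus-bus example above.

```python
# Hypothetical sketch: a value model reduced to {expectation: priority}.
from typing import Dict

def specialize(cluster_model: Dict[str, float],
               context_overrides: Dict[str, float]) -> Dict[str, float]:
    """A value context inherits its cluster's value model, then overrides or
    adds the expectations where it differs."""
    model = dict(cluster_model)      # common aspects shared by every context
    model.update(context_overrides)  # context-specific differences
    return model

ground_transportation = {"reach the destination": 0.9, "low cost": 0.5, "comfort": 0.4}

bus = specialize(ground_transportation,
                 {"low cost": 0.9, "predictable schedule": 0.7})
taxi = specialize(ground_transportation,
                  {"on-demand availability": 0.9,
                   "point-to-point routing": 0.8,
                   "low cost": 0.3})
```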
Why is this important? A system's architecture must perform a delicate balancing act involving its value drivers. This can be tricky even in a single-context system, where all deployment scenarios have equivalent value expectations and limiting factors. Toasters and AA batteries are good examples of single-context systems. So are simple text editors, file difference analyzers, and many other PC desktop utilities. In a single-context system, it is still possible to have interdependencies and conflicts among combinations of value expectations and limiting factors.
However, it gets more challenging. Most complex systems have multiple contexts. In other words, as you consider different deployment environments, they have significant variation in value expectations, opposing forces, and change catalysts. As either the number of contexts increases, or their degree of compatibility decreases, it becomes much more difficult to satisfy all of them with a single architecture. While there are several techniques for dealing with this situation, the first step is to recognize when you face them.
Many systems have only a few contexts. These occur most often with systems that are deployed for internal use inside an organization. Different deployment environments can have different limiting factors. For example, a system for dispatching airline baggage handlers is affected by weather extremes, or an international system is affected by local regulations. Other times, deployment environments have different value expectations. This is especially true when there are international or cultural differences. Nurses who operate haemodialysis machines for patients with chronic kidney failure in a government sponsored hospital in Europe will have different wants and priorities than nurses who perform the same task in a small, private walk-in clinic in the U.S. (where private insurance providers pay for treatments).
Many other systems have a large number of contexts. These occur most frequently with technology-centric products that are developed for sale or lease to a wide array of customers. The same conditions that cause variation in few-context systems occur here in spades, because:
- The number of deployment contexts can be thousands or millions of times larger,
- The organizations (or systems) in which the stakeholders participate can have very different sets of value expectations, and
- The catalysts that trigger significant change in each deployment environment are likely to be very different.
In summary, a value model captures the drivers that determine how satisfied a particular market segment is, and how difficult it will be to satisfy them.
Utility Curves
The previous section made reference to an important concept called a utility curve. Very simply, a utility curve is a mapping from one scale of measurement to a second. The first scale represents a result variable that can be quantified. The second scale is the level of value (satisfaction, utility) that is generated. The most common example of a utility curve is one used to map test scores into letter grades for a high school or college exam. As we will show, a good grasp of utility curves is absolutely essential to making effective tradeoff decisions.
Figure 2 illustrates a simple example. The first scale represents the EPA combined city and highway fuel economy for a vehicle. The second scale represents five qualitative values (see footnote 1):
Figure 2. Utility Curves
Worst: The minimum passable requirement. Little or no value is lost with results below this level.
Adequate: This result represents a below average outcome, disappointing, but acceptable.
Satisfactory: This is the expected outcome—no better, no worse.
Preferable: This result represents an above average outcome, satisfying and pleasing, but not far above the range of ordinary.
Best: The best expected outcome. Little or no value is gained with results that exceed this.
The figure shows three distinct utility curves. There are many other possible shapes; these represent three common ones. The first curve is linear, the second has an s-curve shape, and the third is a parabola. All three have the exact same worst and best values. The interesting differences lie in the intermediate values. An increase from 10 to 20 MPG yields 10 percent of the available value for the s-curve, but 60 percent for the parabola.
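The sketch below shows one way to code such curves. The 10 and 50 MPG endpoints and the particular formulas (a logistic s-curve and a concave parabola) are assumptions made for this example; they illustrate the shapes but will not reproduce the exact percentages of Figure 2.

```python
# Assumed endpoints and curve formulas; illustrative only.
import math

WORST_MPG, BEST_MPG = 10.0, 50.0

def normalize(mpg: float) -> float:
    """Clamp the raw measure into [0, 1] between the worst and best levels."""
    return max(0.0, min(1.0, (mpg - WORST_MPG) / (BEST_MPG - WORST_MPG)))

def linear(mpg: float) -> float:
    return normalize(mpg)

def s_curve(mpg: float) -> float:
    # Logistic shape: slow gains near the ends, steep gains in the middle.
    return 1.0 / (1.0 + math.exp(-10.0 * (normalize(mpg) - 0.5)))

def parabola(mpg: float) -> float:
    # Concave shape: early improvements capture most of the available value.
    return 1.0 - (1.0 - normalize(mpg)) ** 2

for curve in (linear, s_curve, parabola):
    gain = curve(20.0) - curve(10.0)
    print(f"{curve.__name__:9s} 10 -> 20 MPG captures {gain:.0%} of the value")
```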
In a single-context system, the use of utility curves to analyze architecture strategies is straightforward. The Decision Analysis method described by Kepner and Tregoe [2] can be used for this purpose. Each alternative is evaluated against each value expectation. Utility curves map the quantitative measure achieved by each alternative to its corresponding utility level. The utility levels are then weighted by the priority of the expectation and totaled. More preferable alternatives have higher totals.
The most challenging aspect of this method is choosing an appropriate mechanism to evaluate each alternative against each want goal. The best scenario is when the mechanism provides an objective measurement (such as measuring MPG or horsepower for an automobile engine). In some cases, the mechanism might be subjective. The cost of coming up with an appropriate objective measurement must be balanced against the extra accuracy and objectivity it provides. In some situations, an initial screening can be done with subjective assessments; if the results are close, objective measurements can then be made in order to choose among the best alternatives.
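A minimal sketch of this weighted scoring, under the assumptions of the preceding paragraphs: each alternative's measured results are mapped through a utility curve, weighted by the expectation's priority, and totaled. The alternatives, measures, weights, and curves below are all hypothetical.

```python
# Hypothetical Kepner-Tregoe-style scoring; all numbers are illustrative.
from typing import Callable, Dict, Tuple

def score(measures: Dict[str, float],
          expectations: Dict[str, Tuple[float, Callable[[float], float]]]) -> float:
    """measures: expectation name -> measured result for one alternative.
    expectations: expectation name -> (priority weight, utility curve)."""
    return sum(weight * utility(measures[name])
               for name, (weight, utility) in expectations.items())

expectations = {
    "fuel economy (MPG)":   (0.5, lambda mpg: min(1.0, max(0.0, (mpg - 10) / 40))),
    "0-60 time (seconds)":  (0.3, lambda sec: min(1.0, max(0.0, (12 - sec) / 6))),
    "cargo volume (cu ft)": (0.2, lambda cf:  min(1.0, max(0.0, cf / 30))),
}

alternatives = {
    "sedan": {"fuel economy (MPG)": 32, "0-60 time (seconds)": 8.5, "cargo volume (cu ft)": 15},
    "suv":   {"fuel economy (MPG)": 22, "0-60 time (seconds)": 9.5, "cargo volume (cu ft)": 28},
}

for name, measures in alternatives.items():
    print(f"{name}: {score(measures, expectations):.3f}")   # higher total is preferable
```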
Architecture Challenges
An architecture challenge is a situation where one or more limiting factors make it more difficult to satisfy one or more value expectations. Simply put, an architecture challenge is an obstacle or barrier that the system must overcome in order to provide value. This is a key point. Obstacles and value expectations are like yin and yang. If obstacles are not present, then value drops, because the outcome is easy and anyone can do it. Bottled water is the one noteworthy exception to this rule.
Within any context, identification of architecture challenges involves assessing the following (a small sketch of this bookkeeping appears after the list):
- Which limiting factors impact one or more value expectations?
- If impacts are observed, do they make fulfilling the value expectation(s) easier (positive impact) or harder (negative impact)?
- How hard or easy does each impact make things? A simple low, medium, or high scale usually is sufficient here.
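Here is the small sketch promised above: each (limiting factor, value expectation) pair is given a direction and a rough magnitude, and the negative impacts become candidate architecture challenges. All names and ratings are hypothetical.

```python
# Hypothetical impact assessment; the factors, expectations, and ratings are
# illustrative only.
IMPACT_WEIGHT = {"low": 1, "medium": 2, "high": 3}

# (limiting factor, value expectation) -> (direction, magnitude)
impacts = {
    ("wet or icy road surface", "stop quickly from 60 MPH"): ("negative", "high"),
    ("downhill grade",          "stop quickly from 60 MPH"): ("negative", "medium"),
    ("anti-lock braking",       "stop quickly from 60 MPH"): ("positive", "medium"),
}

# An architecture challenge exists wherever a limiting factor makes fulfilling
# an expectation harder (a negative impact).
challenges = [(factor, expectation, magnitude)
              for (factor, expectation), (direction, magnitude) in impacts.items()
              if direction == "negative"]

for factor, expectation, magnitude in sorted(
        challenges, key=lambda c: IMPACT_WEIGHT[c[2]], reverse=True):
    print(f"{magnitude:>6}: '{factor}' impedes '{expectation}'")
```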
Figure 3 describes some architecture challenges that occur in a portfolio management pre-trade compliance system. A more in-depth discussion of architecture challenges and a case study can be found in [4].
Architecture challenges must be considered within their own contexts. While it might be possible to average out utility curves across contexts, the same cannot be done with the impact of limiting factors on value expectations. For example, suppose a Web server supplies pages to users in two contexts. One context accesses static information, such as reference documents. They want response times between 1-3 seconds. The other context accesses very dynamic information, like box scores of in-progress sporting events. They are satisfied with response times in the range of 3-6 seconds.
Both contexts are subject to CPU, memory, disk, and network limitations. However, as request volumes increase by a factor of 10 or 100, these two contexts are likely to run into very different scalability obstacles. In the dynamic content case, synchronization of updates and accesses becomes a limiting factor under heavy load. For the static content, heavy load can be overcome by caching frequently read pages.
There is one final point that should be mentioned about architecture challenges and multiple-context systems. In many cases, it will seem that a single system is capable of supporting many different contexts. However, the architecture challenges that arise from each context are a very good tool for evaluating how compatible these contexts are with each other. When incompatible contexts are addressed by the same architecture, the result is never that both are satisfied. Either one suffers at the expense of the other, or both are compromised. One example of this is a semiconductor tool that attempted to support production and research contexts with a single architecture. Given the very different sets of value expectations (reliability versus flexibility), opposing forces (fab versus lab), and change catalysts (production runs versus experiments), it was unlikely that this marriage could be saved.
Architecture Strategy
As earlier sections described, formulating a system's architecture strategy starts with:
- Recognizing the appropriate value contexts and prioritizing them.
- Defining utility curves for and prioritizing value expectations in each context.
- Identifying and analyzing opposing forces and change catalysts in each context.
- Detecting where limiting factors make it hard to fulfill value expectations.
Figure 4 illustrates this process. The previous list of activities brings us into the Architecture Challenges box in the middle of the diagram. At this point, we are working with a list of architecture challenges that have been gathered from all of the contexts. Each of these challenges represents the impact of one or more limiting factors on one or more value expectations.
As the diagram shows, before we start addressing each challenge, we need to prioritize them. The following observations explain why:
- The earlier a decision is made, the more things it is likely to constrain.
- The later a decision is made, the fewer alternatives there are available.
As a result, it only makes sense for the earliest architecture decisions to be the ones that yield the most value. There are several criteria that can be used for prioritizing architecture challenges. We recommend a balance among the following (a simple scoring sketch follows the list):
Importance: How high is the priority of value expectations that are impacted by the challenge? If these value expectations are specific to a few contexts, then what is the relative priority of these contexts?
Magnitude: How large an impact do the limiting factors have on the value expectations?
Consequence: How many realistic options do there appear to be? Do these options have significant differences in difficulty or effectiveness?
Isolation: How isolated is the impact of the most realistic options? The more widespread the impact, the more weight this factor has.
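The scoring sketch mentioned above: each challenge gets a 1-5 rating against the four criteria, and an overall priority is computed. The equal weighting, the ratings, and the challenge names are illustrative assumptions; in practice the balance among the criteria is a judgment call.

```python
# Hypothetical ratings on a 1-5 scale for (importance, magnitude, consequence,
# isolation); for isolation, a higher rating means the impact is MORE
# widespread and therefore deserves more weight.
from typing import Tuple

challenges = {
    "synchronize dynamic content under heavy load": (5, 4, 4, 5),
    "cache static pages for fast response":         (4, 3, 2, 2),
    "support per-site regulatory variation":        (3, 4, 3, 4),
}

def priority(ratings: Tuple[int, int, int, int]) -> float:
    return sum(ratings) / len(ratings)   # equal weights, purely for simplicity

for name, ratings in sorted(challenges.items(),
                            key=lambda item: priority(item[1]), reverse=True):
    print(f"{priority(ratings):.2f}  {name}")
```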
Once the architecture challenges are prioritized, approaches are formulated for the highest priority ones. While techniques such as architecture styles and patterns [3] [8] can help, this is an area where deep experience with the problem and solution domains is invaluable. Effective approaches to significant challenges are the result of skill, insight, effort, and painstaking work. This statement is true, regardless of whether the problem is surgery, executive management, or software architecture.
As each challenge is addressed, its approach will constrain the solutions to other challenges, and sometimes create new ones. If the architecture challenge priorities are correct, then most of the downstream constraints will be appropriate. However, in some cases the approach to a high priority challenge might negatively impact several slightly lower priority challenges. The combined priority of the impacted challenges might outweigh the higher priority challenge. In this case, it is advisable to back up and formulate a different approach to the original challenge.
Finally, once approaches have been formulated to the set of high-priority challenges, the architecture strategy can be expressed. The architect analyzes the set of approaches and factors out a set of guiding principles in the following areas (a small sketch of such a record follows the list):
Organization: How is the system organized into subsystems and components? What is the composition and responsibilities of each? How can the system be deployed over a network? What types of users and external systems are there? Where are they located and how do they connect?
Operation: How do components interact? In which cases is communication synchronous? In which cases asynchronous? How are the actions of components coordinated? When is it acceptable to configure a component or run diagnostics on it? How are error conditions detected, diagnosed, and corrected?
Variability: Which major features of the system are permitted to vary from one deployment environment to another? Which options are supported for each feature, and when can the choice be made (for example, compile, link, installation, startup, or at runtime)? What dependencies are there between variation points?
Evolution: How is the system designed to support change while retaining its stability? Which specific types of significant change have been anticipated, and what are the preferred ways to address them?
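The sketch promised above: an architecture strategy captured as a short, reviewable record of guiding principles in the four areas just described. The field names and example principles are hypothetical.

```python
# Hypothetical structure and example principles; a real strategy statement
# would be prose, but a simple record like this keeps it reviewable.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArchitectureStrategy:
    organization: List[str] = field(default_factory=list)
    operation:    List[str] = field(default_factory=list)
    variability:  List[str] = field(default_factory=list)
    evolution:    List[str] = field(default_factory=list)

strategy = ArchitectureStrategy(
    organization=["Separate static-content and dynamic-content services"],
    operation=["All cross-service communication uses asynchronous messaging"],
    variability=["Regulatory rule sets are selected per deployment at installation"],
    evolution=["New content types are added behind a stable rendering interface"],
)
```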
In summary, the architecture strategy is the rudder and keel of a sailboat, providing direction and stability. It is expected to be a brief, high-level statement of direction that must be understandable by all stakeholders, and should be relatively stable over the lifetime of the system.
Integration with Existing Development Methods
Figure 5 shows how Value Models and Architecture Strategy relate to waterfall and spiral methods. Value Models and Architecture Strategy operate at both an earlier point and a higher level than these methods. When value models are studied and architecture strategies are formulated, they provide a great foundation for specifying requirements and defining a more detailed architecture. The value model drives the requirements, and influences the architecture definition by providing information for making tradeoffs. The architecture strategy drives the more detailed architecture definition, and provides a set of derived requirements that are needed to overcome known obstacles.
An appropriate analogy is to view architecture strategy as strategic planning, and value models as market analysis. In this light, requirements become corporate objectives and policies. Architecture definition is the business organization and operational plan, and use cases are the equivalent of business processes.
Few companies establish corporate objectives, organizational structure, operating plans, and business processes without first having a clear idea of their mission, markets, competitors, resources, and strategy. Even fewer effective companies do.
Figure 6 shows how Value Models and Architecture Strategy relate to agile methods. Both XP and Scrum make allowances for an architecture definition. Scrum does this explicitly, expecting the architecture to be defined in the first 4-5 week iteration.
Figure 4. Architecture Strategy Formulation
Figure 5. Value-Driven Architecture with Traditional Methods
Figure 6. Integration of Value-Driven Architecture with Agile Methods
XP does this implicitly. One of the 12 core practices of XP is called System Metaphor.
This practice is not as frequently used or as well understood as its more famous siblings: Small Releases, Pair Programming, and Test-Driven Development. In the early days of XP, the team that worked on the large, complex Chrysler payroll system needed a good way to describe workflow management to the Chrysler developers. Somebody got the idea of drawing an analogy between payroll workflow and an automotive assembly line. The metaphor clicked, and the Chrysler developers got the picture.
The XP website [6] defines System Metaphor as:
What Extreme Programming (XP) uses instead of a formal architecture. A simple shared story of how the system works, a metaphor. This story typically involves a handful of classes and patterns that shape the core flow of the system being built.
What XP refers to as a "formal architecture" is more like what we referred to above as an architecture definition. An architecture strategy plays the same role as a system metaphor, without being a metaphor. This is a significant advantage, since really effective metaphors (such as the one used at Chrysler) can be hard to come by. By contrast, clear, concise core principles are easy to state and easy to understand. A person doesn't need to go out and watch the movie Hidalgo in order to understand what is meant by "life, liberty, and the pursuit of happiness."
Conclusions
In summary, the value model helps us to understand and communicate important information about sources of value. Some of the important issues it addresses are how value flows, why similarities and differences occur in value expectations and external factors, and what subset of that value our system seeks to satisfy. It is the job of the architect to satisfy these value expectations by resolving forces that influence the system in general, forces that are specific to certain contexts and forces that are expected to change over time. In this respect, architecture is similar to flying a jet airplane—the pilot must transport passengers safely to a known destination, while balancing the laws of aerodynamics, the capabilities of the plane, and current and future weather conditions. The link between value models and software architecture is clear and logical, and can be expressed by the nine points listed below:
- Software-intensive products and systems exist to provide value.
- Value is a scalar quantity that incorporates perceptions of marginal utility and relative importance across many distinct goals. Tradeoffs between goals are an extremely important consideration.
- Value exists at multiple levels, some of which contain the target system as a value provider. The value models for these scopes contain the primary drivers of the software architecture.
- Value models that are above these in the hierarchy can cause the value models of their children to change. This is important input in formulating the principles of evolution for the system.
- Within a cluster, value models are not homogeneous: value contexts, exposed to different environmental conditions, have different expectations of value.
- The development sponsor for the system has different priorities for trying to satisfy various value contexts.
- Architecture challenges result from the impact of environmental factors on value expectations within a context.
- Architecture approaches seek to maximize value by addressing the highest priority architecture challenges first.
- Architecture strategies are synthesized from the highest priority architecture approaches by factoring out common rules, policies, and principles of organization, operation, variation, and evolution.
The main contributions of this approach are:
- The sources of value in the system are modelled as first-class concepts. Value expectations associate a small number of capabilities with quality attributes, utility curves, and external factors. Value expectations are held by value clusters and contexts—clusters capture the common aspects of value expectations, while contexts capture the important variabilities within a cluster.
- Traceability of architectural reasoning is also a first-class entity. Value expectations link to architecture challenges, which link to architecture approaches, which link to architecture strategies. Stakeholders can now see the thought process behind the solution.
- A very useful side effect of this traceability is an increased ability to review software architectures. Because the reasoning behind the decisions is made explicit, it becomes easier for other stakeholders (project sponsors, domain experts, technology experts, end users) to identify aspects that might be missing or incorrect.
References
[1] Greenfield, J. and Short, K., Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools, Wiley, 2004. ISBN: 0-471-20284-3.
[2] Kepner, C., Tregoe, B., The New Rational Manager, Kepner-Tregoe Inc., 1997, ISBN: 0971562717.
[3] Gamma, E., Helm, R., Johnson, R., Vlissides, J., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995, ISBN: 0201633612.
[4] Alfred, C., "Using Architecture Challenges to Formulate Software Architecture," 2002, Foliage Software Systems, Inc. white paper.
[5] Kazman, R., Asundi, J., and Klein, M. "Making Architecture Decisions, an Economic Approach", SEI Technical Report: CMU/SEI-2002-TR-35, 2002.
[6] Extreme Programming Core Practices
[7] Porter, M., Competitive Advantage: Creating and Sustaining Superior Performance, Macmillan, 1985, ISBN: 0029250900.
[8] Buschmann, F. et al., Pattern-Oriented Software Architecture, Volume 1: A System of Patterns, 1996, John Wiley and Sons, ISBN: 0471958697.
About Foliage Software Systems
Foliage delivers competitive advantage through technology strategy, software architecture, and custom software development. Since being founded in 1991, Foliage has completed more than 175 projects for clients in financial services, semiconductors, healthcare, aviation, and e-business. Foliage architects, project managers, and software engineers have an average of over 20 years of experience and they leverage their deep knowledge to produce software of outstanding quality to meet deadline and budget goals. The company has been named to the Deloitte & Touche Fast 50 for three consecutive years and Software Magazine's Software 500 for two years.
Footnotes
1. An equivalent technique for mapping a quantitative metric to a utility scale is described in [5].
This article was published in the Architecture Journal, a print and online publication produced by Microsoft. For more articles from this publication, please visit the Architecture Journal website.