Patterns in Practice
Object Role Stereotypes
Jeremy Miller
Contents
What's the Problem?
Responsibility-Driven Design
Information Holder
Structurer
Service Provider
Interfacer
Coordinator
Controller
Using Object Role Stereotypes Effectively
Object-oriented systems are composed of many objects, each one fulfilling some set of responsibilities. Objects often need to collaborate with other objects in order to fulfill these responsibilities. You can better understand the responsibilities of objects and the collaborations between them by applying a concept known as object role stereotypes. This article will discuss the most common object role stereotypes by applying them to real-world examples and well-known design patterns.
What's the Problem?
Neolithic programmers lived in a state of simplicity. Programs were composed of a single, straight, unbroken line of instructions to the computer. Even today, many of us initially learn to program by writing standalone functions:
public static class Program {
  public static void Main(string[] args) {
    Console.WriteLine("Look at me World! I'm coding!");
    // And about 30-50 other lines of code
  }
}
The single-method style of program construction is the easiest form of programming to learn, but it breaks down quickly as the program becomes larger. Early programmers and computer scientists soon realized that they needed some way of managing complexity as software projects increased in scope and ambition.
While it certainly didn't have the same impact on mankind as the discovery of fire, the ability to decompose a single program into multiple subroutines, classes, or services gave programmers a fantastic advantage over the monolithic block of code: divide and conquer. You can turn a single big problem into a series of smaller, easily achievable tasks. The human mind simply can't juggle that many variables at one time. Decomposing a system allows you to deal with only one issue at a time, whether that is data validation, retrieval, or display.
Ideally, the system becomes much easier to understand, but there's plenty of potential for getting things wrong as you split up the system between developers, teams, or even different systems. Adding to this potential for trouble is the fact that a divide-and-conquer strategy enables you to more easily deliver the system's functionality in increments, which means that your code will likely have to work seamlessly with code delivered in a previous release.
As I discussed in the June issue of MSDN® Magazine, the Open Closed Principle can make code much easier and less risky to extend over time by dividing the responsibilities of the system into separate modules (see msdn.microsoft.com/magazine/cc546578). Those are significant benefits, but they come with a cost. You need to distribute the responsibilities of the system in a beneficial way. Fortunately there's an entire software design method built around heuristic tools for determining and distributing the responsibilities of a system.
Responsibility-Driven Design
CRC Cards
Responsibility-Driven Design is closely linked to a lightweight modeling technique known as CRC cards (Class or Candidate/Responsibility/Collaborators). These are actual cards used for exploratory design.
To draw up a CRC card, simply take a 3" x 5" note card and write the proposed name of a class across the top of the lined side of the card. On the unlined side of the card, write a sentence or two that serves as a purpose statement for the class. The purpose statement should reflect the Object Role Stereotypes of the class.
Divide the lined side of the note card into two columns. In the left column, list the responsibilities of the class (something along the lines of "knows the invoice header information" or "decides whether to send an e-mail message" or "sends an e-mail message"). Again, each of these responsibilities should fit in with the object role stereotypes of the class. In the right-hand column, list other classes that must be interacted with in order to fulfill the duties in the left-hand column.
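For example, a card for the Quantity class that appears later in this article might look something like this (the layout below is just a rough approximation of a physical card):
Class: Quantity
Purpose (unlined side): Represents an amount of a commodity in a
particular unit of measure and can compare itself to other quantities.
Responsibilities (lined side, left)        | Collaborators (right)
Knows its numeric amount                   |
Knows its unit of measure                  | UnitOfMeasure
Decides whether it is less than, greater   | UnitOfMeasure
  than, or equal to another Quantity       |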
CRC cards seem to have been overtaken in popularity by UML modeling, but the technique is very effective, much simpler, and all you need is a stack of 3" x 5" note cards. See c2.com/doc/oopsla89/paper.html for more information on CRC cards.
Responsibility-Driven Design (RDD) is an informal design method developed in the Smalltalk community in the late 1980s and early 1990s. Rebecca Wirfs-Brock, who conceived the theory, sums it up like this: "Objects are not just simple bundles of logic and data. They are responsible members of an object community."
As its name implies, RDD starts by breaking down a system or an individual feature into the various actions and activities that the system must perform, then proceeds to assign these responsibilities to objects within the system. Object responsibilities are described as either knowing, doing, or deciding.
Once you have a list of responsibilities, you need to consider what the objects are going to be and how the responsibilities are going to be distributed between the objects. Objects within a system take on a role consisting of closely related responsibilities. You can employ object role stereotypes to help define the role of a single object in the system and to serve as a guide for assigning responsibilities to the various objects in the system.
RDD identifies the six common stereotypes listed in Figure 1. Before I go on to describe these six in detail, I should point out that you can blend stereotypes within a single class. For example, a single class will frequently be both an information holder and a structurer. You might also find some value in defining your own stereotypes within a project. (See the "CRC Cards" sidebar for another idea for modeling elements of your application.)
Figure 1 RDD Stereotypes
Stereotype | Description
--- | ---
Information Holder | Knows things and provides information. May make calculations from the data that it holds.
Structurer | Knows the relationships between other objects.
Controller | Controls and directs the actions of other objects. Decides what other objects should do.
Coordinator | Reacts to events and relays the events to other objects.
Service Provider | Does a service for other objects upon request.
Interfacer | Provides a means to communicate with other parts of the system, external systems or infrastructure, or end users.
Information Holder
Last year I worked on an energy trading system. In that system users might enter a trade to purchase 1,000 barrels of gasoline. In order to make that trade, we had to check the on-hand inventory, which was frequently tracked in metric tons, and compare the desired quantity with the on-hand quantity. Several other times we had to compare two quantities that were in different units of measure.
Let's say that you were building a simplistic energy trading system from scratch. One of the most obvious responsibilities in the system is to represent a quantity of something being traded. Another is to make all of those "less than" or "greater than" determinations. As you probably already know, in order to compare two quantities in the energy system, you have to know the numeric amount and unit of measure of both quantities (you'll probably also need to know the density of the commodity being traded, but I'm skipping over that). One approach would be to create an Information Holder to "know" facts about unit of measure and quantity conversions. The desired behavior might lead you to two different classes like the ones shown in Figure 2.
Figure 2 Handling Units of Measure and Quantity
// UnitOfMeasure is a strongly typed enumeration that "knows" about,
// and can answer questions about, a logical Unit of Measure
public class UnitOfMeasure {
  public static readonly UnitOfMeasure MT = new UnitOfMeasure(2500);
  public static readonly UnitOfMeasure LB = new UnitOfMeasure(1);

  // In a real system you'd have to do the conversion by using
  // the density of the real commodity in the barrel
  public static readonly UnitOfMeasure BBL = new UnitOfMeasure(100);

  private readonly double _weightInPounds;

  protected UnitOfMeasure(double weightInPounds) {
    _weightInPounds = weightInPounds;
  }

  // To make the system easier to extend, let's make sure that the
  // responsibility for converting quantities between various units of
  // measure lives in the UnitOfMeasure class
  public double ConvertIntoThisUOM(Quantity quantity) {
    return quantity.Amount * quantity.Uom._weightInPounds / _weightInPounds;
  }
}
public class Quantity {
  private readonly double _amount;
  private readonly UnitOfMeasure _uom;

  public Quantity(double amount, UnitOfMeasure uom) {
    _amount = amount;
    _uom = uom;
  }

  public double Amount {
    get { return _amount; }
  }

  public UnitOfMeasure Uom {
    get { return _uom; }
  }

  public bool IsLessThan(Quantity other) {
    double otherAmount = _uom.ConvertIntoThisUOM(other);
    return _amount < otherAmount;
  }

  // Also implement
  // bool IsGreaterThan, Equals, Subtract, Add, etc.
}
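Just to show how these two information holders collaborate, here is a quick usage sketch. The numbers come straight from the illustrative conversion factors hardcoded in Figure 2, not from real market data:
// 1,000 barrels requested against 50 metric tons of on-hand inventory
Quantity requested = new Quantity(1000, UnitOfMeasure.BBL);
Quantity onHand = new Quantity(50, UnitOfMeasure.MT);

// Quantity delegates the unit conversion to UnitOfMeasure, so the
// caller never sees the conversion math
// (100,000 "pounds" requested vs. 125,000 "pounds" on hand)
bool canFulfill = requested.IsLessThan(onHand); // true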
The Quantity class is an example of the Money pattern (also known as a Whole Value). Quantity is an information holder. Any time the rest of the application has a question about the difference or sum of two quantities, it just has to ask a Quantity object. Quantity doesn't know where its data comes from or how it's going to be consumed. Because of its simplicity, a Quantity isn't all that useful on its own, but that very same simplicity makes it easy to reuse in different ways.
Two of the other domain concepts in an energy trading system are trades and allocations (see Figure 3), both of which contain a surprising number of quantities. The Trade class is strictly responsible for providing data and making determinations about an energy trade. The Trade uses the information in the Allocation class, and both the Trade and Allocation classes use the Quantity class to make decisions.
Figure 3 Trades and Allocations
public class Trade {
  private Quantity _originalQuantity;
  private List<Allocation> _allocations = new List<Allocation>();

  // Creates a new allocation. If the requested quantity
  // is greater than the remaining quantity, allocate only the
  // remaining quantity
  public Allocation DrawDown(Quantity requestedQuantity) {
    Quantity remainingQuantity = GetRemainingQuantity();
    Quantity allocationQuantity =
      requestedQuantity.IsLessThan(remainingQuantity)
        ? requestedQuantity
        : remainingQuantity;

    Allocation allocation = new Allocation(allocationQuantity, DateTime.Today);
    _allocations.Add(allocation);

    return allocation;
  }

  // How much of the original Trade quantity is left?
  public Quantity GetRemainingQuantity() {
    Quantity returnValue = _originalQuantity;
    foreach (Allocation allocation in _allocations) {
      returnValue = returnValue.Subtract(allocation.Quantity);
    }
    return returnValue;
  }
}
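Figure 3 shows only the Trade half of the collaboration. The Allocation class isn't listed in the article, but judging from how Trade constructs and reads it, it would be another small information holder along these lines (the members here are my guess, inferred from Figure 3):
// A hypothetical sketch of the Allocation information holder
// that Trade collaborates with in Figure 3
public class Allocation {
  private readonly Quantity _quantity;
  private readonly DateTime _allocationDate;

  public Allocation(Quantity quantity, DateTime allocationDate) {
    _quantity = quantity;
    _allocationDate = allocationDate;
  }

  // Trade.GetRemainingQuantity() reads this property when it
  // subtracts each allocation from the original trade quantity
  public Quantity Quantity {
    get { return _quantity; }
  }

  public DateTime AllocationDate {
    get { return _allocationDate; }
  }
}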
The entity and value objects in a rich domain model are a common example of information holders. An information holder may collaborate with service providers like data access classes or configuration classes to fetch more information on demand, or it might be built up entirely by something external such as an object-relational mapping (ORM) tool.
There's an important point here about designing around behavior rather than designing from a data-centric viewpoint. By starting from a single responsibility, you evolve a design in which the function of comparing and converting values between different units of measure is largely encapsulated by a single class (Quantity). For example, if you look at the database behind a financial system, you'll see many fields named something like XXX_qty and XXX_uom. If you had started from a data-centric perspective and created objects as a faithful representation of the database, you might have ended up duplicating quite a bit of the quantity comparison and arithmetic functionality.
In fact, that's exactly what I observed in the real-life energy trading system this example was based on. Avoid primitive obsession. Don't be afraid to create small objects instead of wallowing in the mud of primitive variables.
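To make the primitive obsession point concrete, compare a hypothetical signature written against raw primitives with one written against the Quantity whole value (the CanFulfill method here is purely illustrative, not part of the trading system):
// With primitives, every caller has to know about unit-of-measure
// conversion, and that logic tends to get duplicated
public bool CanFulfill(double requestedAmount, string requestedUom,
                       double onHandAmount, string onHandUom) {
  // conversion and comparison logic repeated here...
  return false;
}

// With the Quantity whole value, the conversion lives in exactly one place
public bool CanFulfill(Quantity requested, Quantity onHand) {
  return requested.IsLessThan(onHand);
}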
Structurer
The job of a structurer is to track, store, and maintain relationships between objects. The humble Dictionary<TKey, TValue> class introduced in the Microsoft® .NET Framework 2.0 is an example of a simple structurer. If you have some sort of many-to-many relationship among the entities in your domain model, you will often need some sort of structurer object to model the connections.
In some situations it's valuable to keep the structurer's responsibility completely separate from the business processes that consume the objects held by the structurer. For example, in the early days of a system you may use a simple, naive data structure to cache data. In this situation, you probably don't have enough time to design the ultimate data structure because you need to make your first release.
As your system grows in scope, the volume of data that it needs to process will grow along with it, potentially making your original data structure a performance and scalability bottleneck. If you have created a separate object to fulfill the structurer role, you should be able to replace that object with a different data structure without requiring a change to the consumers of the original data structure.
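Here is a minimal sketch of that idea with hypothetical names (IPriceCache and DictionaryPriceCache are mine, not from any real system). Consumers depend only on the structurer's interface, so the naive Dictionary can later be swapped for a smarter data structure without touching them:
// A hypothetical structurer that hides how cached market prices are stored
public interface IPriceCache {
  void Store(string commodityCode, double price);
  bool TryGetPrice(string commodityCode, out double price);
}

// First-release implementation: a plain Dictionary is good enough for now
public class DictionaryPriceCache : IPriceCache {
  private readonly Dictionary<string, double> _prices =
    new Dictionary<string, double>();

  public void Store(string commodityCode, double price) {
    _prices[commodityCode] = price;
  }

  public bool TryGetPrice(string commodityCode, out double price) {
    return _prices.TryGetValue(commodityCode, out price);
  }
}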
Service Provider
A service provider knows how to perform a task on behalf of another object, but it's generally passive while it waits to be activated by some other class.
I work on an open source tool called StoryTeller that creates, manages, and runs automated acceptance tests authored for the FitnesseDotNet engine. Test data is stored in a Test class (an information holder stereotype). StoryTeller executes tests with the FitnesseDotNet engine, which expects its test data to be expressed as HTML tables. A couple other screens also display HTML views or export HTML reports of test data.
Needless to say, I needed to be able to convert the Test objects into HTML in several different contexts. I therefore created a service provider class called HTMLWriter that is responsible for converting a Test object into an HTML string:
public interface IHTMLWriter {
  // Converts a Test (Test : ILeaf) into HTML form
  string GetHTML(ILeaf leaf, bool withStyle);

  // Writes out the Test into HTML form, prepended with the tables for
  // SetUp and TearDown from the Test's containing Suite
  string GetHTMLWithSetupAndTearDown(Test test, bool withStyle);
}
HTMLWriter was easy to build because it simply takes in a Test object and returns a formatted string. I didn't have to consider how to retrieve the Test object or deal with the rest of the system. Unit testing, as shown in Figure 4, was easy because I could just build up a Test object, run it through HTMLWriter, and check that the output was formatted as expected.
Figure 4 Testing HTMLWriter
[Test]
public void Write_out_a_single_table() {
  Test test = new Test();
  test.AddTable()
    .AddRow("type1")
    .AddRow("a,b,c");

  HTMLWriter writer = new HTMLWriter();

  string expectedHTML =
    "<table><tr><td colspan=\"3\">type1</td></tr>" +
    "<tr><td>a</td><td>b</td><td>c</td></tr></table>";

  string actualHtml = writer.GetHTML(test, false);
  Assert.IsTrue(actualHtml.Contains(expectedHTML));
}
If I hadn't isolated the responsibility for converting a Test into HTML into a single service provider, I would have gotten into trouble. One of the consumers of HTMLWriter is the TestRunner class, partially shown here:
public class TestRunner : MarshalByRefObject, ITestRunner {
  public TestResult ExecuteTest(Test test) {
    HTMLWriter writer = new HTMLWriter();
    string html = writer.GetHTMLWithSetupAndTearDown(test, true);

    TestResult result = ExecuteTest(html);
    result.ReadPropertiesFromTest(test);

    return result;
  }
}
As you can see, TestRunner is a very busy class in and of itself. (During the course of writing this article, I've realized that TestRunner has some additional responsibilities that don't really belong in the class.) If I had also given TestRunner the responsibility for converting a Test into HTML, it would be far too complicated for my comfort. Moreover, the system itself would have been much harder to test because there would have been no way to test the HTML conversion independently from executing a Test against the FitnesseDotNet engine.
Interfacer
In enterprise development these days, your code rarely lives in isolation. Some of the most difficult tasks involve integrating with other systems—and these other systems are going to need to interface with your code. Even within your own system, your code will need to interact with other subsystems. The interfacer role is effectively a mediator used to simplify communication with another system or subsystem.
A classic example of an interfacer is the Facade pattern. In the energy trading system I discussed earlier, when new trades were created we needed to invoke the pricing rules to determine a price for the new trade. Just by itself, pricing is a very complicated process. It's a subsystem in its own right that probably contains dozens of individual classes. In order to make the pricing functionality easily consumable by the team down the hall, we could have created a Facade class that hides the complexity of the underlying pricing structure:
public class PricingSystem {
  public void Price(Trade trade) {
    // PricingSystem is going to call on several underlying
    // services in the Pricing subsystem in order to
    // add pricing information to the trade
  }
}
Now, when we need to price a trade as part of submitting a new trade, the interaction goes like this:
public class TradeService {
  public void SubmitTrade(Trade trade) {
    // pre-processing

    PricingSystem pricingSystem = new PricingSystem();
    pricingSystem.Price(trade);

    // persist the Trade and send the proper notifications
  }
}
The value of the Facade is that the pricing functionality is easy to consume: there is just a single class and method to call, so the consumer of the subsystem doesn't have to become an unmanageable monolithic block of code just to orchestrate pricing. If every dependency on pricing in the system goes through PricingSystem, you have much more freedom to change the pricing subsystem without having to worry about breaking or changing other parts of the system.
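To make that concrete, the body of PricingSystem.Price might do nothing more than orchestrate a handful of collaborators inside the pricing subsystem. The collaborator names below (MarketPriceFeed, PricingRuleEngine) are hypothetical stand-ins for the dozens of real pricing classes:
public class PricingSystem {
  // Hypothetical collaborators standing in for the dozens of real
  // classes hidden behind this Facade
  private readonly MarketPriceFeed _priceFeed = new MarketPriceFeed();
  private readonly PricingRuleEngine _ruleEngine = new PricingRuleEngine();

  public void Price(Trade trade) {
    double basePrice = _priceFeed.GetBasePriceFor(trade);
    _ruleEngine.ApplyRulesTo(trade, basePrice);
  }
}

public class MarketPriceFeed {
  public double GetBasePriceFor(Trade trade) {
    // look up the current market price for the traded commodity
    return 0;
  }
}

public class PricingRuleEngine {
  public void ApplyRulesTo(Trade trade, double basePrice) {
    // apply discount, fee, and contract rules to the trade
  }
}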
Coordinator
A coordinator reacts to events and relays commands to other objects. A coordinator is valuable when you have a process that is event driven and it is convenient to decouple listening or detecting events from the processing that occurs when an event is triggered.
As an example, I worked on a shipping system that automated the flow of boxes and activities on a factory floor. The system would direct other actions and process data when it received socket messages from physical scanners along the floor. Let's break that problem down into the responsibilities:
- One way or another, you need to listen for socket messages from the physical scanners.
- You need to route boxes throughout the system once a box reaches a certain point on the factory floor.
- It may not be obvious, but you must translate between the IP address of the incoming socket communication and the physical or logical location of the scanner on the factory floor.
The real system from this example combined all three responsibilities in the same class—to everyone's sorrow. The business processing couldn't be tested without having the scanners replicated in a testing lab. You've got to test everything together at some point, but it would have been much more efficient to have the business processing tested in isolation before bothering with the integration testing. Reading the code was a nightmare because we had to translate IP addresses in the code into their logical meaning.
The main reason not to write code this way is that all three responsibilities identified above change independently of each other. I shouldn't have to touch business logic code when the factory changes to a different type of physical scanner or has to change the IP address when a scanner is replaced.
Instead, I'll separate the responsibilities into three classes as shown in Figure 5. The advantage of this design is that I've completely decoupled the shipment routing functionality from the physical infrastructure of the factory floor. This design would have given the shipping system team far greater abilities to simulate factory scenarios and enabled them to push automated testing out of the specialized testing lab.
Figure 5 Decoupling Responsibilities
// Information Holder class that maps the IP address of the
// physical scanner to the logical "Position" within
// the factory
public interface IFactorySensorMap {
  Position FindPositionFromScannerAddress(IPAddress address);
}

// The business processing code
public interface IShippingSystem {
  void BoxDetectedAt(string barCode, Position position);
}

// The Coordinator class that listens for socket requests
public class ScannerListener {
  private IFactorySensorMap _sensorMap;
  private IShippingSystem _shippingSystem;

  // scanDetected is called as the result of detecting an
  // incoming Socket call
  private void scanDetected(string barCode,
                            System.Net.IPAddress ipAddress) {
    Position position =
      _sensorMap.FindPositionFromScannerAddress(ipAddress);
    _shippingSystem.BoxDetectedAt(barCode, position);
  }
}
A multithreaded business process is another great example of the value of a coordinator. Getting thread management code right is hard enough without having to wade through intermingled business logic, and in my experience the business logic is much harder to understand when it's embedded directly inside thread management code. In this case, you can make both responsibilities easier to code, understand, and test by using a coordinator class to manage the threads and simply tell a separate business processing object when events occur. The business logic can then be written and tested independently of the thread management code.
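Here is a minimal sketch of that split using hypothetical names; the coordinator below owns all of the threading noise and simply relays each work item to a separate business processing object:
// The business processing object knows nothing about threads
public interface IOrderProcessor {
  void Process(string orderId);
}

// The Coordinator owns the worker thread and the queue, and simply
// relays each incoming work item to the business processor
public class OrderProcessingCoordinator {
  private readonly IOrderProcessor _processor;
  private readonly Queue<string> _workQueue = new Queue<string>();
  private readonly object _lock = new object();

  public OrderProcessingCoordinator(IOrderProcessor processor) {
    _processor = processor;
  }

  public void Enqueue(string orderId) {
    lock (_lock) {
      _workQueue.Enqueue(orderId);
    }
  }

  public void Start() {
    Thread worker = new Thread(processPendingWork);
    worker.IsBackground = true;
    worker.Start();
  }

  // Runs on the background thread and pumps the queue
  private void processPendingWork() {
    while (true) {
      string orderId = null;
      lock (_lock) {
        if (_workQueue.Count > 0) {
          orderId = _workQueue.Dequeue();
        }
      }

      if (orderId != null) {
        _processor.Process(orderId);
      }
      else {
        Thread.Sleep(100);
      }
    }
  }
}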
Controller
A controller object directs the actions of other classes. A controller is differentiated from a coordinator by the level of control. A controller doesn't just tell other objects that an event happened; the controller decides what the other objects should do based on runtime conditions.
Going back to the energy trading system example, let's say you need to create a screen in a user interface to purchase a quantity of gasoline. Before the new purchase trade can be entered into the system, the screen needs to check whether there is enough gasoline in inventory. If there is enough inventory to fulfill the requested quantity, the system will decrement the on-hand inventory by the requested quantity and submit the new trade. If there is not enough inventory, the screen should show a message box indicating that the requested trade cannot be made.
Using object role stereotypes, you would first define the responsibilities of this screen as something like this:
- You need an interfacer to capture the user input and show warning messages.
- You have to process a new trade, so you could build a service provider to process that trade.
- You need to fetch and update the on-hand inventory, so create another service provider to access and update the on-hand inventory.
- You have to compare and also subtract two quantities of gasoline, which may not be expressed in the same units of measure (the inventory may be stored in metric tons and the requested quantity might be expressed in barrels). The Quantity class discussed earlier can handle this responsibility.
- Some class needs to act as the controller for the screen and govern the workflow of the requested trade.
Once you see these responsibilities, you might start creating three new classes. The first class is an EnergyTradingScreen that is the actual view to display the warning messages and capture the user data entry. This could be done with either Windows® Presentation Foundation (WPF) or a Windows Forms control. The second class is an InventoryRepository service provider to fetch and update inventory data. The third class is a TradeService service provider to submit a new trade into the system.
This solves some of the tasks in the system, but there is still something missing. You still need to assign the controller responsibilities to some class. Let's examine the existing candidates. The TradeService and InventoryRepository classes are a poor fit. The workflow responsibility doesn't fit in with the responsibilities of a service provider class. The most common design would be to make the EnergyTradingScreen responsible for the workflow, but that has some negative consequences. Instead, you should create a whole new class. Using the Supervising Controller pattern (a form of Model View Presenter), you can put the workflow in a new controller class called EnergyTradingController, shown in Figure 6.
Figure 6 EnergyTradingController
public class EnergyTradingController {
  private readonly EnergyTradingScreen _screen;
  private readonly InventoryRepository _repository;
  private readonly TradeService _service;

  public EnergyTradingController(
    EnergyTradingScreen screen,
    InventoryRepository repository,
    TradeService service) {
    _screen = screen;
    _repository = repository;
    _service = service;
  }

  public void Purchase(Customer customer, Quantity requestedQuantity) {
    // First, we need to decide if the trade could be fulfilled
    Quantity onhandQuantity = _repository.GetOnHandQuantity();

    if (requestedQuantity.IsLessThan(onhandQuantity)) {
      // There is enough inventory to fulfill this request, so place
      // the new trade
      createTrade(requestedQuantity, customer);
    }
    else {
      // Cannot fulfill the trade, so tell the screen to show a user
      // message
      _screen.DisplayQuantityUnavailableMessage(requestedQuantity);
    }
  }

  private void createTrade(Quantity requestedQuantity, Customer customer) {
    Trade newTrade = _service.StartTrade(customer, requestedQuantity);

    // Decrement the on-hand inventory so we don't over-allocate
    _repository.ReserveQuantity(newTrade);

    // Now we can go ahead and make the trade
    _service.SubmitTrade(customer, requestedQuantity);
  }
}
There are a lot of advantages to extracting a separate controller class. The workflow is easy to understand by just scanning EnergyTradingController because the workflow is the only thing happening in this class. The details of how user messages are displayed, how inventory is really stored in the system, or how trades are processed within the system are managed by other classes. More importantly, pulling the workflow out of the other classes allows those other classes to be simpler and more focused on providing a single service. When I remove workflow responsibilities from the service provider classes, I greatly improve my ability to reuse these services in other contexts besides this one energy trade screen.
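Figure 6 shows only the controller. Based purely on how EnergyTradingController calls them, its three collaborators would need public surfaces roughly like the following sketch; everything beyond these member signatures is my assumption:
// The view: captures user input and displays warning messages
// (a WPF or Windows Forms control in practice)
public class EnergyTradingScreen {
  public void DisplayQuantityUnavailableMessage(Quantity requestedQuantity) {
    // show a message indicating that the trade cannot be made
  }
}

// Service provider for reading and reserving on-hand inventory
public class InventoryRepository {
  public Quantity GetOnHandQuantity() {
    // fetch the current on-hand inventory, perhaps from a database
    return new Quantity(0, UnitOfMeasure.BBL);
  }

  public void ReserveQuantity(Trade trade) {
    // decrement the on-hand inventory by the trade's quantity
  }
}

// Service provider for creating and submitting trades
public class TradeService {
  public Trade StartTrade(Customer customer, Quantity requestedQuantity) {
    // create a new Trade for this customer and quantity
    return new Trade();
  }

  public void SubmitTrade(Customer customer, Quantity requestedQuantity) {
    // persist the trade and send the proper notifications
  }
}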
Using Object Role Stereotypes Effectively
Designing software is often an exercise in managing complexity. While the overall solution is necessarily as complex as the problem that you're trying to solve, you can take steps to limit the complexity of any given class by assigning it only a discrete set of responsibilities.
When you're deciding where a new responsibility should go, you should ask yourself what object role stereotype fits the new responsibility and look for suitable classes that match that stereotype.
It's not just a matter of the initial creation of the code. Your objects will change over time as bugs are fixed and new functionality is added to existing code. It's often the later releases where the design of a system goes irretrievably wrong as the initially clean structure is corrupted and obscured by layers of cruft. Understanding the object role stereotypes of existing classes can help you know where to assign new responsibilities and give you additional clues for when a completely new class should be added to the system.
Send your questions and comments to mmpatt@microsoft.com.
Jeremy Miller is a Microsoft MVP for C#. He is the author of the open source StructureMap (structuremap.sourceforge.net) tool for Dependency Injection with .NET and the forthcoming StoryTeller (storyteller.tigris.org) tool for supercharged FIT testing in .NET.