5 - As Easy As Falling Off a Log: Using the Logging Application Block

Introduction

Just in case you didn't quite grasp it from the title, this chapter is about one of the most useful and popular of the Enterprise Library blocks, the Logging Application Block, which makes it really easy to perform logging in a myriad of different ways depending on the requirements of your application.

Logging generally fulfills two main requirements: monitoring general application performance, and providing information. In terms of performance, logging allows you to monitor what's happening inside your application and, in some cases, what's happening in the world outside as well. For example, logging can indicate what errors or failures have occurred, when something that should have happened did not, and when things are taking a lot longer than they should. It can also simply provide status information on processes that are working correctly—including those that talk to the outside world. Let's face it, there's nothing more rewarding for an administrator than seeing an event log full of those nice blue information icons.

Poe says:
I use logs to monitor the health of my applications and make sure they’re running smoothly, to troubleshoot issues when things go wrong, and in some cases to build up an audit trail.

Secondly, and possibly even more importantly, logging can provide vital information about your application. Often referred to as auditing, this type of logging allows you to track the behavior of users and processes in terms of the tasks they carry out, the information they read and change, and the resources they access. It can provide an audit trail that allows you to follow up and get information about malicious activity (whether it succeeds or not), will allow you to trace events that may indicate future attack vectors or reveal security weaknesses, and even help you to recover when disaster strikes (though this doesn't mean you shouldn't be taking the usual precautions such as backing up systems and data). One other area where audit logging is useful is in managing repudiation. For example, your audit logs may be useful in legal or procedural situations where users or external attackers deny their actions.

The Logging block is a highly flexible and configurable solution that allows you to create and store log messages in a wide variety of locations, categorize and filter messages, and collect contextual information useful for debugging and tracing as well as for auditing and general logging requirements. It abstracts the logging functionality from the log destination so that the application code is consistent, irrespective of the location and type of the target logging store. Changes to almost all of the parameters that control logging are possible simply by changing the configuration after deployment and at run time. This means that administrators and operators can vary the logging behavior as they manage the application.

Poe says:
Sometimes, I need to change the amount of logging data I collect dynamically. This is especially useful when I’m troubleshooting issues and find that I need to collect more detailed information.

If you are concerned about the potential overhead of logging on the performance of your application, or you want to use more structured log messages, or you plan to automate processes from log entries, you should consider using the Semantic Logging Application Block instead of the Logging Application Block. The Semantic Logging Application Block is described in the next chapter.

What Does the Logging Block Do?

The Logging Application Block allows you to decouple your logging functionality from your application code. The block can route log entries to a Windows Event Log, a database, or a text (or XML) file. It can also generate an email message containing the logging information, or a message you can route through Windows Message Queuing (using a distributor service provided with the block). And, if none of these built-in capabilities meets your requirements, you can create a provider that sends the log entry to any other custom location or executes some other action.

In your application, you simply generate a log entry using a suitable logging object, such as the LogWriter class, and then call a method to write the information it contains to the logging system. The Logging block routes the log message through any filters you define in your configuration, and on to the listeners that you configure. Each listener defines the target of the log entry, such as Windows Event Log or an email message, and uses a formatter to generate suitably formatted content for that logging target.

Beth says:
Add code to generate log entries throughout your application wherever you think that logging information may prove useful in the future. You can use configuration to control how much and where information is logged.

You can see from this that there are many objects involved in this multi-step process, and it is important to understand how they interact and how the log message flows through the pipeline of processes. Figure 1 shows the overall process in more detail, and provides an explanation of each stage.

Figure 1 - An overview of the logging process and the objects in the Logging block


Creating the Log Entry

The user creates a LogWriter instance, and invokes one of the Write method overloads. Alternatively, the user can create a new LogEntry explicitly, populate it with the required information, and use a LogWriter to pass it to the Logging block for processing.

Filtering the Log Entry

The Logging block filters the LogEntry (based on your configuration settings) for message priority, or categories you added to the LogEntry when you created it. It also checks to see if logging is enabled. These filters can prevent any further processing of the log entries. This is useful, for example, when you want to allow administrators to enable and disable additional debug information logging without requiring them to restart the application.

Selecting Trace Sources

Trace sources act as the link between the log entries and the log targets. There is a trace source for each category you define in the logging block configuration; plus, there are three built-in trace sources that capture all log entries, unprocessed entries that do not match any category, and entries that cannot be processed due to an error while logging (such as an error while writing to the target log).

Selecting Trace Listeners

Each trace source has one or more trace listeners defined. These listeners are responsible for taking the log entry, passing it through a separate log formatter that translates the content into a suitable format, and passing it to the target log. Several trace listeners are provided with the block, and you can create your own if required.

Formatting the Log Entry

Each trace listener can use a log formatter to format the information contained in the log entry. The block contains log message formatters, and you can create your own formatter if required. The text formatter uses a template containing placeholders that makes it easy to generate the required format for log entries.

Logging Categories

Categories allow you to specify the target(s) for log entries processed by the block. You can define categories that relate to one or more targets. For example, you might create a category named General containing trace listeners that write to text files and XML files, and a category named Auditing for administrative information that is configured to use trace listeners that write to one or more databases. Then you can assign a log entry to one or more categories, effectively mapping it to multiple targets. The three log sources shown in the schematic in Figure 1 (all events log source, not processed log source, and errors log source) are themselves categories for which you can define trace listeners.

Poe says:
Developers and IT pros should discuss which categories will be useful when configuring the block.

Note

Logging is an added-value service for applications, and so any failures in the logging process must be handled gracefully without raising an exception to the main business processes. The Logging block achieves this by sending all logging failures to a special category (the errors log source) which is named Logging Errors & Warnings. By default, these error messages are written to Windows Event Log, though you can configure this category to write to other targets using different trace listeners if you wish.

Logging Overhead and Additional Context Information

No matter how you implement logging, it will always have some performance impact. The Logging block provides a flexible and highly configurable logging solution that is carefully designed to minimize performance impact. However, you should be aware of this impact, and consider how your own logging strategy will affect it. For example, a complex configuration that writes log entries to multiple logs and uses multiple filters is likely to have more impact than simple configurations. You must balance your requirements for logging against performance and scalability needs.

Jana says:
If minimizing the overhead of logging is a significant concern, you should consider logging asynchronously as described later in this chapter. You should also consider using the Semantic Logging Application Block in its out-of-process mode; this is described in the next chapter.

The LogWriter automatically collects some context information such as the time, the application domain, the machine name, and the process ID. To minimize the impact on performance, the LogWriter class caches some of these values and uses lazy initialization so the values are only collected if they are actually retrieved by a listener.

However, collecting additional context information can be expensive in processing terms and, if you are not going to use the information, wastes precious resources and may affect performance. Therefore, the Logging block collects other, less commonly used information from the environment, which you might require only occasionally, only if you specify that you want this information when you create the LogEntry instance. Four classes within the Logging block can collect specific sets of context information that you can add to your log entry: COM+ diagnostic information, the current stack trace, security-related information from the managed runtime, and security-related information from the operating system. There is also a dictionary property for the log entry where you can add any additional custom information you require and that must appear in your logs.
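
For example, the following is a minimal sketch of using one of these providers; it assumes the block's DebugInformationProvider (which captures the current stack trace) and a configured LogWriter instance named defaultWriter, created as shown later in this chapter.

// A hedged sketch: populate a dictionary of extended properties with the
// current stack trace and attach it to a log entry.
var extraInfo = new Dictionary<string, object>();
new DebugInformationProvider().PopulateDictionary(extraInfo);

defaultWriter.Write(new LogEntry
{
  Message = "Log entry that includes the current stack trace.",
  Categories = new[] { "General" },
  ExtendedProperties = extraInfo
});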

How Do I Use the Logging Block?

It's time to see some examples of using the Logging block, including how to create log entries and write them to various targets such as the Windows Event Log, disk files, and a database. Later you'll see how you can use some of the advanced features of the block, such as checking filter status and adding context information to a log entry. However, before you can start using the Logging block, you must configure it.

Adding the Logging Block to Your Project

Before you start editing your code to use the Logging block, you must add the block to your project. You can use the NuGet Package Manager to add the required assemblies and references. In the Manage NuGet Packages dialog in Visual Studio, search online for the EnterpriseLibrary.Logging package and install it.

If you intend to send log entries to a database, you must also install the EnterpriseLibrary.Logging.Database package. If you intend to use remote logging, you must also install the EnterpriseLibrary.Logging.Service package on the system where you are collecting the logs.

Now you are ready to add some configuration and write some code.

Configuring the Logging Block

You can configure the Logging block using the programmatic configuration approach described in Chapter 1, "Introduction to the Microsoft Enterprise Library." The Logging block includes a LoggingConfiguration class that you can use to build the configuration data that you need. Typically, you create the formatters and trace listeners that you need, and then add the log sources to the configuration. The following code sample shows how to create a LoggingConfiguration instance and then use it to create a LogWriter.

// Formatter
TextFormatter briefFormatter = new TextFormatter(...);

// Trace Listener
var flatFileTraceListener = new FlatFileTraceListener(
  @"C:\Temp\FlatFile.log", 
  "----------------------------------------", 
  "----------------------------------------", 
  briefFormatter);

// Build Configuration
var config = new LoggingConfiguration();

config.AddLogSource("DiskFiles", SourceLevels.All, true)
  .AddTraceListener(flatFileTraceListener);

LogWriter defaultWriter = new LogWriter(config);

This is a simplified version of the configuration that the sample in this chapter uses; you can see the complete configuration in the BuildProgrammaticConfig method. If you examine this method, you’ll also see how to configure filters and the special sources.

You should dispose the LogWriter instance when you have finished logging in your application to ensure that all resources are released.
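
For example, a simple sketch of the shutdown code might be:

// Flush any buffered log entries and release resources held by the listeners.
defaultWriter.Dispose();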

Beth says:
You can also use declarative configuration and the Configuration tool from previous versions of Enterprise Library to manage your configuration data in an XML file.

If you prefer to use declarative configuration, you can use the LogWriterFactory class to load the configuration data. The sample application shows how to use this approach to load configuration data from the app.config file.

LogWriterFactory logWriterFactory = new LogWriterFactory();
LogWriter logWriter = logWriterFactory.Create();

Diving in with an Example

To demonstrate the features of the Logging block, we provide a sample application that you can download and run on your own computer. You can run the executable directly from the bin\Debug folder, or you can open the solution named Logging in Visual Studio to see the code and run it under Visual Studio. The application includes a preconfigured database for storing log entries, as well as scripts you can use to create the Logging database within a different database server if you prefer.

Note

You do not need to run the scripts if you have Microsoft SQL Server LocalDB installed locally. If you want to specify a different database for logging, edit the script named CreateLoggingDb.cmd to specify the location of the database and execute it. After you do that, you must change the connection string named ExampleDatabase to point to your new database.
In addition, depending on the version of the operating system you are using, you may need to execute the application under the context of an account with administrative privileges. If you are running the sample from within Visual Studio, start Visual Studio by right-clicking the entry in your Start menu and selecting Run as administrator.

One other point to note about the sample application is that it creates a folder named Temp in the root of your C: drive if one does not already exist, and writes the text log files there so that you can easily find and view them.

Creating and Writing Log Entries with a LogWriter

The first of the examples, Simple logging with the Write method of a LogWriter, demonstrates how you can use a LogWriter directly to create log entries. The first stage is to obtain a LogWriter instance, and the example uses the simplest approach—passing a LoggingConfiguration object to the LogWriter constructor. See the following code.

LoggingConfiguration loggingConfiguration = BuildProgrammaticConfig();
LogWriter defaultWriter = new LogWriter(loggingConfiguration);

Now you can call the Write method and pass in any parameter values you require. There are many overloads of the Write method. They allow you to specify the message text, the category, the priority (a numeric value), the event ID, the severity (a value from the TraceEventType enumeration), and a title for the event. There is also an overload that allows you to add custom values to the log entry by populating a Dictionary with name and value pairs (you will see this used in a later example). Our example code uses several of these overloads. We've removed some of the Console.WriteLine statements from the code listed here to make it easier to see what it actually does.

// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  defaultWriter.Write("Log entry created using the simplest overload.");
  defaultWriter.Write("Log entry with a single category.", "General");
  defaultWriter.Write("Log entry with a category, priority, and event ID.",
                      "General", 6, 9001);
  defaultWriter.Write("Log entry with a category, priority, event ID, "
                      + "and severity.", "General", 5, 9002,
                      TraceEventType.Warning);
  defaultWriter.Write("Log entry with a category, priority, event ID, "
                      + "severity, and title.", "General", 8, 9003,
                      TraceEventType.Warning, "Logging Block Examples");
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}

Notice how the code first checks to see if logging is enabled. There is no point using valuable processor cycles and memory generating log entries if they aren't going anywhere. If you add a LogEnabledFilter instance to the LoggingConfiguration object, you can use it to control logging for the whole block (you can see this in the example application). This filter has the single property, Enabled, that allows administrators to enable and disable all logging for the block. When it is set to False, the IsLoggingEnabled property of the LogWriter will return false as well.
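
The following is a minimal sketch of adding such a filter when you build the configuration; the filter name is arbitrary and config is the LoggingConfiguration instance shown earlier.

// Add a filter that administrators can later set to false to disable
// all logging for the block.
var logEnabledFilter = new LogEnabledFilter("LogEnabled Filter", true);
config.Filters.Add(logEnabledFilter);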

The example produces the following result. All of the events are sent to the General category, which is configured to write events to the Windows Application Event Log (this is the default configuration for the block).

Created a Log Entry using the simplest overload.
Created a Log Entry with a single category.
Created a Log Entry with a category, priority, and event ID.
Created a Log Entry with a category, priority, event ID, and severity.
Created a Log Entry with a category, priority, event ID, severity, and title.
Open Windows Event Viewer 'Application' Log to see the results.

You can open Windows Event Viewer to see the results. Figure 2 shows the event generated by the last of the Write statements in this example.

Figure 2 - The logged event


Note

If you do not specify a value for one of the parameters of the Write method, the Logging block uses the default value for that parameter. The defaults are Category = General, Priority = -1, Event ID = 1, Severity = Information, and an empty string for Title.

About Logging Categories

Categories are the way that Enterprise Library routes events sent to the block to the appropriate target, such as a database, the event log, an e-mail message, and more. The previous example makes use of the default configuration for the Logging block. When you add the Logging block to your application configuration using the Enterprise Library configuration tools, it contains the single category named General that is configured to write events to the Windows Application Event Log.

You can change the behavior of logging for any category. For example, you can change the behavior of the previous example by reconfiguring the event log trace listener specified for the General category, or by reconfiguring the text formatter that this trace listener uses. You can change the event log to which the event log trace listener sends events; edit the template used by the text formatter; or add other trace listeners.

However, it's likely that your application will need to perform different types of logging for different tasks. The typical way to achieve this is to define additional categories, and then specify the type of trace listener you need for each category. For example, you may want to send audit information to a text file or an XML file, to a database, or both; instead of to Windows Event Log. Alternatively, you may want to send indications of catastrophic failures to administrators as email messages; in this scenario, the severity of the event rather than the category determines where to send it. If you are using an enterprise-level monitoring system, you may instead prefer to send events to another system through Windows Message Queuing.

You can easily add categories to your application configuration. The approach is to create the trace listeners for the logging targets you require, such as the flat file trace listener or database trace listener, and then use the AddLogSource method to add the categories you require.

Beth says:
Typically, you will want to reuse the file-based listeners.

The following code sample shows how you can configure the General category to output log messages to the Windows Event Log and the Important category to output log messages to the event log and a rolling flat file.

var rollingFlatFileTraceListener = 
  new RollingFlatFileTraceListener(@"C:\Logs\RollingFlatFile.log",
  "----------------------------------------",
  "----------------------------------------",
  extendedFormatter, 20, "yyyy-MM-dd",
  RollFileExistsBehavior.Increment, RollInterval.None, 3);
var eventLog = new EventLog("Application", ".", "Enterprise Library Logging");
var eventLogTraceListener = new FormattedEventLogTraceListener(eventLog);

config.AddLogSource("General", SourceLevels.All, true)
  .AddTraceListener(eventLogTraceListener);
config.AddLogSource("Important", SourceLevels.All, true)
  .AddTraceListener(eventLogTraceListener);
config.LogSources["Important"]
  .AddTraceListener(rollingFlatFileTraceListener);

Beth says:
You can have multiple listeners associated with a category.

You can specify two properties for each category (source) you add. You can specify a filtering level for the log source using the SourceLevels enumeration, and you can set the AutoFlush property to specify whether the block should flush log entries to their configured target trace listeners as soon as they are written to the block, or only when you call the Flush method of the listener. If you set the AutoFlush property to False, ensure that your code calls this method when an exception or failure occurs to avoid losing any cached logging information.

Markus says:
The default value for the Auto Flush property is True. If you change it to False it’s your responsibility to flush the listener if something goes wrong in your application. The buffer is also flushed when the log writer is disposed.

The other property you can set for each category is the SourceLevels property (which sets the Source Levels property of each listener). This specifies the minimum severity (such as Warning or Critical) for the log entries that the category filter will pass to its configured trace listeners. Any log entries with a lower severity will be blocked. The default severity is All, and so no log entries will be blocked unless you change this value. You can also configure a Severity Filter (which sets the Filter property) for each individual trace listener, and these values can be different for trace listeners in the same category. You will see how to use the Filter property of a trace listener in the next example in this chapter.
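
As a preview, the following sketch shows one way to set the Filter property of a trace listener in code, using the standard EventTypeFilter class from System.Diagnostics; the flatFileTraceListener variable is the listener created earlier.

// Only log entries with severity Error or higher will reach this listener.
flatFileTraceListener.Filter = new EventTypeFilter(SourceLevels.Error);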

Filtering by Category

You can use the CategoryFilter class to define a filter that you can use to filter log entries sent to the block based on their membership in specified categories. You can add multiple categories to your configuration to manage filtering, though overuse of this capability can make it difficult to manage logging.

To define your category filters, create a list of category names to pass to the constructor of the CategoryFilter class. The constructor also has a parameter that enables you to specify the filter mode (Allow all except..., or Deny all except...). The example application contains only a single filter that is configured to allow logging to all categories except for the category named (rather appropriately) BlockedByFilter. You will see the BlockedByFilter category used in the section "Capturing Unprocessed Events and Logging Errors" later in this chapter.
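
The following sketch shows how a filter equivalent to the one in the example might be built programmatically and added to the LoggingConfiguration instance.

// Allow log entries in all categories except BlockedByFilter.
ICollection<string> blockedCategories = new List<string> { "BlockedByFilter" };
var categoryFilter = new CategoryFilter(
  "Category Filter", blockedCategories, CategoryFilterMode.AllowAllExceptDenied);
config.Filters.Add(categoryFilter);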

Writing Log Entries to Multiple Categories

In addition to being able to define multiple categories, you can send a log entry to more than one category in a single operation. This approach often means you can define fewer categories, and it simplifies the configuration because each category can focus on a specific task. You don't need to have multiple categories with similar sets of trace listeners.

The second example, Logging to multiple categories with the Write method of a LogWriter, shows how to write to multiple categories. The example has two categories, named DiskFiles and Important, defined in the configuration. The DiskFiles category contains references to a flat file trace listener and an XML trace listener. The Important category contains references to an event log trace listener and a rolling flat file trace listener.

The example uses the following code to create an array of the two category names, DiskFiles and Important, and then it writes three log messages to these two categories using the Write method of the LogWriter in the same way as in the previous example. Again, we've removed some of the Console.WriteLine statements to make it easier to see what the code actually does.

// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  // Create a string array (or List<>) containing the categories.
  string[] logCategories = new string[] {"DiskFiles", "Important"};

  // Write the log entries using these categories.
  defaultWriter.Write("Log entry with multiple categories.", logCategories);
  defaultWriter.Write("Log entry with multiple categories, a priority, "
                      + "and an event ID.", logCategories, 7, 9004);
  defaultWriter.Write("Log entry with multiple categories, a priority, "
                      + "event ID, severity, and title.", logCategories, 10, 
                      9005, TraceEventType.Critical, "Logging Block Examples");
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}

Controlling Output Formatting

If you run the example above and then open Windows Event Log, you will see the three events generated by this example. Also, in the C:\Temp folder, you will see three files. RollingFlatFile.log is generated by the rolling flat file trace listener, and contains the same information as the event log event generated by the event log trace listener. If you explore the configuration, you will see that they both use the same text formatter to format the output.

The FlatFile.log file, which is generated by the flat file trace listener, contains only a simplified set of values for each event. For example, this is the output generated for the last of the three log entries.

----------------------------------------
Timestamp: 24/11/2009 10:49:26
Message: Log entry with multiple categories, a priority, event ID, severity, and title.
Category: DiskFiles, Important
Priority: 10
EventId: 9005
ActivityId: 00000000-0000-0000-0000-000000000000
Severity: Critical
Title:Logging Block Examples
----------------------------------------

The reason is that the flat file trace listener is configured to use a different text formatter—in this case one named briefFormatter. Most trace listeners use a formatter to translate the contents of the log entry properties into the appropriate format for the target of that trace listener. Trace listeners that create text output, such as a text file or an e-mail message, use a text formatter defined within the configuration of the block. You can also format log entries using XML or JSON formatters.

If you examine the configured text formatter, you will see that it has a Template property. A full list of tokens and their meaning is available in the online documentation for Enterprise Library, although most are fairly self-explanatory.
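
For example, a brief template might look like the following sketch; the placeholder tokens shown are standard ones, and you can add or remove tokens to suit your needs.

// A sketch of a simple text formatter template using common tokens.
TextFormatter briefFormatter = new TextFormatter(
  "Timestamp: {timestamp(local)}{newline}" +
  "Message: {message}{newline}" +
  "Category: {category}{newline}" +
  "Priority: {priority}{newline}" +
  "EventId: {eventid}{newline}" +
  "Severity: {severity}{newline}" +
  "Title: {title}{newline}");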

Note

If you use the XmlLogFormatter class to save the log messages formatted as XML in a flat file, you will need to perform some post-processing on the log files before you can use a tool such as Log Parser to read them. This is because the flat file does not have an XML root element, therefore the file does not contain well-formed XML. It’s easy to add opening and closing tags for a root element to the start and end of the file.
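
A minimal post-processing sketch might look like this; the file paths are hypothetical.

// Wrap the raw log entries in a root element so the result is well-formed XML.
string rawEntries = File.ReadAllText(@"C:\Temp\XmlEntries.log");
File.WriteAllText(@"C:\Temp\XmlEntries-wellformed.xml",
  "<root>" + rawEntries + "</root>");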

Non-Formatted Trace Listeners

While we are discussing output formatting, there is one other factor to consider. Some trace listeners do not use a text formatter to format the output they generate. This is generally because the output is in a binary or specific format. The XML trace listener is a typical example that does not use a text formatter.

For such trace listeners, you can set the TraceOutputOptions property to one of a range of values to specify the values you want to include in the output. The TraceOutputOptions property accepts a value from the System.Diagnostics.TraceOptions enumeration. Valid values include CallStack, DateTime, ProcessId, LogicalOperationStack, Timestamp, and ThreadId. The documentation installed with Enterprise Library, and the documentation for the System.Diagnostics namespace on MSDN, provide more information.
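
For example, the following sketch shows how you might set this property on an XML trace listener; the xmlTraceListener variable name is an assumption.

// Include the date/time and process ID with each trace output.
xmlTraceListener.TraceOutputOptions =
  TraceOptions.DateTime | TraceOptions.ProcessId;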

Filtering by Severity in a Trace Listener

The previous example generates a third disk file that we haven't looked at yet. We didn't forget this, but saved it for this section because it demonstrates another feature of the trace listeners that you will often find extremely useful. To see this, you need to view the file XmlLogFile.xml that was generated in the C:\Temp folder by the XML trace listener we used in the previous example. You should open it in Microsoft Internet Explorer (or another web browser or text editor) to see the structure.

You will see that the file contains only one event from the previous example, not the three that the code in the example generated. This is because the XML trace listener has the Filter property in its configuration set to Error. Therefore, it will log only events with a severity of Error or higher. If you look back at the example code, you will see that only the last of the three calls to the Write method specified a value for the severity (TraceEventType.Critical in this case), and so the default value Information was used for the other two events.

Note

If you get an error indicating that the XML document created by the XML trace listener is invalid, it's probably because you have more than one log entry in the file. This means that it is not a valid XML document—it contains separate event log entries added to the file each time you ran this example. To view it as XML, you must open the file in a text editor and add an opening and closing element (such as <root> and </root>) around the content. Or, just delete it and run the example once more.

All trace listeners expose the Filter property, and you can use this to limit the log entries written to the logging target to only those that are important to you. If your code generates many information events that you use for monitoring and debugging only under specific circumstances, you can filter these to reduce the growth and size of the log when they are not required.

Alternatively, (as in the example) you can use the Filter property to differentiate the granularity of logging for different listeners in the same category. It may be that a flat file trace listener will log all entries to an audit log file for some particular event, but an Email trace listener in the same category will send email messages to administrators only when an Error or Critical event occurs.

Filtering All Log Entries by Priority

As well as being able to filter log entries in individual trace listeners based on their severity, you can set the Logging block to filter all log entries sent to it based on their priority. Alongside the LogEnabledFilter class and CategoryFilter class (which we discussed earlier in this chapter), you can create a PriorityFilter instance.

This filter has two properties that you can set: Minimum Priority and Maximum Priority. The default setting for the priority of a log entry is -1, which is the same as the default setting of the Minimum Priority property of the filter, and there is no maximum priority set. Therefore, this filter will not block any log entries. However, if you change the defaults for these properties, only log entries with a priority between the configured values (including the specified maximum and minimum values) will be logged. The exception is log entries that have the default priority of -1. These are never filtered.
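
The following sketch shows how such a filter might be added when building the configuration.

// Log only entries with a priority between 2 and 5 inclusive; entries with
// the default priority of -1 are never filtered.
var priorityFilter = new PriorityFilter("Priority Filter", 2, 5);
config.Filters.Add(priorityFilter);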

Carlos says:
It’s important to be consistent in your code in the way that you use categories, priorities, and severities when you generate log entries if the filtering is going to be effective. The Semantic Logging Application Block discussed in the next chapter takes a different approach with these properties that makes it much easier to be consistent.

Creating and Using LogEntry Objects

So far we have used the Write method of the LogWriter class to generate log entries. An alternative approach that may be useful if you want to create log entries individually, perhaps to return them from methods or to pass them between processes, is to generate instances of the LogEntry class and then write them to the configured targets afterwards.

The example, Creating and writing log entries with a LogEntry object, demonstrates this approach. It creates two LogEntry instances. The code first calls the most complex constructor of the LogEntry class that accepts all of the possible values. This includes a Dictionary of objects with a string key (in this example, the single item Extra Information) that will be included in the output of the trace listener and formatter. Then it writes this log entry using an overload of the Write method of the LogWriter that accepts a LogEntry instance.

Next, the code creates a new empty LogEntry using the default constructor and populates this by setting individual properties, before writing it using the same Write method of the LogWriter.

// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  // Create a Dictionary of extended properties
  Dictionary<string, object> exProperties = new Dictionary<string, object>();
  exProperties.Add("Extra Information", "Some Special Value");  

  // Create a LogEntry using the constructor parameters. 
  LogEntry entry1 = new LogEntry("LogEntry with category, priority, event ID, " 
                                 + "severity, and title.", "General", 8, 9006,
                                 TraceEventType.Error, "Logging Block Examples",
                                 exProperties);
  defaultWriter.Write(entry1);

  // Create a LogEntry and populate the individual properties.
  LogEntry entry2 = new LogEntry
  {
    Categories = new string[] { "General" },
    EventId = 9007,
    Message = "LogEntry with individual properties specified.",
    Priority = 9,
    Severity = TraceEventType.Warning,
    Title = "Logging Block Examples",
    ExtendedProperties = exProperties
  };   
  defaultWriter.Write(entry2);
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}

This example writes the log entries to the Windows Application Event Log by using the General category. If you view the events this example generates, you will see the values set in the code above including (at the end of the list) the extended property we specified using a Dictionary. You can see this in Figure 3.

Figure 3 - A log entry written to the General category


Capturing Unprocessed Events and Logging Errors

The capability to route log entries through different categories to a configured set of trace listener targets provides a very powerful mechanism for performing a wide range of logging activities. However, it prompts some questions. In particular, what happens if the categories specified in a log entry don’t match any in the configuration? And what happens if there is an error when the trace listener attempts to write the log entry to the target?

About Special Sources

In fact, the Logging block includes three special sources that handle these situations. Each is effectively a category, and you can add references to configured trace listeners to each one so that events arriving in that category will be written to the target(s) you specify.

The All Events special source receives all events, irrespective of all other settings within the configuration of the block. You can use this to provide an audit trail of all events, if required. By default, it has no trace listeners configured.

The Unprocessed Category special source receives any log entry that has a category that does not match any configured categories. By default, this category has no trace listeners configured.

Markus says:
You can use the Unprocessed Category as a catch-all mechanism for log entries that weren’t handled elsewhere.

When enabled, the Logging Errors & Warnings special source receives any log entry that causes an error in the logging process. By default, this category contains a reference to a trace listener that writes details of the error to the Windows Application Event Log, though you can reconfigure this if you wish.
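
If you configure the block programmatically, the special sources are exposed by the LoggingConfiguration object. The following is a hedged sketch; the SpecialSources property and the listener variable names are assumptions, so check the BuildProgrammaticConfig method in the sample for the exact usage.

// Send unmatched entries to a flat file listener and logging errors to the event log.
config.SpecialSources.Unprocessed.AddTraceListener(unprocessedFlatFileTraceListener);
config.SpecialSources.LoggingErrorsAndWarnings.AddTraceListener(eventLogTraceListener);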

An Example of Using Special Sources

The example, Using Special Sources to capture unprocessed events or errors, demonstrates how the Logging block reacts under these two circumstances. The code first writes a log entry to a category named InvalidCategory, which does not exist in the configuration. Next, it writes another log entry to a category named CauseLoggingError that is configured to use a Database trace listener. However, this trace listener specifies a connection string that is invalid; it points to a database that does not exist.

// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  // Create log entry to be processed by the "Unprocessed" special source.
  defaultWriter.Write("Entry with category not defined in configuration.",
                      "InvalidCategory");

  // Create log entry to be processed by the "Errors & Warnings" special source.
  defaultWriter.Write("Entry that causes a logging error.", "CauseLoggingError");
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}

You might expect that neither of these log entries would actually make it to their target. However, the example generates the following messages that indicate where to look for the log entries that are generated.

Created a Log Entry with a category name not defined in the configuration.
The Log Entry will appear in the Unprocessed.log file in the C:\Temp folder.

Created a Log Entry that causes a logging error.
The Log Entry will appear in the Windows Application Event Log.

This occurs because we configured the Unprocessed Category with a reference to a flat file trace listener that writes log entries to a file named Unprocessed.log. If you open this file, you will see the log entry that was sent to the InvalidCategory category.

The example uses the default configuration for the Logging Errors & Warnings special source. This means that the log entry that caused a logging error will be sent to the formatted event log trace listener referenced in this category. If you open the application event log, you will see this log entry. The listing below shows some of the content.

Timestamp: 24/11/2009 15:14:30
Message: Tracing to LogSource 'CauseLoggingError' failed. Processing for other 
sources will continue. See summary information below for more information. 
Should this problem persist, stop the service and check the configuration file(s) 
for possible error(s) in the configuration of the categories and sinks.

Summary for Enterprise Library Distributor Service:
======================================
--> 
Message: 
Timestamp: 24/11/2009 15:14:30
Message: Entry that causes a logging error.
Category: CauseLoggingError
...
...
Exception Information Details:
======================================
Exception Type: System.Data.SqlClient.SqlException
Errors: System.Data.SqlClient.SqlErrorCollection
Class: 11
LineNumber: 65536
Number: 4060
Procedure: 
Server: (localdb)\v11.0
State: 1
Source: .Net SqlClient Data Provider
ErrorCode: -2146232060
Message: Cannot open database "DoesNotExist" requested by the login. The login failed.
Login failed for user 'xxxxxxx\xxx'.
...
...
StackTrace Information Details: 
======================================
...
...

In addition to the log entry itself, you can see that the event contains a wealth of information to help you to debug the error. It contains a message indicating that a logging error occurred, followed by the log entry itself. However, after that is a section containing details of the exception raised by the logging mechanism (you can see the error message generated by the SqlClient data access code), and after this is the full stack trace.

Note

One point to be aware of is that logging database and security exceptions should always be done in such a way as to protect sensitive information that may be contained in the logs. You must ensure that you appropriately restrict access to the logs, and only expose non-sensitive information to other users. You may want to consider applying exception shielding, as described in Chapter 3, "Error Management Made Exceptionally Easy."

Logging to a Database

One of the most common requirements for logging, after Windows Event Log and text files, is to store log entries in a database. The Logging block contains the database trace listener that makes this easy. You configure the database using a script provided with Enterprise Library, located in the \Blocks\Logging\Src\DatabaseTraceListener\Scripts folder of the source code. We also include these scripts with the example for this chapter.

Markus says:
To log to a database you must add the Enterprise Library - Logging Application Block Database Provider NuGet package to your project.

The scripts assume that you will use the SQL Server Express Local Database instance, but you can edit the CreateLoggingDb.cmd file to change the target to a different database server. The SQL script that the command file executes creates a database named Logging, and adds the required tables and stored procedures to it.

However, if you only want to run the example application we provide for this chapter, you do not need to create a database. The project contains a preconfigured database file named Logging.mdf (located in the bin\Debug folder) that is auto-attached to your SQL Server LocalDB database instance. You can connect to this database using Visual Studio Server Explorer to see the contents. The constructor of the database trace listener includes a database parameter, which is a reference to this database. The following code sample shows this configuration.

var databaseTraceListener = 
  new FormattedDatabaseTraceListener(
  exampleDatabase, "WriteLog", "AddCategory", extendedFormatter);
config.AddLogSource("Database", SourceLevels.All, true)
  .AddTraceListener(databaseTraceListener);
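
Here, exampleDatabase is a Database instance from the Data Access block. Assuming the ExampleDatabase connection string used by the sample, it might be created as follows (after the DatabaseFactory has been bootstrapped as shown below).

// "ExampleDatabase" is the connection string name used by the sample.
Database exampleDatabase = DatabaseFactory.CreateDatabase("ExampleDatabase");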

The database trace listener uses a text formatter to format the output, and so you can edit the template used to generate the log message to suit your requirements. You can also add extended properties to the log entry if you wish. In addition, as with all trace listeners, you can filter log entries based on their severity if you like.

The Log table in the database contains columns for only the commonly required values, such as the message, event ID, priority, severity, title, timestamp, machine and process details, and more. It also contains a column named FormattedMessage that contains the message generated by the text formatter.

The sample application includes the following code to initialize the DatabaseFactory static façade from the configuration file.

DatabaseProviderFactory factory = new DatabaseProviderFactory(
  new SystemConfigurationSource(false).GetSection);
DatabaseFactory.SetDatabaseProviderFactory(factory, false);

Beth says:
If you want to use declarative configuration you need to bootstrap the DatabaseFactory static façade to allow the database listener configuration object to properly set up the listener.

Using the Database Trace Listener

The example, Sending log entries to a database, demonstrates the use of the database trace listener. The code is relatively simple, following the same style as the earlier example of creating a Dictionary of extended properties, and then using the Write method of the LogWriter to write two log entries. The first log entry is created by the LogWriter from the parameter values provided to the Write method. The second is generated in code as a new LogEntry instance by specifying the values for the constructor parameters. Also notice how easy it is to add additional information to a log entry using a simple Dictionary as the ExtendedProperties of the log entry.

// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  // Create a Dictionary of extended properties
  Dictionary<string, object> exProperties = new Dictionary<string, object>();
  exProperties.Add("Extra Information", "Some Special Value");

  // Write a log entry using values passed as parameters to the Write method.
  defaultWriter.Write("Log entry with category, priority, event ID, severity, "
                      + "title, and extended properties.", "Database",
                      5, 9008, TraceEventType.Warning, 
                      "Logging Block Examples", exProperties);

  // Create a LogEntry using the constructor parameters. 
  LogEntry entry = new LogEntry("LogEntry with category, priority, event ID, "
                                + "severity, title, and extended properties.",
                                "Database", 8, 9009, TraceEventType.Error, 
                                "Logging Block Examples", exProperties);
  defaultWriter.Write(entry);
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}

To see the two log messages created by this example, you can open the Logging.mdf database from the bin\Debug folder using Visual Studio Server Explorer. You will find that the FormattedMessage column of the second message contains the following. You can see the extended property information we added using a Dictionary at the end of the message.

Timestamp: 03/12/2009 17:14:02
Message: LogEntry with category, priority, event ID, severity, title, and extended properties.
Category: Database
Priority: 8
EventId: 9009
Severity: Error
Title: Logging Block Examples
Activity ID: 00000000-0000-0000-0000-000000000000
Machine: BIGFOOT
App Domain: LoggingExample.vshost.exe
ProcessId: 5860
Process Name: E:\Logging\Logging\bin\Debug\LoggingExample.vshost.exe
Thread Name: 
Win32 ThreadId:3208
Extended Properties: Extra Information - Some Special Value

Note that you cannot simply delete logged information due to the references between the Log and CategoryLog tables. However, the database contains a stored procedure named ClearLogs that you can execute to remove all log entries.
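
For example, you could execute it using the Data Access block; this sketch assumes the ExampleDatabase connection string used by the sample.

// Remove all log entries by executing the ClearLogs stored procedure.
Database db = DatabaseFactory.CreateDatabase("ExampleDatabase");
db.ExecuteNonQuery("ClearLogs");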

Note

The connection string for the database we provide with this example is:
Data Source=(localdb)\v11.0;AttachDbFilename=|DataDirectory|\Database\Logging.mdf;
Integrated Security=True
If you have configured a different database using the scripts provided with Enterprise Library, you may find that you get an error when you run this example. It is likely that you have an invalid connection string in your App.config file for your database. In addition, use the Services applet in your Administrative Tools folder to check that the SQL Server (SQLEXPRESS) service is running.

Logging Asynchronously

Where you have high volumes of trace messages or where performance is critical you may want to use the AsynchronousTraceListenerWrapper class to wrap an existing listener and enable it to write log messages asynchronously. However, the risk with writing log messages asynchronously is that if the internal buffer overflows or the application shuts down before all the entries are flushed, you will lose some log messages. It is possible to adjust the size of the buffer to your requirements.

Jana says:
Using the asynchronous wrapper changes the perceived time that it takes to log an entry. Control returns to the application faster, but the block still needs to write the log entry to its destination.

The following code sample shows how you can wrap an existing database listener so that you can write messages asynchronously to the database.

var databaseTraceListener = new FormattedDatabaseTraceListener(
  DatabaseFactory.CreateDatabase("ExampleDatabase"), "WriteLog",
  "AddCategory", extendedFormatter);

config.AddLogSource("AsyncDatabase", SourceLevels.All, true)
  .AddAsynchronousTraceListener(databaseTraceListener);

You can use the asynchronous listener in exactly the same way that you use the wrapped listener. For more information about how to manage the buffer size, see the topic Logging Messages Asynchronously in the Enterprise Library Reference Documentation.
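
If you prefer to work with the wrapper directly, you can construct an AsynchronousTraceListenerWrapper yourself and add it as an ordinary listener. The following is a hedged sketch; the constructor also accepts optional parameters (including the buffer size), so check the reference documentation for the exact signature.

// Wrap an existing listener explicitly; the log source name here is arbitrary.
var asyncListener = new AsynchronousTraceListenerWrapper(databaseTraceListener);
config.AddLogSource("AsyncDatabaseExplicit", SourceLevels.All, true)
  .AddTraceListener(asyncListener);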

Carlos says:
If you want to use declarative configuration, you can enable asynchronous logging by selecting it in the trace listener configuration.

Reconfiguring Logging at Run Time

If you are troubleshooting a problem with a production system, you may want to be able to make temporary changes to your logging configuration to collect additional information. You may also want to be able to do this without stopping and starting your application.

Markus says:
Assign names to the listeners that you plan to reconfigure; it makes them easier to find. During reconfiguration, all incoming logging operations are blocked, so you want the update to be as quick as possible.

The example, Dynamically changing logging settings, demonstrates how you can modify the logging configuration settings on the fly. The following code sample shows a method that replaces the filter settings in the configuration. In this example, the only change is to the maximumPriority setting in the priority filter.

static void ReplacePriorityFilter(int maximumPriority)
{
  defaultWriter.Configure(cfg =>
  {
    cfg.Filters.Clear();

    // Category Filters
    ICollection<string> categories = new List<string>();
    categories.Add("BlockedByFilter");

    // Log Filters
    var priorityFilter = new PriorityFilter(
      "Priority Filter", 2, maximumPriority);
    var logEnabledFilter = new LogEnabledFilter("LogEnabled Filter", true);
    var categoryFilter = new CategoryFilter(
      "Category Filter", categories, CategoryFilterMode.AllowAllExceptDenied);

    cfg.Filters.Add(priorityFilter);
    cfg.Filters.Add(logEnabledFilter);
    cfg.Filters.Add(categoryFilter);
  });
}

The Configure method in the LogWriter class enables you to modify parts of the configuration: in this case the collection of filters in the configuration. You can use the same approach to modify the category sources, special sources, and global configuration settings such as the IsLoggingEnabled property.
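
For example, the following sketch shows how you might toggle the global enabled setting at run time.

// Disable all logging at run time through the Configure method.
defaultWriter.Configure(cfg => cfg.IsLoggingEnabled = false);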

Poe says:
You can also use the ConfigurationManager class to read a configuration value, such as the maximumPriority setting, from an external source. The ConfigurationManager class can monitor the external source for changes and then invoke the Configure method. Chapter 1, "Introduction to the Microsoft Enterprise Library," describes how to use the ConfigurationManager class.

Testing Logging Filter Status

As you've seen in earlier examples, the Logging block allows you to check if logging is enabled before you create and write a log entry. You can avoid the additional load that this places on your application if logging is not enabled. However, even when logging is enabled, there is no guarantee that a specific log entry will be written to the target log store. For example, it may be blocked by a priority filter if the message priority is below a specified level, or it may belong only to one or more categories where the relevant category filter(s) have logging disabled (a common scenario in the case of logging code specifically designed only for debugging use).

The example, Checking filter status and adding context information to the log entry, demonstrates how you can check if a specific log entry will be written to its target before you actually call the Write method. After checking that logging is not globally disabled, the example creates two LogEntry instances with different categories and priorities. It passes each in turn to another method named ShowDetailsAndAddExtraInfo. The following is the code that creates the LogEntry instances.

// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  // Create a new LogEntry with two categories and priority 3.
  string[] logCategories = new string[] {"General", "DiskFiles"};
  LogEntry entry1 = new LogEntry("LogEntry with categories 'General' and "
                                 + "'DiskFiles' and Priority 3.", logCategories,
                                 3, 9009, TraceEventType.Error, 
                                 "Logging Block Examples", null);
  ShowDetailsAndAddExtraInfo(entry1);

  // Create a new LogEntry with one category and priority 1.
  logCategories = new string[] { "BlockedByFilter" };
  LogEntry entry2 = new LogEntry("LogEntry with category 'BlockedByFilter' and "
                                 + "Priority 1.", logCategories, 1, 9010, 
                                 TraceEventType.Information, 
                                 "Logging Block Examples", null);
  ShowDetailsAndAddExtraInfo(entry2);
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}

The ShowDetailsAndAddExtraInfo method takes a LogEntry instance and does two different things. Firstly, it shows how you can obtain information about the way that the Logging block will handle the log entry. This may be useful in advanced scenarios where you need to be able to programmatically determine if a specific log entry was detected by a specific trace source, or will be written to a specific target. Secondly, it demonstrates how you can check if specific filters, or all filters, will block a log entry from being written to its target.

Obtaining Information about Trace Sources and Trace Listeners

The first section of the ShowDetailsAndAddExtraInfo method iterates through the collection of trace sources (LogSource instances) exposed by the GetMatchingTraceSources method of the LogWriter class. Each LogSource instance exposes a Listeners collection that contains information about the listeners (which specify the targets to which the log entry will be sent).

void ShowDetailsAndAddExtraInfo(LogEntry entry)
{
  // Display information about the Trace Sources and Listeners for this LogEntry. 
  IEnumerable<LogSource> sources = defaultWriter.GetMatchingTraceSources(entry);
  foreach (LogSource source in sources)
  {
    Console.WriteLine("Log Source name: '{0}'", source.Name);
    foreach (TraceListener listener in source.Listeners)
    {
      Console.WriteLine(" - Listener name: '{0}'", listener.Name);
    }
  }
  ...

Checking if Filters Will Block a Log Entry

Next, the ShowDetailsAndAddExtraInfo method checks if any filters will block the current log entry. There are two ways you can do this. The first is to query each filter type in turn, or just a specific filter type, by using the GetFilter method of the LogWriter class to get a reference to that type of filter. You can then check whether a filter of that type is configured (GetFilter returns null if it is not), and call its ShouldLog method (to which you pass the list of categories or the priority of the log entry, depending on the filter type) to see if logging will succeed.

Dn440731.note(en-us,PandP.60).gifBeth says:
Beth To use this feature, you need to know which filters exist before you can perform the check.

The following code shows this approach. It also shows the simpler approach that you can use if you are not interested in the result for a specific filter type. The LogWriter class also exposes the ShouldLog method, which indicates if any filters will block this entry.

...
  // Check if any filters will block this LogEntry.
  // This approach allows you to check for specific types of filter.
  // If there are no filters of the specified type configured, the GetFilter 
  // method returns null, so check this before calling the ShouldLog method.
  CategoryFilter catFilter = defaultWriter.GetFilter<CategoryFilter>();
  if (null == catFilter || catFilter.ShouldLog(entry.Categories))
  {
    Console.WriteLine("Category Filter(s) will not block this LogEntry.");
  }
  else
  {
    Console.WriteLine("A Category Filter will block this LogEntry.");
  }

  PriorityFilter priFilter = defaultWriter.GetFilter<PriorityFilter>();
  if (null == priFilter || priFilter.ShouldLog(entry.Priority))
  {
    Console.WriteLine("Priority Filter(s) will not block this LogEntry.");
  }
  else
  {
    Console.WriteLine("A Priority Filter will block this LogEntry.");
  }

  // Alternatively, a simple approach can be used to check for any type of filter
  if (defaultWriter.ShouldLog(entry))
  {
    Console.WriteLine("This LogEntry will not be blocked by config settings.");
    ...
    // Add context information to log entries after checking that the log entry
    // will not be blocked due to configuration settings. See the following
    // section 'Adding Additional Context Information' for details.
    ...
  }
  else
  {
    Console.WriteLine("This LogEntry will be blocked by configuration settings.");
  }
}

After you determine that logging will succeed, you can add extra context information and write the log entry. You'll see the code to achieve this shortly. In the meantime, this is the output generated by the example. You can see that it contains details of the log (trace) sources and listeners for each of the two log entries created by the earlier code, and the result of checking whether the configured category and priority filters will block each log entry.

Dn440731.note(en-us,PandP.60).gifMarkus says:
Markus By checking whether the logging will succeed, you can decide whether to perform additional operations, such as collecting context information to add to the log entry, which might be relatively expensive.

Created a LogEntry with categories 'General' and 'DiskFiles'.
Log Source name: 'General'
 - Listener name: 'Formatted EventLog TraceListener'
Log Source name: 'DiskFiles'
 - Listener name: 'FlatFile TraceListener'
 - Listener name: 'XML Trace Listener'
Category Filter(s) will not block this LogEntry.
Priority Filter(s) will not block this LogEntry.
This LogEntry will not be blocked due to configuration settings.
...
Created a LogEntry with category 'BlockedByFilter', and Priority 1.
Log Source name: 'BlockedByFilter'
 - Listener name: 'Formatted EventLog TraceListener'
A Category Filter will block this LogEntry.
A Priority Filter will block this LogEntry.
This LogEntry will be blocked due to configuration settings.

Adding Additional Context Information

While it may seem useful to include every conceivable item of information in your log messages, it's not always the best approach. Collecting information from the environment consumes processing cycles and increases the load that logging places on your application, so the Logging block is highly optimized to minimize this load. For example, some of the less useful information, particularly information that requires additional resources to collect, is not included in the log messages by default.

However, you can collect this information if you wish. You may decide to do so in special debugging instrumentation that you only turn on when investigating problems, or for specific areas of your code where you need the additional information, such as security context details for a particularly sensitive process.

After checking that a log entry will not be blocked by filters, the ShowDetailsAndAddExtraInfo method (shown in the previous section) adds a range of additional context and custom information to the log entry. It uses the four standard Logging block helper classes that can generate additional context information and add it to a Dictionary. These helper classes are:

  • The DebugInformationProvider, which adds the current stack trace to the Dictionary.
  • The ManagedSecurityContextInformationProvider, which adds the current identity name, authentication type, and authentication status to the Dictionary.
  • The UnmanagedSecurityContextInformationProvider, which adds the current user name and process account name to the Dictionary.
  • The ComPlusInformationProvider, which adds the current activity ID, application ID, transaction ID (if any), direct caller account name, and original caller account name to the Dictionary.

The following code shows how you can use these helper classes to create additional information for a log entry. It also demonstrates how you can add custom information to the log entry—in this case by reading the contents of the application configuration file into the Dictionary. After populating the Dictionary, you simply set it as the value of the ExtendedProperties property of the log entry before writing that log entry.

...
// Create additional context information to add to the LogEntry. 
Dictionary<string, object> dict = new Dictionary<string, object>();
// Use the information helper classes to get information about 
// the environment and add it to the dictionary.
DebugInformationProvider debugHelper = new DebugInformationProvider();
debugHelper.PopulateDictionary(dict);

ManagedSecurityContextInformationProvider infoHelper 
    = new ManagedSecurityContextInformationProvider();
infoHelper.PopulateDictionary(dict);

UnmanagedSecurityContextInformationProvider secHelper 
    = new UnmanagedSecurityContextInformationProvider();
secHelper.PopulateDictionary(dict);

ComPlusInformationProvider comHelper = new ComPlusInformationProvider();
comHelper.PopulateDictionary(dict);

// Get any other information you require and add it to the dictionary.
string configInfo = File.ReadAllText(@"..\..\App.config");
dict.Add("Config information", configInfo);

// Set dictionary in the LogEntry and write it using the default LogWriter.
entry.ExtendedProperties = dict;
defaultWriter.Write(entry);
...

The example produces the following output on screen.

Added the current stack trace to the Log Entry.
Added current identity name, authentication type, and status to the Log Entry.
Added the current user name and process account name to the Log Entry.
Added COM+ IDs and caller account information to the Log Entry.
Added information about the configuration of the application to the Log Entry.
LogEntry written to configured trace listeners.

To see the additional information added to the log entry, open Windows Event Viewer and locate the new log entry. We haven't shown the contents of this log entry here as it runs to more than 350 lines and contains just about all of the information about an event occurring in your application that you could possibly require!

Tracing and Correlating Activities

The final topic for this chapter demonstrates another feature of the Logging block that makes it easier to correlate multiple log entries when you are trying to trace or debug some recalcitrant code in your application. One of the problems with logging is that relying on the event ID alone to correlate multiple events related to a specific process or section of code is difficult and error prone. Event IDs are often not unique, and many events with the same event ID, generated from different instances of the components in your application, can end up intermixed in the logs. The Semantic Logging Application Block (described in the next chapter) helps you to address this specific issue.

Dn440731.note(en-us,PandP.60).gifCarlos says:
Carlos Often, you want to correlate the events for a specific instance of an operation. The more independent each instance is, the more valuable it is to narrow your view of events to those that are related.

The Logging block makes it easy to add an additional unique identifier to specific log entries that you can later use to filter the log and locate only the entries related to a specific process or task. The Logging block tracing feature makes use of the .NET Correlation Manager class, which maintains an Activity ID that is a GUID. By default, this is not set, but the Logging block allows you to use a TraceManager to generate Tracer instances. Each of these sets the Activity ID to a user-supplied or randomly generated GUID value that is maintained only during the context of the tracer. The Activity ID returns to its previous value when the tracer is disposed or goes out of scope.

You specify an operation name when you create the tracer. This is effectively the name of a category defined in the configuration of the block. All log entries created within the context of the tracer will be assigned to that category in addition to any categories you specify when you create the log entry.

You can specify a GUID value when you create and start a tracer, and all subsequent log entries within the scope of that tracer, and of all nested tracers that do not specify a different GUID, will have the specified activity ID. If you start a new nested tracer instance within the scope of a previous one, it will have the same activity ID as the parent tracer unless you specify a different one when you create and start the nested tracer; in that case, the new activity ID is used for subsequent log entries within the scope of that tracer. Note that tracing and activity IDs rely on thread affinity, which is not guaranteed in asynchronous scenarios such as when you use Tasks.

Dn440731.note(en-us,PandP.60).gifCarlos says:
Carlos Although the Logging block automatically adds the activity ID to each log entry, this does not appear in the resulting message when you use the text formatter with the default template. To include the activity ID in the logged message that uses a text formatter, you must edit the template property in the configuration tools to include the token {property(ActivityId)}. Note that property names are case-sensitive in the template definition.
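
For example, a template that appends the activity ID might look like the following (a sketch only, created programmatically here with the block's TextFormatter class rather than through the configuration tools; the {timestamp}, {message}, and {newline} tokens are standard template tokens):

// A sketch only: a text formatter template that includes the activity ID.
TextFormatter formatterWithActivityId = new TextFormatter(
    "Timestamp: {timestamp}{newline}Message: {message}{newline}"
    + "Activity ID: {property(ActivityId)}");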

An Example of Tracing Activities

The example, Tracing activities and publishing activity information to categories, should help to make this clear. At the start of the application, the code creates a TraceManager instance.

LoggingConfiguration loggingConfiguration = BuildProgrammaticConfig();
LogWriter defaultWriter = new LogWriter(loggingConfiguration);
 
// Create a TraceManager object.
TraceManager traceMgr = new TraceManager(defaultWriter);

Next, the code creates and starts a new Tracer instance using the StartTrace method of the TraceManager, specifying the category named General. As it does not specify an Activity ID value, the TraceManager creates one automatically (if one is not already set). This is the preferred approach, because each separate process running an instance of this code will generate a different GUID value. This means you can isolate individual events for each process.

The code then creates and writes a log entry within the context of this tracer, specifying that it belongs to the DiskFiles category in addition to the General category defined by the tracer. Next, it creates a nested Tracer instance that specifies the category named Database, and writes another log entry that itself specifies the category named Important. This log entry will therefore belong to the General, Database, and Important categories. Then, after the Database tracer goes out of scope, the code creates a new Tracer that again specifies the Database category, but this time it also specifies the Activity ID to use in the context of this new tracer. Finally, it writes another log entry within the context of the new Database tracer scope.

// Start tracing for category 'General'. All log entries within trace context
// will be included in this category and use any specified Activity ID (GUID).
// If you do not specify an Activity ID, the TraceManager will create a new one.
using (traceMgr.StartTrace("General"))
{
  // Write a log entry with another category, will be assigned to both.
  defaultWriter.Write("LogEntry with category 'DiskFiles' created within "
         + "context of 'General' category tracer.", "DiskFiles");

  // Start tracing for category 'Database' within context of 'General' tracer.
  // Do not specify a GUID to use so that the existing one is used.
  using (traceMgr.StartTrace("Database"))
  {
    // Write a log entry with another category, will be assigned to all three.
    defaultWriter.Write("LogEntry with category 'Important' created within "
       + "context of first nested 'Database' category tracer.", "Important");
  }

  // Back in context of 'General' tracer here.
  // Start tracing for category 'Database' within context of 'General' tracer
  // as above, but this time specify a GUID to use.
  using (traceMgr.StartTrace("Database", 
                  new Guid("{12345678-1234-1234-1234-123456789ABC}")))
  {
    // Write a log entry with another category, will be assigned to all three.
    defaultWriter.Write("LogEntry with category 'Important' created within "
           + "context of nested 'Database' category tracer.", "Important");
  }
  // Back in context of 'General' tracer here and back to the
  // randomly created activity ID.
}

Not shown above are the lines of code that, at each stage, write the current Activity ID to the screen. The output generated by the example is shown here. You can see that, initially, there is no Activity ID. The first tracer instance then sets the Activity ID to a random value (you will get a different value if you run the example yourself), which is also applied to the nested tracer.
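
A minimal sketch of one of those display lines (an assumption about the example code, which is not shown; the value displayed is the activity ID maintained by the .NET Correlation Manager) would be:

// A sketch only: display the activity ID currently held by the Correlation Manager.
Console.WriteLine("- Current Activity ID is: {0}",
                  Trace.CorrelationManager.ActivityId);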

However, the second tracer for the Database category changes the Activity ID to the value we specified in the StartTrace method. When this tracer goes out of scope, the Activity ID is reset to that for the parent tracer. When all tracers go out of scope, the Activity ID is reset to the original (empty) value.

- Current Activity ID is: 00000000-0000-0000-0000-000000000000

Written LogEntry with category 'DiskFiles' created within context of 'General'
 category tracer.

- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d

Written LogEntry with category 'Important' created within context of first
 'Database' category tracer nested within the 'General' category tracer.

- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d

Leaving the context of the first Database tracer

- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d

Written LogEntry with category 'Important' created within context of second
 'Database' category tracer nested within the 'General' category tracer.

- Current Activity ID is: 12345678-1234-1234-1234-123456789abc

Leaving the context of the second Database tracer

- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d

Leaving the context of the General tracer

- Current Activity ID is: 00000000-0000-0000-0000-000000000000

Open the log files in the folder C:\Temp to see the results.

If you open the RollingFlatFile.log file you will see the two log entries generated within the context of the nested tracers. These belong to the categories Important, Database, and General. You will also see the Activity ID for each one, and can confirm that it is different for these two entries. For example, this is the first part of the log message for the second nested tracer, which specifies the Activity ID GUID in the StartTrace method.

Timestamp: 01/12/2009 12:12:00
Message: LogEntry with category 'Important' created within context of second nested 'Database' category tracer.
Category: Important, Database, General
Priority: -1
EventId: 1
Severity: Information
Title: 
Activity ID: 12345678-1234-1234-1234-123456789abc

Note

Be aware that other software and services may use the Activity ID of the Correlation Manager to provide information and monitoring facilities. An example is Windows Communication Foundation (WCF), which uses the Activity ID to implement tracing.
You must also ensure that you dispose Tracer instances correctly. If you do not take advantage of the using construct to dispose instances automatically, you must dispose nested instances in the reverse order you created them: dispose the child instance before you dispose the parent instance. You must also dispose Tracer instances on the same thread that created them.
Be particularly careful in asynchronous scenarios, such as when you use the await keyword: it is easy to forget that a tracer created in a using statement may be disposed on a different thread from the one that created it.
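
As an illustration of the disposal ordering requirement, the following sketch disposes nested tracers explicitly instead of with using blocks (it assumes that StartTrace returns a Tracer instance, and that traceMgr and defaultWriter are the objects created in the earlier example):

// A sketch only: dispose the child tracer first, then the parent,
// on the same thread that created them.
Tracer outerTracer = traceMgr.StartTrace("General");
Tracer nestedTracer = traceMgr.StartTrace("Database");
try
{
  defaultWriter.Write("LogEntry written inside both tracer scopes.", "Important");
}
finally
{
  nestedTracer.Dispose();  // dispose the nested (child) tracer first...
  outerTracer.Dispose();   // ...then the outer (parent) tracer
}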

Creating Custom Trace Listeners, Filters, and Formatters

You can extend the capabilities of the Logging block if you need to add specific functionality to it. In general, you will only need to implement custom log filters, trace listeners, or log formatters. The design of the block makes it easy to add these and make them available through configuration.

To create a new log filter, you can either implement the ILogFilter interface, which specifies the single method Filter that must accept an instance of a LogEntry and return true or false, or you can inherit the base class LogFilter and implement the Filter method.
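
As a sketch of the interface approach, a filter that blocks entries containing a particular piece of text might look like the following. The class name, constructor, blocking rule, and Name property shown here are illustrative assumptions; the Filter method signature comes from the description above.

// A sketch only: blocks any log entry whose message contains the specified text.
public class MessageTextFilter : ILogFilter
{
  private readonly string textToBlock;

  public MessageTextFilter(string name, string textToBlock)
  {
    this.Name = name;
    this.textToBlock = textToBlock;
  }

  public string Name { get; private set; }

  public bool Filter(LogEntry log)
  {
    // Return true to allow the entry to be written, false to block it.
    return log.Message == null
        || log.Message.IndexOf(textToBlock, StringComparison.OrdinalIgnoreCase) < 0;
  }
}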

To create a custom trace listener, you can inherit from the abstract base class CustomTraceListener and implement the methods required to send your log entry to the appropriate location or execute the relevant actions to log the message. You can expose a property for the relevant log formatter if you want to allow users to select a specific formatter for the message.
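
A minimal sketch of such a listener, which sends formatted messages to the debugger output, might look like the following. It assumes the usual pattern of overriding TraceData, implementing Write and WriteLine, and using the Formatter property mentioned above; the class name and output target are illustrative.

// A sketch only: writes formatted log entries to the debugger output window.
public class DebugOutputTraceListener : CustomTraceListener
{
  public override void TraceData(TraceEventCache eventCache, string source,
                                 TraceEventType eventType, int id, object data)
  {
    LogEntry entry = data as LogEntry;
    if (entry != null && this.Formatter != null)
    {
      // Use the configured formatter (if any) to build the message text.
      this.WriteLine(this.Formatter.Format(entry));
    }
    else
    {
      this.WriteLine(data != null ? data.ToString() : string.Empty);
    }
  }

  public override void Write(string message)
  {
    Debug.Write(message);
  }

  public override void WriteLine(string message)
  {
    Debug.WriteLine(message);
  }
}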

To create a custom log formatter, you can either implement the ILogFormatter interface, which specifies the single method, Format, that must accept an instance of a LogEntry and return the formatted message, or you can inherit the base class, LogFormatter, and implement the Format method.
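
For example, a minimal custom formatter sketch (the class name and the single-line layout are illustrative assumptions; the Format method signature comes from the description above) might be:

// A sketch only: formats each log entry as a single line of text.
public class SingleLineFormatter : ILogFormatter
{
  public string Format(LogEntry log)
  {
    return string.Format("{0:u} [{1}] {2}: {3}",
                         log.TimeStamp, log.Severity,
                         string.Join(", ", log.Categories), log.Message);
  }
}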

For more information about using and extending the Logging block, see The Logging Application Block in the Enterprise Library Reference Documentation.

Summary

This chapter described the Enterprise Library Logging Application Block. This block is extremely useful for logging activities, events, messages, and other information that your application must persist or expose—both to monitor performance and to generate auditing information. The Logging block is, like all of the other Enterprise Library blocks, highly customizable and driven through either programmatic or declarative configuration so that you (or administrators and operations staff) can modify the behavior to suit your requirements exactly.

You can use the Logging block to categorize, filter, and write logging information to a wide variety of targets, including Windows event logs, email messages, disk files, Windows Message Queuing, and a database. You can even collect additional context information and add it to the log entries automatically, and add activity IDs to help you correlate related messages and activities. And, if none of the built-in features meets your requirements, you can create and integrate custom listeners, filters, and formatters.

This chapter explained why you should consider decoupling your logging features from your application code, what the Logging block can do to help you implement flexible and configurable logging, and how you actually perform the common tasks related to logging. For more information about using the Logging block, see the Enterprise Library Reference Documentation.

Before deciding to use the Logging Application Block in your application, you should consider using the Semantic Logging Application Block. The Semantic Logging Application Block offers a more structured approach to logging and, when used out-of-process, further reduces the overhead of logging on your application.

More Information

All links in this book are accessible from the book's online bibliography on MSDN at https://aka.ms/el6biblio.

For more information about using and extending the Logging block, see The Logging Application Block in the Enterprise Library Reference Documentation.

For more information about how to manage the buffer size, see the topic Logging Messages Asynchronously in the Enterprise Library Reference Documentation.
