Overview of logging using the Semantic Logging Application Block

Although the process of creating and writing log entries with the Semantic Logging Application Block is relatively simple, the number of options available (such as the many logging targets and the ability to filter entries) can make the underlying structure of the block seem complex. In fact, the process is reasonably straightforward once you understand the objects involved and the way that they interact.

At a high level, the way that the logging process works is as follows:

  • Your application writes an event using one of the custom methods in your event source class. Each custom method specifies a log level and keywords for the event type, together with any event-specific information to be logged (a sketch of such an event source class appears after this list).
  • The event source notifies any event listeners that are enabled to receive events matching certain filter criteria. An event listener can specify a verbosity level, and it receives events at that level or a more severe one; it can also specify that it will receive only events that match certain keywords. The event listener receives the event only if the event matches its filter criteria for that specific event source.
  • One or more event sinks subscribe to the event listener; each sink optionally formats the message using a formatter and then writes it to the destination specific to that sink.
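
To make the first step concrete, here is a minimal sketch of a custom event source, assuming the System.Diagnostics.Tracing.EventSource base class from .NET 4.5. The source name, event IDs, methods, and keyword are illustrative assumptions, not values prescribed by the block.

```csharp
using System.Diagnostics.Tracing;

// Minimal sketch of a custom event source; the source name, event IDs,
// and keyword are illustrative assumptions.
[EventSource(Name = "MyCompany-MyApp")]
public sealed class MyAppEventSource : EventSource
{
    // Singleton instance that the application writes through.
    public static readonly MyAppEventSource Log = new MyAppEventSource();

    // Keywords let listeners filter events by category.
    public class Keywords
    {
        public const EventKeywords Database = (EventKeywords)1;
    }

    // Each custom method fixes the event's ID, level, and keywords;
    // its arguments carry the event-specific payload.
    [Event(1, Level = EventLevel.Informational, Keywords = Keywords.Database)]
    public void DatabaseQueryStarted(string query)
    {
        if (IsEnabled()) WriteEvent(1, query);
    }

    [Event(2, Level = EventLevel.Error)]
    public void ApplicationFailure(string message)
    {
        if (IsEnabled()) WriteEvent(2, message);
    }
}
```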

The main decision you need to make when using the application block is whether to follow the in-process or the out-of-process approach. The only difference between these is the way that events are delivered to sinks. The way that events are originated by the application, and how they are stored by the sinks, is the same in both scenarios.

This section of the guide explains how each of these two scenarios works, and the way that the objects involved in the process interact. It also explores the advantages of each approach to help you decide which best suits your own requirements. The topics covered are:

  • Using the application block in the in-process scenario
  • Using the application block in the out-of-process scenario
  • Choosing between the in-process and the out-of-process scenario

Using the application block in the in-process scenario

When used in-process, all of the component objects run in the same process as your application. This is the traditional approach used by many logging mechanisms. Figure 1 shows the overall process.

Figure 1 - Using the application block in the in-process scenario

In this scenario, the objects used in the logging process are:

  • Event Source. Your event source class is the main entry point for creating log entries and writing them to your chosen logging targets. It defines your custom log messages and their associated metadata, and enables you to write log messages from your application using event listeners (in-process) or the Event Tracing for Windows (ETW) infrastructure (out-of-process).
  • Observable Event Listeners. An observable event listener listens for log entries arriving from event sources when you are using the application block in the in-process scenario. Each observable event listener is associated with one or more event sources and can filter messages based on event severity and keywords. The observable event listener implements the IObservable<EventEntry> interface and provides an implementation of the Subscribe method (the sketch after this list shows a listener wired to sinks).
  • Event sinks. Event sinks represent the targets for your log entries, and typically you configure one for each type of target (such as a database, a disk file, or Azure table storage) to which you want to send the log entries. When used in-process, event sinks subscribe to an observable event listener to receive log entries, format each log entry as required, and dispatch it to the target configured for that event sink. The important point to note here is that this allows you to dispatch each log entry to zero, one, or more targets (such as sending it to the console as well as writing it to a file).
  • Log Formatters. Some event sinks that you add to your configuration can use a log formatter to convert the data in the log entry from a series of properties into a format suitable for sending to the log target. The block contains a text formatter, an XML formatter, and a JSON formatter that you can use with sinks that dispatch log entries to targets such as disk files or the Windows console. The text formatter is configurable so that you can modify the format of the text message, and the JSON formatter is configurable so that you can control the layout of the JSON in the log message.
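
To show how these objects fit together, here is a minimal in-process sketch that wires the illustrative MyAppEventSource class shown earlier to two sinks, using the block's ObservableEventListener and its LogToConsole and LogToFlatFile extension methods; the file name is an assumption.

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Formatters;

// Create the listener and enable events from the custom event source,
// filtering to Informational severity and above.
var listener = new ObservableEventListener();
listener.EnableEvents(MyAppEventSource.Log, EventLevel.Informational, EventKeywords.All);

// Subscribe two sinks: the console sink uses the default text formatter,
// while the flat file sink is given a JSON formatter instead.
listener.LogToConsole();
listener.LogToFlatFile("app-log.txt",
    new JsonEventTextFormatter(EventTextFormatting.Indented));

// The application then writes events as usual.
MyAppEventSource.Log.DatabaseQueryStarted("SELECT * FROM Orders");

// Dispose the listener on shutdown to flush any buffered entries.
listener.Dispose();
```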

Using the application block in the out-of-process scenario

When used out-of-process, the objects provided with the application block run within a separate trace service application installed on the same physical computer as your application. The trace service uses Event Tracing for Windows (ETW), a feature built into the Windows operating system, to collect the events to be logged. Figure 2 shows the overall process.

Figure 2 - Using the application block in the out-of-process scenario

In this scenario, the objects used in the logging process are:

  • Event Source. Your event source class is the main entry point for creating log entries and writing them to your chosen logging targets. It defines your custom log messages and their associated metadata, and enables you to write log messages from your application using event listeners (in-process) or the Event Tracing for Windows (ETW) infrastructure (out-of-process).
  • Trace Event Service. The trace event service uses configuration information that is loaded when it starts. This configuration information specifies the event sources the service should monitor. The service sets up an ETW session, together with any filtering specified in the configuration, to listen for events (a sketch of such a configuration file appears after this list).
  • Event sinks. Event sinks represent the targets for your log entries, and typically you configure one for each type of target (such as a database, a disk file, or Azure table storage) to which you want to send the log entries. When used out-of-process, event sinks subscribe to a trace event service to receive log entries, format each log entry as required, and dispatch it to the target configured for that event sink. The important point to note here is that this allows you to dispatch each log entry to zero, one, or more targets (such as sending it to the console as well as writing it to a file).
  • Log Formatters. Some event sinks that you add to your configuration can use a log formatter to convert the data in the log entry from a series of properties into a format suitable for sending to the log target. The block contains a text formatter, an XML formatter, and a JSON formatter that you can use with sinks that dispatch log entries to targets such as disk files or the Windows console. The text formatter is configurable so that you can modify the format of the text message, and the JSON formatter is configurable so that you can control the layout of the JSON in the log message.
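
For comparison, the following is a sketch of the kind of configuration file the trace event service loads (conventionally named SemanticLogging-svc.xml). The source name and file name are illustrative, and the element and attribute names should be verified against the schema shipped with your version of the block.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hedged sketch of an out-of-process configuration file. -->
<configuration
    xmlns="http://schemas.microsoft.com/practices/2013/entlib/semanticlogging/etw">
  <traceEventService />
  <sinks>
    <flatFileSink name="FlatFile" fileName="app-log.txt">
      <sources>
        <!-- Matches the EventSource name declared in the application. -->
        <eventSource name="MyCompany-MyApp" level="Informational" />
      </sources>
      <eventTextFormatter header="----------" />
    </flatFileSink>
  </sinks>
</configuration>
```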

Choosing between the in-process and the out-of-process scenario

When should you use the Semantic Logging Application Block in-process and when should you use it out-of-process? Although it is easier to configure and use the block in-process, there are a number of advantages to using it in the out-of-process scenario.

The in-process approach is typically suited to these scenarios:

  • While developing and testing your event logging strategy. Setting up the block to capture and log events is quick and easy. You can also use the Console event sink to view log messages directly, without needing to open the logs to check that events are being correctly recorded.
  • In production applications where the primary concern is simplicity when implementing your logging strategy, and there is less concern over the impact on performance and the loss of events should your application fail.

The out-of-process approach is typically suited to these scenarios:

  • In production applications where the logging process should not negatively affect the running of the application. By using a separate trace event listening service to handle events, you minimize in-process resource usage and, consequently, the load on the application.
  • You need to be able to change the logging configuration without stopping, updating, redeploying, and restarting the application. An administrator can change the configuration at runtime, perhaps to increase the verbosity of logging in response to issues within the application or to change the logging destinations, without needing to update the application.
  • It is important to minimize the chance of losing log messages if the application you are monitoring should fail. In the out-of-process scenario, the application delivers each log message to the ETW infrastructure in the operating system as soon as the message is created, so messages the application has already written are not lost if the application itself crashes.
  • You are using a sink with a high latency, such as the Azure Table Storage sink. However, in very high load out-of-process scenarios, using the SQL Database sink rather than the Azure Table Storage sink will reduce the chance of losing events.

Lost and missed events

The out-of-process scenario minimizes the chances of events being missed and not being logged. However, you could still lose log messages in some extreme cases:

  • In the out-of-process scenario:
    • If the server itself fails between the time the application writes the log message and the time the out-of-process host persists the message.
    • If the out-of-process host application crashes before it persists the message. However, the out-of-process host is a robust and relatively simple application and is unlikely to crash.
    • During periods of very high event throughput if the ETW buffers become full.
  • In both the in-process and the out-of-process scenarios:
    • During periods of very high event throughput if the buffer of one of the Semantic Logging Application Block sinks becomes full. See the topic Performance considerations for more information about configuring buffering for sinks in order to minimize the chance of buffer overflow (a sketch of such configuration appears after this list).
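
As a rough illustration of the buffering options referred to above, the in-process sketch below passes buffering arguments when subscribing the block's SQL Database sink. The parameter names (bufferingInterval, bufferingCount, and maxBufferSize) and the connection string are assumptions to check against the version of the block you are using.

```csharp
using System;
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

var listener = new ObservableEventListener();
listener.EnableEvents(MyAppEventSource.Log, EventLevel.Informational, EventKeywords.All);

// Assumed parameter names for the sink's buffering options: flush the
// buffer at least every 30 seconds, or sooner once 500 entries accumulate;
// events beyond the maximum backlog are dropped rather than blocking.
listener.LogToSqlDatabase(
    instanceName: "MyApp",
    connectionString: "Data Source=.;Initial Catalog=Logging;Integrated Security=True",
    bufferingInterval: TimeSpan.FromSeconds(30),
    bufferingCount: 500,
    maxBufferSize: 10000);
```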
