Performance considerations


The Semantic Logging Application Block is designed to support scenarios with a high throughput of events. It uses buffering and asynchronous methods in the sinks (except for the Console sink) to minimize the load on applications and to maximize logging performance.

In the in-process scenario, there is just one level of buffering. This occurs within the event sinks. In the out-of-process scenario, there are two levels of buffering. ETW buffers events before passing them to the event sinks, while the sinks also buffer events before writing them to the logging destination. For more information about how ETW buffers events, see the topic About Event Tracing on MSDN.

It is possible for the Semantic Logging Application Block sinks to drop events if their internal buffers overflow, or if a sink encounters non-transient errors. You can modify the buffering configuration options for the sinks to support very high throughput scenarios, and to reduce the chance that the buffers will overflow under your typical workloads. However, in most cases, the default settings are sufficient to support high-throughput scenarios.

Note

You should monitor the events generated by the Semantic Logging Application Block for any indication of buffer overflows and lost messages. Events with ID 900 and 901 indicate that a sink's internal buffers have overflowed. In the out-of-process scenario, events with ID 806 and 807 indicate that the ETW buffers have overflowed.
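
If you log in process, one way to surface these diagnostic events is to subscribe a second listener to the block's own event source. The following is a minimal sketch, which assumes the block exposes that event source as SemanticLoggingEventSource.Log (in the Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility namespace) and that the FlatFileLog.LogToFlatFile extension method is available; verify these names against the version of the block you are using.

C#

using System;
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility;   // assumed location of SemanticLoggingEventSource

public static class BlockDiagnostics
{
    public static IDisposable ListenForSinkProblems()
    {
        // Capture the block's own diagnostic events (for example, IDs 900 and 901,
        // which signal that a sink's internal buffer has overflowed).
        var errorListener = new ObservableEventListener();
        errorListener.EnableEvents(SemanticLoggingEventSource.Log, EventLevel.Warning);

        // Write these diagnostics to their own flat file so that overflow
        // notifications are not lost along with the dropped events.
        errorListener.LogToFlatFile("semantic-logging-diagnostics.log");

        return errorListener;   // dispose on shutdown to flush the listener
    }
}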

Configuring buffering

The SQL Database sink and the Azure Table Storage sink support configurable internal buffering. Each of these sinks has configuration options that control buffering behavior within the sink, as illustrated in the sketch after this list:

  • bufferingCount. This parameter controls how many log messages can accumulate in the sink’s internal buffer before the sink writes them to its destination. The default for the SQL Database sink is 1000, but it can be configured if required (this parameter is not configurable for the Azure Table sink). If both this parameter and the bufferingInterval parameter are set to zero, the sink does not buffer messages internally but writes each message to the sink destination immediately.
  • bufferingInterval. This parameter controls how frequently the sink writes the accumulated log messages to the sink destination. The default depends on the sink being used. If both this parameter and the bufferingCount parameter are set to zero, the sink does not buffer messages internally but writes each message to the sink destination immediately.
  • maxBufferSize. This parameter controls the maximum number of entries that can be buffered by the sink before it begins to drop entries. The default is 30,000.
  • onCompletedTimeout. This parameter defines the timeout interval used when flushing the entries after an OnCompleted call is received, and before disposing of the sink. In the out-of-process configuration schema, the equivalent to onCompletedTimeout is the attribute named bufferingFlushAllTimeoutInSeconds. By default, the sink will wait until all entries have been flushed. If a timeout interval is specified, and that period elapses before flushing has completed, some event entries will be dropped and not sent to the store. If null is specified, the call will block indefinitely until the flush operation finishes. Normally, calling Dispose on the ObservableEventListener will block until all the entries are flushed or the interval elapses.
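
The sketch below shows how these settings might be supplied when subscribing the SQL Database sink in process. It assumes that the SqlDatabaseLog.LogToSqlDatabase extension method accepts optional bufferingInterval, bufferingCount, onCompletedTimeout, and maxBufferSize arguments, and that MyCompanyEventSource is a hypothetical application event source; check the exact parameter names against your version of the block.

C#

using System;
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

public static class SqlSinkSetup
{
    public static IDisposable StartLogging()
    {
        var listener = new ObservableEventListener();
        listener.EnableEvents(MyCompanyEventSource.Log, EventLevel.Informational);   // hypothetical application event source

        // Illustrative values: flush every 15 seconds or every 500 events
        // (whichever comes first), buffer at most 30,000 entries, and allow
        // up to 10 seconds to flush remaining entries when the listener completes.
        listener.LogToSqlDatabase(
            instanceName: "MyApplication",
            connectionString: @"Data Source=(localdb)\v11.0;Initial Catalog=Logging;Integrated Security=True",
            bufferingInterval: TimeSpan.FromSeconds(15),
            bufferingCount: 500,
            onCompletedTimeout: TimeSpan.FromSeconds(10),
            maxBufferSize: 30000);

        return listener;   // calling Dispose flushes buffered entries, blocking up to onCompletedTimeout
    }
}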

When used in the in-process scenario, the Flat File sink and the Rolling Flat File sink support buffering when they are created with the isAsync parameter set to true (the default is false). In the out-of-process scenario, these sinks run on a background thread and there are no configuration options for buffering.
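
As a sketch of the in-process case, assuming the FlatFileLog.LogToFlatFile extension method exposes the isAsync parameter described above (again, MyCompanyEventSource is a hypothetical application event source):

C#

using System;
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

public static class FlatFileSinkSetup
{
    public static IDisposable StartLogging()
    {
        var listener = new ObservableEventListener();
        listener.EnableEvents(MyCompanyEventSource.Log, EventLevel.Informational);   // hypothetical application event source

        // isAsync: true enables buffered, asynchronous writes for the in-process Flat File sink;
        // the default (false) writes each entry synchronously.
        listener.LogToFlatFile("application.log", isAsync: true);

        return listener;   // dispose on shutdown to flush any buffered entries
    }
}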

Using the ConsoleSink class in high-throughput scenarios

Using the Console sink in an out-of-process high-throughput scenario is not recommended. The Console sink is intended for use as part of the development process as a way to validate behavior and get immediate visible feedback when running the application locally. It is not designed to support production scenarios and may have a large negative impact on performance if you use it in this way.
