If you get an error like "There are no versions available for the package Azure.Monitor.OpenTelemetry.Exporter," it's probably because your NuGet package sources aren't configured. Try specifying the source with the -s option:
# Install the latest package with the NuGet package source specified.
dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter -s https://api.nuget.org/v3/index.json
If you're upgrading from an earlier 3.x version, you may be impacted by changing defaults or slight differences in the data we collect. For more details, see the migration notes at the top of the release notes for 3.4.0, 3.3.0, 3.2.0, and 3.1.0.
The following code demonstrates how to enable OpenTelemetry in a C# console application by setting up an OpenTelemetry TracerProvider. This code must run at application startup. For ASP.NET Core, that's typically the ConfigureServices method of the application's Startup class. For ASP.NET applications, it's typically Global.asax.cs.
using System.Diagnostics;
using Azure.Monitor.OpenTelemetry.Exporter;
using OpenTelemetry;
using OpenTelemetry.Trace;

public class Program
{
    private static readonly ActivitySource MyActivitySource = new ActivitySource(
        "OTel.AzureMonitor.Demo");

    public static void Main()
    {
        using var tracerProvider = Sdk.CreateTracerProviderBuilder()
            .AddSource("OTel.AzureMonitor.Demo")
            .AddAzureMonitorTraceExporter(o =>
            {
                o.ConnectionString = "<Your Connection String>";
            })
            .Build();

        using (var activity = MyActivitySource.StartActivity("TestActivity"))
        {
            activity?.SetTag("CustomTag1", "Value1");
            activity?.SetTag("CustomTag2", "Value2");
        }

        System.Console.WriteLine("Press Enter key to exit.");
        System.Console.ReadLine();
    }
}
Note
The Activity and ActivitySource classes from the System.Diagnostics namespace represent the OpenTelemetry concepts of Span and Tracer, respectively. You create ActivitySource directly by using its constructor instead of by using TracerProvider. Each ActivitySource class must be explicitly connected to TracerProvider by using AddSource(). That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see Introduction to OpenTelemetry .NET Tracing API.
Java auto-instrumentation is enabled through configuration changes; no code changes are required.
Point the JVM to the jar file by adding -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" to your application's JVM args.
The following code demonstrates how to enable OpenTelemetry in a simple JavaScript application:
const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
const { context, trace } = require("@opentelemetry/api");

const provider = new NodeTracerProvider();
provider.register();

// Create an exporter instance.
const exporter = new AzureMonitorTraceExporter({
  connectionString: "<Your Connection String>"
});

// Add the exporter to the provider.
provider.addSpanProcessor(
  new BatchSpanProcessor(exporter)
);

// Create a tracer.
const tracer = trace.getTracer("example-basic-tracer-node");

// Create a span. A span must be closed.
const parentSpan = tracer.startSpan("main");
for (let i = 0; i < 10; i += 1) {
  doWork(parentSpan);
}
// Be sure to end the span.
parentSpan.end();

function doWork(parent) {
  // Start another span. In this example, the main method already started a
  // span, so that will be the parent span, and this will be a child span.
  const ctx = trace.setSpan(context.active(), parent);

  // Set attributes to the span.
  // Check the SpanOptions interface for more options that can be set into the span creation
  const spanOptions = {
    attributes: {
      "key": "value"
    }
  };
  const span = tracer.startSpan("doWork", spanOptions, ctx);

  // Simulate some random work.
  for (let i = 0; i <= Math.floor(Math.random() * 40000000); i += 1) {
    // empty
  }

  // Annotate our span to capture metadata about our operation.
  span.addEvent("invoking doWork");

  // Mark the end of span execution.
  span.end();
}
The following code demonstrates how to enable OpenTelemetry in a simple Python application:
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string="<Your Connection String>",
)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("hello"):
    print("Hello, World!")

input()
Tip
For .NET, Node.js, and Python, you'll need to manually add instrumentation libraries to autocollect telemetry across popular frameworks and libraries. For Java, these instrumentation libraries are already included and no additional steps are required.
Set the Application Insights connection string
You can find your connection string in the Overview pane of your Application Insights resource.
Create a configuration file named applicationinsights.json, and place it in the same directory as applicationinsights-agent-3.4.10.jar with the following content:
Replace the <Your Connection String> in the preceding code with the connection string from your Application Insights resource.
Confirm data is flowing
Run your application, and then open your Application Insights resource in the Azure portal. It might take a few minutes for data to appear in the portal.
Note
If you can't run the application or you aren't getting data as expected, see Troubleshooting.
Important
If you have two or more services that emit telemetry to the same Application Insights resource, you're required to set Cloud Role Names to represent them properly on the Application Map.
As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see Statsbeat in Azure Application Insights.
Set the Cloud Role Name and the Cloud Role Instance
You might want to update the Cloud Role Name and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node.
Set the Cloud Role Name and the Cloud Role Instance via Resource attributes. Cloud Role Name uses service.namespace and service.name attributes, although it falls back to service.name if service.namespace isn't set. Cloud Role Instance uses the service.instance.id attribute value. For information on standard attributes for resources, see Resource Semantic Conventions.
// Setting role name and role instance
var resourceAttributes = new Dictionary<string, object> {
    { "service.name", "my-service" },
    { "service.namespace", "my-namespace" },
    { "service.instance.id", "my-instance" }};
var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
// Done setting role name and role instance

// Set ResourceBuilder on the provider.
var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(resourceBuilder)
    .AddSource("OTel.AzureMonitor.Demo")
    .AddAzureMonitorTraceExporter(o =>
    {
        o.ConnectionString = "<Your Connection String>";
    })
    .Build();
Set the Cloud Role Name and the Cloud Role Instance via Resource attributes. Cloud Role Name uses service.namespace and service.name attributes, although it falls back to service.name if service.namespace isn't set. Cloud Role Instance uses the service.instance.id attribute value. For information on standard attributes for resources, see Resource Semantic Conventions.
...
const { Resource } = require("@opentelemetry/resources");
const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
const { MeterProvider } = require("@opentelemetry/sdk-metrics");

// ----------------------------------------
// Setting role name and role instance
// ----------------------------------------
const testResource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
  [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
  [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
});
// ----------------------------------------
// Done setting role name and role instance
// ----------------------------------------
const tracerProvider = new NodeTracerProvider({
  resource: testResource
});
const meterProvider = new MeterProvider({
  resource: testResource
});
Set the Cloud Role Name and the Cloud Role Instance via Resource attributes. Cloud Role Name uses service.namespace and service.name attributes, although it falls back to service.name if service.namespace isn't set. Cloud Role Instance uses the service.instance.id attribute value. For information on standard attributes for resources, see Resource Semantic Conventions.
...
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.sdk.resources import Resource, ResourceAttributes

configure_azure_monitor(
    connection_string="<your-connection-string>",
    resource=Resource.create(
        {
            ResourceAttributes.SERVICE_NAME: "my-helloworld-service",
            # ----------------------------------------
            # Setting role name and role instance
            # ----------------------------------------
            ResourceAttributes.SERVICE_NAMESPACE: "my-namespace",
            ResourceAttributes.SERVICE_INSTANCE_ID: "my-instance",
            # ----------------------------------------
            # Done setting role name and role instance
            # ----------------------------------------
        }
    )
)
...
Enable Sampling
You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom fixed-rate sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The fixed-rate sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see Learn More about sampling.
Starting from 3.4.0, rate-limited sampling is available and is now the default. See sampling for more information.
const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
const { ApplicationInsightsSampler, AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");

// Sampler expects a sample rate between 0 and 1 inclusive.
// A rate of 0.75 means approximately 75% of your traces are sent.
const aiSampler = new ApplicationInsightsSampler(0.75);
const provider = new BasicTracerProvider({
  sampler: aiSampler
});
const exporter = new AzureMonitorTraceExporter({
  connectionString: "<Your Connection String>"
});
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
The configure_azure_monitor() function will automatically utilize
ApplicationInsightsSampler for compatibility with Application Insights SDKs and
to sample your telemetry. The sampling_ratio parameter can be used to specify
the sampling rate, with a valid range of 0 to 1, where 0 is 0% and 1 is 100%.
For example, a value of 0.1 means 10% of your traces will be sent.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    # connection_string="<your-connection-string>",
    # Sampling ratio of between 0 and 1 inclusive
    # 0.1 means approximately 10% of your traces are sent
    sampling_ratio=0.1,
)

tracer = trace.get_tracer(__name__)
for i in range(100):
    # Approximately 90% of these spans should be sampled out
    with tracer.start_as_current_span("hello"):
        print("Hello, World!")
Tip
When using fixed-rate/percentage sampling, if you aren't sure what to set the sampling rate to, start at 5% (that is, a 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, any sampling affects accuracy, so we recommend alerting on OpenTelemetry metrics, which are unaffected by sampling.
Instrumentation libraries
The following libraries are validated to work with the current release.
Warning
Instrumentation libraries are based on experimental OpenTelemetry specifications, which impacts languages in preview status. Microsoft's preview support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements.
The OpenTelemetry-based offerings currently emit all metrics as Custom Metrics and Performance Counters in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
See this for examples of using the Python logging library.
Footnotes
1: Supports automatic reporting of unhandled exceptions
2: By default, logging is only collected when that logging is performed at the INFO level or higher. To change this level, see the configuration options.
3: By default, logging is only collected when that logging is performed at the WARNING level or higher. To change this level, see the configuration options and specify logging_level.
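For Python, the collection threshold interacts with the standard logging module: a record below a logger's effective level never reaches any handler or exporter. The following is an illustrative sketch (the logger name "my_app" is an example; the distro's own threshold, set via its configuration options, still applies on top of this):

```python
import logging

# A record below the logger's effective level is dropped before any handler
# or exporter sees it. Lowering the level to INFO lets INFO records through.
logger = logging.getLogger("my_app")
logger.setLevel(logging.INFO)

logger.info("passes the level check once the level is INFO")
logger.debug("filtered out: DEBUG is below the configured level")
```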
Collect custom telemetry
This section explains how to collect custom telemetry from your application.
Depending on your language and signal type, there are different ways to collect custom telemetry, including:
OpenTelemetry API
Language-specific logging/metrics libraries
Application Insights Classic API
The following table represents the currently supported custom telemetry types:
| Language and API | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
|---|---|---|---|---|---|---|---|
| .NET: OpenTelemetry API | | Yes | Yes | Yes | | | |
| .NET: iLogger API | | | | | | | Yes |
| .NET: AI Classic API | | | | | | | |
| Java: OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
| Java: Logback, Log4j, JUL | | | | Yes¹ | | | Yes² |
| Java: Micrometer Metrics | | Yes | | | | | |
| Java: AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Node.js: OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
| Node.js: Winston, Pino, Bunyan | | | | | | | Yes |
| Node.js: AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Python: OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
| Python: Python Logging Module | | | | | | | Yes³ |
Note
Application Insights Java 3.x listens for telemetry that's sent to the Application Insights Classic API. Similarly, Application Insights Node.js 3.x collects events created with the Application Insights Classic API. This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API.
Add Custom Metrics
Note
Custom Metrics are in preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to opt in.
The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
The following table shows the recommended aggregation types for each of the OpenTelemetry Metric Instruments.
| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
|---|---|
| Counter | Sum |
| Asynchronous Counter | Sum |
| Histogram | Min, Max, Average, Sum, and Count |
| Asynchronous Gauge | Average |
| UpDownCounter | Sum |
| Asynchronous UpDownCounter | Sum |
Caution
Aggregation types beyond what's shown in the table typically aren't meaningful.
The OpenTelemetry Specification
describes the instruments and provides examples of when you might use each one.
Tip
The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric Classic API. Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
using System.Diagnostics.Metrics;
using Azure.Monitor.OpenTelemetry.Exporter;
using OpenTelemetry;
using OpenTelemetry.Metrics;

public class Program
{
    private static readonly Meter meter = new("OTel.AzureMonitor.Demo");

    public static void Main()
    {
        using var meterProvider = Sdk.CreateMeterProviderBuilder()
            .AddMeter("OTel.AzureMonitor.Demo")
            .AddAzureMonitorMetricExporter(o =>
            {
                o.ConnectionString = "<Your Connection String>";
            })
            .Build();

        Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");

        var rand = new Random();
        myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
        myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
        myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
        myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
        myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
        myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));

        System.Console.WriteLine("Press Enter key to exit.");
        System.Console.ReadLine();
    }
}
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;

public class Program {
    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
        DoubleHistogram histogram = meter.histogramBuilder("histogram").build();
        histogram.record(1.0);
        histogram.record(100.0);
        histogram.record(30.0);
    }
}
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.Metrics;
using Azure.Monitor.OpenTelemetry.Exporter;
using OpenTelemetry;
using OpenTelemetry.Metrics;

public class Program
{
    private static readonly Meter meter = new("OTel.AzureMonitor.Demo");

    public static void Main()
    {
        using var meterProvider = Sdk.CreateMeterProviderBuilder()
            .AddMeter("OTel.AzureMonitor.Demo")
            .AddAzureMonitorMetricExporter(o =>
            {
                o.ConnectionString = "<Your Connection String>";
            })
            .Build();

        var process = Process.GetCurrentProcess();

        ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));

        System.Console.WriteLine("Press Enter key to exit.");
        System.Console.ReadLine();
    }

    private static IEnumerable<Measurement<int>> GetThreadState(Process process)
    {
        foreach (ProcessThread thread in process.Threads)
        {
            yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
        }
    }
}
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.Meter;

public class Program {
    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
        meter.gaugeBuilder("gauge")
            .buildWithCallback(
                observableMeasurement -> {
                    double randomNumber = Math.floor(Math.random() * 100);
                    observableMeasurement.record(randomNumber, Attributes.of(AttributeKey.stringKey("testKey"), "testValue"));
                });
    }
}
const {
  MeterProvider,
  PeriodicExportingMetricReader
} = require("@opentelemetry/sdk-metrics");
const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter");

const provider = new MeterProvider();
const exporter = new AzureMonitorMetricExporter({
  connectionString: "<Your Connection String>",
});
const metricReader = new PeriodicExportingMetricReader({
  exporter: exporter
});
provider.addMetricReader(metricReader);

const meter = provider.getMeter("OTel.AzureMonitor.Demo");
let gauge = meter.createObservableGauge("gauge");
gauge.addCallback((observableResult) => {
  let randomNumber = Math.floor(Math.random() * 100);
  observableResult.observe(randomNumber, { "testKey": "testValue" });
});
from typing import Iterable

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics
from opentelemetry.metrics import CallbackOptions, Observation

configure_azure_monitor(
    connection_string="<your-connection-string>",
)

meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_gauge_demo")

def observable_gauge_generator(options: CallbackOptions) -> Iterable[Observation]:
    yield Observation(9, {"test_key": "test_value"})

def observable_gauge_sequence(options: CallbackOptions) -> Iterable[Observation]:
    observations = []
    for i in range(10):
        observations.append(
            Observation(9, {"test_key": i})
        )
    return observations

gauge = meter.create_observable_gauge("gauge", [observable_gauge_generator])
gauge2 = meter.create_observable_gauge("gauge2", [observable_gauge_sequence])

input()
Add Custom Exceptions
Some instrumentation libraries automatically report exceptions to Application Insights.
However, you may want to manually report exceptions beyond what instrumentation libraries report.
For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them
to draw attention in relevant experiences including the failures section and end-to-end transaction views.
The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown will automatically be captured and recorded. See below for an example of this behavior.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string="<your-connection-string>",
)

tracer = trace.get_tracer("otel_azure_monitor_exception_demo")

# Exception events
try:
    with tracer.start_as_current_span("hello") as span:
        # This exception will be automatically recorded
        raise Exception("Custom exception message.")
except Exception:
    print("Exception raised")
If you would like to record exceptions manually, you can disable that option
within the context manager and use record_exception() directly as shown below:
...
with tracer.start_as_current_span("hello", record_exception=False) as span:
    try:
        raise Exception("Custom exception message.")
    except Exception as ex:
        # Manually record exception
        span.record_exception(ex)
...
Add Custom Spans
You may want to add a custom span when there's a dependency request that's not already collected by an instrumentation library or an application process that you wish to model as a span on the end-to-end transaction view.
By default, the span will end up in the dependencies table with dependency type InProc.
If your method represents a background job that isn't already captured by auto-instrumentation,
we recommend that you apply the attribute kind = SpanKind.SERVER to the @WithSpan annotation
so that it will end up in the Application Insights requests table.
Use the OpenTelemetry API
If the preceding OpenTelemetry @WithSpan annotation doesn't meet your needs,
you can add your spans by using the OpenTelemetry API.
Add opentelemetry-api-1.0.0.jar (or later) to your application:
Use the GlobalOpenTelemetry class to create a Tracer:
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Tracer;
static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example");
Create a span, make it current, and then end it:
Span span = tracer.spanBuilder("my first span").startSpan();
try (Scope ignored = span.makeCurrent()) {
    // do stuff within the context of this
} catch (Throwable t) {
    span.recordException(t);
} finally {
    span.end();
}
Coming soon.
Use the OpenTelemetry API
The OpenTelemetry API can be used to add your own spans, which will appear in
the requests and dependencies tables in Application Insights.
The code example shows how to use the tracer.start_as_current_span() method to
start, make the span current, and end the span within its context.
...
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# The "with" context manager starts the span, makes it current, and ends it when the block exits
with tracer.start_as_current_span("my first span") as span:
    try:
        # Do stuff within the context of this span
        pass
    except Exception as ex:
        span.record_exception(ex)
...
By default, the span will be in the dependencies table with a dependency type of InProc.
If your method represents a background job that isn't already captured by
auto-instrumentation, we recommend that you set the attribute kind = SpanKind.SERVER so that it will end up in the Application Insights requests
table.
...
from opentelemetry import trace
from opentelemetry.trace import SpanKind
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("my request span", kind=SpanKind.SERVER) as span:
...
Send custom telemetry using the Application Insights Classic API
We recommend you use the OpenTelemetry APIs whenever possible, but there may be some scenarios when you have to use the Application Insights Classic APIs.
For example, you can use span attributes to add a custom property to your telemetry, or to set optional fields in the Application Insights schema, like Client IP.
Add a custom property to a Span
Any attributes you add to spans are exported as custom properties. They populate the customDimensions field in the requests, dependencies, traces, or exceptions table.
The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute.
Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
Add the processor shown here before the Azure Monitor Exporter.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("OTel.AzureMonitor.Demo")
    .AddProcessor(new ActivityEnrichingProcessor())
    .AddAzureMonitorTraceExporter(o =>
    {
        o.ConnectionString = "<Your Connection String>";
    })
    .Build();
Add ActivityEnrichingProcessor.cs to your project with the following code:
using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Trace;

public class ActivityEnrichingProcessor : BaseProcessor<Activity>
{
    public override void OnEnd(Activity activity)
    {
        // The updated activity will be available to all processors which are called after this processor.
        activity.DisplayName = "Updated-" + activity.DisplayName;
        activity.SetTag("CustomDimension1", "Value1");
        activity.SetTag("CustomDimension2", "Value2");
    }
}
You can use opentelemetry-api to add attributes to spans.
Adding one or more span attributes populates the customDimensions field in the requests, dependencies, traces, or exceptions table.
Add opentelemetry-api-1.0.0.jar (or later) to your application:
...
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string="<your-connection-string>",
)

span_enrich_processor = SpanEnrichingProcessor()
# Add the processor shown below to the current `TracerProvider`
trace.get_tracer_provider().add_span_processor(span_enrich_processor)
...
Add SpanEnrichingProcessor.py to your project with the following code:
from opentelemetry.sdk.trace import SpanProcessor

class SpanEnrichingProcessor(SpanProcessor):
    def on_end(self, span):
        span._name = "Updated-" + span.name
        span._attributes["CustomDimension1"] = "Value1"
        span._attributes["CustomDimension2"] = "Value2"
Set the user IP
You can populate the client_IP field for requests by setting the http.client_ip attribute on the span. Application Insights uses the IP address to generate user location attributes and then discards it by default.
You can populate the user_Id or user_AuthenticatedId field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
Important
Consult applicable privacy laws before you set the Authenticated User ID.
The Python logging library is auto-instrumented. You can attach custom dimensions to your logs by passing a dictionary into the extra argument of your logs.
...
logger.warning("WARNING: Warning log with properties", extra={"key1": "value1"})
...
Filter telemetry
You might use the following ways to filter out telemetry before it leaves your application.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("OTel.AzureMonitor.Demo")
    .AddProcessor(new ActivityFilteringProcessor())
    .AddAzureMonitorTraceExporter(o =>
    {
        o.ConnectionString = "<Your Connection String>";
    })
    .Build();
Add ActivityFilteringProcessor.cs to your project with the following code:
using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Trace;

public class ActivityFilteringProcessor : BaseProcessor<Activity>
{
    public override void OnStart(Activity activity)
    {
        // Prevents all exporters from exporting internal activities.
        if (activity.Kind == ActivityKind.Internal)
        {
            activity.IsAllDataRequested = false;
        }
    }
}
If a particular source isn't explicitly added by using AddSource("ActivitySourceName"), then none of the activities created by using that source will be exported.
Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set TraceFlags to DEFAULT.
Use the add custom property example, but replace the following lines of code:
Exclude the URL option provided by many HTTP instrumentation libraries.
The following example shows how to exclude a specific URL from being tracked by using the Flask instrumentation configuration options in the configure_azure_monitor() function.
...
import flask

from azure.monitor.opentelemetry import configure_azure_monitor

# Configure Azure monitor collection telemetry pipeline
configure_azure_monitor(
    connection_string="<your-connection-string>",
    # Pass in instrumentation configuration via kwargs
    # Key: <instrumentation-name>_config
    # Value: Dictionary of configuration keys and values
    flask_config={"excluded_urls": "http://localhost:8080/ignore"},
)

app = flask.Flask(__name__)

# Requests sent to this endpoint will not be tracked due to
# flask_config configuration
@app.route("/ignore")
def ignore():
    return "Request received but not tracked."
...
Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set TraceFlags to DEFAULT.
...
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string="<your-connection-string>",
)

trace.get_tracer_provider().add_span_processor(SpanFilteringProcessor())
...
Add SpanFilteringProcessor.py to your project with the following code:
from opentelemetry.sdk.trace import SpanProcessor
from opentelemetry.trace import SpanContext, SpanKind, TraceFlags

class SpanFilteringProcessor(SpanProcessor):
    # Prevents exporting internal spans.
    def on_start(self, span, parent_context=None):
        if span._kind is SpanKind.INTERNAL:
            span._context = SpanContext(
                span.context.trace_id,
                span.context.span_id,
                span.context.is_remote,
                TraceFlags.DEFAULT,
                span.context.trace_state,
            )
Get the trace ID or span ID
You might want to get the trace ID or span ID. If you have logs that are sent to a different destination besides Application Insights, you might want to add the trace ID or span ID to enable better correlation when you debug and diagnose issues.
Get the request trace ID and the span ID in your code:
from opentelemetry import trace
trace_id = trace.get_current_span().get_span_context().trace_id
span_id = trace.get_current_span().get_span_context().span_id
Enable the OTLP Exporter
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations.
Note
The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it.
Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the example on GitHub.
// Sends data to Application Insights as well as OTLP
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
.AddSource("OTel.AzureMonitor.Demo")
.AddAzureMonitorTraceExporter(o =>
{
o.ConnectionString = "<Your Connection String>";
})
.AddOtlpExporter()
.Build();
Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see this README.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
configure_azure_monitor(
connection_string="<your-connection-string>",
)
tracer = trace.get_tracer(__name__)
otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317")
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
with tracer.start_as_current_span("test"):
print("Hello world!")
Configuration
Offline Storage and Automatic Retries
To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. The application's telemetry is saved to disk, and the exporter periodically retries sending it for up to 48 hours. Beyond exceeding that window, telemetry is occasionally dropped in high-load applications when the maximum file size is exceeded or the SDK doesn't get a chance to clear out the file. When a choice must be made, the product saves more recent events over old ones. Learn More
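The storage location and the offline-buffering behavior are configurable. A minimal sketch using the Python distro's `configure_azure_monitor` (the `storage_directory` and `disable_offline_storage` parameter names, and the path shown, are assumptions to verify against your SDK version):

```python
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string="<your-connection-string>",
    # Hypothetical path: where buffered telemetry is written while the
    # ingestion endpoint is unreachable.
    storage_directory="/var/opt/app/telemetry-buffer",
    # Set to True to turn off offline buffering entirely.
    disable_offline_storage=False,
)
```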
The Azure Monitor Exporter uses EventSource for its own internal logging. The exporter logs are available to any EventListener by opting into the source named OpenTelemetry-AzureMonitor-Exporter. For troubleshooting steps, see OpenTelemetry Troubleshooting.
Diagnostic logging is enabled by default. For more information, see the dedicated troubleshooting article.
Azure Monitor Exporter uses the OpenTelemetry API Logger for internal logs. To enable it, use the following code:
The Azure Monitor Exporter uses the Python standard logging library for its own internal logging. OpenTelemetry API and Azure Monitor Exporter logs are logged at the WARNING or ERROR severity level for irregular activity, and at the INFO severity level for regular or successful activity. By default, the Python logging library sets the severity threshold to WARNING, so you must lower the threshold to see INFO-level logs. The following example shows how to output logs of all severity levels to the console and a file:
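A minimal sketch of such a configuration using only the standard logging library (the logger name `azure.monitor.opentelemetry.exporter` follows the exporter's package namespace, and the file name `exporter.log` is arbitrary; treat both as assumptions):

```python
import logging

# Grab the logger for the exporter's package namespace.
logger = logging.getLogger("azure.monitor.opentelemetry.exporter")
logger.setLevel(logging.DEBUG)  # let INFO (and lower) records through

# Mirror every record to the console...
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
logger.addHandler(console_handler)

# ...and to a log file.
file_handler = logging.FileHandler("exporter.log")
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
```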
Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience.
Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis.
Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers.
No known issues.
Test connectivity between your application host and the ingestion service
Application Insights SDKs and agents send telemetry as REST calls to our ingestion endpoints. You can test connectivity from your web server or application host machine to those endpoints by using raw REST clients from PowerShell or curl commands. See Troubleshoot missing application telemetry in Azure Monitor Application Insights.
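As a sketch of such a raw probe in Python's standard library (the `dc.services.visualstudio.com` host is the classic global ingestion endpoint; your actual host is the `IngestionEndpoint` value in your connection string, so treat the URL as a placeholder):

```python
import json
import urllib.request

# Placeholder: substitute the IngestionEndpoint from your connection string.
INGESTION_URL = "https://dc.services.visualstudio.com/v2/track"

# An empty telemetry batch is enough to confirm the endpoint is reachable;
# the service responds even when no valid items are sent.
body = json.dumps([]).encode("utf-8")
req = urllib.request.Request(
    INGESTION_URL,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually send the probe from your application host:
# with urllib.request.urlopen(req, timeout=10) as resp:
#     print(resp.status, resp.read().decode())
```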