Agent observability

Important

You need to be part of the Frontier preview program to get early access to Microsoft Agent 365. Frontier connects you directly with Microsoft’s latest AI innovations. Frontier previews are subject to the existing preview terms of your customer agreements. As these features are still in development, their availability and capabilities may change over time.

To participate in the Agent 365 ecosystem, you must add Agent 365 Observability capabilities to your agent. Agent 365 Observability builds on OpenTelemetry (OTel) and provides a unified framework for capturing telemetry consistently and securely across all agent platforms. By implementing this required component, you enable IT admins to monitor your agent's activity in the Microsoft 365 Admin Center and allow security teams to use Defender and Purview for compliance and threat detection.

Key benefits

  • End-to-End Visibility: Capture comprehensive telemetry for every agent invocation, including sessions, tool calls, and exceptions, giving you full traceability across platforms.
  • Security & Compliance Enablement: Feed unified audit logs into Defender and Purview, enabling advanced security scenarios and compliance reporting for your agent.
  • Cross-Platform Flexibility: Build on OTel standards and support diverse runtimes and platforms like Copilot Studio, Foundry, and future agent frameworks.
  • Operational Efficiency for Admins: Provide centralized observability in Microsoft 365 Admin Center, reducing troubleshooting time and improving governance with role-based access controls for IT teams managing your agent.

Installation

Use these commands to install the observability modules for the languages supported by Agent 365.

pip install microsoft-agents-a365-observability-core
pip install microsoft-agents-a365-runtime

Configuration

Use the following settings to enable and customize Agent 365 Observability for your agent.

The environment variables required for observability are:

Environment Variable | Description
ENABLE_OBSERVABILITY=true | Flag to enable or disable tracing. Defaults to false.
ENABLE_A365_OBSERVABILITY_EXPORTER=true | When true, exports telemetry to the Agent 365 Observability service; otherwise the SDK falls back to the console exporter.
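
These variables can also be set from code before the SDK is configured. A minimal sketch using os.environ (the values shown are examples):

import os

# Enable tracing and export spans to the Agent 365 Observability service.
# Set ENABLE_A365_OBSERVABILITY_EXPORTER to "false" to print spans to the console instead.
os.environ["ENABLE_OBSERVABILITY"] = "true"
os.environ["ENABLE_A365_OBSERVABILITY_EXPORTER"] = "true"

With the environment configured, initialize the SDK and provide a token resolver: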
from microsoft_agents_a365.observability.core import config

def token_resolver(agent_id: str, tenant_id: str) -> str | None:
    # Implement secure token retrieval here
    return "Bearer <token>"

config.configure(
    service_name="my-agent-service",
    service_namespace="my.namespace",
    token_resolver=token_resolver,
)

Omit the token resolver to log spans to the console instead of exporting them.

Baggage attributes

Use BaggageBuilder to set contextual information that flows through all spans in a request. The SDK implements a SpanProcessor that copies all nonempty baggage entries to newly started spans without overwriting existing attributes.

from microsoft_agents_a365.observability.core.middleware.baggage_builder import BaggageBuilder

with (
    BaggageBuilder()
    .tenant_id("tenant-123")
    .agent_id("agent-456")
    .correlation_id("corr-789")
    .build()
):
    # Any spans started in this context will receive these as attributes
    pass

Token Resolver

When using the Agent 365 exporter, you must provide a token resolver function that returns an authentication token. When using the Agent 365 Observability SDK with the Agent Hosting framework, you can generate tokens using the TurnContext from agent activities.

from os import environ

from microsoft_agents.activity import load_configuration_from_env
from microsoft_agents.authentication.msal import MsalConnectionManager
from microsoft_agents.hosting.aiohttp import CloudAdapter
from microsoft_agents.hosting.core import (
    AgentApplication,
    Authorization,
    MemoryStorage,
    TurnContext,
    TurnState,
)
from microsoft_agents_a365.runtime.environment_utils import (
    get_observability_authentication_scope,
)

agents_sdk_config = load_configuration_from_env(environ)

STORAGE = MemoryStorage()
CONNECTION_MANAGER = MsalConnectionManager(**agents_sdk_config)
ADAPTER = CloudAdapter(connection_manager=CONNECTION_MANAGER)
# Optional transcript logging middleware (requires TranscriptLoggerMiddleware and
# ConsoleTranscriptLogger imports from your hosting framework).
ADAPTER.use(TranscriptLoggerMiddleware(ConsoleTranscriptLogger()))
AUTHORIZATION = Authorization(STORAGE, CONNECTION_MANAGER, **agents_sdk_config)

AGENT_APP = AgentApplication[TurnState](
    storage=STORAGE, adapter=ADAPTER, authorization=AUTHORIZATION, **agents_sdk_config
)

@AGENT_APP.activity("message", auth_handlers=["AGENTIC"])
async def on_message(context: TurnContext, _state: TurnState):
    aau_auth_token = await AGENT_APP.auth.exchange_token(
                        context,
                        scopes=get_observability_authentication_scope(),
                        auth_handler_id="AGENTIC",
                    )
    # Cache this auth token and return it via the token resolver (see the sketch below).
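
One way to wire the exchanged token back into the SDK is shown below. This is a minimal sketch: the module-level cache and its key are illustrative, not part of the SDK, and assume on_message stores the exchanged token (or its token string) in the cache.

from microsoft_agents_a365.observability.core import config

# Hypothetical in-memory cache, populated inside on_message after the token exchange,
# for example: _cached_tokens[(agent_id, tenant_id)] = aau_auth_token
_cached_tokens: dict[tuple[str, str], str] = {}

def token_resolver(agent_id: str, tenant_id: str) -> str | None:
    # Return the cached token for this agent and tenant, or None if none is available yet.
    return _cached_tokens.get((agent_id, tenant_id))

config.configure(
    service_name="my-agent-service",
    service_namespace="my.namespace",
    token_resolver=token_resolver,
)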

Auto-instrumentation

Auto-instrumentation automatically listens to an agentic framework's (SDK's) existing telemetry signals for traces and forwards them to the Agent 365 Observability service. This eliminates the need for developers to write monitoring code manually, simplifying setup and ensuring consistent performance tracking.

Auto-instrumentation is supported across multiple SDKs and platforms:

Platform | Supported SDKs / Frameworks
.NET | Semantic Kernel, OpenAI, Agent Framework
Python | Semantic Kernel, OpenAI, Agent Framework, LangChain
Node.js | OpenAI

Note

Support for auto-instrumentation varies by platform and SDK implementation.

Semantic Kernel

Auto-instrumentation requires the baggage builder: set the agent ID and tenant ID using BaggageBuilder, as shown in the sketch after the configuration example below.

Install the package

pip install microsoft-agents-a365-observability-extensions-semantic-kernel

Configure observability

from microsoft_agents_a365.observability.core.config import configure
from microsoft_agents_a365.observability.extensions.semantic_kernel import SemanticKernelInstrumentor

# Configure observability
configure(
    service_name="my-semantic-kernel-agent",
    service_namespace="ai.agents"
)

# Enable auto-instrumentation
instrumentor = SemanticKernelInstrumentor()
instrumentor.instrument()

# Your Semantic Kernel code is now automatically traced
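
Because auto-instrumentation picks up the agent and tenant identity from baggage, wrap your agent invocations in a BaggageBuilder context. A sketch, with placeholder IDs:

from microsoft_agents_a365.observability.core.middleware.baggage_builder import BaggageBuilder

with BaggageBuilder().tenant_id("tenant-123").agent_id("agent-456").build():
    # Invoke your Semantic Kernel agent here; spans created by the
    # auto-instrumentation inherit the tenant and agent identifiers.
    ...

The same pattern applies to the OpenAI, Agent Framework, and LangChain instrumentors described below.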

OpenAI

Auto-instrumentation requires the baggage builder: set the agent ID and tenant ID using BaggageBuilder.

Install the package.

pip install microsoft-agents-a365-observability-extensions-openai

Configure observability.

from microsoft_agents_a365.observability.core.config import configure
from microsoft_agents_a365.observability.extensions.openai_agents import OpenAIAgentsTraceInstrumentor

# Configure observability
configure(
    service_name="my-openai-agent",
    service_namespace="ai.agents"
)

# Enable auto-instrumentation
instrumentor = OpenAIAgentsTraceInstrumentor()
instrumentor.instrument()

# Your OpenAI Agents code is now automatically traced

Agent Framework

Auto-instrumentation requires the baggage builder: set the agent ID and tenant ID using BaggageBuilder.

Install the package

pip install microsoft-agents-a365-observability-extensions-agent-framework

Configure observability

from microsoft_agents_a365.observability.core.config import configure
from microsoft_agents_a365.observability.extensions.agentframework.trace_instrumentor import (
    AgentFrameworkInstrumentor,
)

# Configure observability
configure(
    service_name="AgentFrameworkTracingWithAzureOpenAI",
    service_namespace="AgentFrameworkTesting",
)

# Enable auto-instrumentation
AgentFrameworkInstrumentor().instrument()

LangChain Framework

Auto-instrumentation requires the baggage builder: set the agent ID and tenant ID using BaggageBuilder.

Install the package.

pip install microsoft-agents-a365-observability-extensions-langchain

Configure observability

from microsoft_agents_a365.observability.core.config import configure
from microsoft_agents_a365.observability.extensions.langchain import CustomLangChainInstrumentor

# Configure observability
configure(
    service_name="my-langchain-agent",
    service_namespace="ai.agents"
)

# Enable auto-instrumentation
CustomLangChainInstrumentor()

# Your LangChain code is now automatically traced

Manual Instrumentation

The Agent 365 Observability SDK can be used to understand the internal workings of your agent. The SDK provides three scopes that can be started: InvokeAgentScope, ExecuteToolScope, and InferenceScope.

Agent Invocation

Use this scope at the start of your agent's processing. With the invoke agent scope, you capture properties such as the agent being invoked and agent user data.

from microsoft_agents_a365.observability.core.invoke_agent_scope import InvokeAgentScope
from microsoft_agents_a365.observability.core.invoke_agent_details import InvokeAgentDetails
from microsoft_agents_a365.observability.core.tenant_details import TenantDetails
from microsoft_agents_a365.observability.core.request import Request

invoke_details = InvokeAgentDetails(
    details=agent_details,        # AgentDetails instance
    endpoint=my_endpoint,         # Optional endpoint (with hostname/port)
    session_id="session-42"
)
tenant_details = TenantDetails(tenant_id="tenant-123")
req = Request(content="User asks a question")

with InvokeAgentScope.start(invoke_details, tenant_details, req):
    # Perform agent invocation logic
    response = call_agent(...)
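
The agent_details value used above is an AgentDetails instance that describes your agent. The following is a hedged sketch of constructing it, assuming the class exposes fields matching the required properties listed later in this article; the import path and parameter names are assumptions, and the values are taken from the sample logs below.

# Assumed import path for AgentDetails.
from microsoft_agents_a365.observability.core.agent_details import AgentDetails

agent_details = AgentDetails(
    agent_id="aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",    # unique identifier for the agent
    agent_name="Azure OpenAI Agent",                    # human-readable name
    agent_auid="aaaaaaaa-bbbb-cccc-1111-222222222222",  # agent user ID (AUID)
    agent_upn="Delivery Agent",                         # agent user principal name (UPN)
    agent_blueprint_id="00001111-aaaa-2222-bbbb-3333cccc4444",  # blueprint/application ID
    tenant_id="aaaabbbb-0000-cccc-1111-dddd2222eeee",
    conversation_id="session-42",
)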

Tool Execution

The following examples demonstrate how to instrument your agent's tool execution with observability tracking to capture telemetry for monitoring and auditing purposes.

from microsoft_agents_a365.observability.core.execute_tool_scope import ExecuteToolScope
from microsoft_agents_a365.observability.core.tool_call_details import ToolCallDetails

tool_details = ToolCallDetails(
    tool_name="summarize",
    tool_type="function",
    tool_call_id="tc-001",
    arguments="{'text': '...'}",
    description="Summarize provided text",
    endpoint=None  # or endpoint object with hostname/port
)

with ExecuteToolScope.start(tool_details, agent_details, tenant_details):
    result = run_tool(tool_details)

Inference

The following examples show how to instrument AI model inference calls with observability tracking to capture token usage, model details, and response metadata.

from microsoft_agents_a365.observability.core.inference_scope import InferenceScope
from microsoft_agents_a365.observability.core.inference_call_details import InferenceCallDetails
from microsoft_agents_a365.observability.core.request import Request

inference_details = InferenceCallDetails(
    operationName=SomeEnumOrValue("chat"),
    model="gpt-4o-mini",
    providerName="azure-openai",
    inputTokens=123,
    outputTokens=456,
    finishReasons=["stop"],
    responseId="resp-987"
)
req = Request(content="Explain quantum computing simply.")

with InferenceScope.start(inference_details, agent_details, tenant_details, req):
    completion = call_llm(...)

Validate Locally

To verify that you have successfully integrated with the observability SDK, examine the console logs generated by your agent.

Set the environment variable ENABLE_A365_OBSERVABILITY_EXPORTER to false. This exports spans (traces) to the console.

Sample logs

The logs might look slightly different depending on the platform.

Console log: invoke agent span

This example illustrates a typical invoke agent span printed by the console exporter when local validation is enabled.

{
    "name": "invoke_agent Azure OpenAI Agent",
    "context": {
        "trace_id": "0x4bd8f606688c3f3347d69c1b6539c957",
        "span_id": "0x0766d68605234692",
        "trace_state": "[]"
    },
    "kind": "SpanKind.CLIENT",
    "parent_id": null,
    "start_time": "2025-11-24T16:16:54.017403Z",
    "end_time": "2025-11-24T16:17:09.373357Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "operation.source": "SDK",
        "correlation.id": "aaaa0000-bb11-2222-33cc-444444dddddd",
        "gen_ai.conversation.item.link": "http://localhost:56150/_connector",
        "gen_ai.agent.upn": "Delivery Agent",
        "gen_ai.agent.user.id": "aaaaaaaa-bbbb-cccc-1111-222222222222",
        "gen_ai.caller.id": "bbbbbbbb-cccc-dddd-2222-333333333333",
        "gen_ai.caller.name": "Alex Wilber",
        "gen_ai.caller.upn": "Sample UPN",
        "gen_ai.channel.name": "msteams",
        "gen_ai.system": "az.ai.agent365",
        "gen_ai.operation.name": "invoke_agent",
        "gen_ai.agent.id": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
        "gen_ai.agent.name": "Azure OpenAI Agent",
        "gen_ai.agent.description": "An AI agent powered by Azure OpenAI",
        "gen_ai.agent.applicationid": "00001111-aaaa-2222-bbbb-3333cccc4444",
        "gen_ai.conversation.id": "__PERSONAL_CHAT_ID__",
        "tenant.id": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
        "session.id": "__PERSONAL_CHAT_ID__",
        "gen_ai.execution.type": "HumanToAgent",
        "gen_ai.input.messages": "[\"hi, what can you do\"]",
        "gen_ai.output.messages": "[\"Hi! I can assist you with a variety of tasks, including answering questions, providing information on a wide range of topics, helping with problem-solving, offering writing assistance, and more. Just let me know what you need help with!\"]"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.38.0",
            "service.namespace": "MyAgentTesting",
            "service.name": "MyAgentTracing"
        },
        "schema_url": ""
    }
}

Console log: execute tool span

This example shows a typical execute tool span emitted by the console exporter during local validation.

{
    "name": "execute_tool get_weather",
    "context": {
        "trace_id": "0xa9a1be6323bd52476d6a28b8893c6aa8",
        "span_id": "0x1dec90d34ecc0823",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": "0x2e727b4c133cbd50",
    "start_time": "2025-11-24T18:47:55.960305Z",
    "end_time": "2025-11-24T18:47:55.962306Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "operation.source": "SDK",
        "correlation.id": "aaaa0000-bb11-2222-33cc-444444dddddd",
        "gen_ai.conversation.item.link": "http://localhost:56150/_connector",
        "gen_ai.agent.upn": "Delivery Agent",
        "gen_ai.agent.user.id": "aaaaaaaa-bbbb-cccc-1111-222222222222",
        "gen_ai.execution.type": "HumanToAgent",
        "gen_ai.channel.name": "msteams",
        "gen_ai.system": "az.ai.agent365",
        "gen_ai.operation.name": "execute_tool",
        "gen_ai.agent.id": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
        "gen_ai.agent.name": "Azure OpenAI Agent",
        "gen_ai.agent.description": "An AI agent powered by Azure OpenAI",
        "gen_ai.agent.applicationid": "00001111-aaaa-2222-bbbb-3333cccc4444",
        "gen_ai.conversation.id": "__PERSONAL_CHAT_ID__",
        "tenant.id": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
        "gen_ai.tool.name": "get_weather",
        "gen_ai.tool.arguments": "current location",
        "gen_ai.tool.type": "function",
        "gen_ai.tool.call.id": "bbbbbbbb-1111-2222-3333-cccccccccccc",
        "gen_ai.tool.description": "Executing get_weather tool"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.38.0",
            "service.namespace": "MyAgentTesting",
            "service.name": "MyAgentTracing"
        },
        "schema_url": ""
    }
}

Console log: inference span

This example demonstrates a typical inference span output from the console exporter used for local validation.

{
    "name": "Chat gpt-4o-mini",
    "context": {
        "trace_id": "0xceb86559a6f7c2c16a45ec6e0b201ae1",
        "span_id": "0x475beec8c1c4fa56",
        "trace_state": "[]"
    },
    "kind": "SpanKind.CLIENT",
    "parent_id": "0x959a854f18fa2c22",
    "start_time": "2025-11-24T18:04:07.061703Z",
    "end_time": "2025-11-24T18:04:09.506951Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "operation.source": "SDK",
        "correlation.id": "aaaa0000-bb11-2222-33cc-444444dddddd",
        "gen_ai.conversation.item.link": "http://localhost:56150/_connector",
        "gen_ai.agent.upn": "Delivery Agent",
        "gen_ai.agent.user.id": "aaaaaaaa-bbbb-cccc-1111-222222222222",
        "gen_ai.execution.type": "HumanToAgent",
        "gen_ai.channel.name": "msteams",
        "gen_ai.system": "az.ai.agent365",
        "gen_ai.agent.id": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
        "gen_ai.agent.name": "Azure OpenAI Agent",
        "gen_ai.agent.description": "An AI agent powered by Azure OpenAI",
        "gen_ai.agent.applicationid": "00001111-aaaa-2222-bbbb-3333cccc4444",
        "gen_ai.conversation.id": "__PERSONAL_CHAT_ID__",
        "tenant.id": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
        "gen_ai.input.messages": "hi, what can you do",
        "gen_ai.operation.name": "Chat",
        "gen_ai.request.model": "gpt-4o-mini",
        "gen_ai.provider.name": "Azure OpenAI",
        "gen_ai.output.messages": "\"Hello! I can help answer questions, provide information, assist with problem-solving, offer writing suggestions, and more. Just let me know what you need!\"",
        "gen_ai.usage.input_tokens": "33",
        "gen_ai.usage.output_tokens": "32",
        "gen_ai.response.finish_reasons": "[\"stop\"]"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.38.0",
            "service.namespace": "MyAgentTesting",
            "service.name": "MyAgentTracing"
        },
        "schema_url": ""
    }
}

Observability requirements

IT administrators use the data you set in your code to monitor your agent's activity. If that data is incomplete, the benefits of observability aren't fully realized, so agents must provide the required data to receive all expected benefits. A validation process verifies that this data exists.

Telemetry is organized into scopes, or contexts: each operation your agent performs exists within a different scope. The required data must be included either in the baggage context created using BaggageBuilder (see Baggage attributes) or within the individual scopes described in Manual Instrumentation.

To validate your implementation, follow the instructions in Validate Locally to generate console logs for your instrumentation. Then review the Validate for store publishing section to identify which attributes are required and which are optional. All required attributes must be set to successfully pass validation.

Review the required properties and parameter values described for these classes:

  • Properties set using the BaggageBuilder class might be set or overridden by the properties on the respective scopes.

  • Set the properties in the following table using the InvokeAgentScope.start method.

    Data Description
    invoke_agent_details.details.agent_id The unique identifier for the AI agent
    invoke_agent_details.details.agent_name The human-readable name of the AI agent
    invoke_agent_details.details.agent_auid The agent user ID (AUID)
    invoke_agent_details.details.agent_upn The agent user principal name (UPN)
    invoke_agent_details.details.agent_blueprint_id The agent blueprint/application ID
    invoke_agent_details.details.tenant_id The tenant ID for the agent
    invoke_agent_details.details.conversation_id The identifier for the conversation or session
    invoke_agent_details.endpoint The endpoint for the agent invocation
    tenant_details.tenant_id The unique identifier for the tenant
    request.content The payload content sent to the agent for invocation
    request.execution_type Invocation type indicating request origin (for example, HumanToAgent or AgentToAgent)
    caller_details.caller_id The unique identifier for the caller
    caller_details.caller_upn The user principal name (UPN) of the caller
    caller_details.caller_user_id The user ID of the caller
    caller_details.tenant_id The tenant ID of the caller
  • Set the properties in the following table using the ExecuteToolScope.start method.

    Data Description
    details.tool_name The name of the tool being executed
    details.arguments Tool arguments/parameters
    details.tool_call_id The unique identifier for the tool call
    details.tool_type The type of the tool being executed
    details.endpoint If an external tool call is made
    agent_details.agent_id The unique identifier for the AI agent
    agent_details.agent_name The human-readable name of the AI agent
    agent_details.agent_auid The agent user ID
    agent_details.agent_upn The agent user principal name (UPN)
    agent_details.agent_blueprint_id The agent blueprint/application ID
    agent_details.tenant_id Tenant ID for the agent.
    agent_details.conversation_id The conversation ID for the agent invocation.
    tenant_details.tenant_id Tenant ID for the agent.
  • Set the properties in the following table using the InferenceScope.start method.

    Data Description
    details.operationName The operation name/type for the inference
    details.model The model name/identifier
    details.providerName The provider name
    agent_details.agent_id The unique identifier for the AI agent
    agent_details.agent_name The human-readable name of the AI agent
    agent_details.agent_auid The agent user ID (AUID)
    agent_details.agent_upn The agent user principal name (UPN)
    agent_details.agent_blueprint_id The agent blueprint/application ID
    agent_details.tenant_id The unique identifier for the tenant
    agent_details.conversation_id The identifier for the conversation or session
    tenant_details.tenant_id The unique identifier for the tenant
    request.content The payload content sent to the agent for inference
    request.execution_type Invocation type indicating request origin (for example, HumanToAgent or AgentToAgent)
    request.source_metadata Represents the channel information

Validate for store publishing

Use console logs to validate your agent's observability integration before publishing by implementing the required invoke agent, execute tool, and inference scopes. Then compare your agent's logs against the following attribute lists to verify that all required attributes are present. Capture attributes on each scope or via the baggage builder, and include optional attributes at your discretion.

For more information about store publishing requirements, see store validation guidelines.

InvokeAgentScope attributes

The following list summarizes the required and optional telemetry attributes recorded when you start an InvokeAgentScope.

"attributes": {
        "correlation.id": "Optional",
        "gen_ai.agent.applicationid": "Required",
        "gen_ai.agent.description": "Optional",
        "gen_ai.agent.id": "Required",
        "gen_ai.agent.name": "Required",
        "gen_ai.agent.platformid": "Optional",
        "gen_ai.agent.type": "Optional",
        "gen_ai.agent.thought.process": "Optional",
        "gen_ai.agent.upn": "Required",
        "gen_ai.agent.user.id": "Required",
        "gen_ai.caller.agent.applicationid": "Required (if execution type is agent to agent)",
        "gen_ai.caller.agent.id": "Required (if execution type is agent to agent)",
        "gen_ai.caller.agent.name": "Required (if execution type is agent to agent)",
        "gen_ai.caller.agent.platformid": "Optional",
        "gen_ai.caller.agent.type": "Optional",
        "gen_ai.caller.client.ip": "Required",
        "gen_ai.caller.id": "Required",
        "gen_ai.caller.name": "Optional",
        "gen_ai.caller.upn": "Required",
        "gen_ai.channel.link": "Optional",
        "gen_ai.channel.name": "Required",
        "gen_ai.conversation.id": "Required",
        "gen_ai.conversation.item.link": "Optional",
        "gen_ai.execution.type": "Required",
        "gen_ai.input.messages": "Required",
        "gen_ai.operation.name": "Required (Set by the SDK)",
        "gen_ai.output.messages": "Required",
        "gen_ai.system": "Optional",
        "hiring.manager.id": "Optional",
        "operation.source": "Required (SDK sets a default value)",
        "server.address": "Required",
        "server.port": "Required",
        "session.id": "Optional",
        "session.description": "Optional",
        "tenant.id": "Required"
    },

ExecuteToolScope attributes

The following list summarizes the required and optional telemetry attributes recorded when you start an ExecuteToolScope.

"attributes": {
        "correlation.id": "Optional",
        "gen_ai.agent.applicationid": "Required",
        "gen_ai.agent.description": "Optional",
        "gen_ai.agent.id": "Required",
        "gen_ai.agent.name": "Required",
        "gen_ai.agent.platformid": "Optional",
        "gen_ai.agent.type": "Optional",
        "gen_ai.agent.upn": "Required",
        "gen_ai.agent.user.id": "Required",
        "gen_ai.caller.client.ip": "Required",
        "gen_ai.channel.name": "Required",
        "gen_ai.conversation.id": "Required",
        "gen_ai.conversation.item.link": "Optional",
        "gen_ai.execution.type": "Optional",
        "gen_ai.operation.name": "Required (Set by the SDK)",
        "gen_ai.system": "Optional",
        "gen_ai.tool.arguments": "Required",
        "gen_ai.tool.call.id": "Required",
        "gen_ai.tool.description": "Optional",
        "gen_ai.tool.name": "Required",
        "gen_ai.tool.type": "Required",
        "hiring.manager.id": "Optional",        
        "operation.source": "Required (SDK sets a default value)",
        "server.address": "Required (if tool call is external)",
        "server.port": "Required (if tool call is external)",
        "session.id": "Optional",
        "session.description": "Optional",
        "tenant.id": "Required"
    },

InferenceScope attributes

The following list summarizes the required and optional telemetry attributes recorded when you start an InferenceScope.

"attributes": {
        "correlation.id": "Optional",
        "gen_ai.agent.applicationid": "Required",
        "gen_ai.agent.description": "Optional",
        "gen_ai.agent.id": "Required",
        "gen_ai.agent.name": "Required",
        "gen_ai.agent.platformid": "Optional",
        "gen_ai.agent.type": "Optional",
        "gen_ai.agent.thought.process": "Optional",
        "gen_ai.agent.upn": "Required",
        "gen_ai.agent.user.id": "Required",
        "gen_ai.caller.client.ip": "Required",
        "gen_ai.channel.link": "Optional",
        "gen_ai.channel.name": "Required",
        "gen_ai.conversation.id": "Required",
        "gen_ai.conversation.item.link": "Optional",
        "gen_ai.execution.type": "Optional",
        "gen_ai.input.messages": "Required",
        "gen_ai.operation.name": "Chat",
        "gen_ai.output.messages": "Required",
        "gen_ai.provider.name": "Required",
        "gen_ai.request.model": "Required",
        "gen_ai.response.finish_reasons": "Optional",
        "gen_ai.usage.input_tokens": "Optional",
        "gen_ai.usage.output_tokens": "Optional",
        "hiring.manager.id": "Optional",
        "operation.source": "Required (SDK sets a default value)",
        "session.description": "Optional",
        "session.id": "Optional",
        "tenant.id": "Required"
    }

Test your agent with observability

After you implement observability in your agent, test to ensure telemetry is captured correctly. Follow the testing guide to set up your environment, then focus on the View observability logs section to validate that your observability implementation works as expected.