responses Package
Public API surface for the Azure AI Agent Server Responses package.
Packages
| Package | Description |
|---|---|
| hosting | HTTP hosting, routing, and request orchestration for the Responses server. |
| models | Canonical non-generated model types for the Responses server. |
| store | Response storage providers and related protocols. |
| streaming | Event streaming, SSE encoding, and output item builders. |
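The streaming package's SSE encoding follows the standard Server-Sent Events wire format. A minimal sketch of such an encoder (the function name and event type below are illustrative, not the package's actual API):

```python
import json

def encode_sse(event_type: str, data: dict) -> str:
    # One SSE frame: an `event:` line, a `data:` line carrying JSON,
    # and a blank line terminating the event.
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

frame = encode_sse("response.output_text.delta", {"delta": "Hello"})
```

Each frame is independent, so a consumer can resume mid-stream; the package's ResponseEventStream additionally stamps deterministic sequence numbers so replayed frames arrive in a stable order.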
Classes
| Class | Description |
|---|---|
| CreateResponse | Override of the generated CreateResponse model. |
| FoundryApiError | Raised for all other non-success HTTP responses. |
| FoundryBadRequestError | Raised for invalid-request or conflict errors (HTTP 400, 409). |
| FoundryResourceNotFoundError | Raised when the requested resource does not exist (HTTP 404). |
| FoundryStorageError | Base class for errors returned by the Foundry storage API. |
| FoundryStorageProvider | An HTTP-backed response storage provider that persists data via the Foundry storage API. Satisfies the <xref:azure.ai.agentserver.responses.store._base.ResponseProviderProtocol> structural protocol; obtain an instance through the constructor and supply it when building the server host. Uses AsyncPipelineClient for HTTP transport, providing built-in retry, logging, distributed tracing, and bearer-token authentication. |
| FoundryStorageSettings | Immutable runtime configuration for FoundryStorageProvider. |
| InMemoryResponseProvider | In-memory provider implementing both storage protocols, with in-memory state and an async mutation lock. |
| IsolationContext | Platform-injected isolation keys for multi-tenant state partitioning, supplied by the Foundry hosting platform via request headers. When the headers are absent (e.g. local development), both keys are left unset. |
| ResponseContext | Runtime context exposed to response handlers and used by hosting orchestration. |
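Multi-tenant key partitioning of the kind IsolationContext enables can be sketched with a frozen dataclass; the field and header names below are hypothetical stand-ins, not the platform's actual ones:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class IsolationKeys:
    # Hypothetical stand-in for platform-injected isolation keys.
    tenant_id: Optional[str] = None
    agent_id: Optional[str] = None

    @classmethod
    def from_headers(cls, headers: dict) -> "IsolationKeys":
        # When the platform headers are absent (e.g. local development),
        # both keys stay unset and all state shares one partition.
        return cls(tenant_id=headers.get("x-tenant-id"),
                   agent_id=headers.get("x-agent-id"))

    def partition_key(self) -> tuple:
        # Frozen and hashable, so the tuple can key per-tenant state maps.
        return (self.tenant_id, self.agent_id)

local = IsolationKeys.from_headers({})
hosted = IsolationKeys.from_headers({"x-tenant-id": "t1", "x-agent-id": "a1"})
```

A storage provider can then namespace its state by `partition_key()` so tenants never observe each other's responses.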
| ResponseEventStream | Response event stream with deterministic sequence numbers. |
| ResponseObject | Override of the generated ResponseObject model. |
| ResponseProviderProtocol | Protocol for response storage providers. Implementations provide response envelope storage plus input/history item lookup. Every operation accepts an optional isolation context. |
| ResponseStreamProviderProtocol | Protocol for providers that can persist and replay SSE stream events. Implement this protocol alongside ResponseProviderProtocol to enable SSE replay for responses that are no longer resident in the in-process runtime state (for example, after a process restart). |
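These protocols are structural: any class with matching method shapes satisfies them, no inheritance required. A simplified sketch using typing.Protocol (the method names and signatures here are illustrative, not the package's actual ones):

```python
import asyncio
from typing import Optional, Protocol, runtime_checkable

@runtime_checkable
class StorageProvider(Protocol):
    # Structural protocol: implementations only need matching methods.
    async def save_response(self, response: dict) -> None: ...
    async def get_response(self, response_id: str) -> Optional[dict]: ...

class DictProvider:
    # Satisfies StorageProvider without inheriting from it.
    def __init__(self) -> None:
        self._responses: dict = {}

    async def save_response(self, response: dict) -> None:
        self._responses[response["id"]] = response

    async def get_response(self, response_id: str) -> Optional[dict]:
        return self._responses.get(response_id)

async def demo() -> Optional[dict]:
    provider = DictProvider()
    await provider.save_response({"id": "resp_1", "status": "completed"})
    return await provider.get_response("resp_1")

stored = asyncio.run(demo())
provider_ok = isinstance(DictProvider(), StorageProvider)
```

Because the check is structural, a custom provider (database-backed, cache-backed, etc.) plugs in without touching the package's class hierarchy.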
| ResponsesAgentServerHost | Responses protocol host for Azure AI Hosted Agents. An AgentServerHost subclass that adds the Responses API endpoints. Use the response_handler decorator to wire a handler function to the create endpoint; for multi-protocol agents, compose via cooperative inheritance. |
| ResponsesServerOptions | Configuration values for hosting and runtime behavior. |
| TextResponse | A high-level convenience that produces a complete text-message response stream from a plain string, a callable (sync or async), or an async iterable (token streaming). Implements <xref:AsyncIterable> so it can be returned directly from a response handler, and handles the full SSE lifecycle automatically. |
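The three source shapes TextResponse accepts can all be normalized to a single async stream of text chunks. A sketch of that normalization with stand-in types (this is not the package's implementation):

```python
import asyncio
import inspect
from typing import AsyncIterable, Awaitable, Callable, Union

TextSource = Union[str, Callable[[], Union[str, Awaitable[str]]], AsyncIterable[str]]

async def iter_text(source: TextSource):
    # Yield text chunks from a plain string, a sync or async callable,
    # or an async iterable (token streaming).
    if isinstance(source, str):
        yield source
    elif callable(source):
        result = source()
        if inspect.isawaitable(result):
            result = await result
        yield result
    else:
        async for chunk in source:
            yield chunk

async def collect(source: TextSource) -> str:
    return "".join([chunk async for chunk in iter_text(source)])

async def tokens():
    for t in ("Hel", "lo"):
        yield t

plain = asyncio.run(collect("Hello"))
from_callable = asyncio.run(collect(lambda: "Hello"))
streamed = asyncio.run(collect(tokens()))
```

Once every source looks like an async iterator, the SSE lifecycle (created, deltas, completed) can be driven by one code path regardless of how the text was produced.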
Functions
get_conversation_id
Extract the conversation ID from a request or response's conversation field.
If conversation is a plain string, it is returned directly.
If it is a <xref:azure.ai.agentserver.responses.ConversationParam_2> object, its id field is returned.
get_conversation_id(request: CreateResponse | ResponseObject) -> str | None
Parameters
| Name | Description |
|---|---|
| request (Required) | The create-response request or response object. |
Returns
| Type | Description |
|---|---|
| str \| None | The conversation ID, or None if no conversation is set. |
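The documented lookup is easy to mirror with a stand-in type; ConversationRef below is a hypothetical substitute for the real conversation parameter object:

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class ConversationRef:
    # Hypothetical stand-in for a conversation object carrying an `id` field.
    id: str

def conversation_id_of(conversation: Union[str, ConversationRef, None]) -> Optional[str]:
    if conversation is None:
        return None
    if isinstance(conversation, str):
        # A plain string is already the conversation ID.
        return conversation
    # An object contributes its `id` field.
    return conversation.id
```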
get_input_expanded
Normalize CreateResponse.input into a list of <xref:azure.ai.agentserver.responses.Item>:

- If input is None, returns [].
- If input is a string, wraps it as a single <xref:azure.ai.agentserver.responses.ItemMessage> with role=user and <xref:azure.ai.agentserver.responses.MessageContentInputTextContent>.
- If input is already a list, each element is deserialized into the appropriate <xref:azure.ai.agentserver.responses.Item> subclass (e.g., <xref:azure.ai.agentserver.responses.ItemMessage>, <xref:azure.ai.agentserver.responses.FunctionCallOutputItemParam>).
get_input_expanded(request: CreateResponse) -> list[azure.ai.agentserver.responses.models._generated.sdk.models.models._models.Item]
Parameters
| Name | Description |
|---|---|
| request (Required) | The create-response request. |
Returns
| Type | Description |
|---|---|
| list[Item] | A list of typed input items. |
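The three normalization branches can be sketched with plain dicts standing in for the typed Item models (the dict shapes are illustrative, not the package's wire format):

```python
from typing import Union

def expand_input(input_value: Union[str, list, None]) -> list:
    if input_value is None:
        # No input at all normalizes to an empty list.
        return []
    if isinstance(input_value, str):
        # A bare string becomes a single user message with text content.
        return [{
            "type": "message",
            "role": "user",
            "content": [{"type": "input_text", "text": input_value}],
        }]
    # A list passes through item by item (the real function deserializes
    # each element into the appropriate typed Item subclass).
    return list(input_value)
```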
to_output_item
Convert an <xref:azure.ai.agentserver.responses.Item> to the corresponding <xref:azure.ai.agentserver.responses.OutputItem>.
Generates a type-specific ID via <xref:IdGenerator.new_item_id> and applies status according to per-type rules:
- Completed: explicitly listed types get status = "completed".
- Preserve status: types whose original status must be kept (ItemOutputMessage, ApplyPatch*).
- No status: all other types (including any future types) receive no status value. This opt-in design prevents newly added item types from accidentally gaining a status field.
Returns None for <xref:azure.ai.agentserver.responses.ItemReferenceParam> or unrecognised types.
The conversion leverages _deserialize(OutputItem, data) which
resolves the correct subtype via the type discriminator. All 24
input/output discriminator pairs share the same string values, so the
dict representation produced by dict(item) is directly compatible
with OutputItem deserialization.
to_output_item(item: Item, response_id: str | None = None) -> OutputItem | None
Parameters
| Name | Description |
|---|---|
| item (Required) | The input item to convert. |
| response_id | An existing ID (typically the response ID) used as a partition-key hint for the generated item ID. Default value: None. |
Returns
| Type | Description |
|---|---|
| OutputItem \| None | The converted output item, or None for <xref:azure.ai.agentserver.responses.ItemReferenceParam> or unrecognised types. |
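The per-type status rules described above amount to a three-way dispatch. A sketch over dict stand-ins (the type sets here are illustrative, not the package's actual lists):

```python
COMPLETED_TYPES = {"function_call", "reasoning"}         # illustrative set
PRESERVE_STATUS_TYPES = {"message", "apply_patch_call"}  # illustrative set

def apply_status_rules(item: dict) -> dict:
    # Start from a copy with no status; only the rules below add one back,
    # so new item types default to having no status field at all.
    out = {k: v for k, v in item.items() if k != "status"}
    if item["type"] in COMPLETED_TYPES:
        out["status"] = "completed"
    elif item["type"] in PRESERVE_STATUS_TYPES and "status" in item:
        # Preserve the original status for the listed types.
        out["status"] = item["status"]
    return out
```

The opt-in shape is the point: unknown types fall through both branches and emerge without a status, matching the "no status" rule.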