Response Class

Response.

Constructor

Response(*args: Any, **kwargs: Any)
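
Response instances are normally returned by a service call rather than constructed directly. A minimal sketch, assuming a client object that exposes a Responses-style create method (the client, its creation, and the deployment name are placeholders, not part of this class):

# Hypothetical client call; obtain a real client from your SDK's documented entry point.
response = client.responses.create(
    model="my-deployment",                     # deployment name (placeholder)
    input="Write a haiku about the sea.",
)

print(response.id)           # unique identifier for this Response
print(response.status)       # "completed", "in_progress", ...
print(response.output_text)  # aggregated text output (SDK convenience property)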

Variables

Name Description
metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Required.
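
A sketch of attaching metadata within these constraints, assuming the hypothetical client above:

# Constraints from the docs: at most 16 pairs, keys <= 64 chars, values <= 512 chars.
metadata = {"customer_id": "cust_42", "source": "checkout-flow"}
assert len(metadata) <= 16
assert all(len(k) <= 64 and len(v) <= 512 for k, v in metadata.items())

response = client.responses.create(
    model="my-deployment",
    input="Summarize my last order.",
    metadata=metadata,
)
print(response.metadata)  # echoed back on the Response object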

temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. Required.

top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Required.
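
A sketch of the "one or the other" guidance, assuming the hypothetical client above:

# Adjust temperature OR top_p, not both.
focused = client.responses.create(
    model="my-deployment",
    input="List three facts about tides.",
    temperature=0.2,  # lower temperature: more focused and deterministic
)

nucleus = client.responses.create(
    model="my-deployment",
    input="List three facts about tides.",
    top_p=0.1,  # only tokens in the top 10% probability mass are considered
)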

user
str

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more about safety best practices. Required.

service_tier

Note: service_tier is not applicable to Azure OpenAI. Known values are: "auto", "default", "flex", "scale", and "priority".

top_logprobs
int

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

previous_response_id
str

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about managing conversation state.
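
A minimal multi-turn sketch, assuming the hypothetical client above:

first = client.responses.create(
    model="my-deployment",
    input="My name is Ada. Please remember it.",
)

# Chain the next turn to the previous response so the model keeps context.
second = client.responses.create(
    model="my-deployment",
    input="What is my name?",
    previous_response_id=first.id,
)
print(second.output_text)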

model
str

The model deployment to use for the creation of this response.

reasoning

background

Whether to run the model response in the background. Learn more about background responses.

max_output_tokens
int

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls
int

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

text

Configuration options for a text response from the model. Can be plain text or structured JSON data. See Text inputs and outputs and Structured Outputs.

tools

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search.

  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code.

tool_choice

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. Is either a Union[str, "_models.ToolChoiceOptions"] type or a ToolChoiceObject type.
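
A sketch of a custom function tool plus tool_choice, assuming the hypothetical client above; the exact tool schema your SDK expects may differ:

# Function-tool definition in the Responses API shape (an assumption here).
tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

response = client.responses.create(
    model="my-deployment",
    input="What's the weather in Oslo?",
    tools=tools,
    tool_choice="auto",  # or force the tool: {"type": "function", "name": "get_weather"}
)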

prompt

truncation
str

The truncation strategy to use for the model response.

  • auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.
  • disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a Literal["disabled"] type.
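
A sketch of opting in to automatic truncation rather than a 400 failure, assuming the hypothetical client above:

# very_long_conversation is a placeholder for input that may exceed the
# model's context window.
response = client.responses.create(
    model="my-deployment",
    input=very_long_conversation,
    truncation="auto",  # drop middle items instead of failing with a 400
)
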
id
str

Unique identifier for this Response. Required.

object
str

The object type of this resource - always set to response. Required. Default value is "response".

status
str

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Is one of the following types: Literal["completed"], Literal["failed"], Literal["in_progress"], Literal["cancelled"], Literal["queued"], Literal["incomplete"].
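
A polling sketch for background responses, assuming the hypothetical client above also exposes a responses.retrieve(id) method (an assumption):

import time

response = client.responses.create(
    model="my-deployment",
    input="Run a long analysis.",
    background=True,
)

# Poll until the response settles in a terminal state.
while response.status in ("queued", "in_progress"):
    time.sleep(2)
    response = client.responses.retrieve(response.id)

if response.status == "completed":
    print(response.output_text)
elif response.status == "incomplete":
    print(response.incomplete_details)
else:  # "failed" or "cancelled"
    print(response.error)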

created_at

Unix timestamp (in seconds) of when this Response was created. Required.

error

Required.

incomplete_details

Details about why the response is incomplete. Required.

output

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. Required.
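
A sketch of both access patterns; the item attributes checked here are illustrative assumptions, since item shapes vary by type:

if response.output_text is not None:
    # Preferred where the SDK supports it: aggregated text in one string.
    print(response.output_text)
else:
    # Otherwise walk the output array without assuming item order or type.
    for item in response.output:
        if getattr(item, "type", None) == "message":
            print(item)
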
instructions

A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. Required. Is either a str type or a [ItemParam] type.

output_text
str

SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs.

usage

parallel_tool_calls

Whether to allow the model to run tool calls in parallel. Required.

conversation

Required.

agent

The agent used for this response.

structured_inputs

The structured inputs to the response that can participate in prompt template substitution or tool argument bindings.

Methods

as_dict

Return a dict that can be serialized to JSON using json.dump.

clear

Remove all items from D.

copy
get

Get the value for key if key is in the dictionary, else default.

:param str key: The key to look up.
:param any default: The value to return if key is not in the dictionary. Defaults to None.
:returns: D[k] if k in D, else d.
:rtype: any

items
keys
pop

Remove the specified key and return the corresponding value.

:param str key: The key to pop.
:param any default: The value to return if key is not in the dictionary.
:returns: The value corresponding to the key.
:rtype: any
:raises KeyError: If key is not found and default is not given.

popitem

Remove and return some (key, value) pair.

:returns: The (key, value) pair.
:rtype: tuple
:raises KeyError: If D is empty.

setdefault

Same as calling D.get(k, d), and setting D[k]=d if k is not found.

:param str key: The key to look up.
:param any default: The value to set if key is not in the dictionary.
:returns: D[k] if k in D, else d.
:rtype: any

update

Updates D from mapping/iterable E and F.

:param any args: Either a mapping object or an iterable of key-value pairs.

values

as_dict

Return a dict that can be serialized to JSON using json.dump.

as_dict(*, exclude_readonly: bool = False) -> dict[str, Any]

Keyword-Only Parameters

Name Description
exclude_readonly

Whether to remove the readonly properties.

Default value: False

Returns

Type Description

dict: A JSON-compatible object
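
A short serialization sketch; response is any Response instance:

import json

payload = response.as_dict(exclude_readonly=True)  # JSON-compatible dict
print(json.dumps(payload, indent=2))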

clear

Remove all items from D.

clear() -> None

copy

copy() -> Model

get

Get the value for key if key is in the dictionary, else default.

:param str key: The key to look up.
:param any default: The value to return if key is not in the dictionary. Defaults to None.
:returns: D[k] if k in D, else d.
:rtype: any

get(key: str, default: Any = None) -> Any

Parameters

Name Description
key
Required
default
Default value: None

items

items() -> ItemsView[str, Any]

Returns

Type Description

set-like object providing a view on D's items

keys

keys() -> KeysView[str]

Returns

Type Description

a set-like object providing a view on D's keys

pop

Remove the specified key and return the corresponding value.

:param str key: The key to pop.
:param any default: The value to return if key is not in the dictionary.
:returns: The value corresponding to the key.
:rtype: any
:raises KeyError: If key is not found and default is not given.

pop(key: str, default: typing.Any = <object object>) -> Any

Parameters

Name Description
key
Required
default

popitem

Remove and return some (key, value) pair.

:returns: The (key, value) pair.
:rtype: tuple
:raises KeyError: If D is empty.

popitem() -> tuple[str, Any]

setdefault

Same as calling D.get(k, d), and setting D[k]=d if k is not found.

:param str key: The key to look up.
:param any default: The value to set if key is not in the dictionary.
:returns: D[k] if k in D, else d.
:rtype: any

setdefault(key: str, default: typing.Any = <object object>) -> Any

Parameters

Name Description
key
Required
default

update

Updates D from mapping/iterable E and F.

:param any args: Either a mapping object or an iterable of key-value pairs.

update(*args: Any, **kwargs: Any) -> None

values

values() -> ValuesView[Any]

Returns

Type Description

an object providing a view on D's values
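
Because Response is a dict-like Model, the mapping methods above work directly on an instance; a brief sketch:

status = response.get("status", "unknown")  # like response.status, with a default
for key in response.keys():                 # view over the underlying fields
    print(key, "->", response[key])
snapshot = response.copy()                  # shallow copy as another Model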

Attributes

agent

The agent used for this response.

agent: _models.AgentId | None

background

Whether to run the model response in the background. Learn more about background responses.

background: bool | None

conversation

Required.

conversation: _models.ResponseConversation1

created_at

Unix timestamp (in seconds) of when this Response was created. Required.

created_at: datetime

error

Required.

error: _models.ResponseError

id

Unique identifier for this Response. Required.

id: str

incomplete_details

Details about why the response is incomplete. Required.

incomplete_details: _models.ResponseIncompleteDetails1

instructions

A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. Required. Is either a str type or a [ItemParam] type.

instructions: str | list['_models.ItemParam']

max_output_tokens

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_output_tokens: int | None

max_tool_calls

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

max_tool_calls: int | None

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Required.

metadata: dict[str, str]

model

The model deployment to use for the creation of this response.

model: str | None

object

The object type of this resource - always set to response. Required. Default value is "response".

object: Literal['response']

output

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. Required.

output: list['_models.ItemResource']

output_text

SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs.

output_text: str | None

parallel_tool_calls

Whether to allow the model to run tool calls in parallel. Required.

parallel_tool_calls: bool

previous_response_id

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about managing conversation state.

previous_response_id: str | None

prompt

prompt: _models.Prompt | None

reasoning

reasoning: _models.Reasoning | None

service_tier

"auto", "default", "flex", "scale", and "priority".

service_tier: str | _models.ServiceTier | None

status

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Is one of the following types: Literal["completed"], Literal["failed"], Literal["in_progress"], Literal["cancelled"], Literal["queued"], Literal["incomplete"].

status: Literal['completed', 'failed', 'in_progress', 'cancelled', 'queued', 'incomplete'] | None

structured_inputs

The structured inputs to the response that can participate in prompt template substitution or tool argument bindings.

structured_inputs: dict[str, Any] | None

temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. Required.

temperature: float

text

Configuration options for a text response from the model. Can be plain text or structured JSON data. See Text inputs and outputs and Structured Outputs.

text: _models.ResponseText | None

tool_choice

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. Is either a Union[str, "_models.ToolChoiceOptions"] type or a ToolChoiceObject type.

tool_choice: str | _models.ToolChoiceOptions | _models.ToolChoiceObject | None

tools

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search.

  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code.

tools: list['_models.Tool'] | None

top_logprobs

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

top_logprobs: int | None

top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Required.

top_p: float

truncation

The truncation strategy to use for the model response.

  • auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.
  • disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a Literal["disabled"] type.

truncation: Literal['auto', 'disabled'] | None

usage

usage: _models.ResponseUsage | None

user

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more about safety best practices. Required.

user: str