CreateResponse Class

Overrides the generated CreateResponse to correct the types of temperature and top_p (both are floats).

Constructor

CreateResponse(*args: Any, **kwargs: Any)

Methods

as_dict

Return a dict that can be serialized to JSON using json.dump.

clear

Remove all items from D.

copy
get

Get the value for key if key is in the dictionary; otherwise return default (None if not given).

items
keys
pop

Remove the specified key and return the corresponding value; if key is not found and no default is given, raise KeyError.

popitem

Remove and return an arbitrary (key, value) pair as a tuple; raise KeyError if D is empty.

setdefault

Same as calling D.get(k, d) and then setting D[k] = d if k is not found.

update

Update D from a mapping object or an iterable of key-value pairs.

values

as_dict

Return a dict that can be serialized to JSON using json.dump.

as_dict(*, exclude_readonly: bool = False) -> dict[str, Any]

Keyword-Only Parameters

Name Description
exclude_readonly

Whether to remove the readonly properties.

Default value: False

Returns

Type Description

A JSON-compatible dict
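Because as_dict returns a plain JSON-compatible dict, it pairs directly with the standard json module. The sketch below uses a plain dict as a stand-in for the as_dict output (the field values are hypothetical; only the field names come from the attributes documented on this page):

```python
import json

# Stand-in for the dict returned by as_dict(); values are illustrative.
body = {
    "model": "my-deployment",  # hypothetical deployment name
    "input": "Write a haiku about the sea.",
    "temperature": 0.7,
    "top_p": 0.9,
}

# The as_dict() result is JSON-compatible, so json.dumps works directly
# (json.dump does the same but writes to a file object).
payload = json.dumps(body)
```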

clear

Remove all items from D.

clear() -> None

copy

copy() -> Model

get

Get the value for key if key is in the dictionary, else default.

get(key: str, default: Any = None) -> Any

Parameters

Name Description
key
Required. The key to look up.
default
The value to return if key is not in the dictionary. Default value: None
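The model follows standard mapping semantics, so get behaves exactly like dict.get. A minimal sketch, using a plain dict as a stand-in for the model (field values are hypothetical):

```python
# Stand-in for a CreateResponse body; get() follows dict semantics.
body = {"model": "my-deployment", "temperature": 0.7}

present = body.get("temperature")   # 0.7: key exists
missing = body.get("top_p")         # None: key absent, default is None
fallback = body.get("top_p", 1.0)   # 1.0: key absent, explicit default
```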

items

items() -> ItemsView[str, Any]

Returns

Type Description

set-like object providing a view on D's items

keys

keys() -> KeysView[str]

Returns

Type Description

a set-like object providing a view on D's keys

pop

Remove the specified key and return the corresponding value. If key is not found, return default; if no default is given, raise KeyError.

pop(key: str, default: Any = ...) -> Any

Parameters

Name Description
key
Required. The key to pop.
default
The value to return if key is not in the dictionary. If omitted and key is not found, KeyError is raised.
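The default argument controls whether a missing key raises. A sketch of the two behaviors, again with a plain dict standing in for the model:

```python
body = {"model": "my-deployment", "stream": True}

removed = body.pop("stream")         # True: key removed and returned
fallback = body.pop("stream", False) # False: key gone, default returned

# With no default, a missing key raises KeyError.
try:
    body.pop("stream")
    raised = False
except KeyError:
    raised = True
```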

popitem

Remove and return an arbitrary (key, value) pair as a tuple. Raises KeyError if D is empty.

popitem() -> tuple[str, Any]

setdefault

Same as calling D.get(k, d) and then setting D[k] = d if k is not found. Returns D[k] if k is in D, else d.

setdefault(key: str, default: Any = ...) -> Any

Parameters

Name Description
key
Required. The key to look up.
default
The value to set if key is not in the dictionary.
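setdefault is get-then-set in one step: an existing key is left untouched, a missing key is written with the default. A sketch with a plain dict as the stand-in:

```python
body = {"temperature": 0.2}

# Existing key: the stored value is returned and left unchanged.
kept = body.setdefault("temperature", 1.0)  # 0.2

# Missing key: the default is stored and returned.
added = body.setdefault("top_p", 0.95)      # 0.95
```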

update

Update D from a mapping object or an iterable of key-value pairs, and from keyword arguments.

update(*args: Any, **kwargs: Any) -> None
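update accepts the same argument shapes as dict.update. The three forms, sketched with a plain dict stand-in:

```python
body = {"model": "my-deployment"}

body.update({"temperature": 0.7})  # from a mapping
body.update([("top_p", 0.9)])      # from an iterable of key-value pairs
body.update(stream=True)           # from keyword arguments
```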

values

values() -> ValuesView[Any]

Returns

Type Description

an object providing a view on D's values

Attributes

agent_reference

The agent to use for generating the response.

agent_reference: '_models.AgentReference' | None

context_management

Context management configuration for this request.

context_management: list['_models.ContextManagementParam'] | None

conversation

Is either a str type or a ConversationParam type.

conversation: '_types.ConversationParam' | None

input

Is either a str type or a list[Item] type.

input: '_types.InputParam' | None

model

The model deployment to use for the creation of this response.

model: str | None

prompt_cache_key

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field.

prompt_cache_key: str | None

prompt_cache_retention

Is either a Literal["in-memory"] type or a Literal["24h"] type.

prompt_cache_retention: Literal['in-memory', '24h'] | None

safety_identifier

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information.

safety_identifier: str | None

service_tier

Is one of Literal["auto"], Literal["default"], Literal["flex"], Literal["scale"], or Literal["priority"].

service_tier: Literal['auto', 'default', 'flex', 'scale', 'priority'] | None

structured_inputs

The structured inputs to the response that can participate in prompt template substitution or tool argument bindings.

structured_inputs: dict[str, Any] | None

temperature

Sampling temperature. Float between 0 and 2.

temperature: float | None

tool_choice

Is either a Union[str, "_models.ToolChoiceOptions"] type or a ToolChoiceParam type.

tool_choice: str | '_models.ToolChoiceOptions' | '_models.ToolChoiceParam' | None

top_p

Nucleus sampling parameter. Float between 0 and 1.

top_p: float | None
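The documented bounds (temperature in [0, 2], top_p in [0, 1]) can be checked before sending a request. This is an illustrative helper, not part of the SDK:

```python
def validate_sampling(temperature, top_p):
    """Range checks implied by the documented bounds (hypothetical helper):
    temperature must lie in [0, 2], top_p in [0, 1]; None means unset."""
    if temperature is not None and not 0.0 <= temperature <= 2.0:
        raise ValueError(f"temperature must be between 0 and 2, got {temperature}")
    if top_p is not None and not 0.0 <= top_p <= 1.0:
        raise ValueError(f"top_p must be between 0 and 1, got {top_p}")
```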

truncation

Is either a Literal["auto"] type or a Literal["disabled"] type.

truncation: Literal['auto', 'disabled'] | None

user

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse.

user: str | None

background

background: bool | None

include

include: list[str | '_models.IncludeEnum'] | None

instructions

instructions: str | None

max_output_tokens

max_output_tokens: int | None

max_tool_calls

max_tool_calls: int | None

metadata

metadata: '_models.Metadata' | None

parallel_tool_calls

parallel_tool_calls: bool | None

previous_response_id

previous_response_id: str | None

prompt

prompt: '_models.Prompt' | None

reasoning

reasoning: '_models.Reasoning' | None

store

store: bool | None

stream

stream: bool | None

stream_options

stream_options: '_models.ResponseStreamOptions' | None

text

text: '_models.ResponseTextParam' | None

tools

tools: list['_models.Tool'] | None

top_logprobs

top_logprobs: int | None