ChatCompletionClientBase Class
Base class for chat completion AI services.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
`self` is explicitly positional-only to allow `self` as a field name.
Constructor
ChatCompletionClientBase(*, ai_model_id: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)], service_id: str = '', instruction_role: str | None = None)
Keyword-Only Parameters
| Name | Description |
|---|---|
| `ai_model_id`<br>Required | A non-empty string identifying the model; surrounding whitespace is stripped. |
| `service_id` | Optional. Defaults to `''`. |
| `instruction_role` | Optional. Defaults to `None`. |
Methods
| Name | Description |
|---|---|
| `get_chat_message_content` | Called from the kernel to get a single response from a chat-optimized LLM. |
| `get_chat_message_contents` | Create chat message contents, in the number specified by the settings. |
| `get_streaming_chat_message_content` | Called from the kernel to get a streaming response from a chat-optimized LLM. |
| `get_streaming_chat_message_contents` | Create streaming chat message contents, in the number specified by the settings. |
get_chat_message_content
Called from the kernel to get a single response from a chat-optimized LLM.
async get_chat_message_content(chat_history: ChatHistory, settings: PromptExecutionSettings, **kwargs: Any) -> ChatMessageContent | None
Parameters
| Name | Description |
|---|---|
| `chat_history`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.ChatHistory><br>The chat history: a list of chat messages that can be rendered into messages from system, user, assistant, and tools. |
| `settings`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.PromptExecutionSettings><br>Settings for the request. |
| `**kwargs` | Optional keyword arguments. |
Returns
| Type | Description |
|---|---|
| `ChatMessageContent \| None` | A single chat message content representing the response from the LLM, or `None` if no response is produced. |
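A hypothetical, self-contained sketch of the call pattern for `get_chat_message_content`. `EchoCompletion` and the stand-in `ChatHistory`/`ChatMessageContent` types below are not semantic_kernel classes; a real connector (e.g. an OpenAI connector) subclasses `ChatCompletionClientBase` and calls the model service instead of echoing:

```python
# Hypothetical stand-ins so the example runs without semantic_kernel installed.
from __future__ import annotations

import asyncio
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ChatMessageContent:      # stand-in for semantic_kernel's type
    role: str
    content: str


@dataclass
class ChatHistory:             # stand-in: the conversation so far
    messages: list[ChatMessageContent] = field(default_factory=list)

    def add_user_message(self, text: str) -> None:
        self.messages.append(ChatMessageContent("user", text))


class EchoCompletion:
    """Toy connector following the get_chat_message_content shape."""

    async def get_chat_message_content(
        self, chat_history: ChatHistory, settings: Any, **kwargs: Any,
    ) -> ChatMessageContent | None:
        # A real connector would call the model service here.
        if not chat_history.messages:
            return None
        last = chat_history.messages[-1].content
        return ChatMessageContent("assistant", f"echo: {last}")


async def main() -> None:
    history = ChatHistory()
    history.add_user_message("hello")
    reply = await EchoCompletion().get_chat_message_content(history, settings={})
    print(reply.content)  # -> echo: hello


asyncio.run(main())
```

Note the method is a coroutine: callers must `await` it (or drive it with `asyncio.run`), and should handle the `None` case.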
get_chat_message_contents
Create chat message contents, in the number specified by the settings.
async get_chat_message_contents(chat_history: ChatHistory, settings: PromptExecutionSettings, **kwargs: Any) -> list[ChatMessageContent]
Parameters
| Name | Description |
|---|---|
| `chat_history`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.ChatHistory><br>The chat history: a list of chat messages that can be rendered into messages from system, user, assistant, and tools. |
| `settings`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.PromptExecutionSettings><br>Settings for the request. |
| `**kwargs` | Optional keyword arguments. |
Returns
| Type | Description |
|---|---|
| `list[ChatMessageContent]` | A list of chat message contents representing the response(s) from the LLM. |
get_streaming_chat_message_content
Called from the kernel to get a streaming response from a chat-optimized LLM.
async get_streaming_chat_message_content(chat_history: ChatHistory, settings: PromptExecutionSettings, **kwargs: Any) -> AsyncGenerator[StreamingChatMessageContent | None, Any]
Parameters
| Name | Description |
|---|---|
| `chat_history`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.ChatHistory><br>The chat history: a list of chat messages that can be rendered into messages from system, user, assistant, and tools. |
| `settings`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.PromptExecutionSettings><br>Settings for the request. |
| `**kwargs` | Optional keyword arguments. |
Returns
| Type | Description |
|---|---|
| `AsyncGenerator[StreamingChatMessageContent \| None, Any]` | An async generator that yields streaming chat message content as it is received from the LLM. |
get_streaming_chat_message_contents
Create streaming chat message contents, in the number specified by the settings.
async get_streaming_chat_message_contents(chat_history: ChatHistory, settings: PromptExecutionSettings, **kwargs: Any) -> AsyncGenerator[list[StreamingChatMessageContent], Any]
Parameters
| Name | Description |
|---|---|
| `chat_history`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.ChatHistory><br>The chat history: a list of chat messages that can be rendered into messages from system, user, assistant, and tools. |
| `settings`<br>Required | <xref:semantic_kernel.connectors.ai.chat_completion_client_base.PromptExecutionSettings><br>Settings for the request. |
| `**kwargs` | Optional keyword arguments. |
Returns
| Type | Description |
|---|---|
| `AsyncGenerator[list[StreamingChatMessageContent], Any]` | An async generator that yields lists of streaming chat message contents as they are received from the LLM. |
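Because these streaming methods return async generators rather than awaitables, callers consume them with `async for`. A hypothetical, self-contained sketch (`FakeStreamingClient` and the `StreamingChatMessageContent` stand-in below are not semantic_kernel classes):

```python
# Hypothetical stand-ins so the example runs without semantic_kernel installed.
from __future__ import annotations

import asyncio
from dataclasses import dataclass
from typing import Any, AsyncGenerator


@dataclass
class StreamingChatMessageContent:   # stand-in: one streamed chunk
    choice_index: int
    content: str


class FakeStreamingClient:
    """Toy connector following the get_streaming_chat_message_contents shape."""

    async def get_streaming_chat_message_contents(
        self, chat_history: Any, settings: Any, **kwargs: Any,
    ) -> AsyncGenerator[list[StreamingChatMessageContent], Any]:
        # A real connector yields chunks as the service streams them;
        # each yield carries a list of chunks.
        for token in ["Hel", "lo", "!"]:
            yield [StreamingChatMessageContent(0, token)]


async def main() -> str:
    pieces: list[str] = []
    client = FakeStreamingClient()
    async for chunk_list in client.get_streaming_chat_message_contents([], {}):
        pieces.append(chunk_list[0].content)
    return "".join(pieces)


print(asyncio.run(main()))  # -> Hello!
```

Concatenating the chunk contents, as above, reassembles the complete assistant message once the stream ends.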
Attributes
SUPPORTS_FUNCTION_CALLING
SUPPORTS_FUNCTION_CALLING: ClassVar[bool] = False
instruction_role
instruction_role: str