GeminiPromptExecutionSettings Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Represents the settings for executing a prompt with the Gemini model.
C#

[System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)]
public sealed class GeminiPromptExecutionSettings : Microsoft.SemanticKernel.PromptExecutionSettings

F#

[<System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)>]
type GeminiPromptExecutionSettings = class
    inherit PromptExecutionSettings

VB

Public NotInheritable Class GeminiPromptExecutionSettings
Inherits PromptExecutionSettings
- Inheritance: Object → PromptExecutionSettings → GeminiPromptExecutionSettings
- Attributes: JsonNumberHandlingAttribute
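Examples

A minimal usage sketch follows. It assumes the Semantic Kernel kernel-builder API and the Google connector's AddGoogleAIGeminiChatCompletion registration method; the model identifier and API key are placeholders.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Google;

// Register a Gemini chat completion service (placeholder model id and key).
var kernel = Kernel.CreateBuilder()
    .AddGoogleAIGeminiChatCompletion(modelId: "gemini-1.5-flash", apiKey: "YOUR_API_KEY")
    .Build();

// Gemini-specific settings for this invocation.
var settings = new GeminiPromptExecutionSettings
{
    Temperature = 0.7,
    MaxTokens = 256
};

var result = await kernel.InvokePromptAsync(
    "Summarize the plot of Hamlet in two sentences.",
    new KernelArguments(settings));

Console.WriteLine(result);
```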
Constructors
| Name | Description |
| --- | --- |
| GeminiPromptExecutionSettings() | Initializes a new instance of the GeminiPromptExecutionSettings class. |
Properties
| Name | Description |
| --- | --- |
| AudioTimestamp | Indicates whether the audio response should include timestamps. If enabled, audio timestamps are included in the request to the model. |
| CachedContent | Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can control caching (e.g., which content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}` |
| CandidateCount | The number of response candidates to generate. Possible values range from 1 to 8. |
| DefaultTextMaxTokens | The default maximum number of tokens for text generation. |
| ExtensionData | Extra properties that may be included in the serialized execution settings. (Inherited from PromptExecutionSettings) |
| FunctionChoiceBehavior | Gets or sets the behavior defining the way functions are chosen by the LLM and how they are invoked by AI connectors. (Inherited from PromptExecutionSettings) |
| IsFrozen | Gets a value that indicates whether the PromptExecutionSettings are frozen and can no longer be modified. (Inherited from PromptExecutionSettings) |
| MaxTokens | The maximum number of tokens to generate in the completion. |
| ModelId | Model identifier. This identifies the AI model these settings are configured for, e.g., gpt-4, gpt-3.5-turbo. (Inherited from PromptExecutionSettings) |
| ResponseMimeType | The output response MIME type of the generated candidate text. Supported MIME types: text/plain (default; text output) and application/json (JSON response in the candidates). |
| ResponseSchema | Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives, or arrays. If set, a compatible ResponseMimeType must also be set. Compatible MIME types: application/json (schema for JSON response). See https://ai.google.dev/gemini-api/docs/json-mode for more information. |
| SafetySettings | Represents a list of safety settings. |
| ServiceId | Service identifier. This identifies the service these settings are configured for, e.g., azure_openai_eastus, openai, ollama, huggingface, etc. (Inherited from PromptExecutionSettings) |
| StopSequences | Sequences where the completion will stop generating further tokens. The maximum number of stop sequences is 5. |
| Temperature | Temperature controls the randomness of the completion. The higher the temperature, the more random the completion. Range is 0.0 to 1.0. |
| ToolCallBehavior | Gets or sets the behavior for how tool calls are handled. |
| TopK | Gets or sets the TopK value, which limits sampling to the K most probable tokens at each generation step. |
| TopP | TopP controls the diversity of the completion. The higher the TopP, the more diverse the completion. |
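As a sketch, the generation-related properties above map onto an object initializer as follows; the values are illustrative, not recommendations.

```csharp
using Microsoft.SemanticKernel.Connectors.Google;

var settings = new GeminiPromptExecutionSettings
{
    Temperature = 0.4,                     // 0.0 to 1.0; higher is more random
    TopP = 0.9,                            // nucleus-sampling cutoff
    TopK = 40,                             // sample from the 40 most probable tokens
    MaxTokens = 1024,                      // cap on generated tokens
    CandidateCount = 1,                    // 1 to 8
    StopSequences = new[] { "END" },       // at most 5 sequences
    ResponseMimeType = "application/json"  // request JSON output (see ResponseSchema)
};
```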
Methods
| Name | Description |
| --- | --- |
| Clone() | Creates a new PromptExecutionSettings object that is a copy of the current instance. |
| Freeze() | Makes the current PromptExecutionSettings unmodifiable and sets its IsFrozen property to true. |
| FromExecutionSettings(PromptExecutionSettings) | Converts a PromptExecutionSettings object to a GeminiPromptExecutionSettings object. |
| ThrowIfFrozen() | Throws an InvalidOperationException if the PromptExecutionSettings are frozen. (Inherited from PromptExecutionSettings) |
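A short sketch of the conversion and freezing flow. FromExecutionSettings and Freeze are documented above; the assumption that Clone returns an unfrozen copy is noted in the comments.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Google;

// Generic settings, e.g. loaded from configuration.
PromptExecutionSettings generic = new() { ModelId = "gemini-1.5-pro" };

// Adapt them to the Gemini-specific type.
GeminiPromptExecutionSettings gemini =
    GeminiPromptExecutionSettings.FromExecutionSettings(generic);

gemini.Freeze();
Console.WriteLine(gemini.IsFrozen);  // True
// gemini.MaxTokens = 100;           // would throw InvalidOperationException

// Clone returns PromptExecutionSettings, so cast back to the Gemini type.
// Assumption: the clone is modifiable even when the original is frozen.
var copy = (GeminiPromptExecutionSettings)gemini.Clone();
copy.MaxTokens = 100;
```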