CompletionsOptions Class

Definition

The configuration information for a completions request. Completions support a wide variety of tasks and generate text that continues from or "completes" provided prompt data.

public class CompletionsOptions : System.ClientModel.Primitives.IJsonModel<Azure.AI.OpenAI.CompletionsOptions>, System.ClientModel.Primitives.IPersistableModel<Azure.AI.OpenAI.CompletionsOptions>
type CompletionsOptions = class
    interface IJsonModel<CompletionsOptions>
    interface IPersistableModel<CompletionsOptions>
Public Class CompletionsOptions
Implements IJsonModel(Of CompletionsOptions), IPersistableModel(Of CompletionsOptions)
Inheritance
Object
CompletionsOptions

Implements
IJsonModel<CompletionsOptions>
IPersistableModel<CompletionsOptions>
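
Examples

A minimal usage sketch, assuming an Azure OpenAI resource: the endpoint, API key, and deployment name below are placeholders you must replace, and GetCompletionsAsync is the client method that accepts this options type.

using System;
using Azure;
using Azure.AI.OpenAI;

// Placeholder endpoint and key -- substitute the values for your resource.
OpenAIClient client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),
    new AzureKeyCredential("<your-api-key>"));

CompletionsOptions options = new CompletionsOptions
{
    DeploymentName = "<your-deployment-name>",
    Prompts = { "Write a short tagline for an ice cream shop." },
    MaxTokens = 64,
    Temperature = 0.7f,
    User = "example-user-id",
};

Response<Completions> response = await client.GetCompletionsAsync(options);
Console.WriteLine(response.Value.Choices[0].Text);

The property examples later on this page assume the same using directives and, where a request is sent, the same client instance.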

Constructors

CompletionsOptions()

Initializes a new instance of CompletionsOptions.

CompletionsOptions(String, IEnumerable<String>)

Initializes a new instance of CompletionsOptions.
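
A sketch of the two constructor forms, using placeholder names: the parameterless overload yields an empty options object that is populated through its properties, while the second accepts the deployment name and an initial set of prompts.

// Parameterless constructor: set everything through properties afterward.
CompletionsOptions emptyOptions = new CompletionsOptions();
emptyOptions.DeploymentName = "<your-deployment-name>";
emptyOptions.Prompts.Add("Say hello.");

// Convenience constructor: deployment name plus an initial prompt collection.
CompletionsOptions seededOptions = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "Say hello.", "Say goodbye." });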

Properties

ChoicesPerPrompt

Gets or sets the number of choices that should be generated per provided prompt. Has a valid range of 1 to 128.

DeploymentName

The deployment name to use for a completions request.

Echo

A value specifying whether completions responses should include input prompts as prefixes to their generated output.

FrequencyPenalty

Gets or sets a value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Has a valid range of -2.0 to 2.0.

GenerationSampleCount

Gets or sets a value that controls how many completions will be internally generated prior to response formulation.
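
A sketch combining GenerationSampleCount with ChoicesPerPrompt; the deployment name is a placeholder, and the service generally expects the internal sample count to be at least as large as the number of returned choices.

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "List three uses for a paperclip." });

// Generate five candidates internally and return the best two per prompt.
options.GenerationSampleCount = 5;
options.ChoicesPerPrompt = 2;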

LogProbabilityCount

Gets or sets a value that controls generation of log probabilities on the LogProbabilityCount most likely tokens. Has a valid range of 0 to 5.
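
A sketch that requests log probabilities together with Echo, assuming the client and placeholders from the earlier example:

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "The capital of France is" });

options.Echo = true;              // include the prompt as a prefix of the returned text
options.LogProbabilityCount = 5;  // report log probabilities for the 5 most likely tokens
options.MaxTokens = 5;

Response<Completions> response = await client.GetCompletionsAsync(options);
Console.WriteLine(response.Value.Choices[0].Text);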

MaxTokens

Gets or sets the maximum number of tokens to generate. Has a minimum of 0.

NucleusSamplingFactor

Gets or sets an alternative value to Temperature, called nucleus sampling, that causes the model to consider the results of the tokens with NucleusSamplingFactor probability mass.

PresencePenalty

Gets or sets a value that influences the probability of generated tokens appearing based on their existing presence in generated text. Has a valid range of -2.0 to 2.0.
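
A sketch setting both penalties; the specific values are illustrative, not recommendations. Positive values discourage repetition, negative values encourage it.

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "Brainstorm names for a coffee blog:" });

options.FrequencyPenalty = 0.5f;  // penalize tokens in proportion to how often they already appear
options.PresencePenalty = 0.6f;   // penalize tokens that have appeared at all, favoring new topics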

Prompts

Gets the prompts to generate completions from. Defaults to a single prompt of <|endoftext|> if not otherwise provided.

StopSequences

Gets a list of textual sequences that will end completions generation. A maximum of four stop sequences are allowed.
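
A sketch that stops generation at the start of a new question or at a blank line; the prompt and sequences are illustrative.

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "Q: What is 2 + 2?\nA:" });

// Generation ends as soon as either sequence would be produced; up to four are allowed.
options.StopSequences.Add("\nQ:");
options.StopSequences.Add("\n\n");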

Suffix

The suffix that comes after a completion of inserted text.
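
A sketch of insert-style completion, which is only honored by models that support insertion; the code prompt and suffix are illustrative.

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "public static int Add(int a, int b)\n{\n    " });

options.Suffix = "\n}";  // generated text is intended to fit between the prompt and this suffix
options.MaxTokens = 32;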

Temperature

Gets or sets the sampling temperature to use that controls the apparent creativity of generated completions. Has a valid range of 0.0 to 2.0 and defaults to 1.0 if not otherwise specified.
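
A sketch adjusting sampling; as general guidance (not stated on this page), alter Temperature or NucleusSamplingFactor, but not both in the same request.

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "Suggest a name for a pet goldfish." });

options.Temperature = 0.2f;               // lower values make output more deterministic
// options.NucleusSamplingFactor = 0.9f;  // alternative: keep only the top 90% of probability mass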

TokenSelectionBiases

Gets a dictionary of modifications to the likelihood of specified GPT tokens appearing in a completions result. Maps token IDs to associated bias scores from -100 to 100, with minimum and maximum values corresponding to a ban or exclusive selection of that token, respectively.
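
A sketch of biasing token selection; the token IDs below are purely illustrative and must come from the tokenizer of the model actually deployed.

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "Finish the sentence: My favorite season is" });

options.TokenSelectionBiases[1234] = -100;  // -100 effectively bans this (hypothetical) token
options.TokenSelectionBiases[5678] = 20;    // a moderate positive bias nudges a token upward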

User

An identifier for the caller or end user of the operation. This may be used for tracking or rate-limiting purposes.

Explicit Interface Implementations

IJsonModel<CompletionsOptions>.Create(Utf8JsonReader, ModelReaderWriterOptions)

Reads one JSON value (including objects or arrays) from the provided reader and converts it to a model.

IJsonModel<CompletionsOptions>.Write(Utf8JsonWriter, ModelReaderWriterOptions)

Writes the model to the provided Utf8JsonWriter.

IPersistableModel<CompletionsOptions>.Create(BinaryData, ModelReaderWriterOptions)

Converts the provided BinaryData into a model.

IPersistableModel<CompletionsOptions>.GetFormatFromOptions(ModelReaderWriterOptions)

Gets the data interchange format (JSON, XML, etc.) that the model uses when communicating with the service.

IPersistableModel<CompletionsOptions>.Write(ModelReaderWriterOptions)

Writes the model into a BinaryData.
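
These members are typically invoked indirectly through the ModelReaderWriter helpers in System.ClientModel rather than called directly. A minimal round-trip sketch, assuming the default JSON wire format:

using System;
using System.ClientModel.Primitives;
using Azure.AI.OpenAI;

CompletionsOptions options = new CompletionsOptions(
    "<your-deployment-name>",
    new[] { "Hello" });

// Serialize to JSON-backed BinaryData, then rehydrate a new instance from it.
BinaryData data = ModelReaderWriter.Write(options);
CompletionsOptions roundTripped = ModelReaderWriter.Read<CompletionsOptions>(data);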

Applies to