Codeunit "AOAI Text Completion Params"

ID 7765
Namespace: System.AI

Represents the text completion parameters used by the Azure OpenAI API. See more details at https://aka.ms/AAlsi39.
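
The sketch below shows one way an extension might configure these parameters before requesting a text completion. The procedure and variable names are illustrative, and the hand-off to a text completion call (for example, a GenerateTextCompletion overload on the "Azure OpenAI" codeunit) is an assumption to verify against your version of the module.

local procedure ConfigureCompletionParams(var CompletionParams: Codeunit "AOAI Text Completion Params")
begin
    // Cap the completion length; a value of 0 or less falls back to the API default.
    CompletionParams.SetMaxTokens(500); // illustrative budget
    // Temperature 0 favors deterministic output for structured data.
    CompletionParams.SetTemperature(0);
    // Mildly discourage repetition (illustrative values in the -2.0 to 2.0 range).
    CompletionParams.SetPresencePenalty(0.5);
    CompletionParams.SetFrequencyPenalty(0.5);
    // The configured codeunit is then passed to the text completion call,
    // for example a GenerateTextCompletion overload on the "Azure OpenAI" codeunit
    // (assumed overload; verify the exact signature in your version of System.AI).
end;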

Properties

Name Value
Access Public
InherentEntitlements X
InherentPermissions X

Methods

GetMaxTokens

Get the maximum number of tokens to generate in the completion.

procedure GetMaxTokens(): Integer

Returns

Type Description
Integer The maximum number of tokens to generate in the completion.

Remarks

A value of 0 or less uses the API default.

GetTemperature

Get the sampling temperature to use.

procedure GetTemperature(): Decimal

Returns

Type Description
Decimal The sampling temperature to use.

GetTopP

Get the nucleus sampling (top_p) value to use.

procedure GetTopP(): Decimal

Returns

Type Description
Decimal The nucleus sampling (top_p) value to use.

GetSuffix

Get the suffix that comes after a completion of inserted text.

procedure GetSuffix(): Text

Returns

Type Description
Text The suffix that comes after a completion of inserted text.

GetPresencePenalty

Get the presence penalty to use.

procedure GetPresencePenalty(): Decimal

Returns

Type Description
Decimal The presence penalty to use.

GetFrequencyPenalty

Get the frequency penalty to use.

procedure GetFrequencyPenalty(): Decimal

Returns

Type Description
Decimal The frequency penalty to use.

SetMaxTokens

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).

procedure SetMaxTokens(NewMaxTokens: Integer)

Parameters

Name Type Description
NewMaxTokens Integer The new maximum number of tokens to generate in the completion.

Remarks

If the prompt's token count plus max_tokens exceeds the model's context length, the request will return an error.
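
For example, with CompletionParams as a variable of this codeunit and a prompt of roughly 1,500 tokens against a 2,048-token context (illustrative figures):

// Leave at most about 500 tokens for the completion so that prompt + max_tokens stays within the context length.
CompletionParams.SetMaxTokens(500);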

SetTemperature

Sets the sampling temperature to use, between 0 and 2. A higher temperature increases the likelihood that the next most probable token will not be selected. When requesting structured data, set the temperature to 0. For human-sounding speech, 0.7 is a typical value.

procedure SetTemperature(NewTemperature: Decimal)

Parameters

Name Type Description
NewTemperature Decimal The new sampling temperature to use.
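
For example, with CompletionParams as a variable of this codeunit:

// Deterministic output, suitable for structured data:
CompletionParams.SetTemperature(0);
// More varied, human-sounding text:
CompletionParams.SetTemperature(0.7);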

SetTopP

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

procedure SetTopP(NewTopP: Decimal)

Parameters

Name Type Description
NewTopP Decimal The new nucleus sampling (top_p) value to use.
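
For example, to restrict sampling to the tokens in the top 10% probability mass while leaving the temperature at its default (CompletionParams being a variable of this codeunit):

CompletionParams.SetTopP(0.1);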

SetSuffix

The suffix that comes after a completion of inserted text.

procedure SetSuffix(NewSuffix: Text)

Parameters

Name Type Description
NewSuffix Text The new suffix that comes after a completion of inserted text.

SetPresencePenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

procedure SetPresencePenalty(NewPresencePenalty: Decimal)

Parameters

Name Type Description
NewPresencePenalty Decimal The new presence penalty to use.
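
For example, a positive value (illustrative) to nudge the model toward new topics, with CompletionParams as a variable of this codeunit:

CompletionParams.SetPresencePenalty(1.0);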

SetFrequencyPenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

procedure SetFrequencyPenalty(NewFrequencyPenalty: Decimal)

Parameters

Name Type Description
NewFrequencyPenalty Decimal The new frequency penalty to use.
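
For example, a positive value (illustrative) to discourage repeating the same line verbatim, with CompletionParams as a variable of this codeunit:

CompletionParams.SetFrequencyPenalty(1.0);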

See also