LLMClientOptions interface

Options for an LLMClient instance.

Properties

endStreamHandler

Optional handler to run when a stream is about to conclude.

history_variable

Optional. Memory variable used for storing conversation history.

input_variable

Optional. Memory variable used for storing the user's input message.

logRepairs

Optional. If true, any repair attempts will be logged to the console.

max_history_messages

Optional. Maximum number of conversation history messages to maintain.

max_repair_attempts

Optional. Maximum number of automatic repair attempts the LLMClient instance will make.

model

AI model to use for completing prompts.

startStreamingMessage

Optional message to send to the client at the start of a streaming response.

template

Prompt to use for the conversation.

tokenizer

Optional. Tokenizer to use when rendering the prompt or counting tokens.

validator

Optional. Response validator to use when completing prompts.
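
Taken together, these options are passed to the LLMClient constructor. The following is a minimal sketch, assuming the exports and option names here match your installed version of @microsoft/teams-ai and that a prompt named 'chat' exists in your prompts folder:

import path from 'path';
import { LLMClient, OpenAIModel, PromptManager, GPTTokenizer } from '@microsoft/teams-ai';

// Model used to complete prompts (API key and model name are placeholders).
const model = new OpenAIModel({
    apiKey: process.env.OPENAI_KEY ?? '',
    defaultModel: 'gpt-4'
});

// Prompt manager that loads templates from disk.
const prompts = new PromptManager({
    promptsFolder: path.join(__dirname, '../prompts')
});

async function createClient(): Promise<LLMClient<string>> {
    const template = await prompts.getPrompt('chat');
    return new LLMClient<string>({
        model,                                     // required
        template,                                  // required
        history_variable: 'conversation.history',  // default shown explicitly
        input_variable: 'temp.input',              // default shown explicitly
        max_history_messages: 10,                  // prune to 10 messages (5 turns)
        max_repair_attempts: 3,                    // default
        logRepairs: false,
        tokenizer: new GPTTokenizer()
    });
}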

Property Details

endStreamHandler

Optional handler to run when a stream is about to conclude.

endStreamHandler?: PromptCompletionModelResponseReceivedEvent

Property Value

PromptCompletionModelResponseReceivedEvent

history_variable

Optional. Memory variable used for storing conversation history.

history_variable?: string

Property Value

string

Remarks

The history will be stored as a Message[] and the variable defaults to conversation.history.
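
For example, two LLMClient instances sharing the same conversation can be kept from clobbering each other's history by giving each its own variable. A sketch (the plannerHistory name is hypothetical):

// Give this client a dedicated history slot in conversation state.
const options = {
    // ...model and template as above...
    history_variable: 'conversation.plannerHistory'
};

// Later, the accumulated history can be read back out of memory, e.g.:
// const history = memory.getValue('conversation.plannerHistory') as Message[];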

input_variable

Optional. Memory variable used for storing the user's input message.

input_variable?: string

Property Value

string

Remarks

The user's input is expected to be a string, but it's optional, and the variable defaults to temp.input.
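
As a sketch of the typical flow (assuming Memory.setValue and LLMClient.completePrompt have the signatures used in the library's samples):

import { TurnContext } from 'botbuilder';
import { LLMClient, Memory, PromptFunctions } from '@microsoft/teams-ai';

async function answer(
    client: LLMClient<string>,
    context: TurnContext,
    memory: Memory,
    functions: PromptFunctions
): Promise<void> {
    // Seed the variable named by input_variable with the incoming message.
    memory.setValue('temp.input', context.activity.text);

    // completePrompt renders the prompt with temp.input and, on success,
    // appends the exchange to the history variable.
    const response = await client.completePrompt(context, memory, functions);
    if (response.status === 'success' && response.message) {
        await context.sendActivity(String(response.message.content));
    }
}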

logRepairs

Optional. If true, any repair attempts will be logged to the console.

logRepairs?: boolean

Property Value

boolean

max_history_messages

Optional. Maximum number of conversation history messages to maintain.

max_history_messages?: number

Property Value

number

Remarks

The number of tokens worth of history included in the prompt is controlled by the ConversationHistory section of the prompt. This setting controls the automatic pruning of the conversation history performed by the LLMClient instance, which keeps your memory from growing too large. It defaults to a value of 10 (or 5 turns).
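
For instance, a terser bot could prune more aggressively; only the relevant option is shown in this sketch:

const options = {
    // ...model and template as above...
    max_history_messages: 4   // keep only the last 4 messages (2 turns)
};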

max_repair_attempts

Optional. Maximum number of automatic repair attempts the LLMClient instance will make.

max_repair_attempts?: number

Property Value

number

Remarks

This defaults to a value of 3 and can be set to 0 if you wish to disable repairing of bad responses.
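
A sketch of the two common configurations:

// Disable repair entirely: the first response is returned even if the
// validator rejects it.
const noRepair = { max_repair_attempts: 0 };

// Keep the default 3 attempts but log each repair to the console.
const debugRepair = { max_repair_attempts: 3, logRepairs: true };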

model

AI model to use for completing prompts.

model: PromptCompletionModel

Property Value

PromptCompletionModel

startStreamingMessage

Optional message to send to the client at the start of a streaming response.

startStreamingMessage?: string

Property Value

string

template

Prompt to use for the conversation.

template: PromptTemplate

Property Value

PromptTemplate

tokenizer

Optional. Tokenizer to use when rendering the prompt or counting tokens.

tokenizer?: Tokenizer

Property Value

Tokenizer

Remarks

If not specified, a new instance of GPTTokenizer will be created. GPT3Tokenizer can be passed in for gpt-3 models.
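
For example, to match token counts for a gpt-3 model (assuming GPT3Tokenizer is exported with a no-argument constructor, as GPTTokenizer is):

import { GPT3Tokenizer } from '@microsoft/teams-ai';

const options = {
    // ...model and template as above...
    tokenizer: new GPT3Tokenizer()   // token counting tuned for gpt-3 models
};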

validator

Optional. Response validator to use when completing prompts.

validator?: PromptResponseValidator<TContent>

Property Value

PromptResponseValidator&lt;TContent&gt;

Remarks

If not specified, a new instance of DefaultResponseValidator will be created. The DefaultResponseValidator returns a Validation that marks all responses as valid.
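
To actually reject malformed output and engage the repair loop, pass a stricter validator. This sketch assumes the library's JSONResponseValidator, which takes an optional JSON schema:

import { JSONResponseValidator } from '@microsoft/teams-ai';

// Any response that isn't JSON matching this schema is marked invalid,
// triggering up to max_repair_attempts automatic retries.
const validator = new JSONResponseValidator({
    type: 'object',
    properties: {
        answer: { type: 'string' }
    },
    required: ['answer']
});

const options = {
    // ...model and template as above...
    validator,
    max_repair_attempts: 3
};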