ActionPlanner class
A planner that uses a Large Language Model (LLM) to generate plans.
Remarks
The ActionPlanner is a powerful planner that uses an LLM to generate plans. The planner can trigger parameterized actions and send text-based responses to the user. The ActionPlanner supports the following advanced features:
- Augmentations: Augmentations virtually eliminate the need for prompt engineering. Prompts can be configured to use a named augmentation which will be automatically appended to the outgoing prompt. Augmentations let the developer specify whether they want to support multi-step plans (sequence), use OpenAI's functions support (functions), or create an AutoGPT style agent (monologue).
- Validations: Validators are used to validate the response returned by the LLM and can guarantee that the parameters passed to an action match a supplied schema. The validator used is automatically selected based on the augmentation being used. Validators also prevent hallucinated action names, making it impossible for the LLM to trigger an action that doesn't exist.
- Repair: The ActionPlanner will automatically attempt to repair invalid responses returned by the LLM using a feedback loop. When a validation fails, the ActionPlanner sends the error back to the model, along with an instruction asking it to fix its mistake. This feedback technique leads to a dramatic reduction in the number of invalid responses returned by the model.
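The repair feedback loop can be sketched in isolation. The following is an illustrative TypeScript sketch, not the library's actual implementation; the names `completeWithRepair` and `Validation` are hypothetical:

```typescript
// Hypothetical sketch of a validate-and-repair feedback loop.
type Validation = { valid: boolean; feedback?: string };

async function completeWithRepair(
    callModel: (prompt: string) => Promise<string>,
    validate: (response: string) => Validation,
    prompt: string,
    maxRepairs = 3
): Promise<string> {
    let response = await callModel(prompt);
    for (let attempt = 0; attempt < maxRepairs; attempt++) {
        const result = validate(response);
        if (result.valid) {
            return response;
        }
        // Send the validation error back to the model with an
        // instruction asking it to fix its mistake.
        response = await callModel(
            `${prompt}\n\nYour previous response was invalid: ` +
            `${result.feedback}\nPlease correct it.`
        );
    }
    throw new Error('Model failed to produce a valid response after repairs.');
}
```

In practice the ActionPlanner wires this loop up for you; this sketch only shows why feedback sharply reduces invalid responses.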
Constructors
- ActionPlanner<TState>(ActionPlannerOptions<TState>): Creates a new ActionPlanner instance.
Properties
- defaultPrompt
- model
- prompts
Methods
- addSemanticFunction: Creates a semantic function that can be registered with the app's prompt manager.
- beginTask: Starts a new task.
- completePrompt: Completes a prompt using an optional validator.
- continueTask: Continues the current task.
Constructor Details
ActionPlanner<TState>(ActionPlannerOptions<TState>)
Creates a new ActionPlanner instance.
new ActionPlanner(options: ActionPlannerOptions<TState>)
Parameters
- options
-
ActionPlannerOptions<TState>
Options used to configure the planner.
Property Details
defaultPrompt
undefined | string defaultPrompt
Property Value
undefined | string
model
prompts
Method Details
addSemanticFunction(string | PromptTemplate, PromptResponseValidator<any>)
Creates a semantic function that can be registered with the app's prompt manager.
function addSemanticFunction(prompt: string | PromptTemplate, validator?: PromptResponseValidator<any>): ActionPlanner<TState>
Parameters
- prompt
-
string | PromptTemplate
Name of the prompt to use or a prompt template.
- validator
Optional. A validator to use to validate the response returned by the model.
Returns
ActionPlanner<TState>
The planner instance, enabling chained calls.
Remarks
Semantic functions are functions that make model calls and return their results as template parameters to other prompts. For example, you could define a semantic function called 'translator' that first translates the user's input to English before calling your main prompt:
app.ai.prompts.addFunction('translator', app.ai.createSemanticFunction('translator-prompt'));
You would then create a prompt called "translator-prompt" that does the translation, and in your main prompt you can call it using the template expression {{translator}}.
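The general idea behind semantic functions can be sketched without the library: a registered function runs first, and its result fills a template parameter in the main prompt. The names `expandTemplate` and `SemanticFunction` below are hypothetical, not the library's API:

```typescript
// Illustrative sketch: registered functions fill {{name}} placeholders
// in a prompt template with their (possibly model-generated) results.
type SemanticFunction = (input: string) => Promise<string>;

async function expandTemplate(
    template: string,
    functions: Map<string, SemanticFunction>,
    input: string
): Promise<string> {
    let result = template;
    for (const [name, fn] of functions) {
        const placeholder = `{{${name}}}`;
        if (result.includes(placeholder)) {
            // Run the function and substitute its output into the template.
            result = result.split(placeholder).join(await fn(input));
        }
    }
    return result;
}
```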
beginTask(TurnContext, TState, AI<TState>)
Starts a new task.
function beginTask(context: TurnContext, state: TState, ai: AI<TState>): Promise<Plan>
Parameters
- context
-
TurnContext
Context for the current turn of conversation.
- state
-
TState
Application state for the current turn of conversation.
- ai
-
AI<TState>
The AI system that is generating the plan.
Returns
Promise<Plan>
The plan that was generated.
Remarks
This method is called when the AI system is ready to start a new task. The planner should generate a plan that the AI system will execute. Returning an empty plan signals that there is no work to be performed.
The planner should take the user's input from state.temp.input.
completePrompt<TContent>(TurnContext, Memory, string | PromptTemplate, PromptResponseValidator<TContent>)
Completes a prompt using an optional validator.
function completePrompt<TContent>(context: TurnContext, memory: Memory, prompt: string | PromptTemplate, validator?: PromptResponseValidator<TContent>): Promise<PromptResponse<TContent>>
Parameters
- context
-
TurnContext
Context for the current turn of conversation.
- memory
- Memory
A memory interface used to access state variables (the turn state object implements this interface.)
- prompt
-
string | PromptTemplate
Name of the prompt to use or a prompt template.
- validator
-
PromptResponseValidator<TContent>
Optional. A validator to use to validate the response returned by the model.
Returns
Promise<PromptResponse<TContent>>
The result of the LLM call.
Remarks
This method allows the developer to manually complete a prompt and access the model's response. If a validator is specified, the response will be validated and repaired if necessary. If no validator is specified, the response will be returned as-is.
If a validator like the JSONResponseValidator is used, the response returned will be a message containing a JSON object. If no validator is used, the response will be a message containing the response text as a string.
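As a rough sketch of what such a validator does (the library's JSONResponseValidator has its own interface; the names below are hypothetical):

```typescript
// Illustrative sketch of a JSON response validator: either the response
// parses as JSON, or the validator returns feedback for the repair loop.
type ValidationResult<T> =
    | { type: 'Valid'; value: T }
    | { type: 'Invalid'; feedback: string };

function validateJson<T>(responseText: string): ValidationResult<T> {
    try {
        return { type: 'Valid', value: JSON.parse(responseText) as T };
    } catch {
        return {
            type: 'Invalid',
            feedback: 'The response was not valid JSON. Return only a JSON object.'
        };
    }
}
```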
continueTask(TurnContext, TState, AI<TState>)
Continues the current task.
function continueTask(context: TurnContext, state: TState, ai: AI<TState>): Promise<Plan>
Parameters
- context
-
TurnContext
Context for the current turn of conversation.
- state
-
TState
Application state for the current turn of conversation.
- ai
-
AI<TState>
The AI system that is generating the plan.
Returns
Promise<Plan>
The plan that was generated.
Remarks
This method is called when the AI system has finished executing the previous plan and is ready to continue the current task. The planner should generate a plan that the AI system will execute. Returning an empty plan signals that the task is completed and there is no work to be performed.
The output from the last plan step that was executed is passed to the planner via state.temp.input.
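The begin/continue lifecycle can be sketched as follows. This is an illustrative model of the driving loop, with a simplified Plan shape and hypothetical names rather than the library's actual types:

```typescript
// Illustrative sketch: an AI system drives a planner until it returns
// an empty plan, which signals the task is complete.
interface Plan { commands: string[] }

interface Planner {
    beginTask(input: string): Promise<Plan>;
    continueTask(lastOutput: string): Promise<Plan>;
}

async function runTask(planner: Planner, input: string): Promise<string[]> {
    const executed: string[] = [];
    let plan = await planner.beginTask(input);
    // An empty plan from beginTask means there is no work to perform.
    while (plan.commands.length > 0) {
        for (const command of plan.commands) {
            executed.push(command); // execute the step (simulated here)
        }
        // Pass the last step's output back to the planner, mirroring
        // how state.temp.input carries it in the library.
        plan = await planner.continueTask(executed[executed.length - 1]);
        // An empty plan from continueTask signals completion.
    }
    return executed;
}
```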