The Teams AI library simplifies building intelligent Microsoft Teams applications with AI components. It offers APIs for data access, custom UI creation, prompt management, and safety moderation. You can easily create bots using OpenAI or Azure OpenAI to deliver an AI-driven experience.
Initial setup
The Teams AI library is built on top of the Bot Framework SDK and extends its core functionality. As part of the initial setup, import the Bot Framework SDK components, including the adapter class that handles connectivity with the channels.
.NET Code: Initial Configuration and Adapter Setup
using Microsoft.Teams.AI;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;
using Microsoft.Bot.Connector.Authentication;
using Microsoft.TeamsFx.Conversation;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddHttpClient("WebClient", client => client.Timeout = TimeSpan.FromSeconds(600));
builder.Services.AddHttpContextAccessor();
// Prepare Configuration for ConfigurationBotFrameworkAuthentication
var config = builder.Configuration.Get<ConfigOptions>();
builder.Configuration["MicrosoftAppType"] = "MultiTenant";
builder.Configuration["MicrosoftAppId"] = config.BOT_ID;
builder.Configuration["MicrosoftAppPassword"] = config.BOT_PASSWORD;
// Create the Bot Framework Authentication to be used with the Bot Adapter.
builder.Services.AddSingleton<BotFrameworkAuthentication, ConfigurationBotFrameworkAuthentication>();
// Create the Cloud Adapter with error handling enabled.
// Note: some classes expect a BotAdapter and some expect a BotFrameworkHttpAdapter, so
// register the same adapter instance for all types.
builder.Services.AddSingleton<CloudAdapter, AdapterWithErrorHandler>();
builder.Services.AddSingleton<IBotFrameworkHttpAdapter>(sp => sp.GetService<CloudAdapter>());
builder.Services.AddSingleton<BotAdapter>(sp => sp.GetService<CloudAdapter>());
Import Teams AI library
Import all the classes from `@microsoft/teams-ai` to build your bot and use the Teams AI library capabilities.
JavaScript Code: Importing Teams AI Library
// import Teams AI library
import {
AI,
Application,
ActionPlanner,
OpenAIModerator,
OpenAIModel,
PromptManager,
TurnState
} from '@microsoft/teams-ai';
import { addResponseFormatter } from './responseFormatter';
import { VectraDataSource } from './VectraDataSource';
Create AI components
You can create AI components in an existing bot app or in a new Bot Framework app. The main components include:
- OpenAIModel: Provides access to the OpenAI API—or any service following the OpenAI REST format. It works with both OpenAI and Azure OpenAI language models.
- PromptManager: Manages prompt creation. It inserts functions, conversation state, and user state into the prompt automatically.
- ActionPlanner: Calls your Large Language Model (LLM) and includes features for enhancing and customizing your model. This component generates and executes plans based on user input and available actions.
.NET Code: Creating AI Components
// Create model
OpenAIModel? model = null;
if (!string.IsNullOrEmpty(config.OpenAI?.ApiKey))
{
model = new(new OpenAIModelOptions(config.OpenAI.ApiKey, "gpt-3.5-turbo"));
}
else if (!string.IsNullOrEmpty(config.Azure?.OpenAIApiKey) && !string.IsNullOrEmpty(config.Azure.OpenAIEndpoint))
{
model = new(new AzureOpenAIModelOptions(
config.Azure.OpenAIApiKey,
"gpt-35-turbo",
config.Azure.OpenAIEndpoint
));
}
if (model == null)
{
throw new Exception("please configure settings for either OpenAI or Azure");
}
// Create prompt manager
PromptManager prompts = new(new()
{
PromptFolder = "./Prompts",
});
// Add function to be referenced in the prompt template
prompts.AddFunction("getLightStatus", async (context, memory, functions, tokenizer, args) =>
{
bool lightsOn = (bool)(memory.GetValue("conversation.lightsOn") ?? false);
return await Task.FromResult(lightsOn ? "on" : "off");
});
// Create ActionPlanner
ActionPlanner<AppState> planner = new(
options: new(
model: model,
prompts: prompts,
defaultPrompt: async (context, state, planner) =>
{
PromptTemplate template = prompts.GetPrompt("sequence");
return await Task.FromResult(template);
}
)
{ LogRepairs = true },
loggerFactory: loggerFactory
);
Define storage and application
The application object automatically manages the conversation and user state of your bot. It includes:
- Storage: A storage provider stores the conversation and user state.
- Application: The `Application` class registers actions or activity handlers for the app. It contains all the necessary information and bot logic.
.NET Code: Defining Storage and Application
return new TeamsLightBot(new()
{
Storage = sp.GetService<IStorage>(),
AI = new(planner),
LoggerFactory = loggerFactory,
TurnStateFactory = () =>
{
return new AppState();
}
});
The `TurnStateFactory` property lets you create a custom state class to store additional information or logic for your bot. Extend the default turn state with a class that includes additional properties (such as user input, bot output, or conversation history), then pass a function that creates an instance of your class to the app constructor.
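In JavaScript, you typically extend the turn state by typing the application with a custom TurnState rather than passing a factory. A minimal sketch (the ConversationState interface and its lightsOn property are illustrative):
JavaScript Code: Custom Turn State (illustrative)
import { TurnState } from '@microsoft/teams-ai';
// Hypothetical conversation state with one extra property.
interface ConversationState {
    lightsOn: boolean;
}
// Carries the custom conversation state through each turn,
// for example: new Application<ApplicationTurnState>({ ... }).
type ApplicationTurnState = TurnState<ConversationState>;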
Register data sources
A vector data source simplifies adding Retrieval-Augmented Generation (RAG) to any prompt. Register a named data source with the planner and specify it in the prompt's `config.json` file to augment the prompt. This allows the AI to inject relevant information from external sources (such as vector databases or cognitive search) into the prompt.
JavaScript Code: Registering a Data Source with the Planner
// Register your data source with planner
planner.prompts.addDataSource(new VectraDataSource({
name: 'teams-ai',
apiKey: process.env.OPENAI_API_KEY!,
indexFolder: path.join(__dirname, '../index'),
}));
Embeddings
An embedding is a vector generated by an LLM to represent text, capturing its semantic meaning. Embeddings are used in text classification, sentiment analysis, search, and more. For example, OpenAI's `text-embedding-ada-002` model returns a list of 1,536 numbers that represent the input text. These embeddings are stored in a vector database. In a custom engine agent, the RAG pattern can retrieve relevant data from the vector database and augment the prompt.
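To make this concrete, here's a hedged sketch of generating a single embedding with the Vectra OpenAIEmbeddings client used in the example below (it assumes the createEmbeddings method and response shape of current vectra versions; verify against the version you install):
JavaScript Code: Generating an Embedding (illustrative)
import { OpenAIEmbeddings } from 'vectra';
const embeddings = new OpenAIEmbeddings({
    model: 'text-embedding-ada-002',
    apiKey: process.env.OPENAI_API_KEY!,
});
// Returns one vector per input string.
const response = await embeddings.createEmbeddings('The lights are currently off.');
if (response.status === 'success') {
    // text-embedding-ada-002 produces a 1536-dimension vector.
    console.log(response.output![0].length);
}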
Example: VectraDataSource and OpenAIEmbeddings
import { DataSource, Memory, RenderedPromptSection, Tokenizer } from '@microsoft/teams-ai';
import { OpenAIEmbeddings, LocalDocumentIndex } from 'vectra';
import * as path from 'path';
import { TurnContext } from 'botbuilder';
/**
* Options for creating a `VectraDataSource`.
*/
export interface VectraDataSourceOptions {
/**
* Name of the data source and local index.
*/
name: string;
/**
* OpenAI API key to use for generating embeddings.
*/
apiKey: string;
/**
* Path to the folder containing the local index.
* @remarks
* This should be the root folder for all local indexes and the index itself
* needs to be in a subfolder under this folder.
*/
indexFolder: string;
/**
* Optional. Maximum number of documents to return.
* @remarks
* Defaults to `5`.
*/
maxDocuments?: number;
/**
* Optional. Maximum number of chunks to return per document.
* @remarks
* Defaults to `50`.
*/
maxChunks?: number;
/**
* Optional. Maximum number of tokens to return per document.
* @remarks
* Defaults to `600`.
*/
maxTokensPerDocument?: number;
}
/**
* A data source that uses a local Vectra index to inject text snippets into a prompt.
*/
export class VectraDataSource implements DataSource {
private readonly _options: VectraDataSourceOptions;
private readonly _index: LocalDocumentIndex;
/**
* Name of the data source.
* @remarks
* This is also the name of the local Vectra index.
*/
public readonly name: string;
/**
* Creates a new `VectraDataSource` instance.
* @param options Options for creating the data source.
*/
public constructor(options: VectraDataSourceOptions) {
this._options = options;
this.name = options.name;
// Create embeddings model
const embeddings = new OpenAIEmbeddings({
model: 'text-embedding-ada-002',
apiKey: options.apiKey,
});
// Create local index
this._index = new LocalDocumentIndex({
embeddings,
folderPath: path.join(options.indexFolder, options.name),
});
}
/**
* Renders the data source as a string of text.
* @param context Turn context for the current turn of conversation with the user.
* @param memory An interface for accessing state values.
* @param tokenizer Tokenizer to use when rendering the data source.
* @param maxTokens Maximum number of tokens allowed to be rendered.
*/
public async renderData(context: TurnContext, memory: Memory, tokenizer: Tokenizer, maxTokens: number): Promise<RenderedPromptSection<string>> {
// Query index
const query = memory.getValue('temp.input') as string;
const results = await this._index.queryDocuments(query, {
maxDocuments: this._options.maxDocuments ?? 5,
maxChunks: this._options.maxChunks ?? 50,
});
// Add documents until you run out of tokens
let length = 0;
let output = '';
let connector = '';
for (const result of results) {
// Start a new doc
let doc = `${connector}url: ${result.uri}\n`;
let docLength = tokenizer.encode(doc).length;
const remainingTokens = maxTokens - (length + docLength);
if (remainingTokens <= 0) {
break;
}
// Render document section
const sections = await result.renderSections(Math.min(remainingTokens, this._options.maxTokensPerDocument ?? 600), 1);
docLength += sections[0].tokenCount;
doc += sections[0].text;
// Append doc to output
output += doc;
length += docLength;
connector = '\n\n';
}
return { output, length, tooLong: length > maxTokens };
}
}
Prompts
Prompts are text segments used to create conversational experiences, such as initiating conversations, asking questions, and generating responses. The new object-based prompt system divides prompts into sections, each with its own token budget (either fixed or proportional to the remaining tokens). Prompts can be generated for both the Text Completion and Chat Completion style APIs.
Follow these guidelines to create effective prompts:
- Provide clear instructions and examples.
- Ensure high-quality, proofread data with sufficient examples.
- Adjust prompt settings using `temperature` and `top_p` to control the model's output. Higher values (for example, 0.8) make the output more random; lower values (for example, 0.2) make it focused and deterministic.
To implement prompts:
- Create a folder named `prompts`.
- Define the prompt templates and settings in dedicated files:
  - `skprompt.txt`: Contains the prompt text, with support for template variables and functions.
  - `config.json`: Contains the prompt model settings that ensure the bot's responses meet your requirements.
Example: `config.json` for Prompt Settings
{
"schema": 1.1,
"description": "A bot that can turn the lights on and off",
"type": "completion",
"completion": {
"model": "gpt-3.5-turbo",
"completion_type": "chat",
"include_history": true,
"include_input": true,
"max_input_tokens": 2800,
"max_tokens": 1000,
"temperature": 0.2,
"top_p": 0.0,
"presence_penalty": 0.6,
"frequency_penalty": 0.0,
"stop_sequences": []
},
"augmentation": {
"augmentation_type": "sequence"
"data_sources": {
"teams-ai": 1200
}
}
}
Query parameters
The following table details the query parameters:
Value | Description |
---|---|
`model` | ID of the model to use. |
`completion_type` | The type of completion to use. The model returns one or more predicted completions and the probability of alternative tokens. Supported options: `chat` and `text`. Default: `chat`. |
`include_history` | Boolean value. Indicates whether to include history. Each prompt gets its own conversation history to avoid confusion. |
`include_input` | Boolean value. If `true`, the user's input is included in the prompt. |
`max_input_tokens` | Maximum number of tokens allowed for input. The maximum supported is 4,000 tokens. |
`max_tokens` | Maximum number of tokens to generate. The sum of prompt tokens and `max_tokens` must not exceed the model's context length. |
`temperature` | Sampling temperature in the range 0 to 2. Higher values (for example, 0.8) make output more random; lower values (for example, 0.2) make it more focused. |
`top_p` | An alternative to sampling with `temperature`, known as nucleus sampling. For example, a value of 0.1 means only the tokens in the top 10% probability mass are considered. |
`presence_penalty` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, encouraging the model to discuss new topics. |
`frequency_penalty` | Number between -2.0 and 2.0. Positive values penalize tokens based on their frequency in the text so far, reducing the likelihood of repetition. |
`stop_sequences` | Up to four sequences where the API stops generating tokens. The returned text doesn't include the stop sequences. |
`augmentation_type` | The type of augmentation. Supported values are `sequence`, `monologue`, and `tools`. |
Prompt management
Prompt management dynamically adjusts prompt size and content based on the token budget and available data sources. For example, for a bot with a 4,000-token limit (2,800 for input and 1,000 for output), the prompt manager reserves tokens for conversation history, input, and any augmented data from external sources, trimming each section to stay within the budget.
Prompt actions
Prompt actions allow the model to perform actions or respond to user input. You can create a schema listing supported actions with corresponding parameters. The OpenAI endpoint extracts entities and passes them as arguments to the action handler.
For example:
The following is a conversation with an AI assistant.
The assistant can turn a light on or off.
context:
The lights are currently {{getLightStatus}}.
Prompt template
A prompt template defines and composes AI functions using plain text. It allows you to:
- Create natural language prompts.
- Generate responses.
- Extract information.
- Invoke other prompts.
The language supports embedding variables and functions using curly braces `{{...}}`. Some key expressions include:
- `{{function}}`: Calls a registered function and inserts its return value.
- `{{$input}}`: Inserts the user's message text, obtained from `state.temp.input`.
- `{{$state.[property]}}`: Inserts state properties.
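For example, a `skprompt.txt` template for the light bot might combine these expressions (an illustrative template; `getLightStatus` is the function registered earlier):
The following is a conversation with an AI assistant.
The lights are currently {{getLightStatus}}.
user: {{$input}}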
Actions
Actions handle events triggered by AI components. The built-in `FlaggedInputAction` and `FlaggedOutputAction` handle moderator flags. When a message is flagged, the bot notifies the user via `context.sendActivity`. To stop the action, return `AI.StopCommandName`.
JavaScript Code: Registering Flagged Input and Output Actions
// Register other AI actions
app.ai.action(
AI.FlaggedInputActionName,
async (context: TurnContext, state: ApplicationTurnState, data: Record<string, any>) => {
await context.sendActivity(`I'm sorry your message was flagged: ${JSON.stringify(data)}`);
return AI.StopCommandName;
}
);
app.ai.action(AI.FlaggedOutputActionName, async (context: TurnContext, state: ApplicationTurnState, data: any) => {
await context.sendActivity(`I'm not allowed to talk about such things.`);
return AI.StopCommandName;
});
Register Action Handlers
Action handlers help the bot perform specific tasks. First, register actions in your prompt and then implement a handler for each action, including unknown actions.
In the following light bot example, the actions include `LightsOn`, `LightsOff`, and `Pause`. Each action handler returns a `string`. For actions that return a time (such as the pause duration), the `PauseParameters` property ensures the time is in number format.
.NET Code: Action Handlers for LightBot
public class LightBotActions
{
[Action("LightsOn")]
public async Task<string> LightsOn([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState)
{
turnState.Conversation!.LightsOn = true;
await turnContext.SendActivityAsync(MessageFactory.Text("[lights on]"));
return "the lights are now on";
}
[Action("LightsOff")]
public async Task<string> LightsOff([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState)
{
turnState.Conversation!.LightsOn = false;
await turnContext.SendActivityAsync(MessageFactory.Text("[lights off]"));
return "the lights are now off";
}
[Action("Pause")]
public async Task<string> Pause([ActionTurnContext] ITurnContext turnContext, [ActionParameters] Dictionary<string, object> args)
{
// Try to parse entities returned by the model.
// Expecting "time" to be a number of milliseconds to pause.
if (args.TryGetValue("time", out object? time))
{
if (time != null && time is string timeString)
{
if (int.TryParse(timeString, out int timeInt))
{
await turnContext.SendActivityAsync(MessageFactory.Text($"[pausing for {timeInt / 1000} seconds]"));
await Task.Delay(timeInt);
}
}
}
return "done pausing";
}
[Action("LightStatus")]
public async Task<string> LightStatus([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState)
{
await turnContext.SendActivityAsync(ResponseGenerator.LightStatus(turnState.Conversation!.LightsOn));
return turnState.Conversation!.LightsOn ? "the lights are on" : "the lights are off";
}
[Action(AIConstants.UnknownActionName)]
public async Task<string> UnknownAction([ActionTurnContext] TurnContext turnContext, [ActionName] string action)
{
await turnContext.SendActivityAsync(ResponseGenerator.UnknownAction(action ?? "Unknown"));
return "unknown action";
}
}
Using sequence, monologue, or tools augmentation prevents the model from hallucinating invalid function names, action names, or parameters. Create an actions file to:
- Define actions for prompt augmentation.
- Indicate when to perform actions.
For example, in a light bot, the `actions.json` file might list actions like this:
[
{
"name": "LightsOn",
"description": "Turns on the lights"
},
{
"name": "LightsOff",
"description": "Turns off the lights"
},
{
"name": "Pause",
"description": "Delays for a period of time",
"parameters": {
"type": "object",
"properties": {
"time": {
"type": "number",
"description": "The amount of time to delay in milliseconds"
}
},
"required": [
"time"
]
}
}
]
- `name`: Name of the action (required).
- `description`: Description of the action (optional).
- `parameters`: A JSON schema defining the required parameters.
A feedback loop helps validate, correct, and refine the bot's interactions. For `sequence` augmentation, disable looping by either setting `allow_looping?` to `false` in `AIOptions` or setting `max_repair_attempts` to `0` in your implementation.
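For example, in JavaScript you can pass the flag through the AI options when constructing the application (a minimal sketch, assuming the snake_case `allow_looping` option named above and the planner and storage created earlier):
JavaScript Code: Disabling Looping (illustrative)
const app = new Application<ApplicationTurnState>({
    storage,
    ai: {
        planner,
        allow_looping: false // skip the repair loop for sequence augmentation
    }
});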
Manage history
Use the `MaxHistoryMessages` and `MaxConversationHistoryTokens` settings to let the AI library automatically manage conversation history.
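In JavaScript, the PromptManager exposes snake_case equivalents of these settings. A sketch under that assumption (verify the option names against your library version):
JavaScript Code: History Settings (illustrative)
const prompts = new PromptManager({
    promptsFolder: path.join(__dirname, '../src/prompts'),
    max_history_messages: 10,            // cap the number of history messages per prompt
    max_conversation_history_tokens: 1.0 // fraction of the remaining input budget for history
});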
Feedback loop
A feedback loop monitors and improves bot interactions. It includes:
- Repair Loop: When a response falls short, forks the conversation history to try alternate solutions.
- Validation: Verifies the corrected response before merging it back into the conversation.
- Learning: Adjusts the bot's performance based on correct behavior examples.
- Complex Commands Handling: Enhances the model's ability to process complex commands over time.
Upgrade your conventional bot to custom engine agent
If you already have a bot on Teams, you can upgrade it to a custom engine agent that supports streaming, citations, and AI labels. This upgrade aligns your bot with the conversational AI UX paradigm and provides a consistent experience with declarative agents.
Note
Custom engine agent isn't supported in Python.
Upgrade steps:
To-Do List | Supporting docs |
---|---|
Update the AI SDK versions | • For JavaScript, update to v1.6.1. • For C#, update to v1.8.1. |
Enable streaming for the bot. | Stream bot messages |
Use AI labels to indicate AI-generated messages. | AI labels |
Use citations for source references. | Citations |
Add support for Microsoft 365 Copilot Chat
You can add support for custom engine agents in Microsoft 365 Copilot Chat. This includes support for asynchronous patterns such as follow-up messages and long-running tasks. For more details, see asynchronous patterns.
To support Microsoft 365 Copilot Chat, update your app manifest:
1. Add the `copilotAgents` property with a sub-property `customEngineAgents` to your app manifest:
"copilotAgents": {
  "customEngineAgents": [
    {
      "type": "bot",
      "id": "<Bot-Id-Guid>"
    }
  ]
}
2. Set the `scopes` to `personal` for `bots` and `commandLists` in your app manifest:
"bots": [
  {
    "botId": "<Bot-Id-Guid>",
    "scopes": ["personal", "team", "groupChat"],
    "commandLists": [
      {
        "scopes": ["personal"],
        "commands": [
          {
            "title": "Sample prompt title",
            "description": "Description of sample prompt"
          }
        ]
      }
    ]
  }
]
Note
- Microsoft 365 Copilot Chat adds an AI-generated label to every custom engine agent response.
- For bots built with Microsoft 365 Agents Toolkit (formerly Teams Toolkit) that want to support Microsoft 365 Copilot Chat, follow the step-by-step guide.
- Single sign-on (SSO) for custom engine agents is available, but isn't supported in the Outlook client. See update Microsoft Entra app registration for SSO.
Elevate your conventional bot to use AI
You can update your existing conventional bot to be powered by AI. Adding an AI layer enhances your bot with LLM-driven features. The following example integrates the AI layer using the Bot Framework adapter and the `app` object.
JavaScript Code: Elevating a Conventional Bot to Use AI
// Create AI components
const model = new OpenAIModel({
// OpenAI Support
apiKey: process.env.OPENAI_KEY!,
defaultModel: 'gpt-4o',
// Azure OpenAI Support
azureApiKey: process.env.AZURE_OPENAI_KEY!,
azureDefaultDeployment: 'gpt-4o',
azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!,
azureApiVersion: '2023-03-15-preview',
// Request logging
logRequests: true
});
const prompts = new PromptManager({
promptsFolder: path.join(__dirname, '../src/prompts')
});
// Define a prompt function for getting the current status of the lights
prompts.addFunction('getLightStatus', async (context: TurnContext, memory: Memory) => {
return memory.getValue('conversation.lightsOn') ? 'on' : 'off';
});
const planner = new ActionPlanner({
model,
prompts,
defaultPrompt: 'tools'
});
// Define storage and application
const storage = new MemoryStorage();
const app = new Application<ApplicationTurnState>({
storage,
ai: {
planner
}
});
app.ai.action('LightStatus', async (context: TurnContext, state: ApplicationTurnState) => {
const status = state.conversation.lightsOn ? 'on' : 'off';
return `the lights are ${status}`;
});
// Register action handlers
app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => {
state.conversation.lightsOn = true;
await context.sendActivity(`[lights on]`);
return `the lights are now on`;
});
app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => {
state.conversation.lightsOn = false;
await context.sendActivity(`[lights off]`);
return `the lights are now off`;
});
interface PauseParameters {
time: number;
}
app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => {
await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`);
await new Promise((resolve) => setTimeout(resolve, parameters.time));
return `done pausing`;
});
// Listen for incoming server requests.
server.post('/api/messages', async (req, res) => {
// Route received a request to adapter for processing
await adapter.process(req, res as any, async (context) => {
// Dispatch to application for routing
await app.run(context);
});
});
Migrate your bot to use Teams AI library
If you built your bot using the Bot Framework SDK, you can migrate to the Teams AI library to unlock advanced AI features. Migrating offers these benefits:
- An advanced AI system for building complex Teams applications powered by LLMs.
- Integrated user authentication for accessing third-party user data.
- Familiar Bot Framework SDK tools and concepts.
- Support for the latest LLM tools and APIs.
Choose the relevant migration guide for your bot's language:
Migrate a Bot Framework SDK app ... | To use Teams AI library ... |
---|---|
A bot app built using JavaScript | Migrate |
A bot app built using C# | Migrate |
A bot app using Python | Migrate |
Code sample
Sample name | Description | .NET | Node.js |
---|---|---|---|
Action mapping lightbot | Demonstrates how LightBot understands user intent and controls the light bot based on commands. | View | View |
Next step
If you want to try creating a scenario-based custom engine agent using the Agents Toolkit and Teams AI library, select the following:
Advanced step-by-step guide
If you want to learn about the core capabilities of Teams AI library, select the following:
Understand Teams AI library