Teams AI library capabilities
Teams AI library supports JavaScript and C#, and is designed to simplify the process of building bots that can interact with Microsoft Teams and to facilitate the migration of existing bots. The AI library supports the migration of messaging capabilities, message extension (ME) capabilities, and Adaptive Cards capabilities to the new format. You can also upgrade existing Teams apps with these features.
Previously, you used the BotBuilder SDK directly to create bots for Microsoft Teams. Teams AI library is designed to facilitate the construction of bots that can interact with Microsoft Teams. While one of the key features of the library is its AI support, your initial objective might be to upgrade an existing bot without AI. After you upgrade, the bot can connect to AI or large language models (LLMs) available in the AI library.
Teams AI library supports the following capabilities:
Use the AI library to scaffold bot and Adaptive Card handlers into the source file.
In the following sections, samples from the AI library are used to explain each capability and the path to migration:
Send or receive message
You can send and receive messages using the Bot Framework. The app listens for the user to send the /reset message and, when it receives it, deletes the conversation state and notifies the user. The app also keeps track of the number of messages received in a conversation and echoes back the user's message with a count of messages received so far.
// Listen for user to say "/reset" and then delete conversation state
app.OnMessage("/reset", ActivityHandlers.ResetMessageHandler);
// Listen for ANY message to be received. MUST BE AFTER ANY OTHER MESSAGE HANDLERS
app.OnActivity(ActivityTypes.Message, ActivityHandlers.MessageHandler);
return app;
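In the JavaScript flavor of the library, the same echo-with-count logic can be sketched as plain handler functions. The ConversationState shape and handler names below are illustrative assumptions, not the library's actual API:

```typescript
// Minimal sketch of the echo-with-count pattern. The ConversationState
// type and handler functions are illustrative, not the Teams AI library API.
interface ConversationState {
  count: number;
}

// Handler invoked for every incoming message: bumps the per-conversation
// counter and builds the echoed reply.
function handleMessage(state: ConversationState, text: string): string {
  state.count += 1;
  return `[${state.count}] you said: ${text}`;
}

// "/reset" handler: deletes the conversation state by resetting the counter.
function handleReset(state: ConversationState): string {
  state.count = 0;
  return "Ok I've deleted the current conversation state.";
}
```

As in the C# listing above, the catch-all message handler would be registered after the more specific /reset handler, so the specific route wins.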
Message extensions
In the Bot Framework SDK's TeamsActivityHandler, you needed to set up the message extensions query handler by extending handler methods. The app listens for search actions and item taps, and formats the search results as a list of HeroCards displaying package information. The result is used to display the search results in the messaging extension.
// Listen for search actions
app.MessageExtensions.OnQuery("searchCmd", activityHandlers.QueryHandler);
// Listen for item tap
app.MessageExtensions.OnSelectItem(activityHandlers.SelectItemHandler);
return app;
// Format search results in ActivityHandlers.cs
List<MessagingExtensionAttachment> attachments = packages.Select(package => new MessagingExtensionAttachment
{
ContentType = HeroCard.ContentType,
Content = new HeroCard
{
Title = package.Id,
Text = package.Description
},
Preview = new HeroCard
{
Title = package.Id,
Text = package.Description,
Tap = new CardAction
{
Type = "invoke",
Value = package
}
}.ToAttachment()
}).ToList();
// Return results as a list
return new MessagingExtensionResult
{
Type = "result",
AttachmentLayout = "list",
Attachments = attachments
};
Adaptive Cards capabilities
You can register Adaptive Card action handlers using the app.adaptiveCards property. The app listens for messages containing the keyword static or dynamic and returns an Adaptive Card using the StaticMessageHandler or DynamicMessageHandler method. The app also listens for queries from a dynamic search card and for submit buttons on the Adaptive Cards.
// Listen for messages that trigger returning an adaptive card
app.OnMessage(new Regex(@"static", RegexOptions.IgnoreCase), activityHandlers.StaticMessageHandler);
app.OnMessage(new Regex(@"dynamic", RegexOptions.IgnoreCase), activityHandlers.DynamicMessageHandler);
// Listen for query from dynamic search card
app.AdaptiveCards.OnSearch("nugetpackages", activityHandlers.SearchHandler);
// Listen for submit buttons
app.AdaptiveCards.OnActionSubmit("StaticSubmit", activityHandlers.StaticSubmitHandler);
app.AdaptiveCards.OnActionSubmit("DynamicSubmit", activityHandlers.DynamicSubmitHandler);
// Listen for ANY message to be received. MUST BE AFTER ANY OTHER HANDLERS
app.OnActivity(ActivityTypes.Message, activityHandlers.MessageHandler);
return app;
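The handler behind OnSearch for a dynamic search card typically filters a data set by the user's typed query and returns the matches as choices. A minimal TypeScript sketch of that filtering step, where the Package type and searchPackages helper are hypothetical names for illustration:

```typescript
// Illustrative sketch of a dynamic-search handler's core logic: filter
// packages by the user's query text against the package id. Package and
// searchPackages are hypothetical names, not part of the Teams AI library.
interface Package {
  id: string;
  description: string;
}

function searchPackages(packages: Package[], query: string): Package[] {
  const q = query.trim().toLowerCase();
  if (q.length === 0) {
    return []; // an empty query yields no choices
  }
  return packages.filter((p) => p.id.toLowerCase().includes(q));
}
```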
Core capabilities
Bot logic for handling an action
The bot responds to the user's input with the action LightsOn to turn the lights on.
The following example illustrates how Teams AI library makes it possible to manage the bot logic for handling the LightsOn or LightsOff action and connect it to the prompt used with OpenAI:
// Create AI Model
if (!string.IsNullOrEmpty(config.OpenAI?.ApiKey))
{
builder.Services.AddSingleton<OpenAIModel>(sp => new(
new OpenAIModelOptions(config.OpenAI.ApiKey, "gpt-3.5-turbo")
{
LogRequests = true
},
sp.GetService<ILoggerFactory>()
));
}
else if (!string.IsNullOrEmpty(config.Azure?.OpenAIApiKey) && !string.IsNullOrEmpty(config.Azure.OpenAIEndpoint))
{
builder.Services.AddSingleton<OpenAIModel>(sp => new(
new AzureOpenAIModelOptions(
config.Azure.OpenAIApiKey,
"gpt-35-turbo",
config.Azure.OpenAIEndpoint
)
{
LogRequests = true
},
sp.GetService<ILoggerFactory>()
));
}
else
{
throw new Exception("please configure settings for either OpenAI or Azure");
}
// Create the bot as transient. In this case the ASP Controller is expecting an IBot.
builder.Services.AddTransient<IBot>(sp =>
{
// Create loggers
ILoggerFactory loggerFactory = sp.GetService<ILoggerFactory>()!;
// Create Prompt Manager
PromptManager prompts = new(new()
{
PromptFolder = "./Prompts"
});
// Adds function to be referenced in the prompt template
prompts.AddFunction("getLightStatus", async (context, memory, functions, tokenizer, args) =>
{
bool lightsOn = (bool)(memory.GetValue("conversation.lightsOn") ?? false);
return await Task.FromResult(lightsOn ? "on" : "off");
});
// Create ActionPlanner
ActionPlanner<AppState> planner = new(
options: new(
model: sp.GetService<OpenAIModel>()!,
prompts: prompts,
defaultPrompt: async (context, state, planner) =>
{
PromptTemplate template = prompts.GetPrompt("sequence");
return await Task.FromResult(template);
}
)
{ LogRepairs = true },
loggerFactory: loggerFactory
);
return new TeamsLightBot(new()
{
Storage = sp.GetService<IStorage>(),
AI = new(planner),
LoggerFactory = loggerFactory,
TurnStateFactory = () =>
{
return new AppState();
}
});
});
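The AddFunction call above registers a function that a prompt template can reference, so live conversation state flows into the prompt text. A simplified sketch of how such placeholder resolution could work; renderTemplate is an illustration of the idea, not the library's actual prompt engine:

```typescript
// Simplified illustration of prompt-function resolution: each {{name}}
// placeholder in a template is replaced by the registered function's
// result. This is not the Teams AI library's real prompt engine.
type PromptFunction = () => string;

function renderTemplate(
  template: string,
  functions: Map<string, PromptFunction>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) => {
    const fn = functions.get(name);
    return fn ? fn() : match; // leave unknown placeholders untouched
  });
}
```

With a registered getLightStatus function, a template line such as "The lights are {{getLightStatus}}." would render with the current conversation state substituted in.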
// LightBotActions defined in LightBotActions.cs
[Action("LightsOn")]
public async Task<string> LightsOn([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState)
{
turnState.Conversation.LightsOn = true;
await turnContext.SendActivityAsync(MessageFactory.Text("[lights on]"));
return "the lights are now on";
}
[Action("LightsOff")]
public async Task<string> LightsOff([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState)
{
turnState.Conversation.LightsOn = false;
await turnContext.SendActivityAsync(MessageFactory.Text("[lights off]"));
return "the lights are now off";
}
[Action("Pause")]
public async Task<string> Pause([ActionTurnContext] ITurnContext turnContext, [ActionParameters] Dictionary<string, object> args)
{
// Try to parse entities returned by the model.
// Expecting "time" to be a number of milliseconds to pause.
if (args.TryGetValue("time", out object? time))
{
if (time != null && time is string timeString)
{
if (int.TryParse(timeString, out int timeInt))
{
await turnContext.SendActivityAsync(MessageFactory.Text($"[pausing for {timeInt / 1000} seconds]"));
await Task.Delay(timeInt);
}
}
}
return "done pausing";
}
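The Pause action has to parse the time entity defensively, because the model may return it as a string, a number, or omit it entirely. That coercion step can be sketched in isolation; parsePauseTime is a hypothetical helper, not part of the library:

```typescript
// Hypothetical helper mirroring the Pause action's parsing logic:
// coerce a model-supplied "time" entity into milliseconds, or null
// when it's missing or not a valid number.
function parsePauseTime(args: Record<string, unknown>): number | null {
  const time = args["time"];
  if (typeof time === "number" && Number.isFinite(time)) {
    return Math.trunc(time);
  }
  if (typeof time === "string") {
    const parsed = parseInt(time, 10);
    return Number.isNaN(parsed) ? null : parsed;
  }
  return null;
}
```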
Message extension query
The Teams AI library offers a more intuitive approach to creating handlers for various message-extension query commands than previous iterations of the Teams Bot Framework SDK. The new SDK works alongside the existing Teams Bot Framework SDK.
The following is an example of how you can structure your code to handle a message-extension query for the searchCmd command.
// Listen for search actions
app.MessageExtensions.OnQuery("searchCmd", activityHandlers.QueryHandler);
// Listen for item tap
app.MessageExtensions.OnSelectItem(activityHandlers.SelectItemHandler);
return app;
// Format search results
List<MessagingExtensionAttachment> attachments = packages.Select(package => new MessagingExtensionAttachment
{
ContentType = HeroCard.ContentType,
Content = new HeroCard
{
Title = package.Id,
Text = package.Description
},
Preview = new HeroCard
{
Title = package.Id,
Text = package.Description,
Tap = new CardAction
{
Type = "invoke",
Value = package
}
}.ToAttachment()
}).ToList();
return new MessagingExtensionResult
{
Type = "result",
AttachmentLayout = "list",
Attachments = attachments
};
Intents to actions
A simple interface for actions and predictions allows bots to react when they have high confidence to take action. Ambient presence lets bots learn intent, use prompts based on business logic, and generate responses.
Thanks to our AI library, the prompt needs only to outline the actions supported by the bot, and supply a few-shot example of how to employ those actions. Conversation history helps with a natural dialogue between the user and bot, such as add cereal to groceries list, followed by also add coffee, which should indicate that coffee is to be added to the groceries list.
The following is a conversation with an AI assistant. The AI assistant is capable of managing lists and recognizes the following commands:
- DO <action> <optional entities>
- SAY <response>
The following actions are supported:
addItem list="<list name>" item="<text>"
removeItem list="<list name>" item="<text>"
summarizeLists
All entities are required parameters to actions.
Current list names:
{{conversation.listNames}}
Examples:
Human: remind me to buy milk
AI: DO addItem list="groceries" item="milk" THEN SAY Ok I added milk to your groceries list
Human: we already have milk
AI: DO removeItem list="groceries" item="milk" THEN SAY Ok I removed milk from your groceries list
Human: buy ingredients to make margaritas
AI: DO addItem list="groceries" item="tequila" THEN DO addItem list="groceries" item="orange liqueur" THEN DO addItem list="groceries" item="lime juice" THEN SAY Ok I added tequila, orange liqueur, and lime juice to your groceries list
Human: do we have milk
AI: DO findItem list="groceries" item="milk"
Human: what's in my grocery list
AI: DO summarizeLists
Human: what's the contents of all my lists?
AI: DO summarizeLists
Human: show me all lists but change the title to Beach Party
AI: DO summarizeLists
Human: show me all lists as a card and sort the lists alphabetically
AI: DO summarizeLists
Conversation history:
{{conversation.history}}
Current query:
Human: {{activity.text}}
Current list names:
{{conversation.listNames}}
AI:

The bot logic is streamlined to include handlers for actions such as addItem and removeItem. This distinct separation between the actions and the prompts that guide the AI on how to execute them serves as a powerful tool.
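Before the action handlers run, the model's DO/SAY output has to be parsed back into a sequence of steps. The following TypeScript sketch parses the command format shown in the prompt; PlanStep and parsePlan are illustrative names, not the library's actual planner types:

```typescript
// Illustrative parser for the DO <action> ... THEN SAY <response> command
// format used in the prompt above. These are sketch types, not the
// Teams AI library's planner.
interface PlanStep {
  type: "DO" | "SAY";
  action?: string;                    // action name for DO steps
  entities?: Record<string, string>;  // parsed key="value" entities
  response?: string;                  // text for SAY steps
}

function parsePlan(output: string): PlanStep[] {
  return output.split(/\bTHEN\b/).map((segment): PlanStep => {
    const trimmed = segment.trim();
    if (trimmed.startsWith("SAY")) {
      return { type: "SAY", response: trimmed.slice(3).trim() };
    }
    // DO <action> key="value" key="value" ...
    const body = trimmed.replace(/^DO\s+/, "");
    const [action] = body.split(/\s+/, 1);
    const entities: Record<string, string> = {};
    const entityRe = /(\w+)="([^"]*)"/g;
    let m: RegExpExecArray | null;
    while ((m = entityRe.exec(body)) !== null) {
      entities[m[1]] = m[2];
    }
    return { type: "DO", action, entities };
  });
}
```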
[Action("AddItem")]
public string AddItem([ActionTurnState] ListState turnState, [ActionParameters] Dictionary<string, object> parameters)
{
ArgumentNullException.ThrowIfNull(turnState);
ArgumentNullException.ThrowIfNull(parameters);
string listName = GetParameterString(parameters, "list");
string item = GetParameterString(parameters, "item");
IList<string> items = GetItems(turnState, listName);
items.Add(item);
SetItems(turnState, listName, items);
return "item added. think about your next action";
}
[Action("RemoveItem")]
public async Task<string> RemoveItem([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] ListState turnState, [ActionParameters] Dictionary<string, object> parameters)
{
ArgumentNullException.ThrowIfNull(turnContext);
ArgumentNullException.ThrowIfNull(turnState);
ArgumentNullException.ThrowIfNull(parameters);
string listName = GetParameterString(parameters, "list");
string item = GetParameterString(parameters, "item");
IList<string> items = GetItems(turnState, listName);
if (!items.Contains(item))
{
await turnContext.SendActivityAsync(ResponseBuilder.ItemNotFound(listName, item)).ConfigureAwait(false);
return "item not found. think about your next action";
}
items.Remove(item);
SetItems(turnState, listName, items);
return "item removed. think about your next action";
}
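The GetItems and SetItems helpers referenced above aren't shown in this excerpt; conceptually, they read and write a named list in conversation state. A minimal TypeScript sketch of that pattern, where all names are assumptions mirroring the C# handlers rather than the actual implementation:

```typescript
// Minimal sketch of the list-state pattern behind GetItems/SetItems:
// conversation state maps list names to arrays of items. These names
// mirror the C# helpers above but are not the actual implementation.
type ListState = Map<string, string[]>;

function getItems(state: ListState, listName: string): string[] {
  return state.get(listName) ?? [];
}

function setItems(state: ListState, listName: string, items: string[]): void {
  state.set(listName, items);
}

// addItem/removeItem handlers expressed with those helpers. The returned
// strings feed back to the model, as in the C# actions above.
function addItem(state: ListState, listName: string, item: string): string {
  const items = getItems(state, listName);
  items.push(item);
  setItems(state, listName, items);
  return "item added. think about your next action";
}

function removeItem(state: ListState, listName: string, item: string): string {
  const items = getItems(state, listName);
  const index = items.indexOf(item);
  if (index < 0) {
    return "item not found. think about your next action";
  }
  items.splice(index, 1);
  setItems(state, listName, items);
  return "item removed. think about your next action";
}
```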