Templatizing your semantic functions


In the previous article, we created a semantic function that gets the intent of the user. This function, however, is not very reusable. For example, if we wanted to run specific code based on the user's intent, it would be difficult to use the output of the GetIntent function to choose which code to actually run.

We need to find a way to constrain the output of our function so that we can later use the output in a switch statement inside of a native function.

By following this example, you'll learn how to templatize a semantic function. If you want to see the final solution, you can check out the following samples in the public documentation repository. Use the link to the previous solution if you want to follow along.

Language | Link to previous solution | Link to final solution
C#       | Open solution in GitHub   | Open solution in GitHub
Python   | Open solution in GitHub   | Open solution in GitHub

Adding variables to the prompt

One way to constrain the output of a semantic function is to provide a list of options for it to choose from. A naive approach would be to hard code these options into the prompt, but this would be difficult to maintain and would not scale well. Instead, we can use Semantic Kernel's templating language to dynamically generate the prompt.

The prompt template syntax article in the prompt engineering section of the documentation provides a detailed overview of how to use the templating language. In this article, we'll show you just enough to get started.

To begin, open the skprompt.txt file in the GetIntent folder from the previous solution and update it to the following prompt.

[History]
{{$history}}

User: {{$input}}

---------------------------------------------

Provide the intent of the user. The intent should be one of the following: {{$options}}

INTENT:

The new prompt uses the options variable to provide a list of options for the LLM to choose from. We've also added a history variable to the prompt so that the previous conversation is included.

By including these variables, we are able to help the LLM choose the correct intent by providing it with more context and a constrained list of options to choose from.
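To make the substitution concrete, if the history, input, and options values used later in this article were passed in, the rendered prompt sent to the LLM would look roughly like this (whitespace may differ slightly):

[History]
Bot: How can I help you?
User: My team just hit a major milestone and I would like to send them a message to congratulate them.
Bot: Would you like to send an email?

User: Yes

---------------------------------------------

Provide the intent of the user. The intent should be one of the following: SendEmail, ReadEmail, SendMeeting, RsvpToMeeting, SendChat

INTENT: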

Consuming context variables within a semantic function

When you add a new variable to the prompt, you must also update the config.json file to include the new variables and their descriptions. While these properties aren't used now, it's good to get into the practice of adding them so they can be used by the planner later. The following configuration adds the options and history variables to the input section of the configuration.

{
     "schema": 1,
     "type": "completion",
     "description": "Gets the intent of the user.",
     "completion": {
          "max_tokens": 500,
          "temperature": 0.0,
          "top_p": 0.0,
          "presence_penalty": 0.0,
          "frequency_penalty": 0.0
     },
     "input": {
          "parameters": [
               {
                    "name": "input",
                    "description": "The user's request.",
                    "defaultValue": ""
               },
               {
                    "name": "history",
                    "description": "The history of the conversation.",
                    "defaultValue": ""
               },
               {
                    "name": "options",
                    "description": "The options to choose from.",
                    "defaultValue": ""
               }
          ]
     }
}

Passing in context variables

You can now update your Program.cs or main.py file to provide a list of options to the GetIntent function. To do this, you'll need to complete the following steps:

  1. Create a ContextVariables object to store the variables.
  2. Set the input, history, and options variables.
  3. Pass the object into the kernel's RunAsync function.

You can see how to do this in the code snippets below.

Initialize the kernel and import the plugins.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Orchestration;

IKernel kernel = new KernelBuilder()
    // Add a text or chat completion service using either:
    // .WithAzureTextCompletionService()
    // .WithAzureChatCompletionService()
    // .WithOpenAITextCompletionService()
    // .WithOpenAIChatCompletionService()
    .Build();

var pluginsDirectory = Path.Combine(System.IO.Directory.GetCurrentDirectory(), "plugins");

// Import the OrchestratorPlugin from the plugins directory.
var orchestrationPlugin = kernel.ImportSemanticFunctionsFromDirectory(pluginsDirectory, "OrchestratorPlugin");

Create a new context and set the input, history, and options variables.

var variables = new ContextVariables
{
    ["input"] = "Yes",
    ["history"] = @"Bot: How can I help you?
User: My team just hit a major milestone and I would like to send them a message to congratulate them.
Bot: Would you like to send an email?",
    ["options"] = "SendEmail, ReadEmail, SendMeeting, RsvpToMeeting, SendChat"
};

Run the GetIntent function with the context variables.

var result = (await kernel.RunAsync(variables, orchestrationPlugin["GetIntent"])).Result;

Console.WriteLine(result);

Now, instead of getting an output like Send congratulatory email, we'll get an output like SendEmail. This output could then be used within a switch statement in native code to execute the next appropriate step.
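As a rough sketch, a native function could branch on the returned intent like this. The handler methods (SendEmailAsync, ReadEmailAsync, and so on) are hypothetical placeholders; substitute your own native functions.

// Route on the intent string returned by the GetIntent semantic function.
// The handler methods below are hypothetical placeholders.
switch (result.Trim())
{
    case "SendEmail":
        await SendEmailAsync();
        break;
    case "ReadEmail":
        await ReadEmailAsync();
        break;
    case "SendMeeting":
        await SendMeetingAsync();
        break;
    case "RsvpToMeeting":
        await RsvpToMeetingAsync();
        break;
    case "SendChat":
        await SendChatAsync();
        break;
    default:
        Console.WriteLine($"Unknown intent: {result}");
        break;
}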

Take the next step

Now that you can templatize your semantic function, you can learn how to call functions from within a semantic function to help break up the prompt into smaller pieces.