Templatizing your prompts

In the previous article we created a prompt that could be used to get the intent of the user. This function, however, is not very reusable because the options are hard-coded into the prompt. We could dynamically build the prompt string, but there's a better way: prompt templates.

By following this example, you'll learn how to templatize a prompt. If you want to see the final solution, you can check out the following samples in the public documentation repository. Use the link to the previous solution if you want to follow along.

| Language | Link to previous solution | Link to final solution  |
|----------|---------------------------|-------------------------|
| C#       | Open example in GitHub    | Open solution in GitHub |
| Java     |                           | Open solution in GitHub |
| Python   | Open solution in GitHub   | Open solution in GitHub |

Adding variables to the prompt

With Semantic Kernel's templating language, we can add tokens that will be automatically replaced with input parameters. To begin, let's build a super simple prompt that uses the Semantic Kernel template syntax to include enough information for an agent to respond to the user.

var chat = kernel.CreateFunctionFromPrompt(
    @"{{$history}}
    User: {{$request}}
    Assistant: ");

The new prompt uses the request and history variables so that we can include these values when we run our prompt. To test our prompt, we can create a chat loop so we can begin talking back-and-forth with our agent. When we invoke the prompt, we can pass in the request and history variables as arguments.

ChatHistory history = [];

// Start the chat loop
while (true)
{
    // Get user input
    Console.Write("User > ");
    var request = Console.ReadLine();

    // Get chat response
    var chatResult = kernel.InvokeStreamingAsync<StreamingChatMessageContent>(
        chat,
        new()
        {
            { "request", request },
            { "history", string.Join("\n", history.Select(x => x.Role + ": " + x.Content)) }
        }
    );

    // Stream the response
    string message = "";
    await foreach (var chunk in chatResult)
    {
        if (chunk.Role.HasValue)
        {
            Console.Write(chunk.Role + " > ");
        }

        message += chunk;
        Console.Write(chunk);
    }
    Console.WriteLine();

    // Append to history
    history.AddUserMessage(request!);
    history.AddAssistantMessage(message);
}
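Conceptually, the core template engine simply replaces each `{{$name}}` token with the matching argument value before the prompt is sent to the model. The sketch below is a simplified stand-in for that substitution step (the real Semantic Kernel engine also supports function calls and other syntax), just to make the behavior concrete:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class TemplateSketch
{
    // Simplified illustration of {{$variable}} substitution; NOT the actual
    // Semantic Kernel implementation. Unknown variables render as empty.
    public static string Render(string template, Dictionary<string, string> args) =>
        Regex.Replace(template, @"\{\{\$(\w+)\}\}", m =>
            args.TryGetValue(m.Groups[1].Value, out var value) ? value : "");

    static void Main()
    {
        var template = "{{$history}}\nUser: {{$request}}\nAssistant: ";
        var args = new Dictionary<string, string>
        {
            ["history"] = "user: Hi\nassistant: Hello!",
            ["request"] = "What can you do?"
        };

        // Prints the fully substituted prompt that would be sent to the model
        Console.WriteLine(Render(template, args));
    }
}
```

This is why the argument names passed when invoking the prompt must match the variable names in the template exactly.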

Using the Handlebars template engine

In addition to the core template syntax, Semantic Kernel also comes with support for the Handlebars templating language in the C# and Java SDKs. To use Handlebars, you'll first want to add the Handlebars package to your project.

dotnet add package Microsoft.SemanticKernel.PromptTemplate.Handlebars --prerelease

Then import the Handlebars template engine package.

using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

Afterwards, you can create a new prompt using the HandlebarsPromptTemplateFactory. Because Handlebars supports loops, we can use it to loop over elements like examples and chat history. This makes it a great fit for the getIntent prompt we created in the previous article.

var getIntent = kernel.CreateFunctionFromPrompt(
    new()
    {
        Template = """
                   <message role="system">Instructions: What is the intent of this request?
                   Do not explain the reasoning, just reply back with the intent. If you are unsure, reply with {{choices.[0]}}.
                   Choices: {{choices}}.</message>

                   {{#each fewShotExamples}}
                       {{#each this}}
                           <message role="{{role}}">{{content}}</message>
                       {{/each}}
                   {{/each}}

                   {{#each chatHistory}}
                       <message role="{{role}}">{{content}}</message>
                   {{/each}}

                   <message role="user">{{request}}</message>
                   <message role="system">Intent:</message>
                   """,
        TemplateFormat = "handlebars"
    },
    new HandlebarsPromptTemplateFactory()
);

We can then create the choices and few-shot examples that will be used by the template. In this example, we'll use the prompt to end the conversation once it's over. To do this, we just need to provide two valid intents: ContinueConversation and EndConversation.

// Create choices
List<string> choices = ["ContinueConversation", "EndConversation"];

// Create few-shot examples
List<ChatHistory> fewShotExamples =
[
    [
        new ChatMessageContent(AuthorRole.User, "Can you send a very quick approval to the marketing team?"),
        new ChatMessageContent(AuthorRole.System, "Intent:"),
        new ChatMessageContent(AuthorRole.Assistant, "ContinueConversation")
    ],
    [
        new ChatMessageContent(AuthorRole.User, "Thanks, I'm done for now"),
        new ChatMessageContent(AuthorRole.System, "Intent:"),
        new ChatMessageContent(AuthorRole.Assistant, "EndConversation")
    ]
];
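Before wiring the prompt into the chat loop, it can be useful to preview what the rendered Handlebars template actually looks like. The sketch below assumes the `IPromptTemplateFactory.Create` and `IPromptTemplate.RenderAsync` APIs from the Handlebars package installed earlier; rendering a template doesn't require an AI service, so an empty kernel is enough:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

class RenderPreview
{
    public static async Task<string> PreviewAsync()
    {
        // A kernel with no AI services registered is enough to render a template
        var kernel = Kernel.CreateBuilder().Build();

        var config = new PromptTemplateConfig
        {
            Template = """
                       {{#each chatHistory}}
                           <message role="{{role}}">{{content}}</message>
                       {{/each}}
                       <message role="user">{{request}}</message>
                       """,
            TemplateFormat = "handlebars"
        };

        // Render the template with sample arguments to inspect the final prompt
        var template = new HandlebarsPromptTemplateFactory().Create(config);
        return await template.RenderAsync(kernel, new()
        {
            { "chatHistory", new ChatHistory("You are a helpful assistant.") },
            { "request", "Thanks, I'm done for now" }
        });
    }

    static async Task Main() => Console.WriteLine(await PreviewAsync());
}
```

Printing the rendered string makes it easy to confirm that the loops expanded the chat history and few-shot examples the way you intended before any tokens are spent on a model call.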

Finally, you can run the prompt using the kernel. Add the following code within your main chat loop so the loop can be terminated once the intent is EndConversation.

ChatHistory history = [];

// Start the chat loop
while (true)
{
    // Get user input
    Console.Write("User > ");
    var request = Console.ReadLine();

    // Invoke prompt
    var intent = await kernel.InvokeAsync(
        getIntent,
        new()
        {
            { "request", request },
            { "choices", choices },
            { "chatHistory", history },
            { "fewShotExamples", fewShotExamples }
        }
    );

    // End the chat if the intent is "EndConversation"
    if (intent.ToString() == "EndConversation")
    {
        break;
    }

    // Get chat response
    var chatResult = kernel.InvokeStreamingAsync<StreamingChatMessageContent>(
        chat,
        new()
        {
            { "request", request },
            { "history", string.Join("\n", history.Select(x => x.Role + ": " + x.Content)) }
        }
    );

    // Stream the response
    string message = "";
    await foreach (var chunk in chatResult)
    {
        if (chunk.Role.HasValue)
        {
            Console.Write(chunk.Role + " > ");
        }

        message += chunk;
        Console.Write(chunk);
    }
    Console.WriteLine();

    // Append to history
    history.AddUserMessage(request!);
    history.AddAssistantMessage(message);
}

Take the next step

Now that you can templatize your prompts, you can learn how to call functions from within a prompt to help break it up into smaller pieces.