Calling functions within a prompt

In the previous article we demonstrated how to templatize a prompt to make it more reusable. In this article, we'll show you how to call other functions within a prompt to help break up the prompt into smaller pieces. This helps keep LLMs focused on a single task, helps avoid hitting token limits, and allows you to add native code directly into your prompt.

If you want to see the final solution, you can check out the following samples in the public documentation repository. Use the link to the previous solution if you want to follow along.

| Language | Link to previous solution | Link to final solution |
| --- | --- | --- |
| C# | Open solution in GitHub | Open solution in GitHub |
| Java | — | Open solution in GitHub |
| Python | Open solution in GitHub | Open solution in GitHub |

Calling a nested function

In the previous example, we created a prompt that chats with the user. This function used the previous conversation history to determine what the agent should say next.

Putting the entire history into a single prompt, however, may result in using too many tokens. To avoid this, we can summarize the conversation history before including it in the prompt. To do this, we can leverage the ConversationSummaryPlugin that's part of the core plugins package.

Below, we show how to update the original prompt to use the SummarizeConversation function in the ConversationSummaryPlugin to condense the conversation history before generating a response.

var chat = kernel.CreateFunctionFromPrompt(
    @"{{ConversationSummaryPlugin.SummarizeConversation $history}}
User: {{$request}}
Assistant: "
);

Testing the updated prompt

After adding the nested function, you must load the plugin that contains it into the kernel; otherwise the template engine cannot resolve the call.

var builder = Kernel.CreateBuilder()
                    .AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);
builder.Plugins.AddFromType<ConversationSummaryPlugin>();
Kernel kernel = builder.Build();

Afterwards, we can test the prompt by creating a chat loop that makes the history progressively longer.
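The loop below also reuses the getIntent function along with the choices and fewShotExamples variables from the previous article. As a reminder, a minimal sketch of those two inputs (shapes assumed from the previous article) looks like this:

```csharp
// The intents the model can choose from (defined in the previous article)
List<string> choices = ["ContinueConversation", "EndConversation"];

// Few-shot examples demonstrating each intent (defined in the previous article)
List<ChatHistory> fewShotExamples =
[
    [
        new ChatMessageContent(AuthorRole.User, "Can you send a very quick approval to the marketing team?"),
        new ChatMessageContent(AuthorRole.System, "Intent:"),
        new ChatMessageContent(AuthorRole.Assistant, "ContinueConversation")
    ],
    [
        new ChatMessageContent(AuthorRole.User, "Thanks, I'm done for now"),
        new ChatMessageContent(AuthorRole.System, "Intent:"),
        new ChatMessageContent(AuthorRole.Assistant, "EndConversation")
    ]
];
```

These are passed into the prompt as template variables in the loop below; the exact values are not important as long as the choices match the intents your examples demonstrate.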

// Create chat history
ChatHistory history = [];

// Start the chat loop
while (true)
{
    // Get user input
    Console.Write("User > ");
    var request = Console.ReadLine();

    // Invoke handlebars prompt
    var intent = await kernel.InvokeAsync(
        getIntent,
        new()
        {
            { "request", request },
            { "choices", choices },
            { "history", history },
            { "fewShotExamples", fewShotExamples }
        }
    );

    // End the chat if the intent is "EndConversation"
    if (intent.ToString() == "EndConversation")
    {
        break;
    }

    // Get chat response
    var chatResult = kernel.InvokeStreamingAsync<StreamingChatMessageContent>(
        chat,
        new()
        {
            { "request", request },
            { "history", string.Join("\n", history.Select(x => x.Role + ": " + x.Content)) }
        }
    );

    // Stream the response
    string message = "";
    await foreach (var chunk in chatResult)
    {
        if (chunk.Role.HasValue)
        {
            Console.Write(chunk.Role + " > ");
        }
        message += chunk;
        Console.Write(chunk);
    }
    Console.WriteLine();

    // Append to history
    history.AddUserMessage(request!);
    history.AddAssistantMessage(message);
}

Calling nested functions in Handlebars

In the previous article, we showed how to use the Handlebars template engine to create the getIntent prompt. In this article, we'll show you how to update that prompt to call the same nested function.

Similar to the previous example, we can use the SummarizeConversation function to summarize the conversation history before asking for the intent. The only difference is that Handlebars syntax requires a hyphen (-) between the plugin name and the function name instead of a period (.).

var getIntent = kernel.CreateFunctionFromPrompt(
    new()
    {
        Template = """
                    <message role="system">Instructions: What is the intent of this request?
                    Do not explain the reasoning, just reply back with the intent. If you are unsure, reply with {{choices.[0]}}.
                    Choices: {{choices}}.</message>

                    {{#each fewShotExamples}}
                        {{#each this}}
                            <message role="{{role}}">{{content}}</message>
                        {{/each}}
                    {{/each}}

                    {{ConversationSummaryPlugin-SummarizeConversation history}}

                    <message role="user">{{request}}</message>
                    <message role="system">Intent:</message>
                    """,
        TemplateFormat = "handlebars"
    },
    new HandlebarsPromptTemplateFactory()
);