Prompting AI models with Semantic Kernel

Prompts are central to getting accurate results from AI models. In this article, we'll demonstrate how to use common prompt engineering techniques while using Semantic Kernel.

If you want to see the final solution to this tutorial, you can check out the following samples in the public documentation repository.

Language   Link to final solution
C#         Open example in GitHub
Java       Open example in GitHub
Python     Open example in GitHub

Creating a prompt that detects the intent of a user

If you've ever used ChatGPT or Microsoft Copilot, you're already familiar with prompting. Given a request, an LLM will attempt to predict the most likely response. For example, if you sent the prompt "I want to go to the ", an AI service might return "beach" to complete the sentence. This is a very simple example, but it demonstrates the basic idea of how text generation prompts work.

With the Semantic Kernel SDK, you can easily run prompts from your own applications, which lets you leverage the power of AI models directly in your code.

One common scenario is detecting the intent of a user so that you can trigger automation afterwards. In this article, we'll show how you can create a prompt that detects a user's intent. Additionally, we'll demonstrate how to progressively improve the prompt with prompt engineering techniques.

Tip

Many of the recommendations in this article are based on the Prompt Engineering Guide. If you want to become an expert at writing prompts, we highly recommend reading it and leveraging their prompt engineering techniques.

Running your first prompt with Semantic Kernel

If we wanted an AI to detect the intent of a user's input, we could simply ask what the intent is. In Semantic Kernel, we could create a string that does just that with the following code:

Console.Write("Your request: ");
string request = Console.ReadLine()!;
string prompt = $"What is the intent of this request? {request}";

To run this prompt, we now need to create a kernel with an AI service.

using Microsoft.SemanticKernel;

Kernel kernel = Kernel.CreateBuilder()
                      .AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey)
                      .Build();

Finally, we can invoke our prompt using our new kernel.

Console.WriteLine(await kernel.InvokePromptAsync(prompt));

If we run this code with the input "I want to send an email to the marketing team celebrating their recent milestone.", we should get an output that looks like the following:

The intent of this request is to seek guidance or clarification on how to effectively compose an email to the marketing team in order to celebrate their recent milestone.

Improving the prompt with prompt engineering

While this prompt "works", it's not very usable since you cannot use the result to predictably trigger automation. Every time you run the prompt, you may get a very different response.

To make the result more predictable, we can perform the following improvements:

  1. Make the prompt more specific
  2. Add structure to the output with formatting
  3. Provide examples with few-shot prompting
  4. Tell the AI what to do to avoid doing something wrong
  5. Provide context to the AI
  6. Use message roles in chat completion prompts
  7. Give your AI words of encouragement

1) Make the prompt more specific

The first thing we can do is be more specific with our prompt. Instead of just asking "What is the intent of this request?", we can provide the AI with a list of intents to choose from. This will make the prompt more predictable since the AI will only be able to choose from the list of intents we provide.

prompt = @$"What is the intent of this request? {request}
You can choose between SendEmail, SendMessage, CompleteTask, CreateDocument.";

Now when you run the prompt with the same input, you should get a more usable result, but it's still not perfect since the AI responds with additional information.

The intent of the request is to send an email. Therefore, the appropriate action would be to use the SendEmail function.

2) Add structure to the output with formatting

While the result is more predictable, there's a chance that the LLM responds in such a way that you cannot easily parse the result. For example, if the LLM responded with "The intent is SendEmail", you may have a hard time extracting the intent since it's not in a predictable location.

To make the result more predictable, we can add structure to the prompt by using formatting. In this case, we can define the different parts of our prompt like so:

prompt = @$"Instructions: What is the intent of this request?
Choices: SendEmail, SendMessage, CompleteTask, CreateDocument.
User Input: {request}
Intent: ";

By using this formatting, the AI is less likely to include anything beyond the intent in its response.

In other prompts, you may also want to experiment with Markdown, XML, JSON, YAML, or other formats to add structure to your prompts and their outputs. Since LLMs tend to generate text that looks like the prompt, it's recommended that you use the same format for both the prompt and the expected output.

For example, if you wanted the LLM to generate a JSON object, you could use the following prompt:

prompt = $$"""
         ## Instructions
         Provide the intent of the request using the following format:
         
         ```json
         {
             "intent": {intent}
         }
         ```
         
         ## Choices
         You can choose between the following intents:
         
         ```json
         ["SendEmail", "SendMessage", "CompleteTask", "CreateDocument"]
         ```
         
         ## User Input
         The user input is:
         
         ```json
         {
             "request": "{{request}}"
         }
         ```
         
         ## Intent
         """;

This would result in the following output:

{
    "intent": "SendEmail"
}
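Once the model replies in JSON, extracting the intent becomes a simple parsing step that downstream automation can rely on. A small sketch, shown in Python for brevity (the `response` variable stands in for whatever your kernel invocation returned):

```python
import json

# A reply shaped like the JSON the prompt asks the model to produce.
response = """
{
    "intent": "SendEmail"
}
"""

intent = json.loads(response)["intent"]
print(intent)  # SendEmail
```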

3) Provide examples with few-shot prompting

So far, we've been using zero-shot prompting, which means we're not providing any examples to the AI. While this is fine for getting started, it's not recommended for more complex scenarios, since the AI may not have enough training data to generate the correct result.

To add examples, we can use few-shot prompting. With few-shot prompting, we provide the AI with a few examples of what we want it to do. For example, we could provide the following examples to help the AI distinguish between sending an email and sending an instant message.

prompt = @$"Instructions: What is the intent of this request?
Choices: SendEmail, SendMessage, CompleteTask, CreateDocument.

User Input: Can you send a very quick approval to the marketing team?
Intent: SendMessage

User Input: Can you send the full update to the marketing team?
Intent: SendEmail

User Input: {request}
Intent: ";
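Keeping the few-shot examples in a data structure and splicing them into the prompt makes them easy to maintain as you collect more. A sketch, shown in Python for brevity; the `examples` list and `build_prompt` helper are our own illustration, not part of Semantic Kernel:

```python
# Each example pairs a user input with the intent we want the model to emit.
examples = [
    ("Can you send a very quick approval to the marketing team?", "SendMessage"),
    ("Can you send the full update to the marketing team?", "SendEmail"),
]

def build_prompt(request: str) -> str:
    """Splice the few-shot examples into the prompt ahead of the real request."""
    shots = "\n\n".join(
        f"User Input: {text}\nIntent: {intent}" for text, intent in examples
    )
    return (
        "Instructions: What is the intent of this request?\n"
        "Choices: SendEmail, SendMessage, CompleteTask, CreateDocument.\n\n"
        f"{shots}\n\n"
        f"User Input: {request}\nIntent: "
    )

print(build_prompt("Can you message the design team?"))
```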

4) Tell the AI what to do to avoid doing something wrong

Often when an AI starts responding incorrectly, it's tempting to simply tell it to stop doing something. Unfortunately, this can lead to the AI doing something even worse. For example, if you told the AI to stop returning a hallucinated intent, it might start returning an intent that is completely unrelated to the user's request.

Instead, it's recommended that you tell the AI what it should do. For example, if you wanted to keep the AI from returning a hallucinated intent, you might write the following prompt.

prompt = $"""
         Instructions: What is the intent of this request?
         If you don't know the intent, don't guess; instead respond with "Unknown".
         Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.

         User Input: Can you send a very quick approval to the marketing team?
         Intent: SendMessage

         User Input: Can you send the full update to the marketing team?
         Intent: SendEmail

         User Input: {request}
         Intent: 
         """;
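With "Unknown" added to the choices, downstream code can route the parsed intent to a handler and fall back safely instead of acting on a bad guess. A sketch, shown in Python for brevity; the handler names and messages are hypothetical:

```python
def handle_unknown(request: str) -> str:
    # Safe fallback instead of acting on a hallucinated intent.
    return f"Sorry, I couldn't determine what to do with: {request}"

handlers = {
    "SendEmail": lambda req: f"Drafting an email for: {req}",
    "SendMessage": lambda req: f"Sending a message for: {req}",
}

def dispatch(intent: str, request: str) -> str:
    """Route a parsed intent to its handler, falling back for unknown intents."""
    return handlers.get(intent, handle_unknown)(request)

print(dispatch("SendEmail", "celebrate the milestone"))  # Drafting an email for: celebrate the milestone
print(dispatch("Unknown", "do something odd"))
```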

5) Provide context to the AI

In some cases, you may want to provide the AI with context so it can better understand the user's request. This is particularly important for long-running chat scenarios where the intent of the user may depend on context from previous messages.

Take, for example, the following conversation:

User: I hate sending emails, no one ever reads them.
AI: I'm sorry to hear that. Messages may be a better way to communicate.
User: I agree, can you send the full status update to the marketing team that way?

If the AI was only given the last message, it may incorrectly respond with "SendEmail" instead of "SendMessage". However, if the AI was given the entire conversation, it may be able to understand the intent of the user.

To provide this context, we can simply add the previous messages to the prompt. For example, we could update our prompt to look like the following:

string history = """
                 User input: I hate sending emails, no one ever reads them.
                 AI response: I'm sorry to hear that. Messages may be a better way to communicate.
                 """;

prompt = $"""
         Instructions: What is the intent of this request?
         If you don't know the intent, don't guess; instead respond with "Unknown".
         Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
         
         User Input: Can you send a very quick approval to the marketing team?
         Intent: SendMessage
         
         User Input: Can you send the full update to the marketing team?
         Intent: SendEmail
         
         {history}
         User Input: {request}
         Intent: 
         """;
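In a real chat loop, the history string grows one turn at a time. A minimal sketch of maintaining and rendering that history, shown in Python for brevity (the turn structure and labels are our own illustration):

```python
# Each turn is a (speaker label, text) pair appended as the conversation runs.
history: list[tuple[str, str]] = []

def record_turn(speaker: str, text: str) -> None:
    history.append((speaker, text))

def render_history() -> str:
    """Render the turns in the same "label: text" shape used in the prompt."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in history)

record_turn("User input", "I hate sending emails, no one ever reads them.")
record_turn("AI response", "I'm sorry to hear that. Messages may be a better way to communicate.")
print(render_history())
```

For long conversations, you would typically cap or summarize the history so the prompt stays within the model's context window.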

6) Use message roles in chat completion prompts

As your prompts become more complex, you may want to use message roles to help the AI differentiate between system instructions, user input, and AI responses. This is particularly important as we start to add the chat history to the prompt. The AI should know that some of the previous messages were sent by itself and not the user.

In Semantic Kernel, a special syntax is used to define message roles. To define a message role, you simply wrap the message in a <message> tag with the role name as an attribute. This is currently only available in the C# and Java SDKs.

history = """
          <message role="user">I hate sending emails, no one ever reads them.</message>
          <message role="assistant">I'm sorry to hear that. Messages may be a better way to communicate.</message>
          """;

prompt = $"""
         <message role="system">Instructions: What is the intent of this request?
         If you don't know the intent, don't guess; instead respond with "Unknown".
         Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.</message>
         
         <message role="user">Can you send a very quick approval to the marketing team?</message>
         <message role="system">Intent:</message>
         <message role="assistant">SendMessage</message>
         
         <message role="user">Can you send the full update to the marketing team?</message>
         <message role="system">Intent:</message>
         <message role="assistant">SendEmail</message>
         
         {history}
         <message role="user">{request}</message>
         <message role="system">Intent:</message>
         """;
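To see what the role syntax conveys, here is a small sketch that parses <message> tags into (role, content) pairs with a regular expression; this is shown in Python for illustration only and is not the SDK's actual parser:

```python
import re

# Matches <message role="...">...</message>, capturing the role and the content.
MESSAGE_RE = re.compile(r'<message role="(\w+)">(.*?)</message>', re.DOTALL)

def parse_messages(prompt: str) -> list[tuple[str, str]]:
    return [(role, content.strip()) for role, content in MESSAGE_RE.findall(prompt)]

sample = """
<message role="user">I hate sending emails, no one ever reads them.</message>
<message role="assistant">I'm sorry to hear that.</message>
"""

print(parse_messages(sample))
```

Each (role, content) pair corresponds to one message in the chat completion request, which is how the AI distinguishes its own prior replies from the user's.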

7) Give your AI words of encouragement

Finally, research has shown that giving your AI words of encouragement can help it perform better. For example, offering a bonus or reward for a correct answer can improve the model's output.

history = """
          <message role="user">I hate sending emails, no one ever reads them.</message>
          <message role="assistant">I'm sorry to hear that. Messages may be a better way to communicate.</message>
          """;

prompt = $"""
         <message role="system">Instructions: What is the intent of this request?
         If you don't know the intent, don't guess; instead respond with "Unknown".
         Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
         Bonus: You'll get $20 if you get this right.</message>
        
         <message role="user">Can you send a very quick approval to the marketing team?</message>
         <message role="system">Intent:</message>
         <message role="assistant">SendMessage</message>
        
         <message role="user">Can you send the full update to the marketing team?</message>
         <message role="system">Intent:</message>
         <message role="assistant">SendEmail</message>
        
         {history}
         <message role="user">{request}</message>
         <message role="system">Intent:</message>
         """;

Next steps

Now that you know how to write prompts, you can learn how to templatize them to make them more flexible and powerful.