Understanding the kernel in Semantic Kernel


Similar to the kernel of an operating system, the kernel in Semantic Kernel is responsible for managing the resources necessary to run "code" in an AI application. This includes managing the configuration, services, and plugins that native code and AI services need to run together.

If you want to see the code demonstrated in this article in a complete solution, check out the following samples in the public documentation repository.

Language | Link to final solution
C#       | Open solution in GitHub
Python   | Open solution in GitHub

Using native and AI services together

Semantic Kernel makes it easy to run AI services alongside native code by treating calls to AI services as first-class citizens called "semantic functions."

Semantic functions are discussed more deeply in their own section; for now, it is enough to know that native and semantic functions behave identically within Semantic Kernel because both are expressed as SKFunction objects. Since they share this representation, the kernel can trigger either kind of function in the same way.

This matters because it maximizes interoperability: native and semantic functions can be mixed freely within the kernel.

For example, the image below shows how the kernel can run native and semantic functions together in a single pipeline. This allows a developer to use Semantic Kernel to 1) get the current time, 2) generate a poem about the time, and 3) translate that poem into a different language.

SKFunctions run inside the kernel
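As a sketch, that pipeline might look like the following. The `WriterPlugin` directory and its `ShortPoem` and `Translate` functions are illustrative assumptions, and the exact import methods vary by SDK version:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.Core;

// Note: a kernel running semantic functions also needs an AI service
// configured, as described later in this article.
var kernel = Kernel.Builder.Build();

// Import a native plugin and (hypothetically) a directory of semantic functions.
var time = kernel.ImportFunctions(new TimePlugin());
var writer = kernel.ImportSemanticFunctionsFromDirectory("./plugins", "WriterPlugin");

// RunAsync executes the functions in order, piping each result into the next:
// 1) get the current time, 2) write a poem about it, 3) translate the poem.
var result = await kernel.RunAsync(
    time["Today"],
    writer["ShortPoem"],
    writer["Translate"]
);

Console.WriteLine(result);
```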

Running functions using the kernel

To run SKFunction objects, Semantic Kernel provides the RunAsync method on the Kernel class. This method takes one or more SKFunction objects and executes them sequentially. For example, the following code runs a single native function from the Time plugin and returns the result:

Import the necessary packages:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.Core;

Run the today function from the time plugin:

var kernel = Kernel.Builder
    .Build();
var time = kernel.ImportFunctions(new TimePlugin());
var result = await kernel.RunAsync(time["Today"]);

Console.WriteLine(result);

After running this, you should see today's date printed to the console.

Creating the kernel runtime environment

To run anything more complex than a simple native function, however, you must ensure the kernel's runtime is appropriately configured. This is particularly important for semantic functions that require access to AI services.

Runtime properties managed by the kernel

If you investigate the constructor of the Kernel class, you will see that you can configure several settings that are necessary to run both native and semantic functions. These include:

  • The default AI service that will power your semantic functions.
  • The template engine used to render prompt templates.
  • The logger used to log messages from functions.
  • The plugins available to be executed by the kernel.
  • Additional configuration used by the kernel via the KernelConfig class.

Configuring the kernel

Depending on your language of choice, you can configure the kernel in different ways. For example, in C#, you can use the Kernel.Builder class to create a kernel, whereas with Python, you can iteratively add properties to the Kernel object directly.

In the following examples, you can see how to add a chat completion service and a logger to the kernel.

If you are using Azure OpenAI, you can use the WithAzureChatCompletionService method.

var kernelWithConfiguration = Kernel.Builder
    .WithLoggerFactory(loggerFactory)
    .WithAzureChatCompletionService(
        AzureOpenAIDeploymentName,  // The name of your deployment (e.g., "gpt-35-turbo")
        AzureOpenAIEndpoint,        // The endpoint of your Azure OpenAI service
        AzureOpenAIApiKey           // The API key of your Azure OpenAI service
    )
    .Build();

If you are using OpenAI, you can use the WithOpenAIChatCompletionService method.

var kernelWithConfiguration = Kernel.Builder
    .WithLoggerFactory(loggerFactory)
    .WithOpenAIChatCompletionService(
        OpenAIModelId,              // The OpenAI model ID (e.g., "gpt-3.5-turbo")
        OpenAIApiKey,               // The API key of your OpenAI account
        OpenAIOrgId                 // Your OpenAI organization ID (optional)
    )
    .Build();
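
With a chat completion service configured, the kernel can run semantic functions as well as native ones. As a minimal sketch (the inline prompt here is an illustrative assumption, not part of the original sample):

```csharp
// Create a semantic function from an inline prompt template.
var poemFunction = kernelWithConfiguration.CreateSemanticFunction(
    "Write a two-line poem about {{$input}}.");

// RunAsync renders the prompt with the input and sends it
// to the configured chat completion service.
var poem = await kernelWithConfiguration.RunAsync("the sea", poemFunction);

Console.WriteLine(poem);
```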

Going further with the kernel

For more details on how to configure and leverage these properties, please refer to the following articles:

Article                   | Description
Adding services           | Learn how to add services from OpenAI, Azure OpenAI, Hugging Face, and more to the kernel.
Adding telemetry and logs | Gain visibility into what Semantic Kernel is doing by adding telemetry to the kernel.

Next steps

Once you're done configuring the kernel, you can start creating custom functions by developing your own plugins.