In this article, you explore essential prompt engineering concepts. Many AI models are prompt-based, meaning they respond to user input text (a prompt) with a response generated by predictive algorithms (a completion). Newer models also often support completions in chat form, with messages based on roles (system, user, assistant) and chat history to preserve conversations.
Consider this text generation example where prompt is the user input and completion is the model output:
Prompt: "The president who served the shortest term was "
Completion: "Pedro Lascurain."
The completion appears correct, but what if your app is supposed to help U.S. history students? Pedro Lascurain's 45-minute term is the shortest of any president, but he served as president of Mexico. U.S. history students are probably looking for "William Henry Harrison." Clearly, the app could be more helpful to its intended users if you gave it some context.
Prompt engineering adds context to the prompt by providing instructions, examples, and cues to help the model produce better completions.
Models that support text generation often don't require any specific format, but you should organize your prompts so it's clear what's an instruction and what's an example. Models that support chat-based apps use three roles to organize completions: a system role that controls the chat, a user role to represent user input, and an assistant role for responding to users. Divide your prompts into messages for each role.
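For example, you can divide a prompt into role messages with Semantic Kernel's ChatHistory type, which is used later in this article. The message text below is illustrative:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// One message per role: the system message controls the chat, the user
// message carries user input, and the assistant message records the model's
// reply so the conversation is preserved in the chat history.
var chatHistory = new ChatHistory();
chatHistory.Add(new ChatMessageContent(AuthorRole.System,
    "You are a helpful assistant for U.S. history students."));
chatHistory.Add(new ChatMessageContent(AuthorRole.User,
    "The president who served the shortest term was "));
chatHistory.Add(new ChatMessageContent(AuthorRole.Assistant,
    "William Henry Harrison, who died 31 days into his term."));
```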
An instruction is text that tells the model how to respond. An instruction can be a directive or an imperative. Imperatives are direct commands, such as "Create a list of US presidents." Directives are more open-ended and flexible than imperatives; they guide the model rather than command it, such as "A chronological list of US presidents would be helpful."
You can provide content to add more context to instructions.
Primary content is text that you want the model to process with an instruction. Whatever action the instruction entails, the model will perform it on the primary content to produce a completion.
Supporting content is text that you refer to in an instruction, but which isn't the target of the instruction. The model uses the supporting content to complete the instruction, which means that supporting content also appears in completions, typically as some kind of structure (such as in headings or column labels).
Use labels with your instructional content to help the model figure out how to use it with the instruction. Don't worry too much about precision—labels don't have to match instructions exactly because the model will handle things like word form and capitalization.
Suppose you use the instruction "Summarize US Presidential accomplishments" to produce a list. The model might organize and order it in any number of ways. But what if you want the list to group the accomplishments by a specific set of categories? Use supporting content to add that information to the instruction.
Adjust your instruction so the model groups by category, and append supporting content that specifies those categories:
prompt = """
Instructions: Summarize US Presidential accomplishments, grouped by category.
Categories: Domestic Policy, US Economy, Foreign Affairs, Space Exploration.
Accomplishments: 'George Washington
- First president of the United States.
- First president to have been a military veteran.
- First president to be elected to a second term in office.
- Received votes from every presidential elector in an election.
- Filled the entire body of United States federal judges, including the Supreme Court.
- First president to be declared an honorary citizen of a foreign country, and an honorary citizen of France.
John Adams ...' ///Text truncated
""";
An example is text that shows the model how to respond by providing sample user input and model output. The model uses examples to infer what to include in completions. Examples can come either before or after the instructions in an engineered prompt, but the two shouldn't be interspersed.
An example starts with a prompt and can optionally include a completion. A completion in an example doesn't have to include the verbatim response—it might just contain a formatted word, the first bullet in an unordered list, or something similar to indicate how each completion should start.
Examples are classified as zero-shot learning or few-shot learning based on whether they contain verbatim completions.
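For instance, a few-shot prompt pairs sample questions with verbatim completions, while a zero-shot version would end after the instruction. The question-and-answer pairs below are illustrative, in the same raw string style as the other prompts in this article:

```csharp
prompt = """
Instructions: Answer questions about US presidential history for students.

Q: Which president served the shortest term?
A: William Henry Harrison, who served 31 days.

Q: Which president served the longest?
A: Franklin D. Roosevelt, who served just over 12 years.

Q: Which president was the first to live in the White House?
A:
""";
```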
A cue is text that conveys the desired structure or format of output. Like an instruction, a cue isn't processed by the model as if it were user input. Like an example, a cue shows the model what you want instead of telling it what to do. You can add as many cues as you want, so you can iterate to get the result you want. Cues are used with an instruction or an example and should be at the end of the prompt.
Suppose you use an instruction to tell the model to produce a list of presidential accomplishments by category, along with supporting content that tells the model what categories to use. You decide that you want a nested list: category names in all caps, each president's accomplishments in a category listed on one line that begins with the president's name, and presidents listed chronologically. After your instruction and supporting content, you could add three cues to show the model how to structure and format the list:
prompt = """
Instructions: Summarize US Presidential accomplishments, grouped by category.
Categories: Domestic Policy, US Economy, Foreign Affairs, Space Exploration.
Accomplishments: George Washington
First president of the United States.
First president to have been a military veteran.
First president to be elected to a second term in office.
First president to receive votes from every presidential elector in an election.
First president to fill the entire body of United States federal judges, including the Supreme Court.
First president to be declared an honorary citizen of a foreign country, and an honorary citizen of France.
John Adams ... /// Text truncated
DOMESTIC POLICY
- George Washington:
- John Adams:
""";
.NET provides various tools to prompt and chat with different AI models. Use Semantic Kernel to connect to a wide variety of AI models and services, or use other SDKs such as the official OpenAI .NET library. Semantic Kernel includes tools to create prompts with different roles and to maintain chat history, among many other features.
Consider the following code example:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Create a kernel with OpenAI chat completion
#pragma warning disable SKEXP0010
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "phi3:mini",
        endpoint: new Uri("http://localhost:11434"),
        apiKey: "")
    .Build();

var aiChatService = kernel.GetRequiredService<IChatCompletionService>();
var chatHistory = new ChatHistory();
chatHistory.Add(
    new ChatMessageContent(AuthorRole.System, "You are a helpful AI Assistant."));

while (true)
{
    // Get user prompt and add to chat history
    Console.WriteLine("Your prompt:");
    chatHistory.Add(new ChatMessageContent(AuthorRole.User, Console.ReadLine()));

    // Stream the AI response and add to chat history
    Console.WriteLine("AI Response:");
    var response = "";
    await foreach (var item in
        aiChatService.GetStreamingChatMessageContentsAsync(chatHistory))
    {
        Console.Write(item.Content);
        response += item.Content;
    }
    chatHistory.Add(new ChatMessageContent(AuthorRole.Assistant, response));
    Console.WriteLine();
}
The preceding code provides examples of the following concepts:

- A system message, created with AuthorRole.System, that controls the chat.
- User messages, created with AuthorRole.User, and assistant messages, created with AuthorRole.Assistant, added to a ChatHistory to preserve the conversation.
- Streaming completions with GetStreamingChatMessageContentsAsync.

You can also increase the power of your prompts with more advanced prompt engineering techniques that are covered in depth in their own articles.