The Microsoft Agent Framework makes it easy to add Retrieval Augmented Generation (RAG) capabilities to an agent by adding an AI Context Provider to the agent.
For conversation/session patterns that pair with retrieval, see the Conversations & Memory overview.
Using the TextSearchProvider
The TextSearchProvider class is an out-of-the-box implementation of a RAG context provider.
It can easily be attached to a ChatClientAgent via the AIContextProviders option to give the agent RAG capabilities.
// Configure the options for the TextSearchProvider.
TextSearchProviderOptions textSearchOptions = new()
{
    SearchTime = TextSearchProviderOptions.TextSearchBehavior.BeforeAIInvoke,
};

// Create the AI agent with the TextSearchProvider as the AI context provider.
AIAgent agent = azureOpenAIClient
    .GetChatClient(deploymentName)
    .AsAIAgent(new ChatClientAgentOptions
    {
        ChatOptions = new() { Instructions = "You are a helpful support specialist. Answer questions using the provided context and cite the source document when available." },
        AIContextProviders = [new TextSearchProvider(SearchAdapter, textSearchOptions)]
    });
The TextSearchProvider requires a function that returns search results for a given query. It can be implemented with any search technology, for example Azure AI Search or a web search engine.
Here is an example of a mock search function that returns pre-defined results based on the query.
SourceName and SourceLink are optional, but when provided they are used by the agent to cite the source of the information when answering user questions.
static Task<IEnumerable<TextSearchProvider.TextSearchResult>> SearchAdapter(string query, CancellationToken cancellationToken)
{
    // The mock search inspects the user's question and returns pre-defined snippets
    // that resemble documents stored in an external knowledge source.
    List<TextSearchProvider.TextSearchResult> results = new();

    if (query.Contains("return", StringComparison.OrdinalIgnoreCase) || query.Contains("refund", StringComparison.OrdinalIgnoreCase))
    {
        results.Add(new()
        {
            SourceName = "Contoso Outdoors Return Policy",
            SourceLink = "https://contoso.com/policies/returns",
            Text = "Customers may return any item within 30 days of delivery. Items should be unused and include original packaging. Refunds are issued to the original payment method within 5 business days of inspection."
        });
    }

    return Task.FromResult<IEnumerable<TextSearchProvider.TextSearchResult>>(results);
}
TextSearchProvider Options
The TextSearchProvider can be customized via the TextSearchProviderOptions class. Here is an example of creating options that run the search before every model invocation and keep a short rolling window of conversation context.
TextSearchProviderOptions textSearchOptions = new()
{
    // Run the search prior to every model invocation and keep a short rolling window of conversation context.
    SearchTime = TextSearchProviderOptions.TextSearchBehavior.BeforeAIInvoke,
    RecentMessageMemoryLimit = 6,
};
The TextSearchProvider class supports the following options via the TextSearchProviderOptions class.

| Option | Type | Description | Default |
|---|---|---|---|
| SearchTime | TextSearchProviderOptions.TextSearchBehavior | Indicates when the search should be executed. There are two options: every time the agent is invoked, or on demand via function calling. | TextSearchProviderOptions.TextSearchBehavior.BeforeAIInvoke |
| FunctionToolName | string | The name of the search tool exposed when operating in on-demand mode. | "Search" |
| FunctionToolDescription | string | The description of the search tool exposed when operating in on-demand mode. | "Allows searching for additional information to help answer the user question." |
| ContextPrompt | string | The context prompt prepended to the results when operating in BeforeAIInvoke mode. | "## Additional Context\nConsider the following information from source documents when responding to the user:" |
| CitationsPrompt | string | Instructions appended after the results to request citations when operating in BeforeAIInvoke mode. | "Include citations to the source document with document name and link if document name and link is available." |
| ContextFormatter | Func<IList<TextSearchProvider.TextSearchResult>, string> | Optional delegate to fully customize how the result list is formatted when operating in BeforeAIInvoke mode. If provided, ContextPrompt and CitationsPrompt are ignored. | null |
| RecentMessageMemoryLimit | int | The number of recent conversation messages (user and assistant) to keep in memory and include when building the search input for BeforeAIInvoke searches. | 0 (disabled) |
| RecentMessageRolesIncluded | List<ChatRole> | The list of ChatRole types used to filter which recent messages are included when building the search input. | ChatRole.User |
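For example, the ContextFormatter option can replace the default ContextPrompt/CitationsPrompt output with your own formatting. The following is a minimal sketch based on the delegate signature shown in the table above; the formatter body is illustrative only and assumes a using directive for System.Linq.

TextSearchProviderOptions customFormattingOptions = new()
{
    SearchTime = TextSearchProviderOptions.TextSearchBehavior.BeforeAIInvoke,
    // When ContextFormatter is set, ContextPrompt and CitationsPrompt are ignored.
    ContextFormatter = results =>
    {
        var lines = results.Select(r => string.IsNullOrEmpty(r.SourceName)
            ? r.Text
            : $"{r.Text} (source: {r.SourceName})");
        return "## Knowledge base excerpts\n" + string.Join("\n---\n", lines);
    },
};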
Tip
See the .NET sample for a complete runnable example.
The Agent Framework supports using Semantic Kernel VectorStore collections to provide RAG capabilities to agents. This is achieved through bridging functionality that converts Semantic Kernel search functions into Agent Framework tools.
Creating a Search Tool from a VectorStore
The create_search_function method on a Semantic Kernel VectorStore collection returns a KernelFunction that can be converted to an Agent Framework tool using .as_agent_framework_tool().
See the vector store connector documentation to learn how to set up the different vector store collections.
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.connectors.azure_ai_search import AzureAISearchCollection
from semantic_kernel.functions import KernelParameterMetadata

from agent_framework.openai import OpenAIResponsesClient


# Define your data model
class SupportArticle:
    article_id: str
    title: str
    content: str
    category: str
    # ... other fields


# Create an Azure AI Search collection
collection = AzureAISearchCollection[str, SupportArticle](
    record_type=SupportArticle,
    embedding_generator=OpenAITextEmbedding(),
)

async with collection:
    await collection.ensure_collection_exists()
    # Load your knowledge base articles into the collection
    # await collection.upsert(articles)

    # Create a search function from the collection
    search_function = collection.create_search_function(
        function_name="search_knowledge_base",
        description="Search the knowledge base for support articles and product information.",
        search_type="keyword_hybrid",
        parameters=[
            KernelParameterMetadata(
                name="query",
                description="The search query to find relevant information.",
                type="str",
                is_required=True,
                type_object=str,
            ),
            KernelParameterMetadata(
                name="top",
                description="Number of results to return.",
                type="int",
                default_value=3,
                type_object=int,
            ),
        ],
        string_mapper=lambda x: f"[{x.record.category}] {x.record.title}: {x.record.content}",
    )

    # Convert the search function to an Agent Framework tool
    search_tool = search_function.as_agent_framework_tool()

    # Create an agent with the search tool
    agent = OpenAIResponsesClient(model_id="gpt-4o").as_agent(
        instructions="You are a helpful support specialist. Use the search tool to find relevant information before answering questions. Always cite your sources.",
        tools=search_tool,
    )

    # Use the agent with RAG capabilities
    response = await agent.run("How do I return a product?")
    print(response.text)
Important
This feature requires semantic-kernel version 1.38 or higher.
Customizing Search Behavior
You can customize the search function with various options:
# Create a search function with filtering and custom formatting
search_function = collection.create_search_function(
    function_name="search_support_articles",
    description="Search for support articles in specific categories.",
    search_type="keyword_hybrid",
    # Apply filters to restrict search scope
    filter=lambda x: x.is_published == True,
    parameters=[
        KernelParameterMetadata(
            name="query",
            description="What to search for in the knowledge base.",
            type="str",
            is_required=True,
            type_object=str,
        ),
        KernelParameterMetadata(
            name="category",
            description="Filter by category: returns, shipping, products, or billing.",
            type="str",
            type_object=str,
        ),
        KernelParameterMetadata(
            name="top",
            description="Maximum number of results to return.",
            type="int",
            default_value=5,
            type_object=int,
        ),
    ],
    # Customize how results are formatted for the agent
    string_mapper=lambda x: f"Article: {x.record.title}\nCategory: {x.record.category}\nContent: {x.record.content}\nSource: {x.record.article_id}",
)
For full details on the parameters available for create_search_function, see the Semantic Kernel documentation.
Using Multiple Search Functions
You can provide an agent with multiple search tools for different knowledge domains:
# Create search functions for different knowledge bases
product_search = product_collection.create_search_function(
    function_name="search_products",
    description="Search for product information and specifications.",
    search_type="semantic_hybrid",
    string_mapper=lambda x: f"{x.record.name}: {x.record.description}",
).as_agent_framework_tool()

policy_search = policy_collection.create_search_function(
    function_name="search_policies",
    description="Search for company policies and procedures.",
    search_type="keyword_hybrid",
    string_mapper=lambda x: f"Policy: {x.record.title}\n{x.record.content}",
).as_agent_framework_tool()

# Create an agent with multiple search tools
agent = chat_client.as_agent(
    instructions="You are a support agent. Use the appropriate search tool to find information before answering. Cite your sources.",
    tools=[product_search, policy_search],
)
You can also create multiple search functions from the same collection, with different descriptions and parameters, to provide specialized search capabilities:
# Create multiple search functions from the same collection

# Generic search for broad queries
general_search = support_collection.create_search_function(
    function_name="search_all_articles",
    description="Search all support articles for general information.",
    search_type="semantic_hybrid",
    parameters=[
        KernelParameterMetadata(
            name="query",
            description="The search query.",
            type="str",
            is_required=True,
            type_object=str,
        ),
    ],
    string_mapper=lambda x: f"{x.record.title}: {x.record.content}",
).as_agent_framework_tool()

# Detailed lookup for specific article IDs
detail_lookup = support_collection.create_search_function(
    function_name="get_article_details",
    description="Get detailed information for a specific article by its ID.",
    search_type="keyword",
    top=1,
    parameters=[
        KernelParameterMetadata(
            name="article_id",
            description="The specific article ID to retrieve.",
            type="str",
            is_required=True,
            type_object=str,
        ),
    ],
    string_mapper=lambda x: f"Title: {x.record.title}\nFull Content: {x.record.content}\nLast Updated: {x.record.updated_date}",
).as_agent_framework_tool()

# Create an agent with both search functions
agent = chat_client.as_agent(
    instructions="You are a support agent. Use search_all_articles for general queries and get_article_details when you need full details about a specific article.",
    tools=[general_search, detail_lookup],
)
This approach lets the agent choose the most appropriate search strategy based on the user's query, as illustrated in the sketch below.
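As a quick illustration, reusing the agent and the two tools defined just above (the questions and the article ID are made up for this example), you could exercise both tools like this:

# Broad question - the agent is expected to call search_all_articles.
response = await agent.run("What do your support articles say about shipping delays?")
print(response.text)

# Question about one specific article - the agent is expected to call get_article_details.
# "KB-1042" is a fictitious ID used only for illustration.
response = await agent.run("Give me the full details of article KB-1042.")
print(response.text)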
Full example
# Copyright (c) Microsoft. All rights reserved.
import asyncio
from collections.abc import MutableSequence, Sequence
from typing import Any

from agent_framework import Agent, BaseContextProvider, Context, Message, SupportsChatGetResponse
from agent_framework.azure import AzureAIClient
from azure.identity.aio import AzureCliCredential
from pydantic import BaseModel


class UserInfo(BaseModel):
    name: str | None = None
    age: int | None = None


class UserInfoMemory(BaseContextProvider):
    def __init__(self, client: SupportsChatGetResponse, user_info: UserInfo | None = None, **kwargs: Any):
        """Create the memory.

        If you pass in kwargs, they will be attempted to be used to create a UserInfo object.
        """
        self._chat_client = client
        if user_info:
            self.user_info = user_info
        elif kwargs:
            self.user_info = UserInfo.model_validate(kwargs)
        else:
            self.user_info = UserInfo()

    async def invoked(
        self,
        request_messages: Message | Sequence[Message],
        response_messages: Message | Sequence[Message] | None = None,
        invoke_exception: Exception | None = None,
        **kwargs: Any,
    ) -> None:
        """Extract user information from messages after each agent call."""
        # Check if we need to extract user info from user messages
        user_messages = [msg for msg in request_messages if hasattr(msg, "role") and msg.role == "user"]  # type: ignore
        if (self.user_info.name is None or self.user_info.age is None) and user_messages:
            try:
                # Use the chat client to extract structured information
                result = await self._chat_client.get_response(
                    messages=request_messages,  # type: ignore
                    instructions="Extract the user's name and age from the message if present. "
                    "If not present return nulls.",
                    options={"response_format": UserInfo},
                )
                # Update user info with extracted data
                try:
                    extracted = result.value
                    if self.user_info.name is None and extracted.name:
                        self.user_info.name = extracted.name
                    if self.user_info.age is None and extracted.age:
                        self.user_info.age = extracted.age
                except Exception:
                    pass  # Failed to extract, continue without updating
            except Exception:
                pass  # Failed to extract, continue without updating

    async def invoking(self, messages: Message | MutableSequence[Message], **kwargs: Any) -> Context:
        """Provide user information context before each agent call."""
        instructions: list[str] = []
        if self.user_info.name is None:
            instructions.append(
                "Ask the user for their name and politely decline to answer any questions until they provide it."
            )
        else:
            instructions.append(f"The user's name is {self.user_info.name}.")
        if self.user_info.age is None:
            instructions.append(
                "Ask the user for their age and politely decline to answer any questions until they provide it."
            )
        else:
            instructions.append(f"The user's age is {self.user_info.age}.")

        # Return context with additional instructions
        return Context(instructions=" ".join(instructions))

    def serialize(self) -> str:
        """Serialize the user info for thread persistence."""
        return self.user_info.model_dump_json()


async def main():
    async with AzureCliCredential() as credential:
        client = AzureAIClient(credential=credential)

        # Create the memory provider
        memory_provider = UserInfoMemory(client)

        # Create the agent with memory
        async with Agent(
            client=client,
            instructions="You are a friendly assistant. Always address the user by their name.",
            context_providers=[memory_provider],
        ) as agent:
            # Create a new thread for the conversation
            thread = agent.create_session()

            print(await agent.run("Hello, what is the square root of 9?", session=thread))
            print(await agent.run("My name is Ruaidhrí", session=thread))
            print(await agent.run("I am 20 years old", session=thread))

            # Access the memory component and inspect the memories
            user_info_memory = memory_provider
            if user_info_memory:
                print()
                print(f"MEMORY - User Name: {user_info_memory.user_info.name}")  # type: ignore
                print(f"MEMORY - User Age: {user_info_memory.user_info.age}")  # type: ignore


if __name__ == "__main__":
    asyncio.run(main())
Supported VectorStore Connectors
This pattern works with any Semantic Kernel VectorStore connector, including:
- Azure AI Search (AzureAISearchCollection)
- Qdrant (QdrantCollection)
- Pinecone (PineconeCollection)
- Redis (RedisCollection)
- Weaviate (WeaviateCollection)
- In-Memory (InMemoryVectorStoreCollection)
- And many more
Each connector provides the same create_search_function method that can be bridged to an Agent Framework tool, letting you choose the vector database that best fits your needs. See the full list here.
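For instance, switching to the in-memory connector only changes the collection type; the import path and constructor arguments below are assumptions modeled on the earlier Azure AI Search example, so verify them against the Semantic Kernel connector documentation for your version.

# Assumed import path - check the Semantic Kernel docs for your version.
from semantic_kernel.connectors.in_memory import InMemoryVectorStoreCollection

# The rest of the pattern is identical to the Azure AI Search example above.
collection = InMemoryVectorStoreCollection[str, SupportArticle](
    record_type=SupportArticle,
    embedding_generator=OpenAITextEmbedding(),
)

# create_search_function(...) and .as_agent_framework_tool() are then used exactly as shown earlier.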