

Context providers

Context providers run on every invocation, so they can add context before execution and process data after execution.
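As a rough sketch of this lifecycle in plain Python (hypothetical names and a deliberately simplified shape; the real framework types differ):

```python
# Sketch of the context-provider lifecycle: a before-run hook contributes
# context, and an after-run hook processes the new data. Hypothetical
# names; not the framework's actual API.

class EchoContextProvider:
    """Adds an instruction before each run and records input after it."""

    def __init__(self) -> None:
        self.seen_inputs: list[str] = []

    def before_run(self, instructions: list[str]) -> None:
        # Runs before execution: contribute extra context.
        instructions.append("Answer concisely.")

    def after_run(self, user_input: str) -> None:
        # Runs after execution: process the new data.
        self.seen_inputs.append(user_input)


def run_agent(provider: EchoContextProvider, user_input: str) -> list[str]:
    instructions = ["You are a helpful assistant."]
    provider.before_run(instructions)   # pre-invocation hook
    # ... the agent would call the model here ...
    provider.after_run(user_input)      # post-invocation hook
    return instructions


final_instructions = run_agent(EchoContextProvider(), "Hi")
```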

Built-in patterns

When creating an agent, configure providers through the agent options. AIContextProvider is the built-in extension point for memory/context enrichment.

AIAgent agent = new OpenAIClient("<your_api_key>")
    .GetChatClient(modelName)
    .AsAIAgent(new ChatClientAgentOptions()
    {
        ChatOptions = new() { Instructions = "You are a helpful assistant." },
        AIContextProviders = [
            new MyCustomMemoryProvider()
        ],
    });

AgentSession session = await agent.CreateSessionAsync();
Console.WriteLine(await agent.RunAsync("Remember my name is Alice.", session));

Tip

For a list of prebuilt AIContextProvider implementations, see Integrations.

A common pattern when creating an agent is to configure providers via context_providers=[...].

InMemoryHistoryProvider is the built-in history provider for local conversation memory.

from agent_framework import InMemoryHistoryProvider
from agent_framework.openai import OpenAIChatClient

agent = OpenAIChatClient().as_agent(
    name="MemoryBot",
    instructions="You are a helpful assistant.",
    context_providers=[InMemoryHistoryProvider("memory", load_messages=True)],
)

session = agent.create_session()
await agent.run("Remember that I prefer vegetarian food.", session=session)

RawAgent may add InMemoryHistoryProvider("memory") automatically in certain situations, but add it explicitly when you want deterministic local memory behavior.

Custom context providers

Use a custom context provider when you need to inject dynamic instructions/messages/tools, or capture state after execution.

The base class for context providers is Microsoft.Agents.AI.AIContextProvider. Context providers participate in the agent pipeline: they can contribute to or override the agent's input messages, and they can extract information from new messages. AIContextProvider has various virtual methods that can be overridden to implement your own custom context provider. See the different implementation approaches below for more information on which methods to override.

AIContextProvider state

A single AIContextProvider instance is attached to the agent, and the same instance is used by all sessions. This means that no session-specific state should be stored on the AIContextProvider instance. An AIContextProvider may have a field referencing a memory service client, but it should not have a field holding the ID of a session-specific memory collection.

Instead, any session-specific values, such as memory IDs, messages, or other data tied to the session, can be stored on the AgentSession itself. The virtual AIContextProvider methods all receive references to the current AIAgent and AgentSession.

A utility class is provided to make it easy to store typed state in the AgentSession.

// First define a type containing the properties to store in state
internal class MyCustomState
{
    public string? MemoryId { get; set; }
}

// Create the helper
var sessionStateHelper = new ProviderSessionState<MyCustomState>(
    // stateInitializer is called when there is no state in the session for this AIContextProvider yet
    stateInitializer: currentSession => new MyCustomState() { MemoryId = Guid.NewGuid().ToString() },
    // The key under which to store state in the session for this provider. Make sure it does not clash with the keys of other providers.
    stateKey: this.GetType().Name,
    // An optional jsonSerializerOptions to control the serialization/deserialization of the custom state object
    jsonSerializerOptions: myJsonSerializerOptions);

// Using the helper you can read state:
MyCustomState state = sessionStateHelper.GetOrInitializeState(session);
Console.WriteLine(state.MemoryId);

// And write state:
sessionStateHelper.SaveState(session, state);
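For comparison, here is a rough Python analogue of this typed session-state helper, assuming the session exposes a plain dict for provider state (hypothetical names; this is a sketch of the pattern, not the real framework API):

```python
# Sketch: per-provider typed state stored on the session under a unique key,
# mirroring ProviderSessionState above. Hypothetical Session type.
from dataclasses import dataclass, field
from typing import Callable, Generic, TypeVar

T = TypeVar("T")


@dataclass
class Session:
    state: dict[str, object] = field(default_factory=dict)


class ProviderSessionState(Generic[T]):
    def __init__(self, state_initializer: Callable[[Session], T], state_key: str) -> None:
        self._initializer = state_initializer
        self.state_key = state_key  # must not clash with other providers' keys

    def get_or_initialize_state(self, session: Session) -> T:
        # Initialize state lazily the first time this provider sees the session.
        if self.state_key not in session.state:
            session.state[self.state_key] = self._initializer(session)
        return session.state[self.state_key]  # type: ignore[return-value]

    def save_state(self, session: Session, state: T) -> None:
        session.state[self.state_key] = state


helper = ProviderSessionState(lambda s: {"memory_id": "m-1"}, "MyProvider")
session = Session()
state = helper.get_or_initialize_state(session)
helper.save_state(session, state)
```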

Simple AIContextProvider implementations

A simple AIContextProvider implementation typically overrides two methods:

  • AIContextProvider.ProvideAIContextAsync - loads relevant data and returns additional instructions, messages, or tools.
  • AIContextProvider.StoreAIContextAsync - extracts relevant data from new messages and stores it.

Here is a simple example AIContextProvider that integrates with a memory service:

internal sealed class SimpleServiceMemoryProvider : AIContextProvider
{
    private readonly ProviderSessionState<State> _sessionState;
    private readonly ServiceClient _client;

    public SimpleServiceMemoryProvider(ServiceClient client, Func<AgentSession?, State>? stateInitializer = null)
        : base(null, null)
    {
        this._sessionState = new ProviderSessionState<State>(
            stateInitializer ?? (_ => new State()),
            this.GetType().Name);
        this._client = client;
    }

    public override string StateKey => this._sessionState.StateKey;

    protected override ValueTask<AIContext> ProvideAIContextAsync(InvokingContext context, CancellationToken cancellationToken = default)
    {
        var state = this._sessionState.GetOrInitializeState(context.Session);

        if (state.MemoriesId == null)
        {
            // No stored memories yet.
            return new ValueTask<AIContext>(new AIContext());
        }

        // Find memories that match the current user input.
        var memories = this._client.LoadMemories(state.MemoriesId, string.Join("\n", context.AIContext.Messages?.Select(x => x.Text) ?? []));

        // Return a new message that contains the text from any memories that were found.
        return new ValueTask<AIContext>(new AIContext
        {
            Messages = [new ChatMessage(ChatRole.User, "Here are some memories to help answer the user question: " + string.Join("\n", memories.Select(x => x.Text)))]
        });
    }

    protected override async ValueTask StoreAIContextAsync(InvokedContext context, CancellationToken cancellationToken = default)
    {
        var state = this._sessionState.GetOrInitializeState(context.Session);
        // Create a memory container in the service for this session
        // and save the returned id in the session.
        state.MemoriesId ??= this._client.CreateMemoryContainer();
        this._sessionState.SaveState(context.Session, state);

        // Use the service to extract memories from the user input and agent response.
        await this._client.StoreMemoriesAsync(state.MemoriesId, context.RequestMessages.Concat(context.ResponseMessages ?? []), cancellationToken);
    }

    public class State
    {
        public string? MemoriesId { get; set; }
    }
}
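The same provide/store pattern can be sketched in plain Python against a fake in-memory service (hypothetical names; this illustrates the shape of the two methods, not the framework's API):

```python
# Sketch: before the model call, load memories matching the input;
# after the model call, store what was said. FakeMemoryService is a
# stand-in for a real memory service client.

class FakeMemoryService:
    def __init__(self) -> None:
        self._memories: list[str] = []

    def load(self, query: str) -> list[str]:
        # Naive relevance: any memory sharing a word with the query.
        return [m for m in self._memories if any(w in m for w in query.split())]

    def store(self, texts: list[str]) -> None:
        self._memories.extend(texts)


class SimpleMemoryProvider:
    def __init__(self, service: FakeMemoryService) -> None:
        self._service = service

    def provide_context(self, user_text: str) -> list[str]:
        # Analogue of ProvideAIContextAsync: fetch relevant memories.
        memories = self._service.load(user_text)
        if not memories:
            return []
        return ["Here are some memories: " + "\n".join(memories)]

    def store_context(self, user_text: str, response_text: str) -> None:
        # Analogue of StoreAIContextAsync: persist the new messages.
        self._service.store([user_text, response_text])


service = FakeMemoryService()
provider = SimpleMemoryProvider(service)
provider.store_context("I like tea", "Noted!")
context = provider.provide_context("What do I like? tea")
```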

Advanced AIContextProvider implementations

More advanced implementations can choose to override the following methods instead:

  • AIContextProvider.InvokingCoreAsync - called before the agent invokes the LLM; allows modifying the request message list, tools, and instructions.
  • AIContextProvider.InvokedCoreAsync - called after the agent invokes the LLM; allows access to all request and response messages.

AIContextProvider provides base implementations of InvokingCoreAsync and InvokedCoreAsync.

The base implementation of InvokingCoreAsync does the following:

  • It filters the input message list down to just the messages that the caller sent to the agent. Note that this filter can be overridden via the provideInputMessageFilter parameter on the AIContextProvider constructor.
  • It calls ProvideAIContextAsync with the filtered request messages and the existing tools and instructions.
  • It stamps all messages returned by ProvideAIContextAsync with source information indicating that they came from this context provider.
  • It merges the messages, tools, and instructions returned by ProvideAIContextAsync with the existing ones to produce the input the agent will use. Messages and tools are appended, and instructions are appended to the existing instructions.

The base implementation of InvokedCoreAsync does the following:

  • It checks whether the invocation failed and, if so, returns without further processing.
  • It filters the input message list down to just the messages that the caller sent to the agent. Note that this filter can be overridden via the storeInputMessageFilter parameter on the AIContextProvider constructor.
  • It passes the filtered request messages and all response messages to StoreAIContextAsync for storage.
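The base-implementation steps for the pre-invocation path can be sketched in plain Python (hypothetical names and a deliberately simplified message model; the real framework works on ChatMessage objects and AIContext):

```python
# Sketch of what the base InvokingCoreAsync does: filter to caller-sent
# messages, ask the provider for extra context, stamp the provider's
# messages with their source, and merge everything for the agent.

EXTERNAL = "external"                # message sent by the caller
FROM_PROVIDER = "context_provider"   # message stamped by a provider


def invoking_core(request_messages, existing_instructions, provider):
    # 1. Filter to messages the caller sent to the agent.
    filtered = [m for m in request_messages if m["source"] == EXTERNAL]
    # 2. Ask the provider for extra context based on the filtered messages.
    extra_messages, extra_instructions = provider(filtered)
    # 3. Stamp provider-supplied messages with their source.
    stamped = [{**m, "source": FROM_PROVIDER} for m in extra_messages]
    # 4. Merge: append messages and instructions to the existing ones.
    return request_messages + stamped, existing_instructions + extra_instructions


def memory_provider(filtered_messages):
    # Stand-in for ProvideAIContextAsync.
    return [{"text": "User likes tea."}], ["Use stored memories."]


messages, instructions = invoking_core(
    [{"text": "Hi", "source": EXTERNAL}],
    ["You are a helpful assistant."],
    memory_provider,
)
```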

You can override these methods to implement an AIContextProvider, but it is then up to the implementer to reproduce the base behavior as appropriate. Here is an example of such an implementation.

internal sealed class AdvancedServiceMemoryProvider : AIContextProvider
{
    private readonly ProviderSessionState<State> _sessionState;
    private readonly ServiceClient _client;

    public AdvancedServiceMemoryProvider(ServiceClient client, Func<AgentSession?, State>? stateInitializer = null)
        : base(null, null)
    {
        this._sessionState = new ProviderSessionState<State>(
            stateInitializer ?? (_ => new State()),
            this.GetType().Name);
        this._client = client;
    }

    public override string StateKey => this._sessionState.StateKey;

    protected override async ValueTask<AIContext> InvokingCoreAsync(InvokingContext context, CancellationToken cancellationToken = default)
    {
        var state = this._sessionState.GetOrInitializeState(context.Session);

        if (state.MemoriesId == null)
        {
            // No stored memories yet.
            return new AIContext();
        }

        // We only want to search for memories based on user input, and exclude chat history or other AI context provider messages.
        var filteredInputMessages = context.AIContext.Messages?.Where(m => m.GetAgentRequestMessageSourceType() == AgentRequestMessageSourceType.External);

        // Find memories that match the current user input.
        var memories = this._client.LoadMemories(state.MemoriesId, string.Join("\n", filteredInputMessages?.Select(x => x.Text) ?? []));

        // Create a message for the memories, and stamp it to indicate where it came from.
        var memoryMessages = new[]
            {
                new ChatMessage(ChatRole.User, "Here are some memories to help answer the user question: " + string.Join("\n", memories.Select(x => x.Text)))
            }
            .Select(m => m.WithAgentRequestMessageSource(AgentRequestMessageSourceType.AIContextProvider, this.GetType().FullName!));

        // Return a new merged AIContext.
        return new AIContext
        {
            Instructions = context.AIContext.Instructions,
            Messages = (context.AIContext.Messages ?? []).Concat(memoryMessages),
            Tools = context.AIContext.Tools
        };
    }

    protected override async ValueTask InvokedCoreAsync(InvokedContext context, CancellationToken cancellationToken = default)
    {
        if (context.InvokeException is not null)
        {
            return;
        }

        var state = this._sessionState.GetOrInitializeState(context.Session);
        // Create a memory container in the service for this session
        // and save the returned id in the session.
        state.MemoriesId ??= this._client.CreateMemoryContainer();
        this._sessionState.SaveState(context.Session, state);

        // We only want to store memories based on user input and agent output, and exclude messages from chat history or other AI context providers to avoid feedback loops.
        var filteredRequestMessages = context.RequestMessages.Where(m => m.GetAgentRequestMessageSourceType() == AgentRequestMessageSourceType.External);

        // Use the service to extract memories from the user input and agent response.
        await this._client.StoreMemoriesAsync(state.MemoriesId, filteredRequestMessages.Concat(context.ResponseMessages ?? []), cancellationToken);
    }

    public class State
    {
        public string? MemoriesId { get; set; }
    }
}
In Python, a custom context provider subclasses BaseContextProvider and overrides before_run and after_run:

from typing import Any

from agent_framework import AgentSession, BaseContextProvider, SessionContext


class UserPreferenceProvider(BaseContextProvider):
    def __init__(self) -> None:
        super().__init__("user-preferences")

    async def before_run(
        self,
        *,
        agent: Any,
        session: AgentSession,
        context: SessionContext,
        state: dict[str, Any],
    ) -> None:
        if favorite := state.get("favorite_food"):
            context.extend_instructions(self.source_id, f"User's favorite food is {favorite}.")

    async def after_run(
        self,
        *,
        agent: Any,
        session: AgentSession,
        context: SessionContext,
        state: dict[str, Any],
    ) -> None:
        for message in context.input_messages:
            text = (message.text or "") if hasattr(message, "text") else ""
            if isinstance(text, str) and "favorite food is" in text.lower():
                state["favorite_food"] = text.split("favorite food is", 1)[1].strip().rstrip(".")

Custom history providers

History providers are context providers that specialize in loading/storing messages.

from collections.abc import Sequence
from typing import Any

from agent_framework import BaseHistoryProvider, Message


class DatabaseHistoryProvider(BaseHistoryProvider):
    def __init__(self, db: Any) -> None:
        super().__init__("db-history", load_messages=True)
        self._db = db

    async def get_messages(
        self,
        session_id: str | None,
        *,
        state: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> list[Message]:
        key = (state or {}).get(self.source_id, {}).get("history_key", session_id or "default")
        rows = await self._db.load_messages(key)
        return [Message.from_dict(row) for row in rows]

    async def save_messages(
        self,
        session_id: str | None,
        messages: Sequence[Message],
        *,
        state: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> None:
        if not messages:
            return
        if state is not None:
            key = state.setdefault(self.source_id, {}).setdefault("history_key", session_id or "default")
        else:
            key = session_id or "default"
        await self._db.save_messages(key, [m.to_dict() for m in messages])

Important

In Python, you can configure multiple history providers, but only one should use load_messages=True. Use additional providers with load_messages=False and store_context_messages=True for diagnostics/evaluation, so that they also capture the context contributed by other providers in the input/output.

Example pattern:

primary = DatabaseHistoryProvider(db)
audit = InMemoryHistoryProvider("audit", load_messages=False, store_context_messages=True)
agent = OpenAIChatClient().as_agent(context_providers=[primary, audit])

Next steps