Agents in Workflows

This tutorial demonstrates how to integrate AI agents into workflows using Agent Framework. You'll learn to create workflows that use specialized AI agents for collaborative tasks such as translation, content creation, and review.

What You'll Build

You'll create a workflow that:

  • Uses Azure Foundry Agent Service to create intelligent agents
  • Implements a French translation agent that translates input to French
  • Implements a Spanish translation agent that translates French to Spanish
  • Implements an English translation agent that translates Spanish back to English
  • Connects agents in a sequential workflow pipeline
  • Streams real-time updates as agents process requests
  • Demonstrates proper resource cleanup for Azure Foundry agents

Concepts Covered

Prerequisites

  • The .NET SDK installed
  • An Azure AI Foundry project with a deployed model
  • Environment variables set: AZURE_FOUNDRY_PROJECT_ENDPOINT (required) and AZURE_FOUNDRY_PROJECT_MODEL_ID (optional; defaults to gpt-4o-mini)
  • Azure authentication available to DefaultAzureCredential (for example, az login)

Step 1: Install NuGet packages

First, install the required packages for your .NET project:

dotnet add package Azure.AI.Agents.Persistent --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Agents.AI.AzureAI --prerelease
dotnet add package Microsoft.Agents.AI.Workflows --prerelease

Step 2: Set Up Azure Foundry Client

Configure the Azure Foundry client with environment variables and authentication:

using System;
using System.Threading.Tasks;
using Azure.AI.Agents.Persistent;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;

public static class Program
{
    private static async Task Main()
    {
        // Set up the Azure Foundry client
        var endpoint = Environment.GetEnvironmentVariable("AZURE_FOUNDRY_PROJECT_ENDPOINT") ?? throw new Exception("AZURE_FOUNDRY_PROJECT_ENDPOINT is not set.");
        var model = Environment.GetEnvironmentVariable("AZURE_FOUNDRY_PROJECT_MODEL_ID") ?? "gpt-4o-mini";
        var persistentAgentsClient = new PersistentAgentsClient(endpoint, new DefaultAzureCredential());

Warning

DefaultAzureCredential is convenient for development but requires careful consideration in production. In production, consider using a specific credential (e.g., ManagedIdentityCredential) to avoid latency issues, unintended credential probing, and potential security risks from fallback mechanisms.
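
For example, a production deployment might construct the client with a user-assigned managed identity instead. This is a minimal sketch; the client ID below is a placeholder for your identity, not a value from this tutorial:

        // Use a specific credential in production instead of DefaultAzureCredential.
        // "<your-managed-identity-client-id>" is a placeholder for a user-assigned managed identity.
        var credential = new ManagedIdentityCredential(clientId: "<your-managed-identity-client-id>");
        var persistentAgentsClient = new PersistentAgentsClient(endpoint, credential);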

Step 3: Create Agent Factory Method

Implement a helper method to create Azure Foundry agents with specific instructions:

    /// <summary>
    /// Creates a translation agent for the specified target language.
    /// </summary>
    /// <param name="targetLanguage">The target language for translation</param>
    /// <param name="persistentAgentsClient">The PersistentAgentsClient to create the agent</param>
    /// <param name="model">The model to use for the agent</param>
    /// <returns>A ChatClientAgent configured for the specified language</returns>
    private static async Task<ChatClientAgent> GetTranslationAgentAsync(
        string targetLanguage,
        PersistentAgentsClient persistentAgentsClient,
        string model)
    {
        var agentMetadata = await persistentAgentsClient.Administration.CreateAgentAsync(
            model: model,
            name: $"{targetLanguage} Translator",
            instructions: $"You are a translation assistant that translates the provided text to {targetLanguage}.");

        return await persistentAgentsClient.GetAIAgentAsync(agentMetadata.Value.Id);
    }
}

Step 4: Create Specialized Azure Foundry Agents

Create three translation agents using the helper method:

        // Create agents
        AIAgent frenchAgent = await GetTranslationAgentAsync("French", persistentAgentsClient, model);
        AIAgent spanishAgent = await GetTranslationAgentAsync("Spanish", persistentAgentsClient, model);
        AIAgent englishAgent = await GetTranslationAgentAsync("English", persistentAgentsClient, model);

Step 5: Build the Workflow

Connect the agents in a sequential workflow using the WorkflowBuilder:

        // Build the workflow by adding executors and connecting them
        var workflow = new WorkflowBuilder(frenchAgent)
            .AddEdge(frenchAgent, spanishAgent)
            .AddEdge(spanishAgent, englishAgent)
            .Build();

Step 6: Execute with Streaming

Run the workflow with streaming to observe real-time updates from all agents:

        // Execute the workflow
        await using StreamingRun run = await InProcessExecution.StreamAsync(workflow, new ChatMessage(ChatRole.User, "Hello World!"));

        // Must send the turn token to trigger the agents.
        // The agents are wrapped as executors. When they receive messages,
        // they will cache the messages and only start processing when they receive a TurnToken.
        await run.TrySendMessageAsync(new TurnToken(emitEvents: true));
        await foreach (WorkflowEvent evt in run.WatchStreamAsync().ConfigureAwait(false))
        {
            if (evt is AgentResponseUpdateEvent executorComplete)
            {
                Console.WriteLine($"{executorComplete.ExecutorId}: {executorComplete.Data}");
            }
        }

Step 7: Resource Cleanup

Properly clean up the Azure Foundry agents after use:

        // Cleanup the agents created for the sample.
        await persistentAgentsClient.Administration.DeleteAgentAsync(frenchAgent.Id);
        await persistentAgentsClient.Administration.DeleteAgentAsync(spanishAgent.Id);
        await persistentAgentsClient.Administration.DeleteAgentAsync(englishAgent.Id);
    }

How It Works

  1. Azure Foundry Client Setup: Uses PersistentAgentsClient with DefaultAzureCredential (for example, your Azure CLI login) for authentication
  2. Agent Creation: Creates persistent agents on Azure Foundry with specific instructions for translation
  3. Sequential Processing: French agent translates input first, then Spanish agent, then English agent
  4. Turn Token Pattern: Agents cache messages and only process when they receive a TurnToken
  5. Streaming Updates: AgentResponseUpdateEvent provides real-time token updates as agents generate responses
  6. Resource Management: Proper cleanup of Azure Foundry agents using the Administration API

Key Concepts

  • Azure Foundry Agent Service: Cloud-based AI agents with advanced reasoning capabilities
  • PersistentAgentsClient: Client for creating and managing agents on Azure Foundry
  • WorkflowEvent: Base type for events emitted during workflow execution; AgentResponseUpdateEvent carries streaming agent output along with the ExecutorId of the emitting agent
  • TurnToken: Signal that triggers agent processing after message caching
  • Sequential Workflow: Agents connected in a pipeline where output flows from one to the next

Complete Implementation

For the complete working implementation of this Azure Foundry agents workflow, see the FoundryAgent Program.cs sample in the Agent Framework repository.

What You'll Build

You'll create a workflow that:

  • Uses AzureOpenAIResponsesClient to create intelligent agents
  • Implements a Writer agent that creates content based on prompts
  • Implements a Reviewer agent that provides feedback on the content
  • Connects agents in a sequential workflow pipeline
  • Streams real-time updates as agents process requests
  • Demonstrates an optional shared-session pattern for agents created from the same client

Concepts Covered

Prerequisites

  • Python 3.10 or later
  • Agent Framework installed: pip install agent-framework --pre
  • Azure OpenAI Responses configured via the AZURE_AI_PROJECT_ENDPOINT and AZURE_AI_MODEL_DEPLOYMENT_NAME environment variables
  • Azure CLI authentication: az login

Step 1: Import Required Dependencies

Start by importing the necessary components for workflows and Azure OpenAI Responses agents:

import asyncio
import os

from agent_framework import AgentResponseUpdate, WorkflowBuilder
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

Step 2: Create Azure OpenAI Responses Client

Create one shared client that you can use to construct multiple agents:

async def main() -> None:
    client = AzureOpenAIResponsesClient(
        project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
        credential=AzureCliCredential(),
    )

Step 3: Create Specialized Agents

Create two specialized agents for content creation and review:

    # Create a Writer agent that generates content
    writer = client.as_agent(
        name="Writer",
        instructions=(
            "You are an excellent content writer. You create new content and edit contents based on the feedback."
        ),
    )

    # Create a Reviewer agent that provides feedback
    reviewer = client.as_agent(
        name="Reviewer",
        instructions=(
            "You are an excellent content reviewer. "
            "Provide actionable feedback to the writer about the provided content. "
            "Provide the feedback in the most concise manner possible."
        ),
    )

Step 4: Build the Workflow

Connect the agents in a sequential workflow using the builder:

    # Build the workflow with agents as executors
    workflow = WorkflowBuilder(start_executor=writer).add_edge(writer, reviewer).build()

Optional: Share one session across AzureOpenAIResponsesClient agents

By default, each agent executor uses its own session state. For agents created from the same AzureOpenAIResponsesClient, you can wire a shared session explicitly. The shared-session sample also places an intercept executor between the writer and the reviewer: it receives the writer's response and forwards a request without messages, so the writer's output is not appended to the shared thread a second time. A reconstructed sketch follows; names such as executor, AgentExecutor, AgentExecutorRequest, AgentExecutorResponse, WorkflowContext, and InMemoryHistoryProvider come from agent_framework, and the exact imports and workflow wiring are shown in the shared-session sample:

@executor(id="intercept_agent_response")
async def intercept_agent_response(
    agent_response: AgentExecutorResponse, ctx: WorkflowContext[AgentExecutorRequest]
) -> None:
    """Intercept the agent response and send a request without messages.

    This prevents duplication of messages in the shared thread: without it, the
    writer's response would be appended to the shared thread again when it is
    forwarded to the reviewer.
    """
    # Forward an empty request so the next agent responds from the shared thread.
    await ctx.send_message(AgentExecutorRequest(messages=[], should_respond=True))


    # Inside main(), create the agents from the same client and give both the same
    # context provider (same default source_id) so they share the thread.
    client = AzureOpenAIResponsesClient(
        project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
        credential=AzureCliCredential(),
    )

    writer = client.as_agent(
        instructions=("You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt."),
        name="writer",
        context_providers=[InMemoryHistoryProvider()],
    )

    reviewer = client.as_agent(
        instructions=("You are a thoughtful reviewer. Give brief feedback on the previous assistant message."),
        name="reviewer",
        context_providers=[InMemoryHistoryProvider()],
    )

    # Create the shared session and wrap each agent in an executor that uses it
    shared_session = writer.create_session()
    writer_executor = AgentExecutor(writer, session=shared_session)
    reviewer_executor = AgentExecutor(reviewer, session=shared_session)

    # Build the workflow from these executors (writer -> intercept -> reviewer) and
    # run it as in Step 5; see the shared-session sample for the complete wiring.

Step 5: Execute with Streaming

Run the workflow with streaming to observe real-time updates from both agents:

    last_executor_id: str | None = None

    events = workflow.run("Create a slogan for a new electric SUV that is affordable and fun to drive.", stream=True)
    async for event in events:
        if event.type == "output" and isinstance(event.data, AgentResponseUpdate):
            # Handle streaming updates from agents
            eid = event.executor_id
            if eid != last_executor_id:
                if last_executor_id is not None:
                    print()
                print(f"{eid}:", end=" ", flush=True)
                last_executor_id = eid
            print(event.data, end="", flush=True)
        elif event.type == "output":
            print("\n===== Final output =====")
            print(event.data)

Step 6: Complete Main Function

Wrap everything in the main function with proper async execution:

if __name__ == "__main__":
    asyncio.run(main())

How It Works

  1. Client Setup: Uses one AzureOpenAIResponsesClient with Azure CLI credentials for authentication.
  2. Agent Creation: Creates Writer and Reviewer agents from the same client configuration.
  3. Sequential Processing: Writer agent generates content first, then passes it to the Reviewer agent.
  4. Streaming Updates: Output events (type="output") with AgentResponseUpdate data provide real-time token updates as agents generate responses.
  5. Shared Sessions (Optional): A shared session can be wired when both agents are created from the same AzureOpenAIResponsesClient.

Key Concepts

  • AzureOpenAIResponsesClient: Shared client used to create workflow agents with consistent configuration.
  • WorkflowEvent: Output events (type="output") contain agent output data (AgentResponseUpdate for streaming, AgentResponse for non-streaming).
  • Sequential Workflow: Agents connected in a pipeline where output flows from one to the next.
  • Shared Session Pattern: Optional configuration for shared memory/thread across selected agents in a workflow.

Complete Implementation

For complete working implementations, see azure_ai_agents_streaming.py and azure_ai_agents_with_shared_session.py in the Agent Framework repository.

Next Steps