Best practices while invoking an AI agent

Meenakshi Sankar 60 Reputation points
2026-02-23T08:37:59.7533333+00:00

Hello,

I am hoping one of you can help me with a design question.

Let's say I have an AI agent in Foundry that takes a text input and categorises it based on certain rules.

If I have to process a file (e.g. a .csv) in Azure Blob Storage, where one column contains the text that needs to be categorised, using this AI agent, how should I go about implementing the solution?

Should I write a Python script that reads the file, passes each record to the AI agent, and writes the results to an output file? (I could probably automate this with Logic Apps.) Or should I instead add the source file to the agent's knowledge base and ask the agent to read from the file and write the output itself?

What would be the best practice here?

Regards,

Meena

Foundry Tools

Formerly known as Azure AI Services or Azure Cognitive Services, this is a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform.

Answer accepted by question author
  1. Sridhar M 5,335 Reputation points Microsoft External Staff Moderator
    2026-02-23T11:33:16.0933333+00:00

    Hi Meenakshi Sankar,

    For processing a CSV in Azure Blob where each row’s text must be categorized by rules using an Azure AI Foundry agent, the best practice is to use an external orchestrator (Python/Azure Function/Logic Apps/Data Factory) to read the file, call the agent per record (or in controlled batches), and write the outputs to a result file or database.

    This aligns with Foundry’s design where enterprise customers commonly bring their own storage (including Azure Blob Storage) and run agents as part of broader application workflows rather than letting the agent “own” the batch pipeline.

    Adding the CSV as a "knowledge base" is generally not the right pattern for row‑by‑row processing, because knowledge/file integration is meant to support an agent's reasoning with retrieval and tool usage, not to act as a deterministic ETL engine. Foundry agent training material emphasizes an agent model where data can be stored and referenced (including files in Blob), but not one where the agent is responsible for reliable, complete, ordered processing of every CSV row on its own. [Building A...ation Deck | PowerPoint], [Deep Dive...0_20250801 | PowerPoint], [Understand...soft Learn | Learn.Microsoft.com]

    Recommended architecture pattern: Blob (CSV) → Orchestrator (Python/Function/Logic Apps/ADF) → Invoke Foundry Agent → Write results (Blob/SQL/ADLS/Fabric). This keeps your processing scalable and auditable, and it matches how Foundry supports enterprise integration patterns and storage choices (bring‑your‑own Blob storage is a common pattern in the agent service architecture). [Building A...ation Deck | PowerPoint], [Understand...soft Learn | Learn.Microsoft.com]
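A minimal sketch of the orchestrator's per-row step, assuming the CSV text has already been downloaded from Blob (e.g. with azure-storage-blob) and that `classify` is a hypothetical stand-in for the actual agent call:

```python
import csv
import io


def categorize_rows(csv_text, text_column, classify):
    """Read CSV rows, classify the text column, and return result rows.

    `classify` is any callable mapping a text string to a category;
    in a real pipeline it would wrap the Foundry agent invocation.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    results = []
    for row in reader:
        # Keep the original columns and append the agent's category.
        results.append({**row, "category": classify(row[text_column])})
    return results


def rows_to_csv(rows):
    """Serialize result rows back to CSV text for upload to Blob."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Keeping the classifier pluggable like this also makes the pipeline testable without live agent calls.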

    If you implement this in Python, you typically create a client and invoke the agent through the supported SDK patterns. Microsoft’s GA migration guidance highlights the modern pattern of connecting using the Foundry Project endpoint and the Agents client approach (this is relevant when you automate agent calls from code). [AzureAIAge...soft Learn | Learn.Microsoft.com]
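A hedged sketch of what the per-record agent call could look like. The `project.agents.threads` / `messages` / `runs` method names reflect my reading of the azure-ai-projects GA SDK surface and should be verified against the current reference docs; the function only assumes an object exposing that shape:

```python
def classify_with_agent(project, agent_id, text):
    """Send one text record to a Foundry agent and return its reply.

    `project` is expected to expose the azure-ai-projects style surface
    (project.agents.threads / messages / runs). Returns the most recent
    assistant message; with the real SDK you would read its structured
    `content` parts to extract the category text.
    """
    thread = project.agents.threads.create()
    project.agents.messages.create(thread_id=thread.id, role="user", content=text)
    # create_and_process blocks until the run completes.
    project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent_id)
    for message in reversed(list(project.agents.messages.list(thread_id=thread.id))):
        if message.role == "assistant":
            return message
    return None
```

Because the client is passed in, the same function works against a stub in unit tests and the real `AIProjectClient` in production.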

    Logic Apps can be a good low‑code orchestrator when volume is moderate and you want managed connectors for Blob triggers, looping, and writing outputs. The key is: Logic Apps should do the file I/O and control flow, and the agent should do the classification; this aligns with the general positioning of Foundry agent service as a callable capability inside enterprise workflows, rather than the workflow itself. [Understand...soft Learn | Learn.Microsoft.com]

    For production, you should treat this as a pipeline and ensure you have monitoring/observability around agent calls (success/failures, latency, drift), because agents are part of a broader operational system.

    Microsoft’s agent observability guidance reinforces the importance of reliable monitoring and evaluation practices for production‑grade agent deployments. [Agent Fact...Azure Blog | Azure.Microsoft.com]
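One lightweight way to get that resilience and telemetry is a retry wrapper around each agent call; the logger here is illustrative (a production deployment would forward latency and failure counts to Application Insights or your monitoring stack):

```python
import logging
import time

logger = logging.getLogger("agent_pipeline")


def call_with_retry(fn, *args, retries=3, base_delay=1.0, **kwargs):
    """Invoke `fn` with exponential backoff and basic telemetry.

    Logs per-call latency on success and each failed attempt, so the
    batch pipeline stays observable; re-raises after the final attempt.
    """
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            logger.info("agent call ok in %.2fs", time.monotonic() - start)
            return result
        except Exception as exc:
            logger.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise
            # Exponential backoff: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
```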

    So the practical best practice is: don’t ask the agent to “read the CSV and write a new file” as its primary job; instead your code/Logic App reads the CSV, sends each text row to the agent, receives a structured category output, and writes the result file back to Blob (or another store). This leverages Foundry agent service strengths (tooling + enterprise integration) while keeping batch processing reliable. [Building A...ation Deck | PowerPoint], [AzureAIAge...soft Learn | Learn.Microsoft.com], [Understand...soft Learn | Learn.Microsoft.com]
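To make each row's output machine-checkable, you can instruct the agent to reply in strict JSON (e.g. {"category": "billing"}) and validate the reply before writing the result row. A small sketch; the category taxonomy and the "other" fallback are just examples:

```python
import json

# Example taxonomy -- replace with your own rule-based categories.
VALID_CATEGORIES = {"billing", "support", "sales", "other"}


def parse_category(agent_reply):
    """Parse the agent's reply as {"category": "..."} and validate it.

    Anything unparseable or outside the taxonomy falls back to "other",
    so malformed replies can be flagged for review instead of silently
    corrupting the output file.
    """
    try:
        category = json.loads(agent_reply).get("category", "").lower()
    except (json.JSONDecodeError, AttributeError):
        return "other"
    return category if category in VALID_CATEGORIES else "other"
```

Validating at this boundary keeps the deterministic parts of the pipeline (file I/O, schema) in your code and confines the agent to the classification step.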

    If you have any remaining questions or additional details to share, feel free to let us know. We’ll be glad to provide further clarification or guidance.

    Thank you!
