Azure AI Foundry, formerly known as Azure AI Services or Azure Cognitive Services, is a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform.
For processing a CSV in Azure Blob where each row’s text must be categorized by rules using an Azure AI Foundry agent, the best practice is to use an external orchestrator (Python/Azure Function/Logic Apps/Data Factory) to read the file, call the agent per record (or in controlled batches), and write the outputs to a result file or database.
This aligns with Foundry’s design where enterprise customers commonly bring their own storage (including Azure Blob Storage) and run agents as part of broader application workflows rather than letting the agent “own” the batch pipeline.
Adding the CSV as a "knowledge base" is generally not the right pattern for row‑by‑row processing: knowledge/file integration is meant to support an agent's reasoning through retrieval and tool use, not to act as a deterministic ETL engine. Foundry agent training material emphasizes a model where data can be stored and referenced (including files in Blob), but it does not make the agent responsible for reliable, complete, ordered processing of every CSV row on its own. [Building A...ation Deck | PowerPoint], [Deep Dive...0_20250801 | PowerPoint], [Understand...soft Learn | Learn.Microsoft.com]
Recommended architecture pattern: Blob (CSV) → Orchestrator (Python/Function/Logic Apps/ADF) → Foundry Agent → Results (Blob/SQL/ADLS/Fabric). This keeps your processing scalable and auditable, and it matches how Foundry supports enterprise integration patterns and storage choices (bring‑your‑own Blob storage is a common pattern in the agent service architecture). [Building A...ation Deck | PowerPoint], [Understand...soft Learn | Learn.Microsoft.com]
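As a concrete sketch of the orchestrator step, the loop below reads a CSV, calls a classification function once per row, and writes a results file. The `classify_text` callable stands in for the actual agent invocation, and the `text` column name is an assumption about your file layout:

```python
import csv

def process_csv(in_path: str, out_path: str, classify_text) -> int:
    """Read each row, classify its text, and write an output CSV.

    classify_text is injected so the same pipeline works with a real
    Foundry agent call or a local stub. Returns the number of rows written.
    """
    rows_written = 0
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=[*reader.fieldnames, "category"])
        writer.writeheader()
        for row in reader:
            # One agent call per record; batch or parallelize as volume grows.
            row["category"] = classify_text(row["text"])
            writer.writerow(row)
            rows_written += 1
    return rows_written
```

In production the same loop would read from and write to Blob via the storage SDK, with retries and checkpointing around the agent call so a failed run can resume.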
If you implement this in Python, you typically create a client and invoke the agent through the supported SDK patterns. Microsoft's GA migration guidance highlights the modern pattern of connecting with the Foundry project endpoint and the Agents client, which is what you use when automating agent calls from code. [AzureAIAge...soft Learn | Learn.Microsoft.com]
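A minimal sketch of that call pattern, assuming the `azure-ai-projects` and `azure-identity` packages and the GA `AIProjectClient` surface; the endpoint and agent id are placeholders you supply, and exact method names can vary across SDK versions, so check the current reference before relying on this:

```python
def make_classifier(endpoint: str, agent_id: str):
    """Return a classify(text) -> str function bound to one Foundry agent.

    The Azure SDK imports live inside the factory so this sketch only
    requires azure-ai-projects/azure-identity when you actually build it.
    """
    from azure.identity import DefaultAzureCredential
    from azure.ai.projects import AIProjectClient
    from azure.ai.agents.models import ListSortOrder

    project = AIProjectClient(endpoint=endpoint, credential=DefaultAzureCredential())

    def classify(text: str) -> str:
        # One thread + run per record keeps each classification independent.
        thread = project.agents.threads.create()
        project.agents.messages.create(thread_id=thread.id, role="user", content=text)
        run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent_id)
        if run.status == "failed":
            raise RuntimeError(f"agent run failed: {run.last_error}")
        # Newest message with text content is the agent's reply.
        for msg in project.agents.messages.list(thread_id=thread.id,
                                                order=ListSortOrder.DESCENDING):
            if msg.text_messages:
                return msg.text_messages[-1].text.value
        raise RuntimeError("no text reply from agent")

    return classify
```

The returned `classify` function plugs directly into a per-row orchestration loop, so the SDK wiring stays in one place.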
Logic Apps can be a good low‑code orchestrator when volume is moderate and you want managed connectors for Blob triggers, looping, and writing outputs. The key is: Logic Apps should do the file I/O and control flow, and the agent should do the classification; this aligns with the general positioning of Foundry agent service as a callable capability inside enterprise workflows, rather than the workflow itself. [Understand...soft Learn | Learn.Microsoft.com]
For production, you should treat this as a pipeline and ensure you have monitoring/observability around agent calls (success/failures, latency, drift), because agents are part of a broader operational system.
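One lightweight way to get that telemetry is to wrap every agent call with timing, logging, and retry bookkeeping. This wrapper is generic Python, not a Foundry API, and the logger name and backoff policy are illustrative choices:

```python
import logging
import time

logger = logging.getLogger("agent_pipeline")

def monitored_call(fn, *args, max_attempts: int = 3, base_delay: float = 1.0, **kwargs):
    """Call fn with retries, logging latency and success/failure per attempt."""
    for attempt in range(1, max_attempts + 1):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            latency_ms = (time.perf_counter() - start) * 1000
            logger.info("agent_call ok attempt=%d latency_ms=%.1f", attempt, latency_ms)
            return result
        except Exception:
            latency_ms = (time.perf_counter() - start) * 1000
            logger.warning("agent_call failed attempt=%d latency_ms=%.1f",
                           attempt, latency_ms, exc_info=True)
            if attempt == max_attempts:
                raise
            # Exponential backoff before retrying a transient failure.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Routing these log lines into Application Insights (or whatever sink you already use) gives you the success/failure and latency signals per record, and drift can then be evaluated over the accumulated outputs.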
Microsoft’s agent observability guidance reinforces the importance of reliable monitoring and evaluation practices for production‑grade agent deployments. [Agent Fact...Azure Blog | Azure.Microsoft.com]
So the practical best practice is: don't ask the agent to "read the CSV and write a new file" as its primary job. Instead, your code or Logic App reads the CSV, sends each text row to the agent, receives a structured category output, and writes the result file back to Blob (or another store). This leverages the Foundry agent service's strengths (tooling plus enterprise integration) while keeping batch processing reliable. [Building A...ation Deck | PowerPoint], [AzureAIAge...soft Learn | Learn.Microsoft.com], [Understand...soft Learn | Learn.Microsoft.com]
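Because the agent returns free text unless instructed otherwise, it helps to prompt it for a small JSON payload and validate the reply before writing it, falling back to a known label rather than crashing the batch. The category set and field name here are illustrative, not from the source:

```python
import json

ALLOWED = {"billing", "technical", "other"}  # illustrative category set

def parse_category(reply: str, fallback: str = "other") -> str:
    """Extract a validated category from an agent reply expected to be
    JSON like {"category": "billing"}; fall back instead of crashing."""
    try:
        category = json.loads(reply).get("category", "").strip().lower()
    except (json.JSONDecodeError, AttributeError):
        return fallback
    return category if category in ALLOWED else fallback
```

Validating at this boundary keeps the output file deterministic even when the model occasionally replies with prose or an unexpected label.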
References:
- Tool best practices for Microsoft Foundry Agent Service
- Use Azure Functions with Azure AI Foundry Agent Service
- How to use Logic Apps with Azure AI Foundry Agent Service
- What are tools in Azure AI Foundry Agent Service?
If you have any remaining questions or additional details to share, feel free to let us know. We’ll be glad to provide further clarification or guidance.
Thank you!