Understand how content is shared with agents in Microsoft 365

Important

You need to be part of the Frontier preview program to get early access to Microsoft Agent 365. Frontier connects you directly with Microsoft’s latest AI innovations. Frontier previews are subject to the existing preview terms of your customer agreements. As these features are still in development, their availability and capabilities may change over time.

Microsoft Agent 365 brings AI-powered agents that work alongside you in Microsoft 365 productivity and collaboration tools. After you add an agent to your organization, such as by creating one from the Teams store, it can participate in everyday work scenarios: you can add it to Teams channels, group chats, email threads, shared documents, and business databases. Agents can process the information you share, store it, and generate new content or answers based on it.

Agents boost productivity by answering questions, drafting content, automating tasks, and more, but they also gain access to any content you share with them. That means an agent might reuse or expose that content in ways you don't expect. It's important to understand how agents access information and the potential risks of sharing sensitive content with them. Microsoft provides controls and warnings to help you use agents responsibly.

Treat agents as public collaborators and share content with caution.

Warning

The content you share with an agent, such as files, chat history, or emails, might be summarized or included in the agent's responses to other users, even individuals who didn't originally have access to that content. This risk applies regardless of any sensitivity labels or permissions on the content.

This article explains how agents access content across Microsoft 365, provides examples of agent behavior in various apps, and lists safeguards and best practices to protect your data. It also covers how others in your organization can interact with an agent you added and what transparency measures are in place.

Agent access patterns

In Microsoft Agent 365, agents access content through one of three access patterns (or operating modes). Each pattern defines whose permissions the agent uses when it accesses data.

  • Assisted agent (on-behalf-of, or OBO): In this mode, an agent acts on behalf of a specific user. The agent uses that user's credentials and permissions to access content, as if the user were driving the agent's actions. Many existing agents work this way today. For example, an OBO agent in a Teams chat can only see and do what you can. Assisted agents require you, the user, to consent to the agent accessing your data on your behalf, for example, by signing in and approving specific permissions for the agent (see the first sketch after this list).

  • Autonomous app: In this pattern, the agent acts as an independent application with its own privileges. It doesn't rely on any single user's credentials at runtime. Instead, it has an application identity (client ID) in Microsoft Entra ID with specific API permissions that an admin approves, analogous to a service or daemon application. For example, an agent might be granted read access to a SharePoint site and send-as permission on a mailbox; it then uses those app permissions, not any one person's account (see the first sketch after this list). Autonomous apps require admin consent or approval because the agent can potentially access multiple users' data directly. Organizations mitigate this risk by granting only the least privileges needed (the principle of least privilege) and by being able to disable or monitor the agent as needed.

  • Autonomous user: This is a new pattern introduced with Agent 365. Here, the agent has a user identity in your directory. The agent signs in as itself and can be added to teams and groups, have its own email address, and so on, just like any user (see the second sketch after this list). From a permissions standpoint, you can think of the agent as a full-fledged member of the organization. The agent can access content that's shared with it or in places where it's been added. Enabling such an agent requires administrative setup. Once the agent exists, any user who has access to it (including, but not limited to, the agent manager) can share content with it or add it to new places, and each such sharing action is a consent moment. The autonomous user pattern is powerful because it most closely resembles adding an employee, with the persistence of access that implies and the ability to act on shared content at scale.
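
The practical difference between the first two patterns is which identity the agent uses to get a token. The following minimal sketch, written in Python with the MSAL library, illustrates that contrast under the assumption of a registered Entra app; every ID, secret, and token shown is a placeholder, and an agent built on Agent 365 would normally get this plumbing from the platform rather than hand-rolling it.

```python
import msal  # pip install msal

TENANT_ID = "<tenant-id>"            # placeholder
CLIENT_ID = "<agent-app-client-id>"  # placeholder
CLIENT_SECRET = "<client-secret>"    # placeholder

app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# Assisted agent (OBO): exchange the token the signed-in user presented to the
# agent, so Microsoft Graph evaluates that user's permissions.
obo_result = app.acquire_token_on_behalf_of(
    user_assertion="<token-the-user-presented-to-the-agent>",  # placeholder
    scopes=["https://graph.microsoft.com/Files.Read"],
)

# Autonomous app: client credentials flow with no user involved, so Graph
# evaluates only the application permissions an admin consented to.
app_result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"],
)
```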
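
To make "just like any user" concrete for the autonomous user pattern, here's a hedged sketch that adds an agent's user identity to a team with the same Microsoft Graph call that adds a person. The access token, team ID, and agent user object ID are placeholders; in practice you'd usually add the agent through the Teams UI or your admin tooling.

```python
import requests  # pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {
    "Authorization": "Bearer <team-owner-or-admin-access-token>",  # placeholder
    "Content-Type": "application/json",
}

# Add the agent's user identity as a team member, exactly as you would a person.
member = {
    "@odata.type": "#microsoft.graph.aadUserConversationMember",
    "roles": [],
    "user@odata.bind": f"{GRAPH}/users('<agent-user-object-id>')",  # placeholder
}
resp = requests.post(f"{GRAPH}/teams/<team-id>/members", headers=headers, json=member)
resp.raise_for_status()
```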

How agents access content

Agents can access only the content that you share with them. Common sharing scenarios include:

  • Teams channels: Agents inherit access to all channel resources, including existing and future posts, files, and meeting transcripts.
  • Group chats: Agents can read chat history (depending on how much history you choose to include, just as when you add a new person to the chat) and all files or links shared in that chat. The agent can also access live updates as the conversation continues.
  • Emails: CCing an agent gives it access to the full thread and its attachments.
  • Files: Sharing a file or folder grants the agent persistent access until you revoke it (see the sketch after this list for one way to review and revoke sharing).
  • Microsoft Dataverse: Agents that are granted security roles can access structured business data.
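
Because file and folder sharing persists until you revoke it, it's worth periodically reviewing who, or what, a sensitive item is shared with. The following sketch uses the Microsoft Graph permissions endpoints on a OneDrive item; the item ID, permission ID, and access token are placeholders, and the Manage access pane in OneDrive or SharePoint accomplishes the same thing without code.

```python
import requests  # pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <delegated-access-token>"}  # placeholder
ITEM = "<item-id>"  # placeholder for the shared file or folder

# List every sharing grant on the item, whether it points at a person or an agent.
perms = requests.get(f"{GRAPH}/me/drive/items/{ITEM}/permissions", headers=headers)
perms.raise_for_status()
for p in perms.json().get("value", []):
    print(p.get("id"), p.get("roles"), p.get("grantedToV2"))

# Deleting a permission ends that grant, so the agent no longer has access.
requests.delete(
    f"{GRAPH}/me/drive/items/{ITEM}/permissions/<permission-id>",  # placeholder
    headers=headers,
).raise_for_status()
```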

Risks of sharing content with an agent

  • Exposing content to unintended audiences. Agents might summarize content for users who didn't originally have access to it.
  • Surfacing old or forgotten content.
  • Persistent access to shared files.
  • Cross-data leakage. Agents might bridge data across roles or departments.
  • No human judgment on sensitive content.

Built-in protections

  • Sensitivity labels protect files but not summaries.
  • Permission boundaries prevent access to unshared content.
  • Audit logs track agent actions.
  • Admin controls govern agent creation and access.
  • User warnings appear at sharing points.
  • Policy enforcement applies through data loss prevention (DLP) and compliance rules.

Trust and transparency

  • Agents are clearly labeled in the UI.
  • AI-generated content is disclosed.
  • First-time interactions include warnings.
  • Data remains within Microsoft 365 compliance boundaries.
  • Admins can revoke agent access anytime.
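
For an autonomous user agent, which is represented by a user identity in Microsoft Entra ID, one quick way an admin could cut off access is to disable that account. This is a hedged sketch rather than the prescribed Agent 365 procedure; the object ID and token are placeholders, and the Microsoft 365 admin center or Agent 365 controls are the usual path.

```python
import requests  # pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {
    "Authorization": "Bearer <admin-access-token>",  # placeholder
    "Content-Type": "application/json",
}

# Disabling the agent's user account blocks sign-in for that identity.
resp = requests.patch(
    f"{GRAPH}/users/<agent-user-object-id>",  # placeholder
    headers=headers,
    json={"accountEnabled": False},
)
resp.raise_for_status()
```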

Best practices for collaborating with agents

  • If you're interested in using an agent, consult your IT department or admin about which agents are available and approved in your organization.

  • Review any internal guidelines or training that your organization provides on AI and data handling.

  • Start with low-risk interactions. For example, have the agent work with publicly shareable or nonsensitive content so you can get comfortable with its behavior before trusting it with more sensitive tasks.