Unsolicited Adaptive Prompting in Copilot: Mimicry, Boundary Issues, and Accessibility Impact

Cindy-Loo 20 Reputation points
2026-03-02T15:52:22.8033333+00:00

Category:

Copilot Behaviour / UX / Accessibility

Description:

I’m reporting a recurring behaviour in Copilot that affects clarity, authorship boundaries, and cognitive load during extended conversations. Copilot intermittently inserts what appears to be system‑level adaptive prompting into the conversation without user request or consent. These prompts blend into the assistant’s voice and are difficult to distinguish from intentional assistant output.

Observed Behaviour:

  • Adaptive mimicry: Prompts change form depending on context. They may appear as separate lines, appended to the end of the assistant’s previous message, or embedded within quoted text.
  • Tone and cadence mimicry: Prompts often match the emotional tone and rhythm of the conversation, making them appear authored by the assistant rather than injected by the system.
  • Statement‑form prompting: Some prompts are not phrased as questions. They appear as declarative sentences that function as nudges, which makes authorship ambiguous.
  • Lack of user control: There is no toggle to disable this behaviour, no transparency about when it occurs, and no clear boundary between assistant output and system‑initiated prompting.
  • Cognitive load concerns: For users who rely on strict conversational structure or have limited cognitive energy, distinguishing between genuine assistant output and injected prompts is taxing. This is especially true when the prompts adapt to avoid detection.
  • Ethical ambiguity: Injecting content that mimics the assistant’s voice without explicit user consent blurs authorship and undermines trust.

Impact: This behaviour affects user autonomy, clarity, and accessibility. It introduces uncertainty about what the assistant is actually saying versus what the system is nudging the user toward. For cognitively atypical users, this creates unnecessary load and disrupts the conversational structure they rely on.

Suggested Improvements:

  • Add a user‑visible toggle to disable all unsolicited prompting.
  • Ensure system‑initiated prompts are visually distinct from assistant output.
  • Avoid adaptive mimicry of tone, cadence, or placement.
  • Maintain strict separation between assistant responses and system‑level suggestions.
  • Document this behaviour clearly so users understand when and why it occurs.

User Context: Cognitively atypical user who relies on precise conversational boundaries and low‑load interaction patterns.

Microsoft Copilot | Windows Copilot | Copilot+ PC

Answer accepted by question author
  1. Lucus-V 5,620 Reputation points Microsoft External Staff Moderator
    2026-03-02T20:29:29.22+00:00

    Hi Cindy-Loo,
    Welcome to Microsoft Q&A forum. I'm happy to help.

    We greatly appreciate your report and your effort to make Copilot better, and we thank you for your patience.

    I really wish I could assist you directly on this issue. However, the behavior you're experiencing is tied to back‑end systems and service functionality, and unfortunately, we don't have the authority or access required to investigate or make changes at that level.

    Please note that this is a user-to-user support forum. Moderators and contributors, including external Microsoft staff, cannot directly intervene in Microsoft product features or access back-end systems. Our role is limited to providing technical guidance on reported issues, requests, and ideas.

    Given these limitations, the most appropriate avenue for product changes or feature requests is to submit feedback through the official channels, such as the in‑product feedback option in Copilot or the Feedback Hub on Windows, so the product team can review it.

    We truly appreciate the care and intentionality you bring to your work with Copilot.

    We are sorry for any inconvenience this may have caused and thank you for your understanding.

