These frequently asked questions explain the capabilities, usage, and safeguards of the AI-powered approval stages (AI approvals) in Microsoft Copilot Studio. AI approvals allow an agent flow to automatically approve or reject requests based on predefined criteria, while ensuring humans remain in control for important decisions. Here are some common questions and answers about this feature.
What are AI approvals?
AI approvals are intelligent, automated decision steps in approval workflows. AI approvals use AI (Azure OpenAI models or models that you can bring from Azure AI Foundry) to evaluate approval requests against your business rules and return an "Approved" or "Rejected" decision with a rationale.
What are the capabilities of AI approvals?
Unlike basic rule-based automation, AI approvals can interpret unstructured data and complex documents (like PDFs or images attached to a request) and apply nuanced logic to make a decision. For example, an AI approval could read a written justification, check for policy keywords, and then decide.
AI approval stages can also be combined with human stages, so that while AI handles routine decisions, people still oversee and finalize any critical or exceptional cases. In summary, AI approvals automate the repetitive yes/no decisions in a process, speeding up workflows without removing human oversight where it matters.
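To make the decision step concrete, here's a minimal sketch of the pattern an AI approval stage implements: an Azure OpenAI model is given explicit business rules plus the request, and returns a structured decision with a rationale. This is an illustration only; Copilot Studio configures and runs this step for you inside an agent flow, and the endpoint, key, deployment name, and rules below are assumptions.

```python
import json
from openai import AzureOpenAI  # pip install openai

# Hypothetical endpoint, key, and deployment; in Copilot Studio this step is
# configured in the flow designer rather than written in code.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-KEY",
    api_version="2024-06-01",
)

RULES = """You are an approval agent. Apply these rules strictly:
- Approve expense claims of 100 USD or less that include a receipt.
- Reject everything else.
Respond with JSON: {"decision": "Approved" or "Rejected", "rationale": "..."}"""

def evaluate(request_text: str) -> dict:
    """Return an Approved/Rejected decision with a rationale."""
    response = client.chat.completions.create(
        model="gpt-4.1",  # your Azure OpenAI deployment name (assumed)
        temperature=0,    # deterministic, repeatable decisions
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": request_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(evaluate("Team lunch, 42 USD, receipt attached."))
# -> {"decision": "Approved", "rationale": "..."}
```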
What is the intended use of AI approvals?
AI approvals are designed for common business scenarios with well-defined criteria, streamlining everyday workflows by automating routine decisions. Typical use cases include:
Expense reimbursement approvals: Automatically approve claims under certain amounts with valid receipts, letting managers focus only on exceptions (see the sketch after this list).
Purchase order (PO) approvals: Evaluate requests against budget limits and vendor lists, auto-approving standard POs within policy.
Travel request approvals: Auto-approve compliant travel requests while rejecting requests with policy violations.
Vendor onboarding: Accept or reject applications by checking qualifications and compliance requirements against predefined criteria.
Invoice processing approvals: Validate invoices by matching amounts to purchase orders and confirming required documentation is present.
Document review approvals: Confirm contracts or policies include required elements and meet formatting standards before advancing to the next step.
Time-off request approvals: Approve leave requests when employees have sufficient balance and no scheduling conflicts exist.
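What these scenarios share is that most of the decision reduces to explicit, checkable conditions; the AI stage adds value by reading those facts out of unstructured inputs like receipts and forms. As a hedged sketch of the expense case, with hypothetical field names and thresholds, the underlying policy is just plain rules:

```python
from dataclasses import dataclass

# Hypothetical request shape; a real agent flow would extract these fields
# from the submitted form or attached receipt.
@dataclass
class ExpenseClaim:
    amount_usd: float
    has_receipt: bool
    category: str

AUTO_APPROVE_LIMIT = 100.00                  # assumed policy threshold
ALLOWED_CATEGORIES = {"meals", "travel", "supplies"}

def triage(claim: ExpenseClaim) -> str:
    """Auto-decide clear-cut claims; route everything else to a manager."""
    if (claim.has_receipt
            and claim.amount_usd <= AUTO_APPROVE_LIMIT
            and claim.category in ALLOWED_CATEGORIES):
        return "Approved"
    if not claim.has_receipt:
        return "Rejected"                    # policy requires a receipt
    return "NeedsHumanReview"                # over limit or unusual category

print(triage(ExpenseClaim(42.00, True, "meals")))  # Approved
```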
AI approvals were designed for routine, well-defined decisions. However, there are scenarios where the system might not perform reliably or responsibly. We encourage customers to use AI approvals in their innovative solutions or applications, but to consider the following principles when choosing a use case:
High-stakes or life-altering decisions: The system wasn't designed to handle decisions that affect health, safety, finances, or legal status. Examples include insurance claims, medical authorizations, loan approvals, or immigration determinations, which require human judgment and accountability.
Legal or disciplinary matters: Use cases involving legal liability, compliance interpretation, or employee discipline might exceed the system’s intended scope. While AI can summarize inputs, final decisions in these areas should remain with humans.
Subjective or ambiguous criteria: The system might struggle with decisions that rely on taste, discretion, or complex trade-offs—such as evaluating creative work or assessing quality—where standards are not easily codified.
Sensitive or ethically complex scenarios: AI approvals weren't designed for decisions involving personal attributes, potential discrimination, or generation of restricted content. These uses raise responsible AI concerns and might require additional safeguards.
Regulated industries and compliance-sensitive workflows: In domains like healthcare, finance, or aviation, regulatory requirements might necessitate human oversight even for routine decisions. The system wasn't evaluated for compliance in these contexts.
Foreseeable but unintended uses: As adoption grows, users might attempt to apply AI approvals to areas like performance reviews, hiring decisions, or customer eligibility assessments. These uses were not part of the system’s design or impact assessment and might introduce risks if not carefully managed.
Important
Legal and regulatory considerations. Organizations need to evaluate potential specific legal and regulatory obligations when using any AI services and solutions. Services and solutions might not be appropriate for use in every industry or scenario. Restrictions might vary based on regional or local regulatory requirements. Additionally, AI services or solutions aren't designed for and may not be used in ways prohibited in applicable terms of service and relevant codes of conduct.
What are the technical limitations of AI approvals, and how can users minimize their impact?
While AI approvals are a powerful capability, we urge users to be mindful of their limitations:
AI approvals rely on provided rules: AI strictly follows your instructions and data. If your prompt is unclear or incomplete, the AI approval might make wrong decisions or fail. Define criteria explicitly; saying "approve if reasonable" without defining "reasonable" leads to misinterpretation.
Possibility of errors: AI approvals can make mistakes due to ambiguous inputs, complex edge cases, or misreading poorly scanned documents. Outputs aren't always 100% accurate, so oversight is essential for borderline cases.
Lack of human intuition: AI approvals don't understand context beyond what they’re told and can't ask clarifying questions or use gut feelings. The AI approval might miss nuances a human would catch, like spotting suspicious expenses that "look too high for that trip."
No learning from experience: AI approvals don't adapt from each approval—they don't change behavior unless you update the prompt. New scenarios not covered by existing rules require ongoing maintenance as policies evolve.
Data quality dependency: AI approval decisions are only as good as the input data. Poor quality files, incomplete documents, or illegible scans can cause incorrect decisions or system failures.
Integration and performance constraints: Complex approval criteria or decisions requiring real-time data from multiple systems might reduce accuracy and increase processing time.
Requires responsible configuration: Users must configure AI approvals ethically, with proper human fail-safes and bias-free rules. Always ensure instructions align with company policy and ethical guidelines.
No access to real-time information: AI approvals can only work with data explicitly provided as input. They can't check current affairs, news, or events unless that information is fed into the approval process.
To reduce risks and improve reliability when using AI approvals:
Include human oversight: Route critical or ambiguous cases to manual review stages to ensure accountability and judgment.
Test with diverse examples: Use historical data and edge cases to validate system behavior before deployment.
Refine prompts regularly: Update instructions as policies evolve or new scenarios emerge to maintain relevance and accuracy.
Avoid vague criteria: Ensure prompts are explicit and well-defined. Avoid terms like “reasonable” without clear context (see the example at the end of this section).
Monitor decisions: Use tools like prompt builder Activity to track approval rates and identify patterns or errors.
Train users: Educate staff on interpreting AI rationales and override procedures to build trust and transparency.
Remember: AI can confidently execute flawed instructions, so clear, correct guidance is essential.
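To illustrate the difference, here is a small hedged example of the same expense criterion written two ways; the conditions and threshold are assumptions, not product requirements:

```python
# A vague criterion that forces the model to guess:
VAGUE = "Approve the expense if it seems reasonable."

# The same intent made explicit and testable:
EXPLICIT = """Approve the expense only if ALL of the following hold:
- The amount is 100 USD or less.
- A legible receipt is attached.
- The category is one of: meals, travel, supplies.
Otherwise reject, and state which condition failed."""
```

The explicit version also makes the AI's rationale checkable: a reviewer can verify that the condition it cites actually failed.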
What operational factors and settings allow for effective and responsible use of the AI approvals experience?
To use AI approvals effectively and safely, consider these operational best practices:
Set low temperature for consistency: Use low temperature settings (near 0) to ensure the AI makes deterministic, predictable decisions rather than varying responses to identical inputs. Default Copilot Studio settings are already optimized for reliability (see the consistency-test sketch at the end of this section).
Choose the right model: GPT-4.1 is typically a good fit for most approval scenarios. Advanced reasoning models (like o3) might handle complex logic better but are slower. Microsoft's provided models are pre-integrated and tested, though you can bring your own fine-tuned models from Azure AI Foundry if you have specific requirements or custom needs.
Implement human oversight: Configure human or manual stages to which critical decisions can be routed. Human and manual stages ensure that humans are always in control.
Test thoroughly in sandbox: Run extensive tests with historical data and sample requests before going live. Deliberately test edge cases—missing fields, conflicting rules, unusual scenarios. Verify the end-to-end workflow triggers correctly.
Monitor decisions: All decisions are logged in the prompt builder Activity section in Power Automate. Use that data to track metrics like approval rates and assess the correctness of the AI approval decisions.
Regularly update criteria: Treat AI prompts as living documents. Update instructions as policies change or new scenarios emerge. Incorporate feedback from managers about AI being too strict or lenient in specific areas.
Provide transparency and training: Train relevant staff on interpreting AI rationales and override procedures. Inform end-users that requests might be initially evaluated by AI. Clear expectations prevent confusion and build trust.
By tuning AI settings for consistency, embedding human oversight, and actively managing the process, you ensure AI approvals stay effective and on track. Think of it as a partnership: AI handles volume and speed, humans handle guidance and exceptions.
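One way to act on the testing and consistency advice above is to replay labeled historical requests and check that decisions are both correct and stable across runs. A minimal sketch, assuming the evaluate() helper from the earlier sketch is in scope and using hypothetical labeled cases:

```python
# Hypothetical labeled history: (request text, known-correct decision).
GOLDEN_CASES = [
    ("Team lunch, 42 USD, receipt attached.", "Approved"),
    ("Conference travel, 900 USD, no receipt.", "Rejected"),
]

def regression_test(runs_per_case: int = 3) -> None:
    """Fail loudly if any decision is wrong or varies between runs."""
    for text, expected in GOLDEN_CASES:
        decisions = {evaluate(text)["decision"] for _ in range(runs_per_case)}
        assert decisions == {expected}, (
            f"{text!r}: got {decisions}, expected {expected!r}"
        )

regression_test()
```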
What protections are in place within Copilot Studio for responsible AI?
What kind of content moderation is implemented?
The GPT models are trained on internet data, which is great for building a general world model. At the same time, they can inherit toxic, harmful, and biased content from those same sources. The models are trained to behave safely and not produce harmful content, but they can still occasionally generate toxic output. AI approvals use the Azure AI Content Safety service to apply state-of-the-art content moderation within AI prompts. This moderation includes analyzing the generated output with multi-severity text scanners and protecting against prompt injection attacks. The output is also scanned for regurgitation of protected material.
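As an illustration of the kind of multi-severity scanning involved, the Azure AI Content Safety SDK can be called directly. Copilot Studio applies this moderation for you, so no code like this is required; the endpoint, key, and blocking threshold below are assumptions.

```python
# pip install azure-ai-contentsafety
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Hypothetical Content Safety resource endpoint and key.
client = ContentSafetyClient(
    "https://YOUR-RESOURCE.cognitiveservices.azure.com",
    AzureKeyCredential("YOUR-KEY"),
)

def max_severity(text: str) -> int:
    """Return the highest severity found across the harm categories."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return max((item.severity or 0) for item in result.categories_analysis)

rationale = "Your request is rejected because ..."
if max_severity(rationale) >= 2:  # assumed blocking threshold
    print("Block or escalate this output for human review.")
```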
What language models are supported, where are they hosted, and how can I access them?
AI approvals support the GPT-4.1 mini, GPT-4o, GPT-4.1, and o3 models, which are hosted on Azure OpenAI Service. You can access these models through prompts across Power Platform, in your applications, flows, and agents.
To learn more, see What's new in Azure OpenAI Service?
Is my data used to train or improve the large language models?
AI approvals run on Azure OpenAI Service hosted by Microsoft. Customer data isn't used to train or improve any of the Azure OpenAI Service foundation models. Microsoft doesn't share your customer data with a third party unless you have granted permission to do so. Neither customer prompts (input) with their grounding data nor the model responses (output) are used to train or improve Azure OpenAI Service foundation models.
How are images of people processed?
AI approvals aren't intended for use in identifying individuals based on facial features or biometric data. When you submit images containing people in AI approvals, the system automatically applies a face blurring feature before analyzing the images to protect individual privacy. This blurring step helps address privacy concerns by preventing identification based on facial features. With blurring, no facial recognition or facial template matching is involved. Instead, any identification of well-known individuals relies on contextual cues, like uniforms or unique settings, not on their faces. This privacy measure shouldn't impact the quality of the results you receive. Face blurring might occasionally be referenced in the system's responses.
Learn more in Face blurring.
What are some potential harms when using images or documents in prompts?
AI approvals mitigate most of the risks involved when using images or documents in prompts, but some risks still require extra care from the prompt creator:
Images or documents can contain harmful text or visuals that might impact your downstream processes.
Images or documents can include special and possibly hidden instructions that might compromise or override the initial prompt.
Images or documents can contain instructions that could lead to the generation of content that is protected intellectual property (IP).
Prompts can produce biased comments on images or documents.
Extracting information from low-quality images or documents can lead to hallucination.
What kinds of issues might arise when using AI approvals, and how can I handle them?
When using AI approvals, you might encounter issues like analysis failures (when AI can't confidently apply rules), wrong approval decisions (false positives/negatives), inconsistent outcomes on similar requests, or processing delays with complex cases. To handle these challenges effectively, ensure your workflow routes ambiguous or failed requests to human stages.
Implement consistent and rigorous testing throughout development and deployment to identify potential failure points early. Use low temperature settings for predictable outcomes, and continuously refine your prompts based on observed errors. Regular monitoring and iterative improvements will help maintain system reliability and accuracy over time.
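As a minimal sketch of the routing advice above, assuming the evaluate() helper from the earlier sketch is in scope and using a hypothetical escalate_to_human() stage, the fallback logic could look like this:

```python
def decide_with_fallback(request_text: str) -> str:
    """Auto-decide when the AI returns a clean verdict; escalate otherwise."""
    try:
        outcome = evaluate(request_text)
    except Exception:
        return escalate_to_human(request_text, reason="analysis failed")
    if outcome.get("decision") not in ("Approved", "Rejected"):
        return escalate_to_human(request_text, reason="ambiguous decision")
    return outcome["decision"]

def escalate_to_human(request_text: str, reason: str) -> str:
    # Placeholder: in an agent flow this would be a human approval stage.
    print(f"Escalating ({reason}): {request_text}")
    return "PendingHumanReview"
```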