FAQ for deep reasoning

These frequently asked questions (FAQ) describe the AI impact of the deep reasoning feature in Copilot Studio.

What is deep reasoning?

Deep reasoning models are advanced large language models designed to solve complex problems. They consider each question carefully, generating a detailed internal chain of thought before responding to the user.

How can you use deep reasoning models in Copilot Studio?

Deep reasoning models in Copilot Studio offer powerful capabilities for creating sophisticated agents. Models like Azure OpenAI o3 use deep reasoning to improve agent decision-making and return more accurate responses.

When building agents, you can add instructions that define the agent's tasks and how it accomplishes them. These tasks can range from simple to highly complex, requiring thorough analysis.

Makers can apply reasoning models to specific steps in the agent's instructions, enhancing the agent's ability to perform advanced reasoning and deliver more accurate, insightful results. Add deep reasoning models for tasks that involve scientific research, complex questions, or in-depth analysis of unstructured data. These models provide insights beyond the capabilities of simpler models.

To use reasoning models, add the keyword reason to specific steps of the agent's instructions. For example: Use reason to determine the next item in a mathematical series, such as 2, 5, 10, 17. This triggers the reasoning model at runtime for that specific step. Copilot Studio currently uses the Azure OpenAI o3 model for its advanced reasoning capabilities.
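
As a sketch, the keyword can be scoped to a single step in an agent's instructions. The scenario and steps below are illustrative examples, not taken from the product documentation:

```
1. Greet the customer and ask for their order number.
2. Look up the order status.
3. Use reason to analyze the order history and identify the most likely cause of any delay.
4. Summarize the findings for the customer in plain language.
```

With instructions like these, only step 3 invokes the reasoning model at runtime; the other steps run with the standard model, which keeps their responses fast.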

What are the intended uses of deep reasoning models?

Deep reasoning models are designed to handle complex tasks that require logical reasoning, problem-solving, and step-by-step analysis. For example, you can use deep reasoning models to:

  • Evaluate market trends and recommend the best investment opportunities. Deep reasoning models can break market data into smaller, manageable steps, analyze trends, and weigh factors such as historical data, current market conditions, and future projections to provide well-informed investment recommendations.

  • Analyze increased demand and recommend strategies to manage inventory. Models can analyze patterns in demand and supply, predict future inventory needs, and recommend strategies to manage inventory effectively. By considering factors like seasonal trends, market fluctuations, and supply chain dynamics, deep reasoning models can help businesses optimize their inventory management.

  • Solve differential equations and provide step-by-step explanations. Models can solve complex mathematical problems, such as differential equations, and explain each step of the solution. By breaking the problem into smaller steps and applying logical reasoning, deep reasoning models offer clear, detailed solutions to mathematical challenges.
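
As an aside, the kind of stepwise decomposition described above can be illustrated with ordinary code. The following minimal Python sketch (not part of Copilot Studio; the function name and parameters are illustrative) uses Euler's method to approximate the solution of the differential equation y' = -y with y(0) = 1, whose exact solution is e^(-t):

```python
import math

def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, using Euler's method."""
    h = (t1 - t0) / steps  # fixed step size
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # advance one step along the tangent line
        t += h
    return y

# y' = -y with y(0) = 1 has the exact solution y(t) = e^(-t)
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
exact = math.exp(-1.0)
print(approx, exact)  # the approximation approaches the exact value as steps increase
```

Each loop iteration is one small, explainable step, which is the same decompose-and-solve pattern a reasoning model applies to such problems.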

How were deep reasoning models evaluated and what metrics are used to measure performance?

Deep reasoning models used in Copilot Studio are evaluated for groundedness, responsible AI, and accuracy. Groundedness ensures that the model returns only content grounded in a specific, real-world context. Responsible AI checks protect against harms such as jailbreak attacks, cross-domain prompt injection attacks, and harmful content.

To measure against these dimensions, models are tested against a diverse set of scenarios and scored along each of these dimensions. All deep reasoning models are evaluated before being released.

What are the limitations of deep reasoning models? How can makers minimize the impact of these limitations?

  • Use of reasoning models: An agent can use deep reasoning models only if the capability is turned on in the agent's settings.

  • Response time: Because of the time required for analysis, responses from reasoning models tend to be slower than those from non-reasoning language models.

To minimize the impact of these limitations, you can:

  • Ensure that deep reasoning model capabilities are turned on only for agents that need them.

  • Use the keyword reason in agent instructions only for steps that benefit from deep reasoning models.

  • Use deep reasoning models for tasks that allow for longer response times. If necessary, let users know that some agent responses might take longer.

What operational factors and settings allow for effective and responsible use of deep reasoning models?

To help ensure that admins, makers, and users have a safe, compliant experience, follow these practices when using deep reasoning models:

  • Only allow deep reasoning models for agents that require complex reasoning steps. This ensures that the models are applied where they can provide the most value.

  • Include the keyword reason in the instructions to trigger the model at runtime only for tasks that require complex reasoning, not for every task.

  • Thoroughly test the agent to ensure the accuracy and reliability of the output provided by the deep reasoning model. Testing also helps identify any potential issues and ensures that the model performs as expected.

  • Use the activity map to review where your agent uses deep reasoning models in a session. Expand the deep reasoning node in the map to review the steps the model took and the model's output. This helps you determine if the reasoning model is delivering the intended functionality.

  • Compare the outputs with and without using a deep reasoning model by updating your instructions during testing.