The prompt node makes a single AI call from inside an agent flow or workflow. Provide an instruction, optionally pass dynamic content from earlier steps, and the model returns text or structured output that downstream actions consume. Use the prompt node when the step needs the model to transform or generate text, without tool orchestration or knowledge retrieval.
By using the prompt node, you can:
- Send a prompt instruction to a model and use the response in later steps.
- Insert dynamic content from earlier steps directly into the prompt template.
- Select the model that powers the call.
- Choose how the response is shaped, including text or structured output.
## Add a prompt node
1. In Copilot Studio, go to Flows and open an existing workflow, or create a new one.
   - New workflow: You land on the designer to configure a trigger.
   - Existing workflow: Open the workflow and go to the Build tab.
2. Select the Prompt icon on the Add panel. The configuration panel for the prompt node opens.
3. Create your prompt: craft instructions, add knowledge resources, configure a connection, select the language model you want to use, and choose the output shape that best fits your scenario.
4. When you're done, close the configuration panel.
The sections that follow go into more detail on how to create a prompt in the prompt node, how to choose the output shape, and how to use the prompt response in your workflow. If you're new to building with prompts in Copilot Studio, we recommend you read through these sections to get the most out of the prompt node.
## Write the prompt instruction
In the Instructions field, write the prompt instruction the node runs every time the workflow reaches this step. Use the dynamic content picker to insert tokens from earlier steps so the template fills in with real run-time data.
For example, in a workflow that triggers when a Forms response is submitted, the instructions might be *Rewrite the following customer feedback into a one-line summary:* followed by the Comments token from the form.
Be specific about the task and the format of the response. If you need a list, ask for a list. If you need a one-paragraph summary, say so.
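To illustrate, here's how a vague instruction compares with a specific one for the same task. This is a sketch, not a prescribed template; `{Comments}` stands in for a dynamic content token you would insert with the picker.

```text
# Vague: the model must guess length, tone, and format
Summarize this feedback.

# Specific: task, length, and constraints are all stated
Rewrite the following customer feedback into a one-line summary.
Keep it under 20 words and preserve any product names exactly.

Feedback: {Comments}
```

The second version gives the model everything it needs to produce a predictable result, which matters when downstream steps depend on the shape of the response.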
To pick the model that powers the call, use the model dropdown in the upper right of the Instructions box. Select a more capable model when the task needs careful interpretation or longer responses. Select a faster model when the task is simple and runs at high volume.
## Choose the output shape
Use the Output dropdown to control the shape of what the prompt node returns. The shape determines how downstream workflow steps consume the result.
| Output type | What you get | When to use it |
|---|---|---|
| Text | A single string. | The downstream step just inserts the model's answer (for example, into an email body or a Teams message). |
| Structured output | A predefined object with named fields. | You want consistent fields without writing a schema. For example, a category plus a sentiment label. |
| Custom structured output | An object that matches a JSON schema you define. | The downstream workflow needs strict, machine-readable fields to branch on, write to columns, or send to an API. |
When you pick a structured output, each field becomes its own dynamic-content token that downstream actions can reference directly.
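As an illustration, the category-plus-sentiment example above could be described for a custom structured output with a JSON schema along these lines. The field names and the exact schema dialect your environment accepts are assumptions for the sketch:

```json
{
  "type": "object",
  "properties": {
    "category": {
      "type": "string",
      "enum": ["billing", "technical", "account"],
      "description": "Which product area the feedback concerns"
    },
    "sentiment": {
      "type": "string",
      "enum": ["positive", "neutral", "negative"],
      "description": "Overall tone of the feedback"
    }
  },
  "required": ["category", "sentiment"]
}
```

With a schema like this, `category` and `sentiment` would each surface as a separate token that later steps can branch on or write to columns.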
## Use the prompt response in your workflow
When the prompt node finishes, its result is exposed as dynamic content for any downstream step.
1. Select the next action where you want to use the result (for example, Send an email or Update a row).
2. Open the dynamic content picker on the field you want to fill.
3. Select the output from the earlier prompt node step:
   - Response output: A single prompt response token.
   - Structured or Custom structured output: One token per field.
## Automation scenarios
The prompt node is at its best as one focused step in a longer workflow. Earlier steps gather the inputs, the prompt node generates or transforms text, and later steps push the result into the systems people use.
### Translate inbound customer messages
A workflow triggers when an email arrives in a global support inbox. An earlier step extracts the email body; the prompt node sends *Translate the following customer message into English. Preserve product names and reference numbers exactly:* followed by the Body token from the trigger. Later steps route the translated message to the right regional team and store the original alongside the translation in Dataverse.
### Extract structured fields from free-form support tickets
A workflow triggers when a support ticket is created in Dynamics 365 with a free-form description. The prompt node, with Output set to Custom structured output, sends *Extract the following fields from this customer description: urgency (low|medium|high), category (billing|technical|account), and primary intent (one short phrase).* followed by the Description token. Later steps route the ticket based on category, set the priority based on urgency, and prefill the Primary intent field on the case record.
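The three fields in that scenario might be defined with a schema sketch like the following. The property names and schema dialect are assumptions; match them to whatever your custom structured output editor expects:

```json
{
  "type": "object",
  "properties": {
    "urgency": {
      "type": "string",
      "enum": ["low", "medium", "high"]
    },
    "category": {
      "type": "string",
      "enum": ["billing", "technical", "account"]
    },
    "primary_intent": {
      "type": "string",
      "description": "One short phrase stating what the customer wants"
    }
  },
  "required": ["urgency", "category", "primary_intent"]
}
```

Using `enum` values keeps the routing step simple: a switch on `category` only ever has three cases to handle.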
### Generate a clean lead summary from raw call notes
A workflow triggers when a sales rep saves call notes to a Dynamics 365 lead. The prompt node sends *Rewrite the following raw call notes into a three-bullet executive summary, focusing on the customer's stated problem, the timeline they mentioned, and any next steps:* followed by the Notes token from the trigger. Later steps update the lead's Description field with the summary and post it as a comment on the related opportunity.
## Frequently asked questions
### When should I use a prompt node vs. an agent node?
Both nodes call AI from a workflow, but they're built for different jobs.
| Capability | Prompt node | Agent node |
|---|---|---|
| Tool orchestration | Code interpreter only | Full access to MCP servers and connectors |
| Knowledge sources | Dataverse only | SharePoint, public websites, and more |
| Human in the loop | No | Yes |
| Task complexity | Single-turn text generation | Multi-turn reasoning across tools and sources |
Use the prompt node when you just need the model to transform or generate text, like rewriting, summarizing, translating, or extracting fields. Use the agent node when the step needs reasoning, tool orchestration, or grounded knowledge.
The prompt node is also faster and simpler to set up than the agent node: there's no agent to configure, no tools to attach, and no knowledge sources to wire up, just a prompt instruction and a model.
### Does the prompt node have access to my data?
The prompt node only sees what's in the Instructions field, including any dynamic content tokens you insert. It doesn't browse files, search SharePoint, or read mail on its own. If the model needs to ground its response in external data, fetch the data with an earlier step and pass it into the template, or use an agent node instead.