This article explains prompts and prompt engineering as key concepts to help you create powerful generative AI capabilities that can be used in Copilot Studio.
Important
- Prompts use models powered by Azure AI Foundry.
- This capability might be subject to usage limits or capacity throttling.
Prerequisites
- Your environment is in the list of available regions.
- You need Copilot Credits.
- Microsoft Dataverse is installed on the environment.
What is a prompt
A prompt is, at its core, a natural language instruction that tells a generative AI model to perform a task. The model follows the prompt to determine the structure and content of the text it needs to generate. Prompt engineering is the process of creating and refining the prompt used by the model.
A prompt builder experience allows makers to build, test, and save reusable prompts. In this experience, you can also use input variables and knowledge data to provide dynamic context data at runtime. You can share these prompts with others and use them in agents, workflows, or apps.
These prompts can be employed for many tasks or business scenarios, such as summarizing content, categorizing data, extracting entities, translating languages, assessing sentiment, or formulating a response to a complaint. For instance, you could make a prompt to pick out action items from your company emails and use it in a Power Automate workflow to build an email processing automation.
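The action-item scenario above can be sketched in plain code. This is a minimal, hypothetical illustration of a reusable prompt with an input variable; the template text and the `email_body` variable name are assumptions for this example, not a Copilot Studio API:

```python
from string import Template

# Hypothetical reusable prompt. The instruction text and the input
# variable ($email_body) mirror what you might define in the prompt
# builder; they are illustrative only.
ACTION_ITEMS_PROMPT = Template(
    "Extract every action item from the email below as a bulleted list.\n"
    "If there are no action items, reply with 'None'.\n\n"
    "Email:\n$email_body"
)

def build_prompt(email_body: str) -> str:
    """Fill the input variable with dynamic context at runtime."""
    return ACTION_ITEMS_PROMPT.substitute(email_body=email_body)

prompt = build_prompt("Hi team, please send the Q3 report by Friday.")
print(prompt)
```

At runtime, a workflow would pass the incoming email text into the input variable and send the resulting prompt to the model, keeping the instruction itself fixed and reusable.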
In Copilot Studio, prompts can be used as agent tools to improve the chat experience and enable advanced AI automations, or as workflow nodes to infuse AI actions into deterministic automations.
Human oversight
Human oversight is an important step when working with content generated by a generative AI model. Such models are trained on huge amounts of data and can produce errors and biases. A human should review generated content before you post it online, send it to a customer, or use it to inform a business decision. Human oversight helps you identify potential errors and biases. It also makes sure the content is relevant to the intended use case and aligns with the company's values.
Human review can also help to identify any issues with the model itself. For example, if the model is generating content that isn't relevant to the intended use case, then you might need to adjust the prompt.
Responsible AI
We're committed to creating responsible AI by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We're putting these principles into practice across the company to develop and deploy AI that has a positive impact on society. We take a comprehensive approach, combining innovative research, exceptional engineering, and responsible governance. Alongside OpenAI's leading research on AI alignment, we're advancing a framework for the safe deployment of our own AI technologies aimed to help guide the industry toward more responsible outcomes.
Learn more about transparency in Transparency note for Azure OpenAI.