Copilot Glossary
copilot – Copilots are natural language assistants that can help with creative tasks, generate insights, execute automated workflows, and more. Copilots are composed of workflows, actions, knowledge, and triggers, powered by one or more foundation models and an orchestrator that oversees and synchronizes the copilot's operations. Copilots can power generative AI capabilities in apps and web services, and can be published as copilot extensions to extend and customize Microsoft Copilot.
Custom copilot – a custom version of Microsoft Copilot that combines instructions, additional or custom knowledge, and any combination of skills.
Azure OpenAI Service – an API service that lets developers query OpenAI’s LLMs with the enterprise guarantees that customers expect from Microsoft.
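For illustration, a minimal sketch of calling the service through the openai Python SDK; the endpoint, API version, and deployment name below are placeholders, not official values.

```python
# Minimal sketch: querying a chat model through Azure OpenAI (openai >= 1.x).
# The endpoint, key, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                            # assumed API version
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",                         # your Azure deployment name
    messages=[{"role": "user", "content": "Summarize what a copilot is in one sentence."}],
)
print(response.choices[0].message.content)
```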
Azure AI Studio – a pro-code development platform offering full customization and control over generative AI applications and models, with flexible, integrated visual and code-first tooling and prebuilt quick-start templates.
Microsoft Copilot – an accessible, cohesive AI interface that provides users with access to AI capabilities based on their needs and preferences, while also integrating with Microsoft products to maximize value. Microsoft Copilot is your everyday AI companion.
Microsoft Copilot Studio – a low-code/no-code tool that allows users to easily integrate artificial intelligence into any M365 or Power Platform product, offering prebuilt and custom AI models and systems for tasks like form processing, object detection, prediction, and more.
Copilot extensions – copilot extensions customize and enhance Microsoft Copilot with custom copilots, enabling new actions and customized knowledge for grounding within Copilot. With Copilot extensions, users get a Microsoft Copilot experience tailored with the data, systems, and workflows they use every day.
Plugins – a type of copilot extension. Microsoft has defined a new plugin manifest that unlocks the ability to write a plugin once and run it anywhere, on any copilot surface. A plugin should be considered an atomic, functional extensibility artifact that can be composed with any other copilot extension.
Microsoft Copilot connectors – a type of copilot extension for low-code and no-code experiences via Microsoft Copilot Studio. Copilot connectors bundle capabilities and data from Microsoft Graph connectors, Power Platform connectors, and Microsoft Fabric.
Microsoft Graph connectors – either custom-built by developers or prebuilt connectors enabled by IT admins that index data from line-of-business (LoB), on-premises, and SaaS services into Microsoft Graph, where it can enhance and augment the capabilities of intelligent services like Microsoft Copilot, Search, and ContextIQ alongside M365 data and content.
Power Platform connectors – connectors that allow Microsoft Power Platform to interact with external data sources and services.
Teams message extension – a feature of Microsoft Teams that allows users to search or initiate actions in a web service or external system through a simple UX element called an Adaptive Card. Teams message extensions can now also be used as plugins.
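For illustration, a minimal Adaptive Card payload expressed as a Python dict; the text, title, and URL are placeholder values, not taken from any particular service.

```python
# Sketch of a minimal Adaptive Card payload such as a message extension might return;
# the order details and URL are placeholders.
import json

card = {
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.5",
    "body": [
        {"type": "TextBlock", "text": "Order #1234", "weight": "Bolder", "size": "Medium"},
        {"type": "TextBlock", "text": "Status: shipped", "wrap": True},
    ],
    "actions": [
        {"type": "Action.OpenUrl", "title": "View order", "url": "https://example.com/orders/1234"}
    ],
}

print(json.dumps(card, indent=2))
```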
Prompt – the input to a generative AI model from which it generates an output (often called an “answer” or “completion”). Usually text, but multimodal models can use text, images, audio, or a combination of these as the prompt.
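For illustration, a text-only and a multimodal prompt sketched in the OpenAI-style chat message format; the exact structure varies by model and API, and the image URL is a placeholder.

```python
# Sketch of two prompts in an OpenAI-style chat message format.
# The structure differs across models/APIs; the image URL is a placeholder.
text_prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about autumn."},
]

multimodal_prompt = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this chart?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }
]
```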
Responsible AI (RAI) – the set of norms and standards that Microsoft seeks to define to help advance the safe and secure use of AI for the benefit of society at large, through governance, internal policy, enablement, external engagement, and thought leadership.
Foundation model – an AI model that is trained on broad data such that it can be applied across a wide range of use cases to support tasks like language processing, visual comprehension, text generation, code writing, and more. See also: LLM, SLM.
Generative AI – a form of AI characterized by its ability to create natural, human-like content in response to input prompts, including prose, verse, music, and images.
GPT (generative pretrained transformer) – a class of foundation models created by OpenAI and hosted by OpenAI and Azure. A recent model in this class is “GPT-4 Turbo”.
Grounding – the process of linking abstract knowledge in AI systems to specific, real-world content. It increases the accuracy of an AI agent’s comprehension of and interaction with real-world data.
LLMs (Large Language Models) – generative AI models that are trained on a massive trove of data to produce human-like responses to natural language queries, typically through a chatbot. See also: Foundation model.
LLMOps – a streamlined flow for the end-to-end development of LLM-powered applications, from ideation to operationalization.
Low-Code – typically involves graphical/visual interfaces and minimal coding to allow rapid, accessible application development. Unlike pro-code tools, low-code tools abstract most if not all of the underlying concepts and technologies away from the user experience.
MLOps – a streamlined flow for the end-to-end development of a machine learning application, from ideation to operationalization. MLOps differs from LLMOps in audience, focus, and specifically in the challenges raised by natural language processing requirements and assets.
Pro-Code – includes the ability to deeply customize and control model and application performance. This can include GUI-based configuration and management capabilities in addition to a code-first interface, and requires a deep understanding of the underlying concepts and technologies.
RAG (Retrieval-Augmented Generation) – a process that enables AI models to retrieve relevant information from a knowledge source and incorporate it into generated text. It is an artificial intelligence framework for improving the quality of model responses by grounding the model on external sources of knowledge that supplement its internal representation of information.
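For illustration, a minimal sketch of the retrieve-then-generate pattern; search_knowledge_base and llm_client are hypothetical stand-ins for a real retriever (for example, a vector index) and a real model API, not part of any specific product.

```python
# Minimal RAG sketch: retrieve relevant passages, ground the prompt on them, then generate.
# `search_knowledge_base` and `llm_client` are hypothetical placeholders.

def search_knowledge_base(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical retriever: return the top_k passages most relevant to the query."""
    raise NotImplementedError

def answer_with_rag(question: str, llm_client) -> str:
    passages = search_knowledge_base(question)           # 1. retrieve
    context = "\n\n".join(passages)
    grounded_prompt = (                                   # 2. ground the prompt
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return llm_client.complete(grounded_prompt)           # 3. generate
```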