Responsible use of GitHub Copilot coding agent on GitHub.com
Learn how to use Copilot coding agent on GitHub.com responsibly by understanding its purposes, capabilities, and limitations.
By the end of this unit, you will be able to:
- Understand the purpose, capabilities, and limits of the Copilot coding agent on GitHub.com.
- Apply responsible-use practices: scoping tasks, securing environments, and validating outcomes.
- Recognize security measures, risks, and mitigations, and identify where you can improve the agent's performance.
About Copilot coding agent on GitHub.com
Copilot coding agent is an autonomous and asynchronous software development agent integrated into GitHub. The agent can pick up a task from an issue or from Copilot Chat, create a pull request, and then iterate on the pull request in response to comments.
Copilot coding agent can generate tailored changes based on your description and configuration, including bug fixes, incremental new features, prototypes, documentation, and codebase maintenance. After the initial pull request is created, the agent can iterate with you based on your feedback and reviews.
While working on your task, the agent has access to its own ephemeral development environment where it can make changes to your code, execute automated tests, and run linters. The agent has been evaluated across a variety of programming languages, with English as the primary supported language.
How the agent works (end-to-end)
Prompt processing
The task provided to Copilot through an issue, pull request comment, or Copilot Chat message is combined with other relevant contextual information to form a prompt. Inputs can take the form of plain natural language, code snippets, or images.
Language model analysis
The prompt is then passed to a large language model, which analyzes the input to help the agent reason about the task and use the necessary tools.
Response generation
The language model generates a response based on its analysis of the prompt. This response can take the form of natural language suggestions and code suggestions.
Output formatting
Once the agent completes its first run, it updates the pull request description with the changes it made. The agent may also include supplemental information about resources it couldn't access and suggest steps to resolve those gaps.
You can give the agent feedback by commenting in the pull request or by explicitly mentioning it (@copilot). The agent resubmits that feedback to the language model for further analysis and, once it completes the resulting changes, replies to your comment with the updated changes.
Copilot is intended to provide the most relevant solution for your task, but it may not always produce the answer you are looking for. You are responsible for reviewing and validating the responses Copilot generates to ensure they are accurate and appropriate. Additionally, as part of its product development process, GitHub undertakes red teaming (testing) to understand and improve the safety of the agent.
Use cases for Copilot coding agent
- Codebase maintenance: Security fixes, dependency upgrades, and targeted refactoring.
- Documentation: Updating and creating new documentation.
- Feature development: Implementing incremental feature requests.
- Improving test coverage: Developing additional test suites for quality management (see the sketch after this list).
- Prototyping new projects: Greenfielding new concepts.
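As a concrete illustration of the test-coverage use case, a well-scoped task might ask the agent to add unit tests like the sketch below. This is illustrative Python using pytest; the `apply_discount` function is a hypothetical stand-in for code in your own repository.

```python
# Illustrative only: the kind of unit tests a test-coverage task might ask
# the agent to add. `apply_discount` is a hypothetical stand-in for project code.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


def test_apply_discount_reduces_price():
    # A 10% discount on 100.00 should yield 90.00.
    assert apply_discount(price=100.0, percent=10) == pytest.approx(90.0)


def test_apply_discount_rejects_invalid_percent():
    # Invalid input should fail loudly rather than pass through silently.
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=-5)
```

Acceptance criteria in the task could then state that tests like these must pass in CI before the pull request is reviewed.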
Improving performance for Copilot coding agent
To enhance performance and address limitations, use these measures:
Ensure your tasks are well-scoped by providing:
- A clear description of the problem to be solved or the work required.
- Complete acceptance criteria on what a good solution looks like (for example, should there be unit tests?).
- Hints or pointers on what files need to be changed.
Customize your experience with additional context
Copilot coding agent uses your prompt, your comments, and the repository's code as context when generating suggested changes. Improve results by adding custom Copilot instructions so the agent understands how to build, test, and validate its changes.
Other helpful customizations:
- Customizing the development environment for GitHub Copilot coding agent
- Customizing or disabling the firewall for GitHub Copilot coding agent
- Extending GitHub Copilot coding agent with the Model Context Protocol (MCP)
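For example, extending the agent with MCP means giving it additional tools it can call while working on a task. The following is a minimal sketch of a custom MCP server written with the open-source MCP Python SDK (the `mcp` package); the server name and tool are hypothetical, and you would still need to configure the agent to use the server as described in the MCP documentation linked above.

```python
# A minimal sketch of a custom MCP server, assuming the open-source MCP
# Python SDK is installed (pip install mcp). Names here are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")  # hypothetical server name


@mcp.tool()
def search_style_guide(query: str) -> str:
    """Look up internal style-guide guidance (stubbed for illustration)."""
    # A real server would query an internal knowledge base here.
    return f"No style rule found for: {query}"


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```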
Use Copilot coding agent as a tool, not a replacement
Always review and test the content generated by the agent to ensure it meets your requirements and is free of errors or security concerns prior to merging.
Use secure coding and code review practices
Although Copilot coding agent can generate syntactically correct code, it may not always be secure. Continue to follow best practices for secure coding (avoid hard-coded secrets, prevent injection vulnerabilities) and apply rigorous testing, IP scanning, and vulnerability checks.
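For example, when reviewing agent-generated code, watch for patterns such as hard-coded credentials or SQL built by string formatting. The Python sketch below contrasts those patterns with safer equivalents; the environment variable and table names are illustrative.

```python
# Illustrative review checklist in code form; names are hypothetical.
import os
import sqlite3

# Patterns to reject in review (shown here only as comments):
#   API_KEY = "sk-live-123456"                                    # hard-coded secret
#   cursor.execute(f"SELECT * FROM users WHERE name = '{name}'")  # SQL injection risk

# Safer equivalents: read secrets from the environment and parameterize queries.
API_KEY = os.environ.get("MY_SERVICE_API_KEY")  # injected by the deployment environment


def find_user(conn: sqlite3.Connection, name: str):
    cursor = conn.cursor()
    # The placeholder lets the driver handle escaping, preventing SQL injection.
    cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
    return cursor.fetchall()
```

Automated checks such as secret scanning and vulnerability scanning in CI complement, but do not replace, this kind of manual review.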
Provide feedback
If you encounter any issues or limitations, use the thumbs-down icon below an agent response or share feedback in the community discussion forum.
Stay up to date
Copilot coding agent is evolving. Monitor new security risks and best practices as they emerge.
Security measures for Copilot coding agent
Avoiding privilege escalation
- Copilot coding agent will only respond to interactions from users with write access.
- Actions workflows triggered by agent PRs require approval from a user with write access before they run.
- Hidden characters (not rendered on GitHub.com) are filtered to reduce prompt-injection risks.
Constraining Copilot's permissions
- The agent only accesses the repository where it is creating a PR; it cannot access other repositories.
- Pushes are limited to branches with names beginning with copilot/ (not your default branch).
- The agent does not have access to org/repo Actions secrets or variables at runtime. Only secrets/variables added to the copilot environment are passed to the agent.
Preventing data exfiltration
A firewall is enabled by default to prevent accidental or malicious exfiltration of code or sensitive data. See Customizing or disabling the firewall for GitHub Copilot coding agent.
Limitations of Copilot coding agent
Depending on your codebase and inputs, performance can vary. Keep these constraints in mind:
- Limited scope and quality: The underlying model may not handle certain code structures or less common languages well, and output quality varies with how well a language is represented in its training data.
- Potential biases: The training data and retrieved context may contain biases, so the agent may favor certain languages or coding styles.
- Security risks: Generated code is based on repository context and could introduce vulnerabilities or expose sensitive information if merged without review; thorough review is required.
- Inaccurate code: Code may appear correct but be syntactically or semantically wrong, or misaligned with your intent. Validate that changes fit your codebase's conventions, patterns, and style.
- Public code: The agent may produce code that matches or nearly matches public code, even when the setting to block such suggestions is enabled, and references to the matching code may not be provided.
- Legal and regulatory: Ensure compliance with applicable legal and regulatory obligations, and avoid uses prohibited by the applicable terms of service and codes of conduct.