You should have subject matter expertise in operating, integrating, supervising, and governing AI agents inside production-grade SDLC workflows and development environments, using GitHub as the system of record and control plane to ensure reliability, safety, and velocity.
Your responsibilities for this role include:
- Operating agent workflows inside the SDLC
- Supervising autonomous behavior with GitHub controls
- Evaluating and tuning agent outputs using scans and artifacts
- Configuring custom agents
- Coordinating multi-agent execution safely
You work closely with architects, platform engineers, DevOps engineers, application developers, product managers, and security engineers to develop, deploy, and manage agents that operate within the GitHub platform.
You should have experience with the software development lifecycle (SDLC), GitHub workflows and controls, and code quality, security, and review practices. You should also have experience with coding agents (including GitHub Copilot), MCP servers, and agent customization such as custom instructions, custom agents, tools, and Copilot setup steps.
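As one illustration of the customization points listed above, the Copilot coding agent can be given a setup-steps workflow that preinstalls dependencies into its ephemeral environment before it starts work. The file path and required job name follow GitHub's documented convention; the specific steps shown here (Node version, install command) are assumptions chosen for the sketch, not part of the exam material:

```yaml
# .github/workflows/copilot-setup-steps.yml
# Pre-configures the environment the Copilot coding agent runs in.
name: "Copilot Setup Steps"

# workflow_dispatch lets you run the job manually to validate the setup.
on: workflow_dispatch

jobs:
  # The job must be named copilot-setup-steps for Copilot to pick it up.
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read  # setup only needs to read the repository
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        run: npm ci
```

Once committed to the repository's default branch, these steps run before each Copilot coding agent session, so the agent starts with tools and dependencies already in place.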
Exam sandbox demo
Experience the look and feel of the exam before taking it. You'll be able to interact with the different question types in the same user interface you'll use during the exam.
Launch the sandbox
Important
The GitHub Agentic AI Developer exam (GH-600) is currently in beta. As a result, you will not receive your results immediately; scores will be released approximately eight weeks after the beta period concludes. Thank you for participating and helping us improve the certification!
This exam is provided by Microsoft, but the exam and associated certification are maintained by GitHub. Learn more about GitHub’s privacy policy.