Important
Some information in this article relates to a prerelease product that may be substantially modified before it's commercially released. Microsoft makes no warranties, expressed or implied, with respect to the information provided here.
The agent creation tool collection in the Microsoft Sentinel Model Context Protocol (MCP) server lets developers build Security Copilot agents with natural language in the MCP-compatible IDE of their choice.
In this quickstart, you learn how to:
Set up and authenticate the MCP server
Enable GitHub Copilot agent mode
Manage context for the MCP tools
Prerequisites
To use the Microsoft Sentinel MCP server and access its tools, you must onboard to the Microsoft Sentinel data lake. For more information, see Onboard to the Microsoft Sentinel data lake and Microsoft Sentinel graph (preview).
Supported code editors
Microsoft Sentinel support for the Security Copilot agent creation MCP tools is available in the following AI-enabled code editors:
Set up and authenticate the MCP server
To install the MCP server, follow these steps:
Launch Visual Studio Code (VS Code).
Add an MCP server connection in VS Code:
Press Ctrl + Shift + P to open the Command Palette.
Type the > symbol followed by the text MCP: Add server.
Select HTTP (HTTP or Server-Sent Events).
Enter the following server URL, and then select Enter. This URL is case-sensitive.
https://sentinel.microsoft.com/mcp/security-copilot-agent-creation
Enter a friendly server ID.
You're prompted to trust the server.
When prompted to verify the server definition, select Allow.
Choose whether to make the server available in all VS Code workspaces or only in the current workspace.
After authentication, the server should show as Running, and you should see a file named mcp.json in your VS Code workspace that contains the MCP server configuration.
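If the server is added successfully, the mcp.json entry typically looks similar to the following sketch. This is an illustration only, assuming the standard VS Code workspace MCP configuration format; the server ID sentinel-agent-creation is a placeholder name you choose, and the exact schema can vary by VS Code version.

```jsonc
{
  "servers": {
    // Hypothetical friendly server ID; use the ID you entered above.
    "sentinel-agent-creation": {
      // HTTP transport pointing at the case-sensitive URL from the setup steps.
      "type": "http",
      "url": "https://sentinel.microsoft.com/mcp/security-copilot-agent-creation"
    }
  }
}
```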
Enable GitHub Copilot agent mode
Open Chat in VS Code from the View menu > Chat, or press Ctrl + Alt + I. Set the chat to Agent mode.
Select the tools icon in the prompt bar.
You can see the list of tools that GitHub Copilot uses. Expand the row for the MCP server you just added to see the five tools used for agent building: start_agent_creation, compose_agent, search_for_tools, get_evaluation, and deploy_agent.
Manage context for the MCP tools
By providing the right context, you help the AI in VS Code give relevant and accurate responses. This section covers two options for managing context so that the AI assistant uses the MCP tools as expected and with greater consistency.
You can choose either of the following options to manage context:
Custom instructions
Custom instructions let you define common guidelines or rules in a Markdown file that describe how tasks should be performed. Instead of manually including context in every chat prompt, you specify custom instructions in a Markdown file to get consistent AI responses that meet your project requirements.
You can configure custom instructions to apply automatically to all chat requests or only to specific files.
Use a custom instructions file
Define your custom instructions in a single .github/copilot-instructions.md Markdown file in the root of your workspace. VS Code automatically applies the instructions in this file to all chat requests within that workspace.
To use a .github/copilot-instructions.md file:
Enable the setting github.copilot.chat.codeGeneration.useInstructionFiles. (A settings sketch follows these steps.)
Create the file .github/copilot-instructions.md in the root of your workspace. If needed, create the .github directory first.
Describe your instructions using natural language and Markdown formatting.
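For reference, enabling the setting in the first step corresponds to an entry like the following in your user or workspace settings.json. This is a sketch only; you can also turn the setting on through the VS Code Settings UI.

```jsonc
{
  // Have Copilot Chat apply .github/copilot-instructions.md to chat requests.
  "github.copilot.chat.codeGeneration.useInstructionFiles": true
}
```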
To get started, copy the contents of the context file scp-mcp-context.md into your copilot-instructions.md file. See Context file for the MCP tools.
Add a context file
To help ensure the AI assistant uses the MCP tools as expected and with greater consistency, add this context file to your IDE. Make sure the AI assistant references the file when you prompt it.
Add the context file scp-mcp-context.md to VS Code or paste it directly into your workspace. For the file contents, see Context file for the MCP tools.
In the prompt bar, select Add Context, and then select the context file.
Context file for the MCP tools
Copy scp-mcp-context.md to use with this quickstart.
# MCP Tools Available for Agent Building
1. **start_agent_creation**
- **Purpose**: Creates a new Security Copilot session and starts the agent building process.
- The userQuery input will be the user's problem statement (what they want the agent to do).
- The output of the tool should be returned IN FULL WITHOUT EDITS.
- The tool will return an initial agent YAML definition.
2. **compose_agent**
- **Purpose**: Continues the session and agent building process created by *start_agent_creation*. Outputs agent definition YAML or can ask clarifying questions to the user.
- The sessionId input is obtained from the output of *start_agent_creation*
- The existingDefinition input is optional. If an agent definition YAML has not been created yet, this should be blank (can be an empty string).
3. **search_for_tools**
- **Purpose**: Discover relevant skills (tools) based on the user's query.
- This will create a new Security Copilot session, but it should not be included in the start_agent/continue_agent flow.
- A user might want to know about Security Copilot skills they have access to without wanting to create an agent
- The session ID created should NOT be reused in any capacity
4. **get_evaluation**
- **Purpose**: Get the results of the evaluations triggered by each of the above tools.
- You MUST repeatedly activate this tool until the "state" property of the result equals "Completed" in order to get the fully processed result.
- The "state" may equal "Created" or "Running", but you must repeat the process until the state is "Completed".
- There is NO MAXIMUM number of times you might call this tool in a row.
5. **deploy_agent**
- **Purpose**: Deploy an agent to Security Copilot.
- The user must provide the scope as either "User" or "Workspace".
- Unless they already have an AGENT definition YAML provided, *start_agent_creation* must be run first to generate an agentDefinition
- "agentSkillsetName" should be COPIED EXACTLY from the value of "Descriptor: Name:" in the agent definition YAML, including any special characters like ".". This will NOT work if the two do not match EXACTLY.
- DO NOT use *get_evaluation* after this tool.
# Agent Building Execution Flow
## Step 1: Problem Statement Check
- If the user did **not** provide a problem statement, prompt them to do so.
- If the user **did** provide a problem statement, proceed to Step 2.
## Step 2: Start Agent Creation
- Use the `start_agent_creation` tool with `userQuery = <problem statement>`.
- **DO NOT** include any quotation marks in the userQuery
- Then, use `get_evaluation` to retrieve the initial response.
- **DO** repeatedly call `get_evaluation` until the `"state"` property of the result equals `"Completed"`.
- **DO NOT** require the user to ask again to get the results.
- **DO NOT** edit or reword the response content.
## Step 2.5: Output Handling
- **DO NOT** reformat, summarize, or describe the YAML output.
- **DO** return the YAML output **verbatim**.
- **DO** return the output in **AGENT FORMAT**.
## Step 3: Agent Refinement
- Ask the user if they would like to edit the agent or if they would like to deploy the agent. If they want to deploy, skip to **Step 4**.
- If the user wants to edit the agent definition:
- If they respond with edits directly, use `compose_agent` with:
- `sessionId` from `start_agent_creation`
- `existingDefinition = <previous AGENT YAML>`
- `\n` MUST be rewritten as `\\n`
- `userQuery = <user’s new input>`
- **DO NOT** include any quotation marks in the userQuery
- If they attach a manually edited YAML file to the context, use the file content as `existingDefinition`.
- **DO NOT** edit the file directly, you MUST use `compose_agent`
- `\n` MUST be rewritten as `\\n`
## Step 4: Agent Deployment
- If the user asks to deploy the agent, use `deploy_agent`.
- You **must confirm the scope**: either `"User"` or `"Workspace"`.
- If not provided, ask the user to specify.
- `agentSkillsetName` must **exactly match** the value of `Descriptor: Name:` in the YAML.
- This includes any special characters.
- Leave existing instances of `\n` inside `agentDefinition` as-is
- **DO NOT** run `get_evaluation` after deployment.
- **DO** include all of these things in the tool response to the user:
1. Confirm successful deployment to the user
2. Direct the user to the Security Copilot portal to test and view the agent with this link: https://securitycopilot.microsoft.com/agents
3. Direct the user to read more on how to test their agent in Security Copilot with this link: https://learn.microsoft.com/en-us/copilot/security/developer/mcp-quickstart#test-agent
## Step 5: Further Agent Refinement and Redeployment
- After deployment, the user may still want to **edit the agent definition**.
- If so, you must support calling `compose_agent` again.
- Follow the same process as described in **Step 3**:
- If the user asks for edits directly, use the previous AGENT YAML as `existingDefinition`.
- If the user uploads a manually edited YAML file, use the file content as `existingDefinition`.
- The user may also want to **redeploy the agent** after making refinements.
- You must run `deploy_agent` again using the updated YAML.
- Ensure the `agentSkillsetName` matches **exactly** the value of `Descriptor: Name:` in the latest YAML, including any special characters.
- Leave existing instances of `\n` inside `agentDefinition` as-is
- Confirm the deployment scope: either `"User"` or `"Workspace"`.
- If the scope is not provided, prompt the user to specify.
- Do **not** run `get_evaluation` after deployment.
- Confirm successful redeployment to the user.
- Alternatively, the user may want to **create a new agent**.
- Restart the procedure from **Step 1**.
- When using `start_agent_creation`, a new session ID will be created.
- **DO** keep track of which session IDs correspond to which problem statements or agents so the user can return to previous sessions if needed.
## Additional Rules
- Only call `compose_agent` **after** the user has provided a response. Do not proceed automatically.
- Agent creation must remain **user-driven**. Do not initiate steps without explicit user input.
- Wait for the user to respond before continuing to the next step.
- Tool responses must be returned **directly to the user** in full.
- Do **not** alter, reformat, summarize, or reword the content of any tool response.
- This applies specifically to the `"result": "content"` field in the JSON returned by tool executions.
- LEAVE OUT any "Grounding Notes"
## Error Handling
- If any tool call fails:
- Inform the user of the failure.
- If it is a client error, make an attempt to retry the tools, rewriting inputs based on the error message.
- Example: If the error indicates invalid JSON characters, escape or remove those characters from the input and retry. Always attempt escaping first.