Responsible AI FAQ

Important

The information in this article only applies to the Microsoft Security Copilot Early Access Program, an invite-only paid preview program for commercial customers. Some information in this article relates to a prerelease product that may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

What is Microsoft Security Copilot? 

Security Copilot is a natural language, AI-powered security analysis tool that helps security professionals respond to threats quickly, process signals at machine speed, and assess risk exposure in minutes. It draws context from plugins and data to answer security-related prompts so that security professionals can help keep their organizations secure. Users can collect the responses they find useful from Security Copilot and pin them to the pinboard for future reference.

What can Security Copilot do? 

Security Copilot combines the full power of OpenAI's architecture with security-specific models developed and powered by Microsoft Security's leading expertise, global threat intelligence, and comprehensive security products to help security professionals better detect threats, harden defenses, and respond to security incidents faster.

Microsoft Security Copilot helps boost security operations by:  

  • Discovering vulnerabilities, prioritizing risks, and providing guided recommendations for threat prevention and remediation

  • Surfacing incidents, assessing their scale, and providing recommendations for remediation steps 

  • Summarizing events, incidents, and threats and creating customizable reports

  • Helping security professionals collaborate through built-in capabilities

What is Security Copilot’s intended use? 

Security Copilot is intended to help security analysts investigate and resolve incidents, summarize and explain security events, and share findings with stakeholders.  

How was Security Copilot evaluated? What metrics are used to measure performance? 

Security Copilot underwent substantial testing prior to being released, including red teaming, which is the practice of rigorously testing the product to identify failure modes and scenarios that might cause Security Copilot to do or say things outside of its intended uses or that don't support the Microsoft AI Principles.

Now that it's been released, user feedback is critical in helping Microsoft improve the system. You have the option of providing feedback whenever you receive output from Security Copilot. When a response is inaccurate, incomplete, or unclear, flag it using the "Off-target" button, and use the "Report" button to flag any objectionable output. You can also confirm when responses are useful and accurate using the "Confirm" button. These buttons appear at the bottom of every Security Copilot response, and your feedback goes directly to Microsoft to help us improve the platform's performance.  

What are the limitations of Security Copilot? How can users minimize the impact of Security Copilot’s limitations when using the system? 

  • The Early Access Program is designed to give customers the opportunity to get early access to Security Copilot and provide feedback about the platform. Preview features aren’t meant for production use and might have limited functionality. 

  • Like any AI-powered technology, Security Copilot doesn’t get everything right. However, you can help improve its responses by providing your observations using the feedback tool, which is built into the platform.  

  • The system might generate stale responses if it isn’t given the most current data through user input or plugins. To get the best results, verify that the right plugins are enabled.

  • The system is designed to respond to prompts related to the security domain, such as incident investigation and threat intelligence. Prompts outside the scope of security might result in responses that lack accuracy and comprehensiveness.

  • Security Copilot might generate code or include code in responses, which could potentially expose sensitive information or vulnerabilities if not used carefully. Responses might appear to be valid but might not actually be semantically or syntactically correct, or might not accurately reflect the intent of the developer. Users should always take the same precautions as they would with any code that uses material they didn't independently originate, including rigorous testing, IP scanning, and checking for security vulnerabilities (see the sketch after this list).

  • Matches with Public Code: Security Copilot is capable of generating new code, which it does in a probabilistic way. While the probability that it produces code matching code in the training set is low, a Security Copilot suggestion might contain some code snippets that match code in the training set. The same precautions described above apply here, including rigorous testing, IP scanning, and checking for security vulnerabilities.

  • The system might not be able to process extremely long prompts, such as hundreds of thousands of characters.

  • Using the platform might be subject to usage limits or capacity throttling. Even with shorter prompts, choosing a plugin, making API calls, generating responses, and checking them before displaying them to the user can take time (up to several minutes) and require high GPU capacity. 

  • To minimize errors, users are advised to follow the prompting guidance.
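
To make the code-vetting precautions above concrete, the following minimal sketch (in Python) treats generated code like any other third-party code: it runs a test suite and a static security scanner before accepting a snippet. The file names, test layout, and choice of scanner (bandit) are illustrative assumptions for this sketch, not features of Security Copilot; IP scanning typically relies on a separate external service and is omitted here.

```python
# Minimal sketch of vetting AI-generated code before use.
# The paths, test command, and scanner choice are illustrative
# assumptions, not part of Security Copilot itself.
import subprocess
import sys

def vet_generated_code(path: str) -> bool:
    """Run the same checks you'd apply to any third-party code."""
    checks = [
        # 1. Rigorous testing: run the project's test suite.
        ["python", "-m", "pytest", "tests/"],
        # 2. Security scanning: flag common vulnerability patterns
        #    (bandit is one example of a static security scanner).
        ["bandit", "-r", path],
    ]
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"Check failed: {' '.join(cmd)}\n{result.stdout}")
            return False
    return True

if __name__ == "__main__":
    # Reject the generated snippet unless every check passes.
    sys.exit(0 if vet_generated_code("generated_snippet.py") else 1)
```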

What operational factors and settings allow for effective and responsible use of Security Copilot? 

  • You can use everyday words to describe what you’d like Security Copilot to do. For example: 

    • "Tell me about my latest incidents" or "Summarize this incident".
  • As the system generates the response, you start to see the steps the system is taking in the process log, providing opportunities to double-check its processes and sources.  

  • At any time during prompt formation, you can cancel, edit, rerun, or delete a prompt. 

  • You can provide feedback about a response's quality, including reporting anything unacceptable to Microsoft. 

  • Responses can be pinned, shared, and exported, helping security professionals collaborate and share observations. 

  • Administrators have control over the plugins that connect to Security Copilot.   

  • You can choose, personalize, and manage plugins that work with Security Copilot.

  • Security Copilot provides promptbooks, which are groups of prompts that run in sequence to complete a specific workflow, as sketched below.
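
Conceptually, a promptbook can be pictured as an ordered list of prompts executed one after another, with each response collected along the way. The sketch below models that idea in Python; the Promptbook class, the submit_prompt callable, and the example prompts are hypothetical illustrations, not Security Copilot's API.

```python
# Conceptual sketch of a promptbook: an ordered set of prompts that
# run in sequence to complete a workflow. The types and the
# submit_prompt callable are hypothetical, not Security Copilot's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Promptbook:
    name: str
    prompts: list[str]

    def run(self, submit_prompt: Callable[[str], str]) -> list[str]:
        """Execute each prompt in order, collecting the responses."""
        return [submit_prompt(p) for p in self.prompts]

# Example: a hypothetical incident-investigation workflow.
investigation = Promptbook(
    name="Incident investigation",
    prompts=[
        "Summarize this incident",
        "List the impacted devices and users",
        "Recommend remediation steps",
    ],
)

if __name__ == "__main__":
    # Stub submitter that echoes the prompt; in practice, responses
    # would come from whatever mechanism submits prompts to the tool.
    for response in investigation.run(lambda p: f"[response to: {p}]"):
        print(response)
```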

How is Microsoft approaching responsible AI for Security Copilot?

At Microsoft, we take our commitment to responsible AI seriously. Security Copilot is being developed in accordance with our AI principles. We're working with OpenAI to deliver an experience that encourages responsible use. For example, we have collaborated, and will continue to collaborate, with OpenAI on foundational model work. We have designed the Security Copilot user experience to keep humans at the center, and we have developed a safety system that is designed to mitigate failures and prevent misuse through measures such as harmful content annotation, operational monitoring, and other safeguards. The invite-only early access program is also part of our approach to responsible AI: we are taking user feedback from those with early access to Security Copilot to improve the tool before making it broadly available.

Responsible AI is a journey, and we'll continually improve our systems along the way. We're committed to making our AI more reliable and trustworthy, and your feedback will help us do so.