Responsible AI FAQ

What is Microsoft Copilot for Security? 

Copilot for Security is a natural language, generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale. It draws context from plugins and data to answer prompts so that security professionals and IT admins can help keep their organizations secure.

What can Copilot for Security do? 

Copilot for Security helps answer questions in natural language so that you can receive actionable responses to common security and IT tasks in seconds.

Microsoft Copilot for Security helps in the following scenarios:  

  • Security operations

    Manage vulnerabilities and emerging threats, accelerate incident response with guided investigation, and leverage advanced capabilities such as script analysis and query assistance.

  • Device management

    Generate policies and simulate their outcomes, gather device information for forensics, and configure devices with best practices from similar deployments. 

  • Identity management

    Discover overprivileged access, generate access reviews for incidents, generate and describe access policies, and evaluate licensing across solutions. 

  • Data protection

    Identify data impacted by security incidents, generate comprehensive summaries of data security and compliance risks, and surface risks that may violate regulatory compliance obligations.

  • Cloud security

    Discover attack paths impacting workloads and summarize cloud CVEs to proactively prevent threats and manage cloud security posture more efficiently.

What is Copilot for Security’s intended use? 

Copilot for Security is intended to help security analysts investigate and resolve incidents, summarize and explain security events, and share findings with stakeholders.  

How was Copilot for Security evaluated? What metrics are used to measure performance? 

Copilot for Security underwent substantial testing prior to being released. Testing included red teaming, which is the practice of rigorously testing the product to identify failure modes and scenarios that might cause Copilot for Security to do or say things outside of its intended uses or that don't support the Microsoft AI Principles.

Now that it is released, user feedback is critical in helping Microsoft improve the system. You have the option of providing feedback whenever you receive output from Copilot for Security. When a response is inaccurate, incomplete, or unclear, use the "Off-target" and "Report" buttons to flag any objectionable output. You can also confirm when responses are useful and accurate using the "Confirm" button. These buttons appear at the bottom of every Copilot for Security response and your feedback goes directly to Microsoft to help us improve the platform's performance.

What are the limitations of Copilot for Security? How can users minimize the impact of Copilot for Security’s limitations when using the system? 

  • The Early Access Program is designed to give customers the opportunity to get early access to Copilot for Security and provide feedback about the platform. Preview features aren’t meant for production use and might have limited functionality. 

  • Like any AI-powered technology, Copilot for Security doesn’t get everything right. However, you can help improve its responses by providing your observations using the feedback tool, which is built into the platform.  

  • The system might generate stale responses if it isn’t given the most current data through user input or plugins. To get the best results, verify that the right plugins are enabled.

  • The system is designed to respond to prompts related to the security domain, such as incident investigation and threat intelligence. Prompts outside the scope of security might result in responses that lack accuracy and comprehensiveness.

  • Copilot for Security might generate code or include code in responses, which could potentially expose sensitive information or vulnerabilities if not used carefully. Responses might appear to be valid but might not actually be semantically or syntactically correct or might not accurately reflect the intent of the developer. Users should always take the same precautions as they would with any code they write that uses material they didn't independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.

  • Matches with Public Code: Copilot for Security is capable of generating new code, which it does in a probabilistic way. While the probability that it might produce code that matches code in the training set is low, a Copilot for Security suggestion might contain some code snippets that match code in the training set. Users should always take the same precautions as they would with any code they write that uses material developers didn't independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.

  • The system might not be able to process long prompts, such as prompts that run to hundreds of thousands of characters.

  • Use of the platform might be subject to usage limits or capacity throttling. Even with shorter prompts, choosing a plugin, making API calls, generating responses, and checking them before displaying them to the user can take time (up to several minutes) and require high GPU capacity. 

  • To minimize errors, users are advised to follow the prompting guidance.

What operational factors and settings allow for effective and responsible use of Copilot for Security? 

  • You can use everyday words to describe what you’d like Copilot for Security to do. For example: 

    • "Tell me about my latest incidents" or "Summarize this incident."
  • As the system generates the response, you start to see the steps the system is taking in the process log, providing opportunities to double-check its processes and sources.  

  • At any time during prompt formation, you can cancel, edit, rerun, or delete a prompt.

  • You can provide feedback about a response's quality, including reporting anything unacceptable to Microsoft. 

  • Responses can be pinned, shared, and exported – helping security professionals collaborate and share observations. 

  • Administrators have control over the plugins that connect to Copilot for Security.   

  • You can choose, personalize, and manage plugins that work with Copilot for Security.

  • Copilot for Security includes promptbooks, which are groups of prompts that run in sequence to complete a specific workflow.

How is Microsoft approaching responsible AI for Copilot for Security?

At Microsoft, we take our commitment to responsible AI seriously. Copilot for Security is being developed in accordance with our AI principles. We're working with OpenAI to deliver an experience that encourages responsible use. For example, we have collaborated, and will continue to collaborate, with OpenAI on foundational model work. We have designed the Copilot for Security user experience to keep humans at the center. We developed a safety system that is designed to mitigate failures and prevent misuse with things like harmful content annotation, operational monitoring, and other safeguards. The invite-only early access program is also a part of our approach to responsible AI. We're taking user feedback from those with early access to Copilot for Security to improve the tool before making it broadly available.

Responsible AI is a journey, and we'll continually improve our systems along the way. We're committed to making our AI more reliable and trustworthy, and your feedback will help us do so.

Do you comply with the EU AI Act?

We are committed to compliance with the EU AI Act. Our multi-year effort to define, evolve, and implement our Responsible AI Standard and internal governance has strengthened our readiness.

At Microsoft, we recognize the importance of regulatory compliance as a cornerstone of trust and reliability in AI technologies. We're committed to creating responsible AI by design. Our goal is to develop and deploy AI that will have a beneficial impact on and earn trust from society.

Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Microsoft's Responsible AI Standard takes these six principles and breaks them down into goals and requirements for the AI we make available.

Our Responsible AI Standard takes into account regulatory proposals and their evolution, including the initial proposal for the EU AI Act. We developed our most recent products and services in the AI space, such as Microsoft Copilot and Microsoft Azure OpenAI Service, in alignment with our Responsible AI Standard. As final requirements under the EU AI Act are defined in more detail, we look forward to working with policymakers to ensure feasible implementation and application of the rules, to demonstrating our compliance, and to engaging with our customers and other stakeholders to support compliance across the ecosystem.