How much access does Microsoft have to our code when using LLMs through the Azure OpenAI APIs, and what is the exposure in terms of attack surface from a security standpoint?

Mendoza, Christopher 20 Reputation points
2023-11-20T18:26:25.78+00:00

We are passing source code in plain text to LLM APIs through the endpoints exposed by the Azure OpenAI Service. The question is: how do we ensure that Intel source code is not accessible to any Microsoft engineer working in the Azure backend? We want to understand both the technical and the legal protection mechanisms that are in place to allow Azure cloud services safe access to source code.

Azure AI Language
An Azure service that provides natural language capabilities including sentiment analysis, entity extraction, and automated question answering.

Accepted answer
  1. navba-MSFT 27,550 Reputation points Microsoft Employee Moderator
    2023-11-21T03:31:11.3166667+00:00

    @Mendoza, Christopher Welcome to the Microsoft Q&A Forum, and thank you for posting your query here!

    I understand your concern about the security of your source code when using LLM APIs through the Azure OpenAI Service. You want to know how Microsoft ensures that your source code is not accessible to anyone at Microsoft.

    Please note that your prompts (inputs) and completions (outputs), your embeddings, and your training data:

    • are NOT available to other customers.
    • are NOT available to OpenAI.
    • are NOT used to improve OpenAI models.
    • are NOT used to improve any Microsoft or 3rd party products or services.
    • are NOT used for automatically improving Azure OpenAI models for your use in your resource (The models are stateless, unless you explicitly fine-tune models with your training data).
    • Your fine-tuned Azure OpenAI models are available exclusively for your use.
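    To make this concrete, here is a minimal sketch of how source code is typically sent to your own Azure OpenAI deployment with the openai Python SDK. The endpoint, deployment name, file name, and API version are placeholders (assumptions), not values from your environment; the point is that the request goes to your own *.openai.azure.com resource and never to api.openai.com.

    ```python
    # Minimal sketch (placeholder values): send a code-review prompt to your own
    # Azure OpenAI deployment. The request is served by your resource's endpoint,
    # not by OpenAI's public API.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource-name>.openai.azure.com",  # your resource endpoint
        api_key=os.environ["AZURE_OPENAI_API_KEY"],                      # key for your resource
        api_version="2023-05-15",                                        # assumed GA API version
    )

    with open("example.c") as f:          # hypothetical source file
        source = f.read()

    response = client.chat.completions.create(
        model="<your-deployment-name>",   # the deployment created in your resource
        messages=[
            {"role": "system", "content": "You are a careful code reviewer."},
            {"role": "user", "content": "Review this code:\n" + source},
        ],
    )
    print(response.choices[0].message.content)
    ```

    Because the deployment lives inside your own resource, the prompt and completion stay within the data-handling guarantees listed above.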

    Technical protection (security and encryption):

    • Data at rest in the Azure OpenAI Service is encrypted with FIPS 140-2 compliant 256-bit AES encryption, and you can optionally use customer-managed keys stored in Azure Key Vault (see the encryption link below).
    • Network access to your resource can be restricted with virtual networks and private endpoints, and requests can be authenticated with Microsoft Entra ID (Azure AD) instead of API keys, as sketched below.
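    As one example of tightening that surface, the resource can be called with Microsoft Entra ID authentication rather than a long-lived API key. This is a minimal sketch assuming the azure-identity package is installed and the calling identity has been granted the Cognitive Services OpenAI User role on the resource; adapt it to your environment.

    ```python
    # Minimal sketch (assumptions noted above): keyless authentication to an Azure
    # OpenAI resource using Microsoft Entra ID, which removes long-lived API keys
    # from the attack surface.
    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from openai import AzureOpenAI

    # Acquire tokens for the Cognitive Services scope on demand.
    token_provider = get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    )

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource-name>.openai.azure.com",  # placeholder
        azure_ad_token_provider=token_provider,
        api_version="2023-05-15",  # assumed GA API version
    )
    # client.chat.completions.create(...) is then used exactly as with key-based auth.
    ```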

    Legal Protection

    • The Microsoft Products and Services Data Protection Addendum governs data processing by the Azure OpenAI Service.
    • Azure OpenAI doesn’t use customer data to retrain models.
    • The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Service does NOT interact with any services operated by OpenAI (e.g. ChatGPT, or the OpenAI API).

    Attack Surface Exposure

    • The Azure OpenAI Service has an integrated safety system (content filtering and abuse monitoring; see the sketch below) that provides protection from undesirable inputs and outputs and monitors for misuse.
    • It also provides comprehensive logging and monitoring, and enhanced security controls for enterprise deployments of the Azure OpenAI Service API.
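    For illustration, here is a minimal sketch of how a client might handle a request that the integrated safety system blocks. It reuses the client constructed in the earlier sketch; treating the rejection as a BadRequestError carrying the documented "content_filter" error code is an assumption about the openai SDK's error mapping, so verify it against the content filtering documentation.

    ```python
    # Minimal sketch: surface content-filter rejections explicitly instead of
    # treating them as generic failures. Assumes `client` from the earlier sketch.
    import openai

    user_prompt = "Review this code:\n/* ... */"  # hypothetical prompt

    try:
        response = client.chat.completions.create(
            model="<your-deployment-name>",  # placeholder deployment name
            messages=[{"role": "user", "content": user_prompt}],
        )
        print(response.choices[0].message.content)
    except openai.BadRequestError as err:
        if err.code == "content_filter":
            # The prompt was blocked by the integrated safety system.
            print("Request was blocked by Azure OpenAI content filtering.")
        else:
            raise
    ```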


    In summary, the Azure OpenAI Service has robust mechanisms in place to protect the privacy and security of your source code, and Microsoft does not use your company data to train any of the models.

    More Info:
    https://learn.microsoft.com/en-us/azure/ai-services/openai/encrypt-data-at-rest
    https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy
    https://learn.microsoft.com/en-us/azure/ai-services/openai/faq
    Report abuse of Azure OpenAI Service through the Report Abuse Portal

    Report problematic content to cscraireport@microsoft.com
    Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.

    Please do not forget to "Accept the answer" and "up-vote" wherever the information provided helps you, as this can be beneficial to other community members.

    1 person found this answer helpful.

0 additional answers
