@Mendoza, Christopher Welcome to the Microsoft Q&A Forum, and thank you for posting your query here!
.
I understand that your concern is about the security of your source code when using LLM APIs through the Azure OpenAI Service. You want to know how Microsoft ensures that your source code is not accessible to anyone at Microsoft.
.
Please note that your prompts (inputs), completions (outputs), embeddings, and training data:
- are NOT available to other customers.
- are NOT available to OpenAI.
- are NOT used to improve OpenAI models.
- are NOT used to improve any Microsoft or 3rd party products or services.
- are NOT used to automatically improve the Azure OpenAI models in your resource (the models are stateless unless you explicitly fine-tune them with your training data).
- Your fine-tuned Azure OpenAI models are available exclusively for your use.
.
Technical Protection (Security and Encryption):
- Azure AI services data is encrypted and decrypted using FIPS 140-2 compliant 256-bit AES encryption.
- Encryption and decryption are transparent, meaning encryption and access are managed for you.
- Your data is secure by default and you don’t need to modify your code or applications to take advantage of encryption.
- Azure OpenAI provides two methods for authentication: API keys and Microsoft Entra ID (see the sketch after this list).
- Azure OpenAI supports virtual networks (VNets) and private endpoints.
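For illustration, here is a minimal sketch (not an official sample) of both authentication options, using the OpenAI Python SDK together with the `azure-identity` package. The endpoint, deployment name, and API version are placeholders and assumptions you would replace with your own values:

```python
# Minimal sketch, assuming the `openai` (>= 1.x) and `azure-identity` packages are installed.
# "https://<your-resource>.openai.azure.com/" and "<your-deployment>" are placeholders for
# your own resource endpoint and model deployment name; the API version shown is an example.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Option 1 (recommended): keyless authentication with Microsoft Entra ID.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",
)

# Option 2: API key authentication (keep the key in a secret store such as Key Vault,
# never in source code):
# client = AzureOpenAI(
#     azure_endpoint="https://<your-resource>.openai.azure.com/",
#     api_key=os.environ["AZURE_OPENAI_API_KEY"],  # requires `import os`
#     api_version="2024-02-01",
# )

# The request below goes only to your Azure OpenAI resource; the prompt and completion
# are not shared with OpenAI and are not used to train the models.
response = client.chat.completions.create(
    model="<your-deployment>",  # name of your model deployment
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Combining Entra ID authentication with a private endpoint keeps both credentials and traffic off the public internet, which is generally the preferred setup for source-code-related workloads.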
.
Legal Protection:
- The Microsoft Products and Services Data Protection Addendum governs data processing by the Azure OpenAI Service.
- Azure OpenAI doesn’t use customer data to retrain models.
- The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft's Azure environment, and the Service does NOT interact with any services operated by OpenAI (e.g., ChatGPT or the OpenAI API).
.
Attack Surface Exposure:
- Azure OpenAI Service has an integrated safety system that provides protection from undesirable inputs and outputs and monitors for misuse.
- It also provides comprehensive logging, monitoring, and enhanced security for enterprise deployments of the Azure OpenAI Service API.
.
In summary, the Azure OpenAI Service has robust mechanisms in place to protect the privacy and security of your source code, and Microsoft doesn't use your company's data to train any of the models.
.
More Info:
https://learn.microsoft.com/en-us/azure/ai-services/openai/encrypt-data-at-rest
https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy
https://learn.microsoft.com/en-us/azure/ai-services/openai/faq
Report abuse of Azure OpenAI Service through the Report Abuse Portal
Report problematic content to cscraireport@microsoft.com
.
Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.
**Please do not forget to "Accept the answer" and "Upvote" wherever the information provided helps you, as this can be beneficial to other community members.**