Don't expose proprietary data to an LLM

Janarthanan S 210 Reputation points

What are the security terms and conditions for using an LLM, so that we don't expose proprietary data to an LLM outside of our control?

Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

Azure AI services
A group of Azure services, SDKs, and APIs designed to make apps more intelligent, engaging, and discoverable.

1 answer

  1. Chakaravarthi Rangarajan Bhargavi 205 Reputation points

    Hi Janarthanan,

    Thanks for posting this interesting question. To safeguard proprietary data when using a large language model (LLM), enforce strict data sanitization, access controls, and encryption. Use dummy data for testing, have users sign non-disclosure agreements, and maintain detailed audit trails. Regularly update and patch the LLM stack, conduct third-party security audits, and train personnel on security best practices. Together, these measures reduce the risk of exposing sensitive information to external LLMs beyond your control.
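    As a minimal sketch of the data-sanitization step mentioned above, you can scrub obviously sensitive substrings from a prompt before it leaves your environment. The patterns and names here are illustrative assumptions; real deployments should prefer a dedicated PII/secret-detection service (for example, Azure AI Language PII detection) over hand-rolled regexes:

    ```python
    import re

    # Illustrative patterns only -- a production redactor needs a far
    # more thorough detection service than a handful of regexes.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def sanitize_prompt(text: str) -> str:
        """Replace sensitive substrings with placeholder tokens before
        the text is sent to an LLM outside your control."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label}_REDACTED]", text)
        return text

    print(sanitize_prompt("Contact jane.doe@contoso.com, key sk-abcdef1234567890XYZ"))
    ```

    Redacting at the boundary, before the API call, means even logging and retry layers downstream never see the raw values.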

    If you find this answer useful, kindly accept it; for further assistance, ping here. Thanks.
