Don't expose proprietary data to an LLM outside our control

Janarthanan S 700 Reputation points
2023-09-13T13:29:08.7433333+00:00

What security terms and conditions should we follow when using an LLM so that we don't expose proprietary data to a model outside of our control?

Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.
Azure AI services
A group of Azure services, SDKs, and APIs designed to make apps more intelligent, engaging, and discoverable.

1 answer

  1. ChakaravarthiRangarajanBhargavi-1820 715 Reputation points
    2023-09-13T15:24:35.0833333+00:00

    Hi Janarthanan,

    Thanks for posting this interesting question. To safeguard proprietary data when using a large language model (LLM), enforce strict data sanitization, access controls, and encryption. Use dummy data for testing, have users sign non-disclosure agreements, and maintain detailed audit trails. Regularly update and patch the LLM stack, conduct third-party security audits, and educate personnel on security best practices. Together, these measures mitigate the risk of exposing sensitive information to external LLMs beyond your control; a sketch of the sanitization step follows below.
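
    Of these measures, data sanitization is the most readily automated. Here is a minimal sketch, assuming a simple regex-based redaction pass run before any prompt leaves your trust boundary; the patterns, placeholder labels, and the `sanitize_prompt` helper are illustrative assumptions, not a complete PII ruleset:

    ```python
    import re

    # Illustrative patterns only (an assumption, not a complete PII ruleset);
    # extend them to cover your organisation's proprietary identifiers.
    REDACTION_PATTERNS = {
        "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE":   re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
        "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def sanitize_prompt(text: str) -> str:
        """Replace known-sensitive substrings with placeholder tokens
        before the text is sent to an LLM outside your control."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED_{label}]", text)
        return text

    if __name__ == "__main__":
        prompt = ("Summarise the support ticket from jane.doe@contoso.com; "
                  "the service key is sk-abc123def456ghi789.")
        print(sanitize_prompt(prompt))
        # Summarise the support ticket from [REDACTED_EMAIL];
        # the service key is [REDACTED_API_KEY].
    ```

    For production use, a purpose-built anonymization library such as Microsoft Presidio can recognise a far wider range of sensitive entities than hand-written patterns like these.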

    If you find this answer useful, kindly accept it. For further assistance, ping here. Thanks.

