Azure OpenAI models fail to answer queries about sensitive information present in the provided documents

Chaymae El Aattabi 5 Reputation points
2024-02-22T20:08:09.24+00:00

Hello everyone,

I am currently developing an AI assistant using Azure's OpenAI models, and I've encountered an issue that I hope you can help me with. Whenever I ask the model a question that involves sensitive data, such as names or CNI (National Identity Card) details, the response I receive is "The requested information is not available in the retrieved data. Please try another query or topic.", even though this information is present in the documents. Could anyone offer some guidance or insights on how to resolve this? Any help would be greatly appreciated.

Azure AI Search
Azure OpenAI Service

2 answers

  1. Grmacjon-MSFT 18,896 Reputation points
    2024-02-23T01:00:24.75+00:00

    Hello @Chaymae El Aattabi

    The behavior is actually a feature of Azure’s OpenAI models designed to protect sensitive data. Azure OpenAI’s models are stateless and do not have access to any customer data, queries, or output.

    This means that they do not store or retrieve personal data from previous interactions unless they are explicitly fine-tuned with your own training data.

    The models are designed to prioritize privacy and security, and they do not use your data to improve any other services.

    So even if sensitive information such as names or National Identity Card (CNI) details is present in the documents, the models will not retrieve or generate it in their responses.

    If you’re trying to retrieve sensitive information from your own documents, you might need to adjust your approach. Instead of asking the model to retrieve the information directly, consider structuring your query in a way that prompts the model to generate a response based on the information in the document, without explicitly revealing the sensitive data.
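    For example, if you are using the "on your data" feature with Azure AI Search through the Python SDK, a rough sketch of such a request could look like the one below. This is only an illustration under that assumption: the deployment, endpoint, and index names are placeholders, and settings such as strictness, top_n_documents, and role_information depend on the API version you target, so check them against the version you actually use.

        import os
        from openai import AzureOpenAI

        # Client for an Azure OpenAI resource (endpoint, key, and API version are placeholders).
        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )

        response = client.chat.completions.create(
            model="my-gpt-deployment",  # hypothetical deployment name
            messages=[
                # Ask for a grounded summary of the retrieved document rather than
                # a direct lookup of the sensitive field itself.
                {"role": "user", "content": "Summarize the client record in the retrieved document."}
            ],
            extra_body={
                "data_sources": [
                    {
                        "type": "azure_search",
                        "parameters": {
                            "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                            "index_name": "my-documents-index",  # hypothetical index name
                            "authentication": {
                                "type": "api_key",
                                "key": os.environ["AZURE_AI_SEARCH_KEY"],
                            },
                            # Retrieval tuning: lower strictness and more documents make it
                            # less likely that the relevant chunk is filtered out before
                            # the model ever sees it.
                            "strictness": 2,
                            "top_n_documents": 10,
                            "in_scope": True,
                            # System-style instructions for how to use the retrieved content.
                            "role_information": (
                                "Answer only from the retrieved documents. If the answer "
                                "is present in the retrieved content, use it; otherwise "
                                "say the information is not available."
                            ),
                        },
                    }
                ]
            },
        )

        print(response.choices[0].message.content)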


  2. Rocío Urquijo Fuertes 0 Reputation points
    2024-02-23T10:35:12.51+00:00

    In case it helps anyone: Microsoft takes a layered, responsible approach to generative models, guided by Microsoft's responsible AI principles. In Azure OpenAI Service, an integrated safety system provides protection from undesirable inputs and outputs and monitors for misuse. In addition, Microsoft provides guidance and best practices for customers to build applications responsibly with these models, and expects customers to comply with the Azure OpenAI Code of Conduct.

    With OpenAI's GPT-4, new research advances from OpenAI have enabled an additional layer of protection. Guided by human feedback, safety is built directly into the GPT-4 model, which makes it more effective at handling harmful inputs and reduces the likelihood that the model will generate a harmful response.

    https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/introducing-azure-openai-service-on-your-data-in-public-preview/ba-p/3847000

