Disallow automatic chunking

Ehrensperger Tim 0 Reputation points
2024-11-28T10:04:28.81+00:00

When using JSON files as a data source, I want these files not to be chunked automatically by the Azure OpenAI Service (the files have a reasonable length and should easily fit in the context window of the OpenAI models).

  1. Where does this chunking happen? When I retrieve my files in the Azure portal (search service, open the respective index, and search with "*"), I get the complete, unchunked files. But when I use the chatbot and click on a reference it provides, the files are chunked into very small pieces. Where in the workflow does this happen?
  2. How can I disable the chunking? I want to use the unchunked files as input to the OpenAI model, because I notice that with chunked files the model hallucinates about the missing parts of the document (i.e. the other chunks that were not retrieved). I assume these hallucinations would be less of a problem if the complete, unchunked file were given as input to the model. Furthermore, I need the displayed reference to be unchunked to be useful, because my files contain important information at the beginning and at the end.

Thank you!

Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

1 answer

  1. Pavankumar Purilla 1,645 Reputation points Microsoft Vendor
    2024-11-28T20:09:36.15+00:00

    Hi Ehrensperger Tim,
    Greetings & Welcome to Microsoft Q&A forum! Thanks for posting your query!

    The chunking of JSON files in Azure OpenAI Service typically happens either during the indexing process in Azure Cognitive Search or at query time, when documents are split into smaller chunks to fit within the model's token limit. Although querying the index with "*" returns the unchunked files, the chatbot's retrieval step fetches smaller chunks to construct its responses.
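    One way to check where the split happens is to query the search index directly over REST and compare the stored documents with what the chatbot cites. Below is a minimal sketch, not a definitive implementation: the service name `my-search`, index name `my-index`, and the `SEARCH_API_KEY` environment variable are hypothetical placeholders; the request body follows the Azure AI Search `docs/search` REST API.

    ```python
    import json
    import os
    import urllib.request

    SERVICE = "my-search"       # hypothetical service name
    INDEX = "my-index"          # hypothetical index name
    API_VERSION = "2023-11-01"  # a GA api-version of the Search REST API

    def build_search_request(query: str = "*", top: int = 5) -> dict:
        """Build the POST body for the Azure AI Search 'docs/search' call."""
        return {"search": query, "top": top, "select": "*"}

    def search(query: str = "*") -> dict:
        """POST the query to the index and return the parsed JSON response."""
        url = (f"https://{SERVICE}.search.windows.net/indexes/{INDEX}"
               f"/docs/search?api-version={API_VERSION}")
        body = json.dumps(build_search_request(query)).encode("utf-8")
        req = urllib.request.Request(
            url, data=body, method="POST",
            headers={"Content-Type": "application/json",
                     "api-key": os.environ["SEARCH_API_KEY"]})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Example usage (requires a real service and SEARCH_API_KEY):
    # for doc in search("*")["value"]:
    #     print(doc.get("id"), len(str(doc)))
    ```

    If the documents returned here are full files while the chatbot's citations are fragments, the splitting is happening in the retrieval/ingestion pipeline rather than in the index itself.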

    To prevent chunking, ensure that your files are indexed as single units by storing the full content in one field (e.g., content), and configure your chatbot's retrieval workflow to fetch entire documents without splitting them. In Azure AI Studio or your integration pipeline, adjust the retrieval and formatting logic to pass unchunked files to the model so that references and context remain intact; this can reduce hallucinations and improve relevance.
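    Indexing each file as a single unit can be sketched as follows: bypass the integrated chunking pipeline and push whole documents yourself with the `azure-search-documents` SDK. This is a sketch under assumptions, not Microsoft's prescribed method: the endpoint, index name, and admin key are hypothetical, `json_file_to_document` is a helper introduced here (not part of the SDK), and it presumes each file fits in the model's context window.

    ```python
    import json
    import pathlib

    def json_file_to_document(path: str, content_field: str = "content") -> dict:
        """Turn one JSON file into exactly one search document: the whole
        file body goes into a single field, so nothing gets split."""
        p = pathlib.Path(path)
        return {
            "id": p.stem,                               # document key; must be unique
            content_field: p.read_text(encoding="utf-8"),
        }

    def upload_whole_files(paths):
        """Push unchunked documents into the index (requires network access).
        Imports are deferred so the helper above stays usable without the
        azure-search-documents package installed."""
        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient

        client = SearchClient(
            endpoint="https://my-search.search.windows.net",  # hypothetical
            index_name="my-index",                            # hypothetical
            credential=AzureKeyCredential("<admin-key>"))     # hypothetical
        client.upload_documents([json_file_to_document(p) for p in paths])
    ```

    With documents indexed this way, the retrieval step returns the complete file, and the citations shown to the user point at the whole document rather than a fragment.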

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for was this answer helpful.

