@ifreegroup When you fine-tune, the model weights are adjusted to align with your data, so the output follows the format/style you expect without you having to provide many examples in the prompt.
However, the training data is not ingested as a knowledge base for the fine-tuned model, and the responses are not expected to reproduce the answers in your training set.
If you need responses grounded in factual data, you should instead consider using OpenAI on your data: index your data sources, perform semantic search over them for each query, and pass the retrieved results as context in your prompts.
This approach grounds the generated responses in the context you provide, and it is the pattern most OpenAI-based chatbots are built on.
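The index/search/context flow described above can be sketched as follows. This is a minimal, runnable illustration, not a definitive implementation: a toy bag-of-words `embed()` function stands in for a real embedding model (e.g. an embeddings API call), and the sample documents and helper names are hypothetical; in production the final prompt would then be sent to a chat completion endpoint.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Index: embed each document in your data source (sample data).
documents = [
    "Fine-tuning adjusts model weights to match a desired output style.",
    "Semantic search retrieves the documents most relevant to a query.",
    "Refunds are processed within 14 business days.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    # 2. Search: rank indexed documents by similarity to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    # 3. Context: put the retrieved passages into the prompt so the
    # model answers from your data rather than its trained weights.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Calling `build_prompt("How long do refunds take?")` retrieves the refund document and embeds it in the prompt, which is what grounds the model's answer in your data.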