Inconsistency in QA Responses Across Different Environments
We have encountered an issue in a client setup where we designed and implemented intelligent bots for webchat and WhatsApp, built on an orchestrator/CLU/QA architecture.
Our configuration spans three distinct environments: Development (DEV), Homologation (HML), and Production (PRD). Despite using identical knowledge bases (KBs), the same training, and consistent release versions across all three environments, the QA component returns different answers for approximately 5-10% of inquiries.
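For reference, below is roughly how we quantify the discrepancy: we send the same set of test questions to the QA deployment in each environment and flag any question whose top answer differs. This is a minimal sketch against the Custom Question Answering REST API; the endpoint URLs, keys, project name, deployment name, and sample questions are placeholders standing in for our actual configuration.

```python
import requests

# Placeholder endpoints and keys per environment (assumption: the same
# project and deployment name are used in DEV, HML, and PRD).
ENVIRONMENTS = {
    "DEV": ("https://dev-language-resource.cognitiveservices.azure.com", "<dev-key>"),
    "HML": ("https://hml-language-resource.cognitiveservices.azure.com", "<hml-key>"),
    "PRD": ("https://prd-language-resource.cognitiveservices.azure.com", "<prd-key>"),
}
PROJECT = "my-qa-project"    # placeholder project name
DEPLOYMENT = "production"    # placeholder deployment name
API_VERSION = "2021-10-01"

def top_answer(endpoint: str, key: str, question: str) -> tuple[str, float]:
    """Return (answer, confidenceScore) of the top QA match for one question."""
    url = f"{endpoint}/language/:query-knowledgebases"
    params = {
        "projectName": PROJECT,
        "deploymentName": DEPLOYMENT,
        "api-version": API_VERSION,
    }
    headers = {"Ocp-Apim-Subscription-Key": key}
    body = {"question": question, "top": 1}
    resp = requests.post(url, params=params, headers=headers, json=body)
    resp.raise_for_status()
    best = resp.json()["answers"][0]
    return best["answer"], best["confidenceScore"]

questions = ["How do I reset my password?"]  # sample test set (placeholder)
for q in questions:
    results = {env: top_answer(ep, key, q) for env, (ep, key) in ENVIRONMENTS.items()}
    distinct = {ans for ans, _ in results.values()}
    if len(distinct) > 1:  # the top answer differs between environments
        print(f"MISMATCH for {q!r}:")
        for env, (ans, score) in results.items():
            print(f"  {env}: {score:.2f} -> {ans[:80]}")
```

Running the full test set through this harness is how we arrived at the ~5-10% mismatch figure, and it also lets us see that the confidence scores for the same question often differ slightly across environments.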
We are seeking assistance with the following:
- An explanation of why these differences in responses occur.
- Guidance on how we can prevent this issue from arising in the future.
Any ideas or suggestions would be appreciated.
Regards