CoherenceEvaluator Class
Evaluates a coherence score for a given query-and-response pair or a multi-turn conversation, including the reasoning behind the score.
The coherence measure assesses the ability of the language model to generate text that reads naturally, flows smoothly, and resembles human-like language in its responses. Use it when assessing the readability and user-friendliness of a model's generated responses in real-world applications.
Note
To align with our support of a diverse set of models, an output key without the gpt_ prefix has been added.
To maintain backwards compatibility, the old key with the gpt_ prefix is still present in the output;
however, it is recommended to use the new key moving forward, as the old key will be deprecated in the future.
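A minimal sketch of reading the score defensively during the transition period, preferring the new key and falling back to the deprecated gpt_-prefixed one (the sample dict below is an assumed result shape; real output may contain additional keys):

```python
def get_coherence(result: dict) -> float:
    # Prefer the new un-prefixed key; fall back to the deprecated gpt_ key.
    return result.get("coherence", result.get("gpt_coherence"))

# Assumed sample result shape for illustration only.
sample = {"coherence": 4.0, "gpt_coherence": 4.0}
print(get_coherence(sample))
```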
Constructor
CoherenceEvaluator(model_config, *, threshold=3)
Parameters

Name | Description |
---|---|
model_config<br>Required | Configuration for the Azure OpenAI model. |
threshold | The threshold for the coherence evaluator. Default is 3. |
Keyword-Only Parameters

Name | Description |
---|---|
threshold | Default value: 3 |
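The constructor signature above can be sketched with a stand-in class. This is not the real implementation: the fixed score stands in for a model call, and the coherence_result pass/fail key is an assumption about how the threshold is applied; only the coherence and gpt_coherence keys come from this document.

```python
class StubCoherenceEvaluator:
    """Stand-in illustrating the CoherenceEvaluator(model_config, *, threshold=3)
    signature; it returns a fixed score instead of calling a model."""

    def __init__(self, model_config, *, threshold=3):
        self.model_config = model_config  # Azure OpenAI model configuration
        self.threshold = threshold        # cutoff for the evaluator, default 3

    def __call__(self, *, query: str, response: str) -> dict:
        score = 4.0  # a real evaluator would score coherence via the model
        return {
            "coherence": score,
            "gpt_coherence": score,  # deprecated key kept for compatibility
            # Assumed key: how a threshold-based verdict might be surfaced.
            "coherence_result": "pass" if score >= self.threshold else "fail",
        }

evaluator = StubCoherenceEvaluator({"azure_deployment": "<deployment>"})
result = evaluator(query="What is 2 + 2?", response="2 + 2 equals 4.")
print(result["coherence_result"])
```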
Attributes
id
Evaluator identifier. Experimental; to be used only with evaluation in the cloud.
id = 'azureml://registries/azureml/models/Coherence-Evaluator/versions/4'