Evaluating the performance of LLM summarization prompts with G-Eval
Article
Abstractive summarization evaluation remains an area where traditional automatic evaluation metrics correlate poorly with human judgements. G-Eval is a technique that uses GPT-4 to evaluate the quality of summaries without requiring a ground truth, and it shows state-of-the-art agreement with human judgements in meta-evaluation on the SummEval benchmark.
G-Eval builds on the four key dimensions of abstractive summary quality proposed by Kryscinski et al. (2019):
Coherence - the collective quality of all sentences in the summary. The summary should be well-structured and well-organized and should build a coherent body of information about a topic.
Consistency - factual alignment between the summary and source document. The summary should contain only statements that are entailed by the source document.
Fluency - the quality of the individual sentences of the summary. The summary should have no formatting problems or grammatical errors that make it difficult to read.
Relevance - selection of the most important content from the source document. The summary should include only important information from the source document.
The G-Eval technique uses four separate, detailed prompts, one for each of these dimensions, each asking for a score on a 1-5 Likert scale (1-3 for Fluency). These prompts, together with the input document and the summary to be evaluated, are fed to GPT-4; the score outputs are collected and final scores are calculated (a prompt sketch follows this list). Specifically, G-Eval:
Adopts Chain-of-Thought (CoT): the LLM first generates a set of intermediate evaluation steps that provide more context and guidance when it evaluates the generated summary
Evaluates in a form-filling paradigm, in which the model fills in a score for the requested dimension
Uses a probability-weighted summation of the output scores as the final score, yielding more fine-grained, continuous scores that better reflect the quality and diversity of the generated texts
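To make the setup concrete, here is a minimal sketch of a coherence prompt in the form-filling style, with hand-written evaluation steps standing in for the CoT-generated ones. The wording is illustrative only, not the exact prompt from the paper or the prompt flow repository:

```python
# Minimal sketch of a G-Eval-style coherence prompt. The wording is
# paraphrased; the paper and the prompt flow repository use their own text.
COHERENCE_PROMPT = """You will be given one summary written for a source document.

Your task is to rate the summary on one metric.

Evaluation Criteria:

Coherence (1-5) - the collective quality of all sentences. The summary
should be well-structured and well-organized, building a coherent body
of information about the topic.

Evaluation Steps:

1. Read the source document carefully and identify the main topic and key points.
2. Read the summary and check whether it presents them in a clear, logical order.
3. Assign a coherence score from 1 to 5.

Source Document:

{document}

Summary:

{summary}

Evaluation Form (scores ONLY):

- Coherence:"""

def build_messages(document: str, summary: str) -> list[dict]:
    """Fill the form-filling template for one (document, summary) pair."""
    prompt = COHERENCE_PROMPT.format(document=document, summary=summary)
    return [{"role": "user", "content": prompt}]
```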
Implementation
G-Eval now has an example implementation in the official prompt flow repository. In this implementation, the original G-Eval prompts have been made more generic and agnostic to the domain of the source data being evaluated. The score parser has also been improved for better performance, verified by meta-evaluation on the SummEval benchmark.
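As a hedged illustration of what a more robust parser can look like (this is not the actual prompt flow code), a simple version might scan the model's reply for the first in-range integer rather than casting the whole string:

```python
import re

def parse_score(raw_output: str, min_score: int = 1, max_score: int = 5) -> int | None:
    """Extract the first in-range integer from the model's reply.

    Illustrative parser only; the actual prompt flow parser may differ.
    Returns None for unparseable replies so the caller can discard or
    retry that sample.
    """
    for match in re.finditer(r"\d+", raw_output):
        value = int(match.group())
        if min_score <= value <= max_score:
            return value
    return None
```

Returning None instead of raising lets the caller drop unparseable samples, which matters when aggregating over many samples as described next.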
GPT-4 does not support returning token probabilities, so G-Eval sets n=20, temperature=2, top_p=1 and samples 20 times to estimate them.
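Putting the pieces together, the sketch below estimates the probability-weighted score by sampling. It assumes the OpenAI Python SDK (v1+) and reuses the hypothetical parse_score helper above; gpt-4-0613 matches the model named in the results table below:

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def g_eval_score(messages: list[dict], n: int = 20) -> float:
    """Approximate the probability-weighted G-Eval score by sampling.

    GPT-4 does not return token probabilities, so the score distribution
    is estimated from the frequency of each score across n samples.
    """
    response = client.chat.completions.create(
        model="gpt-4-0613",   # model referenced in the results table
        messages=messages,
        n=n,                  # 20 samples per (document, summary) pair
        temperature=2,        # high temperature spreads the samples
        top_p=1,
        max_tokens=5,         # the form-filling reply is just a score
    )
    samples = [parse_score(choice.message.content or "") for choice in response.choices]
    counts = Counter(s for s in samples if s is not None)
    if not counts:
        raise ValueError("no parseable scores in the sampled outputs")
    total = sum(counts.values())
    # Probability-weighted summation: score = sum_i p(s_i) * s_i,
    # with p(s_i) estimated as the empirical frequency of score s_i.
    return sum(score * count / total for score, count in counts.items())
```

For example, g_eval_score(build_messages(document, summary)) yields a continuous value (e.g., 4.35) rather than an integer from a single sample, which is what enables the finer-grained scoring described above.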
Results
G-Eval, and in particular this generalized implementation, shows state-of-the-art results when compared to other abstractive summarization evaluation methods.
Spearman correlations (ρ) between different methods and human judgements on the SummEval benchmark:

| Method | Fluency (ρ) | Consistency (ρ) | Coherence (ρ) | Relevance (ρ) | Average |
| --- | --- | --- | --- | --- | --- |
| G-Eval - GPT-4 0613 8k + original prompts in paper | | | | | |