
Can BLEU measure the fluency of generated text?

BLEU cannot directly measure the fluency of generated text. It is a precision-oriented metric: it scores output by the n-gram overlap between the generated text and one or more reference translations.

BLEU measures lexical similarity, not grammatical correctness, syntactic structure, or natural flow. It counts matching word sequences, ignores word order beyond the n-gram window, and does not assess sentence structure or semantic coherence, so grammatically awkward or even nonsensical output can still score well. Its usefulness is largely confined to translation-like tasks with high-quality references, and it remains insensitive to many fluency errors.
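To make the n-gram mechanics concrete, here is a minimal sentence-level BLEU sketch (the `bleu` function and toy sentences are illustrative, not a reference implementation; real evaluations use tools such as sacreBLEU or NLTK's `sentence_bleu`). Note how a scrambled, disfluent word order still earns a high bigram-level score because the local n-grams overlap:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions + brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        if not cand_counts:
            return 0.0  # candidate shorter than n
        # Clip each candidate n-gram by its maximum count in any reference.
        max_ref = Counter()
        for ref in refs:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        precisions.append(clipped / sum(cand_counts.values()))
    if min(precisions) == 0.0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    # Brevity penalty against the closest reference length.
    ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(cand)), l))
    bp = 1.0 if len(cand) >= ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = ["the cat sat on the mat"]
print(bleu("the cat sat on the mat", ref))           # exact match: 1.0
print(bleu("on the mat the cat sat", ref, max_n=2))  # scrambled, yet bigram BLEU ~0.89
```

The scrambled candidate is ungrammatical English, but every unigram and four of its five bigrams match the reference, so at `max_n=2` it scores about 0.89; BLEU has no way to notice that the sentence does not read naturally.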

Its core application is the efficient, automated evaluation of translation adequacy and precision during machine translation development. For genuine fluency assessment, complementary methods are essential: human judgments of naturalness, language-model scoring such as perplexity (e.g. with GPT-style models or BERT-based scoring), or metrics and tools aimed at fluency and grammaticality (such as YiSi or parser-based acceptability checks).
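As one example of a complementary fluency signal, perplexity under a language model penalizes improbable word orderings that n-gram overlap forgives. Below is a minimal sketch using a Laplace-smoothed bigram model (the toy corpus and helper names are illustrative assumptions; real perplexity scoring uses a pretrained LM, not a two-sentence corpus):

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Return a Laplace-smoothed bigram probability function P(cur | prev)."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])          # contexts (everything but </s>)
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)
    def prob(prev, cur):
        # Add-one smoothing so unseen bigrams get small but nonzero mass.
        return (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
    return prob

def perplexity(sentence, prob):
    """Per-token perplexity of a sentence under the bigram model."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    logp = sum(math.log(prob(p, c)) for p, c in zip(toks, toks[1:]))
    return math.exp(-logp / (len(toks) - 1))

prob = train_bigram_lm(["the cat sat on the mat", "the dog sat on the rug"])
print(perplexity("the cat sat on the mat", prob))  # fluent order: lower perplexity
print(perplexity("on the mat the cat sat", prob))  # scrambled order: higher perplexity
```

The scrambled sentence contains bigrams the model never saw (`mat the`, `sat </s>`), so its perplexity is higher even though its unigram overlap with the training text is perfect: exactly the fluency distinction BLEU cannot make.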
