AI GLOSSARY

Win Rate

Evaluation & Performance

A metric for evaluating generative AI models by comparing their outputs head-to-head, typically through human judgment or a judge model, and measuring how often one model's output is preferred over the other's. It is increasingly used as a practical alternative to automated metrics for assessing the quality of open-ended generation tasks.
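In practice, a win rate is just the fraction of pairwise comparisons that a model wins, often with ties counted as half a win. The sketch below illustrates that common convention; the function name and the tie-handling choice are illustrative assumptions, not a standard API.

```python
def win_rate(outcomes):
    """Compute a win rate from pairwise preference judgments.

    outcomes: list of "win", "loss", or "tie" for model A vs. model B.
    Ties count as half a win (one common convention; others drop ties).
    """
    if not outcomes:
        raise ValueError("need at least one comparison")
    score = sum(
        1.0 if o == "win" else 0.5 if o == "tie" else 0.0
        for o in outcomes
    )
    return score / len(outcomes)

# Example: 6 wins, 2 ties, 2 losses across 10 head-to-head comparisons
judgments = ["win"] * 6 + ["tie"] * 2 + ["loss"] * 2
print(win_rate(judgments))  # 0.7
```

Because each comparison is relative, a win rate is only meaningful with respect to a stated opponent (e.g. a baseline model) and a stated judge, and position bias is often controlled by swapping the order in which the two outputs are shown.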