AI GLOSSARY
Greedy Decoding
Large Language Model (LLM) Terms
A decoding strategy where the model always selects the single most probable next token at each step. It is fast and deterministic but tends to produce repetitive or suboptimal outputs, since always taking the locally best option does not guarantee the globally best sequence.
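The behavior can be sketched with a toy model. The vocabulary and the `next_token_logits` table below are invented stand-ins for a real model's forward pass; greedy decoding itself is just the `argmax` at each step. Note how the locally best choices trap the loop in a repeating cycle:

```python
# Toy vocabulary; a real LLM would have tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_logits(prefix):
    # Hypothetical stand-in for a model forward pass: scores depend
    # only on the last token here, purely for illustration.
    table = {
        None:  [2.0, 0.5, 0.1, 0.1, 0.1, 0.1],
        "the": [0.1, 2.5, 0.2, 0.1, 1.0, 0.1],
        "cat": [0.1, 0.1, 3.0, 0.2, 0.1, 0.1],
        "sat": [0.2, 0.1, 0.1, 2.0, 0.1, 0.5],
        "on":  [1.5, 0.3, 0.1, 0.1, 1.0, 0.1],
        "mat": [0.1, 0.1, 0.1, 0.1, 0.1, 3.0],
    }
    last = prefix[-1] if prefix else None
    return table[last]

def greedy_decode(max_steps=10):
    tokens = []
    for _ in range(max_steps):
        logits = next_token_logits(tokens)
        # Greedy step: always take the single highest-scoring token.
        best = max(range(len(VOCAB)), key=lambda i: logits[i])
        if VOCAB[best] == "<eos>":
            break
        tokens.append(VOCAB[best])
    return tokens

print(" ".join(greedy_decode()))
# the cat sat on the cat sat on the cat
```

After "on", the greedy choice is "the" (1.5) rather than "mat" (1.0), so the output cycles instead of reaching the "mat" → `<eos>` ending: a locally best pick that misses a better full sequence, which is exactly the weakness sampling or beam search aims to avoid.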
See also: decoding, temperature, beam search.