
AI GLOSSARY

Large Language Model (LLM) Terms

Decoding

The process by which a language model converts its internal probability distributions over possible next tokens into actual output text. Different decoding strategies, such as greedy decoding, beam search, or sampling, make different tradeoffs between speed, diversity, and coherence of the generated output.
See also: temperature, top-p sampling, autoregressive model.
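The difference between these strategies can be sketched with a toy next-token distribution. The probabilities and token names below are illustrative assumptions, not output from any real model; the sketch contrasts greedy decoding (always take the most probable token) with plain sampling (draw tokens in proportion to their probability):

```python
import random

# Hypothetical next-token probability distribution (illustrative values only).
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "ran": 0.05}

def greedy_decode(probs):
    # Greedy decoding: deterministically pick the single most probable token.
    return max(probs, key=probs.get)

def sample_decode(probs, rng):
    # Sampling: draw a token with chance proportional to its probability,
    # trading determinism for diversity in the generated text.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(greedy_decode(probs))      # greedy always yields the argmax token
print(sample_decode(probs, rng)) # sampling may yield any token in the vocabulary
```

Greedy decoding maximizes per-step probability and is fast and reproducible, but it can produce repetitive text; sampling introduces variety at the cost of occasional incoherence, which is why it is usually combined with controls such as temperature or top-p.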