AI GLOSSARY

Grounding

Large Language Model (LLM) Terms

The broader practice of connecting a language model's outputs to verifiable external information, whether through retrieved documents, databases, tool use, or real-world data. Grounding addresses one of the most significant weaknesses of language models: their tendency to generate plausible-sounding but factually incorrect content.
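The retrieval-based form of this practice can be sketched as follows. This is a minimal illustration, not a production implementation: the keyword retriever, the tiny in-memory knowledge base, and all names (`KNOWLEDGE_BASE`, `retrieve`, `build_grounded_prompt`) are hypothetical, and a real system would use vector search plus an actual model API call on the resulting prompt.

```python
# Minimal sketch of grounding via retrieval: fetch relevant documents,
# then build a prompt that instructs the model to answer only from them.
# The knowledge base and retriever here are toy stand-ins.

KNOWLEDGE_BASE = [
    {"id": "doc1", "text": "The Eiffel Tower is 330 metres tall."},
    {"id": "doc2", "text": "Mount Everest is 8,849 metres tall."},
]


def retrieve(query: str, docs=KNOWLEDGE_BASE):
    """Return documents sharing at least one content keyword with the query.

    Real systems replace this with embedding-based vector search.
    """
    terms = {w.strip("?.!,").lower() for w in query.split() if len(w) > 3}
    return [
        d for d in docs
        if terms & {w.strip("?.!,").lower() for w in d["text"].split()}
    ]


def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved evidence so the model answers from cited sources
    rather than from its parametric memory alone."""
    evidence = retrieve(question)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in evidence)
    return (
        "Answer using only the sources below, and cite the source id.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )


prompt = build_grounded_prompt("How tall is the Eiffel Tower?")
print(prompt)
```

The key design point is that the model is constrained to verifiable evidence supplied in the prompt, which is what makes the output checkable against external sources.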
See also: grounded generation, retrieval-augmented generation, hallucination.