{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Distillation — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Distillation</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">A technique where a smaller, simpler model, the student, is trained to mimic the behavior of a larger, more powerful model, the teacher. The result is a compact model that retains much of the capability of the original but is cheaper and faster to run. Distillation is widely used to make large foundation models practical to deploy on consumer hardware or at scale.  See also: quantization, model compression, fine-tuning.</p><a href=\"https://gaks.ai/glossary/distillation\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/distillation →</a></div>"}