{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Inference — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Inference</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">The process of using a trained model to generate predictions or outputs on new data. While training happens once or periodically, inference happens every time the model is used. It is what end users experience when they interact with an AI product, and its speed and cost are key considerations in deployment. See also: training, inference cost, forward pass.</p><a href=\"https://gaks.ai/glossary/inference\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/inference →</a></div>"}