{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Value Alignment — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Value Alignment</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">The specific challenge of ensuring that an AI system's values (the things it implicitly optimizes for through its behavior) match the values of the humans it is meant to serve. Value alignment is broader than mere instruction following: it requires the system to have internalized a sufficiently rich and accurate model of human values to behave appropriately even in novel situations that its designers did not anticipate.</p><a href=\"https://gaks.ai/glossary/value-alignment\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/value-alignment →</a></div>"}