AI GLOSSARY

Reward Shaping

Safety, Alignment & Ethics

The practice of modifying or augmenting a reinforcement learning agent's reward signal to make learning faster, more stable, or better directed, for example by providing intermediate rewards for progress toward a goal rather than only rewarding final success. Reward shaping can dramatically improve learning efficiency, but it must be done carefully: a poorly chosen shaping term can be exploited, as when an agent loops through rewarded intermediate states to farm bonus reward instead of completing the task. Potential-based shaping, in which the bonus is a discounted difference of a potential function over states, avoids this failure mode by provably preserving the optimal policies of the original task (see the sketch below).
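A minimal sketch of potential-based shaping follows. The grid world, goal position, and distance-based potential function are illustrative assumptions chosen for the example, not part of any particular library; only the shaping formula F(s, s') = γΦ(s') − Φ(s) comes from the standard result (Ng, Harada & Russell, 1999).

```python
import math

GAMMA = 0.99   # discount factor (assumed)
GOAL = (9, 9)  # hypothetical goal cell in a toy grid world


def potential(state):
    """Potential Phi(s): higher when the state is closer to the goal.

    Negative Euclidean distance to the goal is one common choice.
    """
    return -math.dist(state, GOAL)


def shaped_reward(state, next_state, env_reward):
    """Augment the environment reward with F(s, s') = gamma * Phi(s') - Phi(s).

    Because the bonus is a difference of potentials, it cannot be farmed
    by looping: any cycle of states accumulates (near-)zero shaping reward,
    so the original task's optimal policies are preserved.
    """
    return env_reward + GAMMA * potential(next_state) - potential(state)


# Usage: a step toward the goal earns a small positive bonus even though
# the sparse environment reward is still zero at this step.
print(shaped_reward(state=(3, 4), next_state=(4, 4), env_reward=0.0))
```

In this sketch the agent is nudged toward the goal at every step, while the potential-difference form guarantees that the shaping changes only the speed of learning, not which behavior is optimal.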