AI GLOSSARY

Control Problem

Safety, Alignment & Ethics

The fundamental challenge of ensuring that a sufficiently capable AI system remains under meaningful human control and pursues goals that are beneficial to humanity. The control problem becomes harder as AI systems become more capable, since a system significantly smarter than its overseers may find ways to circumvent controls or pursue its objectives in unexpected ways. It is closely related to, but distinct from, AI alignment: alignment concerns whether a system's goals match human values, while the control problem concerns whether humans retain the ability to oversee, correct, or shut down the system even when its goals diverge.
See also: AI alignment, AI safety, AI containment.