AI GLOSSARY
Mixture of Experts
MoE · Neural Network Architectures
An architecture in which a model consists of many specialized subnetworks, called experts, plus a routing mechanism that selectively activates only a subset of them for each input. MoE lets a model have a very large total parameter count while keeping the computational cost of each forward pass manageable, since only the routed experts run per token. It is a key technique behind some of the most capable and efficient large language models.
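The routing idea above can be sketched in a few lines. This is a minimal, illustrative top-k MoE layer, not any particular model's implementation: the sizes (`d_model`, `n_experts`, `top_k`), the linear experts, and the linear router are all simplifying assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" here is just a linear map: one weight matrix per expert.
expert_weights = rng.normal(size=(n_experts, d_model, d_model))

# The router is a linear layer that scores every expert for a given token.
router_weights = rng.normal(size=(d_model, n_experts))

def moe_forward(x):
    """Route one token vector x through only its top-k experts."""
    logits = x @ router_weights            # (n_experts,) routing scores
    chosen = np.argsort(logits)[-top_k:]   # indices of the k highest-scoring experts
    # Softmax over the selected experts' scores gives the mixing weights.
    scores = np.exp(logits[chosen] - logits[chosen].max())
    gates = scores / scores.sum()
    # Only the chosen experts are evaluated; the rest are skipped entirely,
    # which is what keeps the per-token compute cost low.
    out = sum(g * (x @ expert_weights[e]) for g, e in zip(gates, chosen))
    return out, chosen

token = rng.normal(size=d_model)
output, chosen = moe_forward(token)
```

Here the total parameter count grows with `n_experts`, but each token pays the cost of only `top_k` experts plus the cheap router, which is the core trade-off MoE exploits.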