AI GLOSSARY
Distributed Training
Deployment & Infrastructure
A training approach that spreads the work of training a large model across multiple machines or processors working in parallel. Distributed training is essential for the largest modern AI models, which would take impractically long to train on a single machine. The two main strategies are data parallelism, where each device holds a full copy of the model and processes a different slice of the data, and model parallelism, where the model itself is split across devices.
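The data-parallel strategy can be sketched in a few lines: each worker computes gradients on its own shard of the batch, and the gradients are then averaged (an all-reduce in real systems) so every worker applies the same update. This is a minimal illustration, not a real framework's API; the function names and the linear model are assumptions for the sketch.

```python
import numpy as np

def gradient(w, X, y):
    # Gradient of mean squared error for a linear model y = X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, num_workers=4, lr=0.1):
    # Split the batch into equal shards, one per simulated worker.
    shards_X = np.array_split(X, num_workers)
    shards_y = np.array_split(y, num_workers)
    # Each worker computes a local gradient on its shard...
    grads = [gradient(w, Xi, yi) for Xi, yi in zip(shards_X, shards_y)]
    # ...then the gradients are averaged across workers (an all-reduce).
    avg_grad = np.mean(grads, axis=0)
    return w - lr * avg_grad

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)

w_parallel = data_parallel_step(w, X, y)
w_serial = w - 0.1 * gradient(w, X, y)
# With equal shard sizes, the averaged gradient equals the
# full-batch gradient, so both updates match.
print(np.allclose(w_parallel, w_serial))  # True
```

The key property shown here is that, for equal shard sizes, averaging per-worker gradients reproduces the full-batch gradient exactly, which is why data parallelism scales training without changing the mathematics of the update.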
See also: data parallelism, GPU, model parallelism.