
AI GLOSSARY

Data Parallelism

Deployment & Infrastructure

A distributed training strategy in which the same model is replicated across multiple processors or machines, each processing a different subset (shard) of the training data simultaneously. The gradients computed on each shard are then combined, typically by averaging, so that every replica applies an identical update and stays synchronized. Data parallelism is the most common approach for scaling neural network training.
See also: training, GPU, model parallelism.
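A minimal single-process sketch of the idea, using illustrative names rather than any particular framework's API: the batch is split into one shard per simulated worker, each worker computes a gradient on its own shard against a shared weight replica, and the gradients are averaged (standing in for an all-reduce) before one common update.

```python
import numpy as np

def local_gradient(w, X_shard, y_shard):
    # Gradient of mean squared error 0.5 * ||Xw - y||^2 / n on one shard.
    n = len(y_shard)
    return X_shard.T @ (X_shard @ w - y_shard) / n

# Toy linear-regression data (illustrative; any differentiable model works).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)          # the replicated model parameters
lr = 0.1
num_workers = 4

for step in range(1000):
    # Data parallelism: each worker sees only its own shard of the batch.
    shards = zip(np.array_split(X, num_workers),
                 np.array_split(y, num_workers))
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    # "All-reduce": average the per-worker gradients so every replica
    # performs the same update and the copies never drift apart.
    w -= lr * np.mean(grads, axis=0)
```

With equal shard sizes, the averaged gradient equals the full-batch gradient, which is why data-parallel training matches single-device training step for step (up to floating-point ordering). In a real multi-device setup the averaging step is a collective communication (e.g. ring all-reduce) rather than a local `np.mean`.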
