Distribute training across multiple nGraph backends

Important

Distributed training is not officially supported as of version 0.27; however, the configuration options described below have worked for nGraph devices in testing, with mixed or limited success.

In the previous section, we described the steps needed to create a “trainable” nGraph model. Here we demonstrate how to train a data parallel model by distributing the graph to more than one device.

Frameworks can implement distributed training with nGraph versions prior to 0.13:

  • Use -DNGRAPH_DISTRIBUTED_ENABLE=OMPI to enable distributed training with OpenMPI. This flag requires that OpenMPI already be installed on the system; if it is not, install OpenMPI version 2.1.1 or later before compiling (an example configure invocation follows this list).

  • Use -DNGRAPH_DISTRIBUTED_ENABLE=MLSL to enable the option for Intel® Machine Learning Scaling Library for Linux* OS:

    Note

    The Intel® MLSL option applies to Intel® Architecture CPUs (CPU) and Interpreter backends only. For all other backends, OpenMPI is presently the only supported option. We recommend the use of Intel MLSL for CPU backends to avoid an extra download step.
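
For example, enabling the OpenMPI option at build time might look like the following. This is a sketch assuming the usual out-of-source CMake build from a build directory inside the nGraph source tree; combine the flag with whatever other configure options you normally pass:

    # configure with OpenMPI-based distributed training enabled, then build
    cmake .. -DNGRAPH_DISTRIBUTED_ENABLE=OMPI
    make -j$(nproc)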

To deploy data-parallel training, the AllReduce op should be added after the steps needed to complete the backpropagation.
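
The change amounts to wrapping each parameter's gradient in an AllReduce op before the optimizer update is built. Below is a minimal sketch of that idea, not the exact code from the example; the parameter W, its gradient W_grad (obtained from nGraph autodiff during backpropagation), and learning_rate are illustrative names:

    #include <memory>
    #include <ngraph/ngraph.hpp>

    using namespace ngraph;

    // Wrap a parameter's local gradient in AllReduce and build a plain SGD
    // update from the synchronized result. AllReduce sums W_grad element-wise
    // across all workers, so every worker applies the same update.
    // learning_rate is assumed to already have W's shape (nGraph element-wise
    // ops require matching shapes).
    std::shared_ptr<Node> make_synchronized_sgd_update(
        const std::shared_ptr<op::Parameter>& W,
        const std::shared_ptr<Node>& W_grad,
        const std::shared_ptr<Node>& learning_rate)
    {
        auto W_grad_sync = std::make_shared<op::AllReduce>(W_grad);
        auto scaled      = std::make_shared<op::Multiply>(learning_rate, W_grad_sync);
        return std::make_shared<op::Subtract>(W, scaled);
    }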

See the full code in the examples folder /doc/examples/mnist_mlp/dist_mnist_mlp.cpp.

Finally, to run the training using two nGraph devices, invoke mpirun:

    mpirun -np 2 dist_mnist_mlp

Allreduce is the essential operation for synchronizing gradients across all workers in data-parallel training, favored over parameter servers for its simplicity and scalability. The AllReduce op is one of the nGraph Library's core ops. To enable gradient synchronization for a network, we simply inject the AllReduce op into the computation graph, between the autodiff computation and the optimizer update (which then becomes part of the nGraph graph); for example, with two workers holding local gradients g1 and g2, allreduce leaves g1 + g2 on both workers, so each applies the same update. The nGraph Backend handles the rest.
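
Once the AllReduce ops are in place, nothing else changes on the nGraph side: the function is compiled and executed on a backend as usual, and a build configured with OMPI or MLSL performs the cross-worker communication when the op runs. A minimal sketch, assuming a train_function (illustrative name) whose update nodes already consume the synchronized gradients:

    #include <memory>
    #include <ngraph/ngraph.hpp>
    #include <ngraph/runtime/backend.hpp>

    using namespace ngraph;

    // Compile a training Function that already contains AllReduce ops. Each
    // worker launched by mpirun runs this same code; the backend carries out
    // the cross-worker reduction when the AllReduce ops execute.
    void run_training(const std::shared_ptr<Function>& train_function)
    {
        auto backend = runtime::Backend::create("CPU"); // or another backend name
        auto exec = backend->compile(train_function);
        // ... create input/output tensors and call exec->call_with_validate(...)
        //     once per training batch ...
    }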