Integrate Supported Frameworks

A framework is “supported” when there is a framework bridge that can be cloned from one of our GitHub repos and built to connect to a supported backend through nGraph, while maintaining the framework’s programmatic or user interface. Current bridge-enabled frameworks include TensorFlow* and MXNet*.

Once connected via the bridge, the framework can run and train deep learning models with a variety of workloads on any supported backend, using nGraph Compiler as an optimizing compiler available through the framework.
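To make the bridge idea concrete, here is a toy, framework-agnostic sketch. The op names and the mapping are purely illustrative (they are not nGraph’s actual API); the point is only that a bridge translates the framework’s graph for the backend while the framework’s own interface stays unchanged.

```python
# Toy sketch (NOT nGraph's real API): a bridge walks the framework's
# graph and re-emits each framework op as the equivalent backend op,
# leaving the framework's user-facing interface untouched.
ILLUSTRATIVE_OP_MAP = {"Add": "ng_add", "MatMul": "ng_dot"}

def bridge_graph(framework_ops):
    """Translate framework op names into (hypothetical) backend op names."""
    return [ILLUSTRATIVE_OP_MAP.get(op, op) for op in framework_ops]

print(bridge_graph(["MatMul", "Add"]))  # ['ng_dot', 'ng_add']
```

Ops the bridge does not recognize are passed through unchanged, which mirrors how real bridges fall back to the framework’s native execution for unsupported operations.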

MXNet* bridge

  1. See the README on nGraph-MXNet Integration for how to enable the bridge.
  2. (Optional) For experimental or alternative approaches to distributed training, including data-parallel training, see the MXNet-relevant sections of Distributed Training in nGraph and the How to topic Train using multiple nGraph CPU backends with data parallel.
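The data-parallel training referenced above replicates the model across workers and combines their gradients each step. A framework-agnostic sketch of the combining step in plain Python (this is an illustration of the concept, not the MXNet or nGraph API):

```python
# Each worker computes gradients on its own data shard; the shared
# parameter update uses the element-wise average across all workers.
def average_gradients(worker_grads):
    """worker_grads: one gradient list per worker, all the same length."""
    n = len(worker_grads)
    return [sum(per_param) / n for per_param in zip(*worker_grads)]

two_workers = [[1.0, 2.0], [3.0, 4.0]]
print(average_gradients(two_workers))  # [2.0, 3.0]
```

Averaging (rather than summing) keeps the effective learning rate independent of the number of workers.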

TensorFlow* bridge

See the nGraph-TensorFlow bridge README for instructions on installing the DSO for the nGraph-TensorFlow bridge.

neon™

Use neon as a frontend for nGraph backends

neon is an open-source deep learning framework with a history of being the fastest framework for training CNN-based models on GPUs. Detailed information about neon’s features and functionality can be found in the neon docs. This section covers installing neon on a system that already has an ngraph_dist installed.

Important

As of version 0.8, these instructions presume that your system already has the library installed to the default location, as outlined in our Build the Library documentation.

  1. Set NGRAPH_CPP_BUILD_PATH and LD_LIBRARY_PATH. Use the env command to check whether these variables are already set; if they are not, set them with something like:

    export NGRAPH_CPP_BUILD_PATH=$HOME/ngraph_dist/
    export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib/
    
  2. The neon framework uses the pip package manager during installation. Install pip for Python version 3.5 or higher, then create and activate a virtual environment:

    $ sudo apt-get install python3-pip python3-venv
    $ python3 -m venv neon_venv
    $ cd neon_venv
    $ . bin/activate
    (neon_venv) ~/frameworks$
    
  3. Go to the python subdirectory of the ngraph repo cloned earlier in Build the Library, and complete these actions:

    (neon_venv)$ cd /opt/libraries/ngraph/python
    (neon_venv)$ git clone --recursive -b allow-nonconstructible-holders https://github.com/jagerman/pybind11.git
    (neon_venv)$ export PYBIND_HEADERS_PATH=/opt/libraries/ngraph/python/pybind11
    (neon_venv)$ pip install -U .
    
  4. Finally, install the neon integration:

    (neon_venv)$ git clone git@github.com:NervanaSystems/ngraph-neon
    (neon_venv)$ cd ngraph-neon
    (neon_venv)$ make install
    
  5. To test a training example, run the following from the ngraph-neon/examples/cifar10 directory:

    (neon_venv)$ python cifar10_conv.py
    
  6. (Optional) For experimental or alternative approaches to distributed training, including data-parallel training, see Distributed Training in nGraph and the How to article Train using multiple nGraph CPU backends with data parallel.
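Step 1 above can be sanity-checked programmatically before building. The following hedged Python snippet takes the variable names from that step and assumes the default ~/ngraph_dist paths; it simply reports which required variables are still unset:

```python
import os

# Variable names from step 1 of the neon installation instructions.
REQUIRED = ("NGRAPH_CPP_BUILD_PATH", "LD_LIBRARY_PATH")

def missing_ngraph_vars(env):
    """Return the required nGraph variables that are absent from env."""
    return [name for name in REQUIRED if name not in env]

# With both variables set as in step 1 (default paths assumed),
# nothing is reported missing:
configured = {
    "NGRAPH_CPP_BUILD_PATH": os.path.expanduser("~/ngraph_dist/"),
    "LD_LIBRARY_PATH": os.path.expanduser("~/ngraph_dist/lib/"),
}
print(missing_ngraph_vars(configured))  # []
print(missing_ngraph_vars({}))          # both names reported
```

To check the live shell environment instead of a test dictionary, pass os.environ to missing_ngraph_vars.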