Integrate Supported Frameworks¶
A framework is “supported” when there is a framework bridge that can be cloned from one of our GitHub repos and built to connect to nGraph device backends, all the while maintaining the framework’s programmatic or user interface. Bridges currently exist for the TensorFlow* and MXNet* frameworks.
Once connected via the bridge, the framework can run and train deep learning models on a variety of workloads and backends, with nGraph Compiler acting as an optimizing compiler available through the framework.
- See the README on nGraph-MXNet Integration for how to enable the bridge.
- Optional: For experimental or alternative approaches to distributed training methodologies, including data parallel training, see the MXNet-relevant sections of the docs on Distributed Training in nGraph and How to topics like Train using multiple nGraph CPU backends with data parallel.
neon as a frontend for nGraph backends¶
neon is an open source Deep Learning framework that has a history
of being the fastest framework for training CNN-based models with GPUs.
Detailed info about neon’s features and functionality can be found in the
neon docs. This section covers installing neon on an existing system that already has the nGraph library installed. As of version 0.9, these instructions presume that the library is installed to the default location, as outlined in our Build the Library documentation.
Two environment variables are important to the installation: NGRAPH_CPP_BUILD_PATH and LD_LIBRARY_PATH. You can use the env command to see whether these paths have already been set; if they have not, set them with something like:
export NGRAPH_CPP_BUILD_PATH=$HOME/ngraph_dist/
export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib/
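If you are unsure whether these variables are already set, a small guard like the following sets them only when they are missing (a convenience sketch, assuming the default $HOME/ngraph_dist install location):

```shell
# Set NGRAPH_CPP_BUILD_PATH and LD_LIBRARY_PATH only if they are not
# already defined; assumes the default $HOME/ngraph_dist install location.
NGRAPH_CPP_BUILD_PATH="${NGRAPH_CPP_BUILD_PATH:-$HOME/ngraph_dist/}"
if [ -z "${LD_LIBRARY_PATH}" ]; then
  LD_LIBRARY_PATH="$HOME/ngraph_dist/lib/"
fi
export NGRAPH_CPP_BUILD_PATH LD_LIBRARY_PATH
# Run `env | grep NGRAPH` afterward to confirm the result.
```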
The neon framework uses the pip package manager during installation. With Python version 3.5 or higher, install pip and venv, then create and activate a virtual environment:
$ sudo apt-get install python3-pip python3-venv
$ python3 -m venv neon_venv
$ cd neon_venv
$ . bin/activate
(neon_venv) ~/frameworks$
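Before installing packages, you may want to confirm that the active interpreter meets the Python 3.5 requirement; a quick sanity check (not part of the official steps):

```shell
# Confirm the interpreter is Python 3.5 or newer, as required by
# neon's pip-based install; exits nonzero with a message otherwise.
python3 -c 'import sys; assert sys.version_info >= (3, 5), sys.version'
echo "Python version OK"
```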
Go to the “python” subdirectory of the ngraph repo we cloned during the earlier Build the Library steps, and complete these actions:
(neon_venv)$ cd /opt/libraries/ngraph/python
(neon_venv)$ git clone --recursive -b allow-nonconstructible-holders https://github.com/jagerman/pybind11.git
(neon_venv)$ export PYBIND_HEADERS_PATH=/opt/libraries/ngraph/python/pybind11
(neon_venv)$ pip install -U .
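After the pip install finishes, a quick smoke test can confirm the Python wrapper is importable (a sanity check, not part of the official steps; it assumes the package installed from the python/ subdirectory is importable as ngraph):

```shell
# Smoke-test the nGraph Python bindings installed by `pip install -U .`;
# prints a status line either way rather than failing the shell.
if python3 -c 'import ngraph' 2>/dev/null; then
  echo "ngraph Python bindings importable"
else
  echo "ngraph Python bindings not found"
fi
```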
Finally we’re ready to install the neon integration:
(neon_venv)$ git clone git@github.com:NervanaSystems/ngraph-neon
(neon_venv)$ cd ngraph-neon
(neon_venv)$ make install
To test a training example, you can run the included cifar10_conv.py script:
(neon_venv)$ python cifar10_conv.py
(Optional) For experimental or alternative approaches to distributed training methodologies, including data parallel training, see the docs on Distributed Training in nGraph and the How to article Train using multiple nGraph CPU backends with data parallel.