Intel® Nervana™ Graph requires Python 2.7 or Python 3.4+ running on a Linux* or UNIX-based OS. Before installing, also ensure your system has recent versions of the following packages:

Ubuntu* 16.04+ or CentOS* 7.4+   Mac OS X*        Description
python-pip                       pip              Tool to install Python dependencies
python-virtualenv (*)            virtualenv (*)   Allows creation of isolated environments
libhdf5-dev                      h5py             Enables loading of HDF5 formats
libyaml-dev                      pyaml            Parses YAML format inputs
pkg-config                       pkg-config       Retrieves information about installed libraries

(*) Required only for Python 2.7 installs. With Python 3, test for the presence of venv with: python3 -m venv -h
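
To sanity-check the Python-side prerequisites, a short script such as the following can be used. It is illustrative, not part of Nervana Graph; the module names `h5py` and `yaml` correspond to the h5py and pyaml/PyYAML packages in the table:

```python
# Report whether the Python modules backing the prerequisite packages import cleanly.
import importlib


def check_module(name):
    """Return True if `name` can be imported, False otherwise."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False


# h5py backs HDF5 loading; yaml (PyYAML) backs YAML parsing.
for mod in ("h5py", "yaml"):
    status = "OK" if check_module(mod) else "missing"
    print("%s: %s" % (mod, status))
```

If either module reports missing, install the matching package from the table before continuing.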


  1. Choose your build environment. Installing within a virtual environment is the easiest option for most users; to prepare for a system-wide installation instead, skip this step.

    • Python 3: to create and activate a virtualenv:
    $ python3 -m venv .venv
    $ . .venv/bin/activate
    • Python 2.7: to create and activate a virtualenv:
    $ virtualenv -p python2.7 .venv
    $ . .venv/bin/activate
  2. Download the source code.

    $ git clone
    $ cd ngraph
  3. (Optional) ONNX dependency.

    • Building with ONNX support requires extra steps. For more information about Nervana Graph support for ONNX, you can read this blog post. The following commands build the protobuf prerequisite for ONNX on CentOS* 7.4+ systems:

      $ yum install autoconf automake libtool curl gcc-c++ unzip -y
      $ git clone
      $ cd protobuf
      $ ./
      $ ./configure
      $ make && make install
      $ ldconfig
    • To prepare the ONNX dependency on Mac OS X* or Ubuntu* systems, run:

      $ make onnx_dependency
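
After the protobuf build above, one way to confirm the compiler is visible on PATH is the generic check below (not a Nervana Graph command; `shutil.which` requires Python 3.3+):

```python
# Look up protobuf's `protoc` compiler on PATH and print its version if present.
import shutil      # shutil.which is available on Python 3.3+
import subprocess

protoc = shutil.which("protoc")
if protoc:
    version = subprocess.check_output([protoc, "--version"]).decode().strip()
    print("found %s (%s)" % (protoc, version))
else:
    print("protoc not found on PATH; re-run the protobuf build steps above")
```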


To build and install Intel Nervana Graph, run make install from within the cloned repository:

$ make install
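
A quick smoke test after installation (assuming the package installs under the module name `ngraph`, per the repository name):

```python
# Try importing the installed package; print a hint if the install did not take effect.
try:
    import ngraph
    print("ngraph imported from", ngraph.__file__)
except ImportError:
    print("ngraph not importable; re-run `make install` inside the active virtualenv")
```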

Back-end Configuration

After completing the prerequisites and installing the base Nervana Graph package, you can add further packages to achieve optimal performance on your backend platform of choice.

  1. CPU/Intel® architecture transformer

    (Optional) To run Intel Nervana Graph with optimal performance on a CPU backend, configure your build with the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), an open-source library designed to accelerate deep learning applications on Intel® architecture:

    $ git clone
    $ cd mkl-dnn/scripts && ./ && cd ..
    $ mkdir -p build && cd build
    $ cmake -DCMAKE_INSTALL_PREFIX=$PWD/../install .. && make install
    $ cd ../.. && export MKLDNN_ROOT=$PWD/mkl-dnn/install
  2. GPU transformer

    (Optional) The GPU transformer requires installation of the CUDA SDK and drivers. Remember to add the CUDA paths to your environment variables:

  • On Ubuntu

    export PATH="/usr/local/cuda/bin:"$PATH
    export LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/lib:/usr/local/lib:"$LD_LIBRARY_PATH
  • On Mac OS X

    export PATH="/usr/local/cuda/bin:"$PATH
    export DYLD_LIBRARY_PATH="/usr/local/cuda/lib:"$DYLD_LIBRARY_PATH
  • To add GPU support after installing, you can also run:

    $ make gpu_prepare
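
To verify that the variables set in the steps above are visible to Python, a small check like this can help (variable names are taken from this document; values will differ per system):

```python
# Print the backend-related environment variables configured in the steps above.
import os

for var in ("MKLDNN_ROOT", "PATH", "LD_LIBRARY_PATH", "DYLD_LIBRARY_PATH"):
    print("%s = %s" % (var, os.environ.get(var, "<unset>")))
```

Unset variables print `<unset>`; only the entries relevant to your platform (e.g. DYLD_LIBRARY_PATH on Mac OS X) need to be populated.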

Getting Started

Some Jupyter* notebook walkthroughs demonstrate ways to use Intel Nervana Graph:

  • examples/walk_through/: Use Nervana Graph to implement logistic regression
  • examples/mnist/MNIST_Direct.ipynb: Build a deep learning model directly on Nervana Graph
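
As a taste of the API exercised in these walkthroughs, a minimal computation might look like the sketch below. Names such as `ng.placeholder` and `ngt.make_transformer` follow the project's walkthrough documentation, but treat the exact signatures as assumptions; the import guard lets the snippet degrade gracefully if Nervana Graph is not installed:

```python
# Sketch: build a tiny graph and execute it; API names assumed from the walkthrough.
try:
    import ngraph as ng
    import ngraph.transformers as ngt

    x = ng.placeholder(())                 # scalar placeholder
    transformer = ngt.make_transformer()   # default transformer
    plus_one = transformer.computation(x + 1, x)
    print(plus_one(1))                     # feeds 1 for x, evaluates x + 1
except ImportError:
    print("ngraph is not installed; see the installation steps above")
```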

The neon framework can also be used to define and train deep learning models:

  • examples/mnist/ Multilayer perceptron network on MNIST dataset.
  • examples/cifar10/ Convolutional neural network on CIFAR-10.
  • examples/cifar10/ Multilayer perceptron on CIFAR-10 dataset.
  • examples/ptb/ Character-level RNN model on Penn Treebank data.

Some TensorFlow* examples that define graphs which can be passed to ngraph for execution are also included:

  • frontends/tensorflow/examples/
  • frontends/tensorflow/examples/
  • frontends/tensorflow/examples/

Developer Guidelines

Before checking in code, run the unit tests and check for style errors:

$ make test_cpu test_gpu test_integration
$ make style

Documentation can be generated with pandoc:

$ sudo apt-get install pandoc
$ make doc

View the documentation at doc/build/html/index.html.