Install

Build Environments

The v0.2.0 version of the Intel nGraph library supports Linux*-based systems with the following packages and prerequisites:

Operating System                        Compiler     Build System            Status        Additional Packages
CentOS 7.4 64-bit                       GCC 4.8      CMake 3.4.3             supported     wget zlib-devel ncurses-libs ncurses-devel patch diffutils gcc-c++ make git perl-Data-Dumper
Ubuntu 16.04 (LTS) 64-bit               Clang 3.9    CMake 3.5.1 + GNU Make  supported     build-essential cmake clang-3.9 git curl zlib1g zlib1g-dev libtinfo-dev
Clear Linux* OS for Intel Architecture  Clang 5.0.1  CMake 3.10.2            experimental  bundles machine-learning-basic dev-utils python3-basic python-basic-dev

Other configurations may work, but should be considered experimental with limited support. On Ubuntu 16.04 with gcc-5.4.0 or clang-3.9, for example, we recommend adding -DNGRAPH_USE_PREBUILT_LLVM=TRUE to the cmake command in step 4 below. This fetches a pre-built tarball of LLVM+Clang from llvm.org, and it will substantially reduce build time.

If using gcc version 4.8, it may be necessary to add symlinks from gcc to gcc-4.8, and from g++ to g++-4.8, in your PATH, even if you explicitly specify the CMAKE_C_COMPILER and CMAKE_CXX_COMPILER flags when building. (Do NOT supply the -DNGRAPH_USE_PREBUILT_LLVM flag in this case, because the prebuilt tarball supplied on llvm.org is not compatible with a gcc 4.8-based build.)
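For example, the symlinks described above might be set up as follows (a sketch; it assumes gcc-4.8 and g++-4.8 live in /usr/bin, so adjust the paths for your system):

```shell
# Assumption: the versioned compilers are installed at /usr/bin/gcc-4.8
# and /usr/bin/g++-4.8; adjust these paths for your distribution.
mkdir -p "$HOME/bin"
ln -sf /usr/bin/gcc-4.8 "$HOME/bin/gcc"
ln -sf /usr/bin/g++-4.8 "$HOME/bin/g++"
# Make sure the symlink directory is searched before the system compilers.
export PATH="$HOME/bin:$PATH"
```

You can confirm the links resolve as intended with `which gcc` and `gcc --version` before configuring the build.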

Installation Steps

The CMake procedure installs ngraph_dist to the installing user’s $HOME directory as the default location. See the CMakeLists.txt file for details about how to change or customize the install location.
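If the project's CMakeLists.txt honors the standard CMake install-prefix variable (an assumption; check the file to confirm), the install location can typically be overridden at configure time, for example:

```shell
# Hypothetical example: install ngraph_dist under /opt/libraries
# instead of the default $HOME location.
$ cmake ../ -DCMAKE_INSTALL_PREFIX=/opt/libraries/ngraph_dist
$ make && make install
```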

Ubuntu 16.04

The process documented here will work on Ubuntu* 16.04 (LTS)

  1. (Optional) Create something like /opt/libraries and (with sudo) give ownership of that directory to your user. Creating such a placeholder can be useful if you’d like to have a local reference for APIs and documentation, or if you are a developer who wants to experiment with how to Execute a computation using resources available through the code base.

    $ sudo mkdir -p /opt/libraries
    $ sudo chown -R username:username /opt/libraries
    $ cd /opt/libraries
    
  2. Clone the NervanaSystems ngraph repo:

    $ git clone https://github.com/NervanaSystems/ngraph.git
    $ cd ngraph
    
  3. Create a build directory outside of the ngraph/src directory tree; somewhere like ngraph/build, for example:

    $ mkdir build && cd build
    
  4. Generate the GNU Makefiles in the customary manner (from within the build directory). If running gcc-5.4.0 or clang-3.9, remember that you can add the prebuilt-LLVM option to the cmake command to speed up the build. If your deployment system has Intel® Advanced Vector Extensions (Intel® AVX), another option is to target those accelerations directly by adding -DNGRAPH_TARGET_ARCH=skylake-avx512 during the cmake step:

    $ cmake ../ [-DNGRAPH_USE_PREBUILT_LLVM=TRUE] [-DNGRAPH_TARGET_ARCH=skylake-avx512]
    
  5. Run make, followed by make install, to install libngraph.so and the header files to $HOME/ngraph_dist:

    $ make   # note: make -j <N> may work, but sometimes results in out-of-memory errors if too many compilation processes are used
    $ make install
    
  6. (Optional, requires doxygen, Sphinx, and breathe). Run make html inside the doc/sphinx directory of the cloned source to build a copy of the website docs locally. The low-level API docs with inheritance and collaboration diagrams can be found inside the /docs/doxygen/ directory. See the Documentation Contributor README for more details about how to build documentation for nGraph.
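The documentation build described in the last step might look like this (a sketch; it assumes doxygen, Sphinx, and breathe are already installed and that you are in the root of the cloned source):

```shell
$ cd doc/sphinx
$ make html
# The generated HTML pages land in the Sphinx output directory of this tree.
```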

CentOS 7.4

The process documented here will work on CentOS 7.4.

  1. (Optional) Create something like /opt/libraries and (with sudo) give ownership of that directory to your user. Creating such a placeholder can be useful if you’d like to have a local reference for APIs and documentation, or if you are a developer who wants to experiment with how to Execute a computation using resources available through the code base.

    $ sudo mkdir -p /opt/libraries
    $ sudo chown -R username:username /opt/libraries
    
  2. Update the system with yum and issue the following commands:

    $ sudo yum update
    $ sudo yum install zlib-devel ncurses-libs ncurses-devel patch diffutils wget gcc-c++ make git perl-Data-Dumper
    
  3. Install CMake 3.4.3 from source:

    $ wget https://cmake.org/files/v3.4/cmake-3.4.3.tar.gz
    $ tar -xzvf cmake-3.4.3.tar.gz
    $ cd cmake-3.4.3
    $ ./bootstrap
    $ make && sudo make install
    
  4. Clone the NervanaSystems ngraph repo via HTTPS and use CMake 3.4.3 to install the nGraph libraries to $HOME/ngraph_dist:

    $ cd /opt/libraries
    $ git clone https://github.com/NervanaSystems/ngraph.git
    $ cd ngraph && mkdir build && cd build
    $ cmake ../
    $ make && sudo make install
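After make install completes, you can sanity-check the result. Assuming the default install location was used (the exact layout of ngraph_dist may differ between releases), something like the following should show the installed library and headers:

```shell
$ ls $HOME/ngraph_dist/lib/      # expect libngraph.so here
$ ls $HOME/ngraph_dist/include/  # nGraph header files
```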

macOS* development

Note

Although we do not currently offer full support for the macOS platform, some configurations and features may work.

The repository includes two scripts (maint/check-code-format.sh and maint/apply-code-format.sh) that are used respectively to check adherence to libngraph code formatting conventions, and to automatically reformat code according to those conventions. These scripts require the command clang-format-3.9 to be in your PATH. Run the following commands (you will need to adjust them if you are not using bash):

$ brew install llvm@3.9
$ mkdir -p $HOME/bin
$ ln -s /usr/local/opt/llvm@3.9/bin/clang-format $HOME/bin/clang-format-3.9
$ echo 'export PATH=$HOME/bin:$PATH' >> $HOME/.bash_profile
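With clang-format-3.9 on your PATH, the two scripts can then be run from the repository root:

```shell
$ cd ngraph
$ ./maint/check-code-format.sh   # report deviations from the formatting conventions
$ ./maint/apply-code-format.sh   # rewrite files in place to match the conventions
```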

Test

The Intel® nGraph library code base uses the googletest* framework from Google for unit tests. The cmake command from the Install guide automatically downloads a copy of the needed gtest files when it configures the build directory.

To perform unit tests on the install:

  1. Create and configure the build directory as described in our Install guide.

  2. Enter the build directory and run make check:

    $ cd build/
    $ make check
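make check runs the full suite. You can also invoke the test binary directly and use googletest's standard flags to list or filter tests; the binary name and location below are assumptions based on a typical build layout, so adjust as needed:

```shell
$ cd build
$ ./test/unit-test --gtest_list_tests         # enumerate the available tests
$ ./test/unit-test --gtest_filter='Backend*'  # run only a matching subset
```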
    

Compile a framework with libngraph

After building and installing nGraph on your system, there are two likely paths for what you’ll want to do next: either compile a framework to run a DL training model, or load an import of an “already-trained” model for inference on an Intel nGraph-enabled backend.

For the former case, the (early, version 0.2.0) Integrate Supported Frameworks documentation can help you get started with training a model on a supported framework.

For the latter case, if you’ve followed a tutorial from ONNX, and you have an exported, serialized model, you can skip the section on frameworks and go directly to our Import a model documentation.

Please keep in mind that both of these are under continuous development, and will be updated frequently in the coming months. Stay tuned!