nGraph™

Welcome to the nGraph™ documentation site. nGraph is an open-source C++ library, compiler, and runtime suite for Deep Learning ecosystems. Our goal is to empower algorithm designers, data scientists, framework architects, software engineers, and others with the means to make their work Portable, Adaptable, and Deployable across the most modern Machine Learning hardware available today: optimized Deep Learning computation devices.

[Figure: the nGraph ecosystem]

Portable

One of nGraph’s key features is framework neutrality. While we currently support three popular frameworks with pre-optimized deployment runtimes for training Deep Neural Network (DNN) models, you are not limited to these when choosing among frontends. Architects of any framework (even those not listed here) can use our documentation to learn how to compile and run a training model, and to design or tweak a framework to bridge directly to the nGraph compiler. With a portable model at the core of your DL ecosystem, it’s no longer necessary to bring large datasets to the model for training; you can take your model, in whole or in part, to where the data lives and save potentially significant machine resources.
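
For a taste of what “bridging directly to the nGraph compiler” looks like, here is a minimal sketch using nGraph’s Python API; it assumes the ngraph Python package and a CPU backend are installed, and the exact API surface may vary between releases:

    import numpy as np
    import ngraph as ng

    # Declare two 2x2 floating-point inputs to the computation graph.
    a = ng.parameter(shape=[2, 2], dtype=np.float32, name='A')
    b = ng.parameter(shape=[2, 2], dtype=np.float32, name='B')

    # Graph construction is framework-neutral: any frontend that can
    # emit nGraph ops can build the same model.
    model = (a + b) * a

    # Compile the graph for the CPU backend and execute it.
    runtime = ng.runtime(backend_name='CPU')
    computation = runtime.computation(model, a, b)
    result = computation(np.ones((2, 2), dtype=np.float32),
                         np.ones((2, 2), dtype=np.float32))
    print(result)  # -> [[2. 2.], [2. 2.]]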

Adaptable

We’ve recently added support for the ONNX format. Developers who already have a “trained” DNN model can use nGraph to bypass significant framework-based complexity and import it to test or run on targeted and efficient backends with our user-friendly Python-based API. See the ngraph-onnx companion tool to get started; a minimal import sketch follows the table below.

Framework    Bridge Code Available?   ONNX Support?
TensorFlow   Yes                      Yes
MXNet        Yes                      Yes
neon         none needed              Yes
PyTorch      Not yet                  Yes
CNTK         Not yet                  Yes
Other        Not yet                  Doable
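
If you already have a serialized ONNX model, the import path can be sketched roughly as follows. Treat this as a hedged example: the ngraph-onnx importer API (in particular the return type of import_onnx_model) has shifted between releases, and the file path and input shape here are placeholders:

    import numpy as np
    import onnx
    import ngraph as ng
    from ngraph_onnx.onnx_importer.importer import import_onnx_model

    # Load a model that was trained and exported elsewhere
    # ('model.onnx' is a placeholder path, not a bundled file).
    onnx_protobuf = onnx.load('model.onnx')

    # Convert the ONNX graph into an nGraph function. Note: the return
    # type of import_onnx_model has varied across releases (a function
    # in later versions, a list of model dicts in earlier ones).
    ng_function = import_onnx_model(onnx_protobuf)

    # Compile for the CPU backend and run inference on one batch.
    runtime = ng.runtime(backend_name='CPU')
    infer = runtime.computation(ng_function)

    # Placeholder input: adjust the shape to match your model's inputs.
    input_batch = np.zeros((1, 3, 224, 224), dtype=np.float32)
    output = infer(input_batch)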

Deployable

It’s no secret that the DL ecosystem is evolving rapidly. Benchmark comparisons can be skewed by subtle tweaks to batch sizes or latency numbers here and there. Where traditional GPU-based training excels, inference can lag, and vice versa. Sometimes what we care about is not “speed at training a large dataset” but rather the latency of compiling a complex multi-layer algorithm locally and then outputting it back to an edge network, where it can be analyzed by an already-trained model.

Indeed, when choosing among topologies, it is important not to lose sight of the ultimate deployability and machine-runtime demands of your component in the larger ecosystem. It doesn’t make sense to use a heavy-duty backhoe to plant a flower bulb. Furthermore, if you are trying to develop an entirely new genre of modeling for a DNN component, it may be especially beneficial to consider ahead of time how portable and mobile you want that model to be within the rapidly-changing ecosystem. With nGraph, any modern CPU can be used to design, write, test, and deploy a training or inference model. You can then adapt and update that same core model to run on a variety of backends (a brief backend-selection sketch follows the table):

Backend                                            Current nGraph support   Future nGraph support
Intel® Architecture Processors (CPUs)              Yes                      Yes
Intel® Nervana™ Neural Network Processor (NNPs)    Yes                      Yes
NVIDIA* CUDA (GPUs)                                Yes                      Some
Field Programmable Gate Arrays (FPGAs)             Coming soon              Yes
Movidius                                           Not yet                  Yes
Other                                              Not yet                  Ask
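
Because a backend is selected by name at runtime, retargeting the same model is, in principle, a one-line change. Below is a hedged sketch using the Python API; which backend names are actually available ('CPU', 'GPU', the 'INTERPRETER' reference backend, and so on) depends on how your nGraph build was configured:

    import numpy as np
    import ngraph as ng

    def run_on(backend_name):
        """Build and execute the same tiny model on the named backend."""
        x = ng.parameter(shape=[4], dtype=np.float32, name='x')
        model = x * x
        runtime = ng.runtime(backend_name=backend_name)
        return runtime.computation(model, x)(np.arange(4, dtype=np.float32))

    # 'CPU' is the default Intel Architecture backend; 'INTERPRETER' is
    # the reference backend. Others (e.g. 'GPU') must be enabled at
    # build time.
    print(run_on('CPU'))            # -> [0. 1. 4. 9.]
    # print(run_on('INTERPRETER'))  # same result via the reference backend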

The value we’re offering to the developer community is empowerment: we are confident that Intel® Architecture already provides the best computational resources available for the breadth of ML/DL tasks. We welcome ideas and contributions from the community.

Further project details can be found on our About page, or see our Install guide for how to get started.

Note

The library code is under active development as we’re continually adding support for more kinds of DL models and ops, framework compiler optimizations, and backends.

