To understand how a data science framework (TensorFlow, PyTorch, PaddlePaddle, and others) can unlock the acceleration available in the nGraph Compiler, it helps to be familiar with a few basic concepts.
We use the term bridge to describe code that connects a framework's programmatic or user interface to one or more nGraph device backends. We maintain a bridge for the TensorFlow framework and an integration bridge for PaddlePaddle. Intel previously contributed work to an MXNet bridge; however, that bridge is no longer actively supported.
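To make the bridge idea concrete, here is a minimal, purely illustrative sketch: a bridge keeps the framework's interface intact while translating framework-level operations into backend operations where it can, and falling back to the framework's own executor where it cannot. The op names and the `translate_graph` helper below are hypothetical and are not part of any real nGraph or TensorFlow API.

```python
# Hypothetical sketch of what a framework bridge does conceptually.
# None of these names come from the actual nGraph bridge code.

# A bridge maps ops the user wrote in the framework's vocabulary
# onto equivalent ops in the backend's vocabulary.
FRAMEWORK_TO_BACKEND = {
    "tf.matmul": "Dot",
    "tf.add": "Add",
    "tf.nn.relu": "Relu",
}

def translate_graph(framework_ops):
    """Split a list of framework ops into ops the backend can run
    and ops that must fall back to the framework's own executor."""
    translated, fallback = [], []
    for op in framework_ops:
        if op in FRAMEWORK_TO_BACKEND:
            translated.append(FRAMEWORK_TO_BACKEND[op])
        else:
            fallback.append(op)
    return translated, fallback

translated, fallback = translate_graph(
    ["tf.matmul", "tf.add", "tf.nn.relu", "tf.custom_op"])
print(translated)  # ['Dot', 'Add', 'Relu']
print(fallback)    # ['tf.custom_op']
```

The key design point is that the user's code never changes: the bridge intercepts the graph the framework already built, so acceleration comes for free from the user's perspective.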
Because it is framework agnostic (providing opportunities to optimize at the graph level), nGraph can do the heavy lifting required by many popular workloads without additional effort from the framework user. Optimizations that were previously available only after careful integration of a kernel or hardware-specific library are exposed through the Core graph construction API.
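The following toy example illustrates what a graph-level optimization looks like in spirit: rewriting a MatMul followed by an Add into a single fused operation, a transformation that works on the graph itself and therefore applies no matter which framework produced it. The node representation and the fusion pass are simplified illustrations, not nGraph's actual pass infrastructure.

```python
# Hypothetical sketch of a graph-level rewrite: fuse MatMul + Add
# into one node. Real compiler passes operate on a proper graph IR;
# a flat op list is used here only to keep the idea visible.

def fuse_matmul_add(nodes):
    """Rewrite each adjacent ("MatMul", "Add") pair into "MatMulAdd"."""
    fused = []
    i = 0
    while i < len(nodes):
        if i + 1 < len(nodes) and nodes[i] == "MatMul" and nodes[i + 1] == "Add":
            fused.append("MatMulAdd")  # one kernel launch instead of two
            i += 2
        else:
            fused.append(nodes[i])
            i += 1
    return fused

print(fuse_matmul_add(["MatMul", "Add", "Relu"]))  # ['MatMulAdd', 'Relu']
```

Fusions like this reduce memory traffic between ops, which is exactly the kind of optimization that previously required hand-integrating a hardware-specific library into each framework.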
The illustration above shows how this works.
While a Deep Learning framework is ultimately meant for end use by data scientists, or for deployment in cloud container environments, nGraph's Core ops are designed for framework builders themselves. We invite anyone working on new frameworks or novel neural network designs to explore our highly modularized stack of components.
Please read the Integrating other frameworks section for other framework-agnostic configurations available to users of the nGraph Compiler stack.