Integrate Supported Frameworks
A framework is “supported” when there is a framework bridge that can be cloned from one of our GitHub repos and built to connect to nGraph device backends, while maintaining the framework’s programmatic or user interface. Bridges currently exist for the TensorFlow* and MXNet* frameworks.
Once connected via the bridge, the framework can run and train deep learning models with various workloads on a variety of backends, using nGraph Compiler as an optimizing compiler available through the framework.
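To make the bridge idea concrete, here is a minimal, purely illustrative Python sketch (it does not use the actual nGraph bridge API): the user-facing model interface stays the same, while a bridge re-points execution to a different backend. All class and function names here (`ReferenceBackend`, `NGraphLikeBackend`, `Model`, `attach_bridge`) are hypothetical stand-ins for illustration only.

```python
# Illustrative sketch of the bridge pattern, NOT the real nGraph API:
# the framework's user interface is unchanged; only the executor behind
# it is swapped by the bridge.

class ReferenceBackend:
    """Stand-in for the framework's default executor."""
    name = "reference"

    def matmul(self, a, b):
        # Naive matrix multiply over lists of lists.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]


class NGraphLikeBackend(ReferenceBackend):
    """Stand-in for an optimizing backend reached through a bridge."""
    name = "ngraph-like"


class Model:
    """Tiny 'framework' model; users call .run() the same way
    regardless of which backend executes it."""

    def __init__(self, weights, backend=None):
        self.weights = weights
        self.backend = backend or ReferenceBackend()

    def run(self, inputs):
        return self.backend.matmul(inputs, self.weights)


def attach_bridge(model, backend):
    """The 'bridge': re-point execution without changing the model API."""
    model.backend = backend
    return model


model = Model([[1, 0], [0, 1]])          # identity weights
out_default = model.run([[2, 3]])        # executed by the default backend
attach_bridge(model, NGraphLikeBackend())
out_bridged = model.run([[2, 3]])        # same call, different backend
assert out_default == out_bridged == [[2, 3]]
```

The key property this models is that results are identical before and after the bridge is attached; the real bridges additionally hand the graph to nGraph Compiler for optimization before execution.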
- See the README on nGraph-MXNet Integration.
- Testing latency for Inference: See the Testing latency doc for a fully documented example of how to compile and test latency with an MXNet-supported model.
- Training: For experimental or alternative approaches to distributed training methodologies, including data parallel training, see the MXNet-relevant sections of the Distributed Training in nGraph docs and How-to topics such as Train using multiple nGraph CPU backends with data parallel.