Interact with Backends

Backend

Backends are responsible for function execution and value allocation. They can carry out a computation handed off from a framework on a CPU or GPU, or they can run in Interpreter mode, which is primarily intended for testing: analyzing a program, or letting a framework developer work on customizations. Experimental APIs to support current and future nGraph backends are also available; see, for example, the section on PlaidML.

(Figure: backend class diagram)
class Backend

Interface to a generic backend.

Backends are responsible for function execution and value allocation.

Subclassed by ngraph::runtime::cpu::CPU_Backend, ngraph::runtime::gpu::GPU_Backend, ngraph::runtime::hybrid::HybridBackend, ngraph::runtime::intelgpu::IntelGPUBackend, ngraph::runtime::interpreter::INTBackend, ngraph::runtime::nop::NOPBackend, ngraph::runtime::plaidml::PlaidML_Backend

Public Functions

virtual std::shared_ptr<ngraph::runtime::Tensor> create_tensor(const ngraph::element::Type &element_type, const Shape &shape) = 0

Create a tensor specific to this backend.

Return
shared_ptr to a new backend-specific tensor
Parameters
  • element_type: The type of the tensor element
  • shape: The shape of the tensor

virtual std::shared_ptr<ngraph::runtime::Tensor> create_tensor(const ngraph::element::Type &element_type, const Shape &shape, void *memory_pointer) = 0

Create a tensor specific to this backend.

Return
shared_ptr to a new backend-specific tensor
Parameters
  • element_type: The type of the tensor element
  • shape: The shape of the tensor
  • memory_pointer: A pointer to a buffer used for this tensor. The size of the buffer must be sufficient to contain the tensor. The lifetime of the buffer is the responsibility of the caller.

template <typename T>
std::shared_ptr<ngraph::runtime::Tensor> create_tensor(const Shape &shape)

Create a tensor of C++ type T specific to this backend.

Return
shared_ptr to a new backend-specific tensor
Parameters
  • shape: The shape of the tensor
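
The three create_tensor overloads above differ in who owns the storage. The following fragment is an illustrative sketch only, assuming an nGraph build with the CPU backend registered; exact headers and namespaces may vary between releases:

```cpp
#include <ngraph/runtime/backend.hpp>

auto backend = ngraph::runtime::Backend::create("CPU");

// Overload 1: the backend allocates storage for a 2x3 float32 tensor.
auto t0 = backend->create_tensor(ngraph::element::f32, ngraph::Shape{2, 3});

// Overload 2: the caller supplies (and owns) the buffer; it must be large
// enough for all 6 elements and must outlive the tensor.
float buffer[6] = {0};
auto t1 = backend->create_tensor(ngraph::element::f32, ngraph::Shape{2, 3}, buffer);

// Template overload: the element type is deduced from the C++ type T.
auto t2 = backend->create_tensor<float>(ngraph::Shape{2, 3});
```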

virtual Handle compile(std::shared_ptr<Function> func) = 0

Compiles a Function.

Return
compiled function or nullptr on failure
Parameters
  • func: The function to compile

virtual bool call(std::shared_ptr<Function> func, const std::vector<std::shared_ptr<runtime::Tensor>> &outputs, const std::vector<std::shared_ptr<runtime::Tensor>> &inputs) = 0

Executes a single iteration of a Function. If func is not compiled, the call will compile it.

Return
true if iteration is successful, false otherwise
Parameters
  • func: The function to execute

bool call_with_validate(std::shared_ptr<Function> func, const std::vector<std::shared_ptr<runtime::Tensor>> &outputs, const std::vector<std::shared_ptr<runtime::Tensor>> &inputs)

Executes a single iteration of a Function. If func is not compiled, the call will compile it. Optionally validates the inputs and outputs against the function graph.

Return
true if iteration is successful, false otherwise
Parameters
  • func: The function to execute
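
Putting compile and call together, a typical execution flow looks roughly like the sketch below. This is a non-authoritative fragment: it assumes an nGraph build where ngraph::op::Add, ngraph::ParameterVector, and the CPU backend are available, and the exact API spelling may differ between releases:

```cpp
// Build a trivial Function computing a + b on two 2-element vectors.
auto a = std::make_shared<ngraph::op::Parameter>(ngraph::element::f32, ngraph::Shape{2});
auto b = std::make_shared<ngraph::op::Parameter>(ngraph::element::f32, ngraph::Shape{2});
auto add = std::make_shared<ngraph::op::Add>(a, b);
auto f = std::make_shared<ngraph::Function>(add, ngraph::ParameterVector{a, b});

auto backend = ngraph::runtime::Backend::create("CPU");
auto ta = backend->create_tensor(ngraph::element::f32, ngraph::Shape{2});
auto tb = backend->create_tensor(ngraph::element::f32, ngraph::Shape{2});
auto tr = backend->create_tensor(ngraph::element::f32, ngraph::Shape{2});

float va[2] = {1, 2};
float vb[2] = {3, 4};
ta->write(va, 0, sizeof(va));
tb->write(vb, 0, sizeof(vb));

backend->compile(f);  // optional: call() compiles on demand
bool ok = backend->call_with_validate(f, {tr}, {ta, tb});
```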

void remove_compiled_function(std::shared_ptr<Function> func)

Compiled functions may be cached. This function removes a compiled function from the cache.

Parameters
  • func: The function to remove from the cache

virtual void enable_performance_data(std::shared_ptr<Function> func, bool enable)

Enable the collection of per-op performance information on a specified Function. Data collection is via the get_performance_data method.

Parameters
  • func: The function on which to collect performance data.
  • enable: Set to true to enable or false to disable data collection

vector<ngraph::runtime::PerformanceCounter> get_performance_data(std::shared_ptr<Function> func) const

Collect performance information gathered on a Function.

Return
Vector of PerformanceCounter information.
Parameters
  • func: The function for which to retrieve collected data.
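
The two methods above work as a pair: enable collection before running, then read the counters afterwards. A hedged sketch, assuming PerformanceCounter exposes per-op name and timing accessors (the exact accessor names vary between releases):

```cpp
backend->enable_performance_data(f, true);
backend->call(f, {result}, {input});

// Iterate the per-op counters gathered during the call. The accessors
// used here (name(), microseconds()) are assumptions for illustration.
for (const auto& pc : backend->get_performance_data(f))
{
    std::cout << pc.name() << ": " << pc.microseconds() << " us\n";
}
```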

bool is_supported(const Node &node) const

Test if a backend is capable of supporting an op.

Return
true if the op is supported, false otherwise.
Parameters
  • node: is the op to test.

Public Static Functions

unique_ptr<runtime::Backend> create(const std::string &type)

Create a new Backend object.

Return
unique_ptr to a new Backend or nullptr if the named backend does not exist.
Parameters
  • type: The name of a registered backend, such as “CPU” or “GPU”. To select a subdevice, use “GPU:N”, where N is the subdevice number.
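
The “GPU:N” convention can be illustrated with a small standalone helper. Note that parse_device below is a hypothetical function written for this example, not part of the nGraph API; real backends do their own parsing of the type string:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Hypothetical helper: split a device string such as "GPU:1" into the
// backend name and the subdevice number (0 when no ":N" suffix is given).
std::pair<std::string, int> parse_device(const std::string& type)
{
    auto pos = type.find(':');
    if (pos == std::string::npos)
    {
        return {type, 0};
    }
    return {type.substr(0, pos), std::stoi(type.substr(pos + 1))};
}
```

For example, parse_device("GPU:2") yields the pair {"GPU", 2}, while parse_device("CPU") yields {"CPU", 0}.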

vector<string> get_registered_devices()

Query the list of registered devices.

Return
A vector of all registered devices.

Tensor

class Tensor

Subclassed by ngraph::runtime::cpu::CPUTensorView, ngraph::runtime::gpu::GPUTensor, ngraph::runtime::HostTensor, ngraph::runtime::intelgpu::IntelGPUTensorView, ngraph::runtime::plaidml::PlaidML_Tensor

Public Functions

const Shape &get_shape() const

Get tensor shape.

Return
const reference to a Shape

Strides get_strides() const

Get tensor strides.

Return
Strides

const element::Type &get_element_type() const

Get tensor element type.

Return
element::Type

size_t get_element_count() const

Get number of elements in the tensor.

Return
number of elements in the tensor

size_t get_size_in_bytes() const

Get the size in bytes of the tensor.

Return
number of bytes in tensor’s allocation

const std::string &get_name() const

Get tensor’s unique name.

Return
tensor’s name

shared_ptr<descriptor::layout::TensorLayout> get_tensor_layout() const

Get tensor layout.

Return
tensor layout

void set_tensor_layout(const std::shared_ptr<descriptor::layout::TensorLayout> &layout)

Set tensor layout.

Parameters
  • layout: Layout to set

bool get_stale() const

Get the stale value of the tensor. A tensor is stale if its data has been changed.

Return
true if there is new data in this tensor

void set_stale(bool val)

Set the stale value of the tensor. A tensor is stale if its data has been changed.

virtual void write(const void *p, size_t offset, size_t n) = 0

Write bytes directly into the tensor.

Parameters
  • p: Pointer to source of data
  • offset: Offset into tensor storage to begin writing. Must be element-aligned.
  • n: Number of bytes to write, must be integral number of elements.

virtual void read(void *p, size_t offset, size_t n) const = 0

Read bytes directly from the tensor.

Parameters
  • p: Pointer to destination for data
  • offset: Offset into tensor storage to begin reading. Must be element-aligned.
  • n: Number of bytes to read, must be integral number of elements.
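
The alignment contract shared by write() and read() is that both the byte offset and the byte count must be whole multiples of the element size. A standalone helper (hypothetical, written only to illustrate the rule) makes the check explicit:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical check for the write()/read() contract: the byte offset and
// byte count must both be integral multiples of the element size
// (4 bytes for f32, for example).
bool is_valid_transfer(std::size_t offset, std::size_t n, std::size_t element_size)
{
    return offset % element_size == 0 && n % element_size == 0;
}
```

For instance, writing 6 f32 elements starting at element index 2 means offset = 2 * 4 = 8 and n = 6 * 4 = 24, which is valid; a 10-byte f32 transfer is not an integral number of elements.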

void copy_from(const ngraph::runtime::Tensor &source)

Copy bytes directly from the source tensor into this tensor.

Parameters
  • source: The source tensor

PlaidML

The nGraph ecosystem recently added initial (experimental) support for PlaidML, an advanced machine-learning library that can further accelerate models trained on GPUs. When you select PlaidML as a backend, it acts as an advanced tensor compiler that can further speed up training with large data sets.
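
Selecting PlaidML uses the same factory as any other registered backend. A sketch, assuming an nGraph build configured with PlaidML support:

```cpp
// Create a PlaidML backend by name; create() returns nullptr if the
// named backend was not registered in this build.
auto backend = ngraph::runtime::Backend::create("PlaidML");
if (!backend)
{
    // PlaidML support is not available; fall back to another device,
    // e.g. one reported by Backend::get_registered_devices().
}
```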

class PlaidML_Backend : public ngraph::runtime::Backend

Public Functions

std::shared_ptr<ngraph::runtime::Tensor> create_tensor(const ngraph::element::Type &element_type, const Shape &shape)

Create a tensor specific to this backend.

Return
shared_ptr to a new backend-specific tensor
Parameters
  • element_type: The type of the tensor element
  • shape: The shape of the tensor

std::shared_ptr<ngraph::runtime::Tensor> create_tensor(const ngraph::element::Type &element_type, const Shape &shape, void *memory_pointer)

Create a tensor specific to this backend.

Return
shared_ptr to a new backend-specific tensor
Parameters
  • element_type: The type of the tensor element
  • shape: The shape of the tensor
  • memory_pointer: A pointer to a buffer used for this tensor. The size of the buffer must be sufficient to contain the tensor. The lifetime of the buffer is the responsibility of the caller.

bool compile(std::shared_ptr<Function> func)

Compiles a Function.

Return
true if compilation succeeded, false otherwise
Parameters
  • func: The function to compile

bool call(std::shared_ptr<Function> func, const std::vector<std::shared_ptr<runtime::Tensor>> &outputs, const std::vector<std::shared_ptr<runtime::Tensor>> &inputs)

Executes a single iteration of a Function. If func is not compiled, the call will compile it.

Return
true if iteration is successful, false otherwise
Parameters
  • func: The function to execute

void remove_compiled_function(std::shared_ptr<Function> func)

Compiled functions may be cached. This function removes a compiled function from the cache.

Parameters
  • func: The function to remove from the cache