Executable

An Executable is a compiled Function produced by a backend's compile method; it provides direct methods for actions such as validate, call, get_performance_data, and so on.
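
As a quick orientation, the sketch below shows the typical workflow: compile a Function on a backend, create input and output tensors, and execute it. The "CPU" backend name, the header paths, and the f32 {2, 3} shapes are illustrative assumptions, not requirements of the API.

    #include <memory>

    #include <ngraph/ngraph.hpp>
    #include <ngraph/runtime/backend.hpp>

    using namespace ngraph;

    // Compile and run a Function with two inputs and one output
    // (backend name and shapes are illustrative assumptions).
    void run(std::shared_ptr<Function> f)
    {
        auto backend = runtime::Backend::create("CPU");
        auto exec = backend->compile(f);

        Shape shape{2, 3};
        auto a = backend->create_tensor(element::f32, shape);
        auto b = backend->create_tensor(element::f32, shape);
        auto result = backend->create_tensor(element::f32, shape);

        // ... fill a and b through the runtime::Tensor write interface ...

        exec->call_with_validate({result}, {a, b});
    }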

class Executable

Subclassed by ngraph::runtime::cpu::CPU_Executable, ngraph::runtime::dynamic::DynamicExecutable, ngraph::runtime::gcpu::GCPUExecutable, ngraph::runtime::intelgpu::IntelGPUExecutable, ngraph::runtime::interpreter::INTExecutable, ngraph::runtime::nop::NOPExecutable, ngraph::runtime::plaidml::PlaidML_Executable

Public Functions

virtual bool call(const std::vector<std::shared_ptr<runtime::Tensor>> &outputs, const std::vector<std::shared_ptr<runtime::Tensor>> &inputs) = 0

Executes a single iteration of a Function.

Return
true if iteration is successful, false otherwise
Parameters
  • outputs: vector of runtime::Tensor used as outputs
  • inputs: vector of runtime::Tensor used as inputs

bool call_with_validate(const std::vector<std::shared_ptr<runtime::Tensor>> &outputs, const std::vector<std::shared_ptr<runtime::Tensor>> &inputs)

Executes a single iteration of a Function.

Return
true if iteration is successful, false otherwise
Parameters
  • outputs: vector of runtime::Tensor used as outputs
  • inputs: vector of runtime::Tensor used as inputs

std::vector<runtime::PerformanceCounter> get_performance_data() const

Collect performance information gathered on a Function.

Return
Vector of PerformanceCounter information.
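
For example, a sketch of reporting the counters after execution, continuing the variables from the sketch above; the PerformanceCounter accessors used here (name(), total_microseconds(), call_count()) differ between nGraph versions and are assumptions, and <iostream> is needed for the output.

    // Print per-op timing data collected by the backend, if any
    // (accessor names are version-dependent assumptions).
    for (const runtime::PerformanceCounter& pc : exec->get_performance_data())
    {
        std::cout << pc.name() << ": "
                  << pc.total_microseconds() << " us over "
                  << pc.call_count() << " call(s)" << std::endl;
    }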

void validate(const std::vector<std::shared_ptr<runtime::Tensor>> &outputs, const std::vector<std::shared_ptr<runtime::Tensor>> &inputs)

Validates a Function.

Parameters
  • outputs: vector of runtime::Tensor used as outputs
  • inputs: vector of runtime::Tensor used as inputs
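
For instance, a sketch of guarding execution with an explicit validate, again continuing the earlier sketch; catching std::runtime_error relies on the assumption that nGraph's validation errors derive from it.

    // validate throws if the tensors do not match the Function's
    // Parameters and Results (exception type caught here is an assumption).
    try
    {
        exec->validate({result}, {a, b});
        exec->call({result}, {a, b});
    }
    catch (const std::runtime_error& e)
    {
        std::cerr << "tensor/Function mismatch: " << e.what() << std::endl;
    }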

const ngraph::ParameterVector &get_parameters() const

Query the input Parameters.

Return
an ngraph::op::ParameterVector of all input parameters

const ngraph::ResultVector &get_results() const

Query the output Results.

Return
an ngraph::ResultVector of all output Results
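
These queries are useful for allocating tensors that match the compiled Function. The sketch below continues the earlier example; the get_element_type() and get_shape() accessors on the Parameter and Result nodes are assumed to be available in the nGraph version at hand.

    // Allocate one backend tensor per input Parameter and per output Result.
    std::vector<std::shared_ptr<runtime::Tensor>> inputs;
    for (const auto& param : exec->get_parameters())
    {
        inputs.push_back(
            backend->create_tensor(param->get_element_type(), param->get_shape()));
    }

    std::vector<std::shared_ptr<runtime::Tensor>> outputs;
    for (const auto& res : exec->get_results())
    {
        outputs.push_back(
            backend->create_tensor(res->get_element_type(), res->get_shape()));
    }

    // ... write data into the input tensors, then execute ...
    exec->call_with_validate(outputs, inputs);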

void save(std::ostream &output_stream)

Save this compiled Executable to an output stream. The saved stream can be read back with Backend::load.
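
A sketch of a save/load round trip, continuing the earlier example; the file name is arbitrary, <fstream> is required, and not every backend implements save and load.

    // Serialize the compiled Executable to a file ...
    {
        std::ofstream out("model.ngexec", std::ios::binary);
        exec->save(out);
    }

    // ... and restore it later through the same backend type.
    {
        std::ifstream in("model.ngexec", std::ios::binary);
        auto restored = backend->load(in);
        restored->call_with_validate({result}, {a, b});
    }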

std::shared_ptr<runtime::Tensor> create_input_tensor(size_t input_index)

Create an input Tensor.

Return
A Tensor
Parameters
  • input_index: The index position in the input Parameter vector; this matches the order in which inputs are passed to the call() method.

std::shared_ptr<runtime::Tensor> create_output_tensor(size_t output_index)

Create an output Tensor.

Return
A Tensor
Parameters
  • output_index: The index position in the output Result vector; this matches the order in which outputs are passed to the call() method.
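
A sketch using these helpers instead of Backend::create_tensor, so the tensors automatically match the Executable's own Parameters and Results; the two-input, one-output signature is an assumption carried over from the earlier sketch.

    // Tensors allocated by the Executable itself, indexed in the same
    // order as the inputs and outputs of call().
    auto in0 = exec->create_input_tensor(0);   // first Parameter
    auto in1 = exec->create_input_tensor(1);   // second Parameter
    auto out0 = exec->create_output_tensor(0); // first Result

    // ... write input data into in0 and in1 ...
    exec->call_with_validate({out0}, {in0, in1});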

std::vector<std::shared_ptr<runtime::Tensor>> create_input_tensor(size_t input_index, size_t pipeline_depth)

Create a vector of input Tensors.

Return
A vector of Tensors, one for each stage of the pipeline
Parameters
  • input_index: The index position in the input Parameter vector; this matches the order in which inputs are passed to the call() method.
  • pipeline_depth: The number of stages in the input pipeline. For double-buffered input you would specify pipeline_depth=2

std::vector<std::shared_ptr<runtime::Tensor>> create_output_tensor(size_t output_index, size_t pipeline_depth)

Create a vector of output Tensors.

Return
A vector of Tensors, one for each stage of the pipeline
Parameters
  • output_index: The index position in the output Result vector; this matches the order in which outputs are passed to the call() method.
  • pipeline_depth: The number of stages in the output pipeline. For double-buffered output you would specify pipeline_depth=2
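
Below is a sketch of double buffering with these overloads, keeping the two-input, one-output assumption from above; the batch count is hypothetical, and the synchronization between pipeline stages is backend- and application-specific, so it is omitted.

    // Double-buffered execution: two tensors per input and output, so one
    // stage can be filled while the other executes (synchronization omitted).
    const size_t pipeline_depth = 2;
    auto in0 = exec->create_input_tensor(0, pipeline_depth);
    auto in1 = exec->create_input_tensor(1, pipeline_depth);
    auto out0 = exec->create_output_tensor(0, pipeline_depth);

    const size_t batch_count = 8; // hypothetical number of batches
    for (size_t i = 0; i < batch_count; ++i)
    {
        size_t stage = i % pipeline_depth;
        // ... write the i-th batch into in0[stage] and in1[stage] ...
        exec->call({out0[stage]}, {in0[stage], in1[stage]});
        // ... read the i-th results back from out0[stage] ...
    }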