# Intel® Nervana™ graph

This API documentation covers the public API for Intel® Nervana™ graph (ngraph), organized into three main modules:

• ngraph: Contains the core ops for constructing the graph.
• ngraph.transformers: Defines methods for executing a defined graph on hardware.
• ngraph.types: Types used in ngraph (for example, Axes and Op).

## Intel Nervana (ngraph) API

Several ops are used to create different types of tensors:

Method Description
ngraph.variable() Create a trainable variable.
ngraph.persistent_tensor() Tensor that persists across computations.
ngraph.placeholder() Used for input values, typically from host.
ngraph.constant() Immutable constant that can be inlined.

Assigning the above tensors requires defining Axis, which can be done using the following methods:

Method Description
ngraph.axes_with_order() Return a tensor with a different axes order.
ngraph.cast_axes() Cast the axes of a tensor to new axes.
ngraph.make_axes() Create an Axes object.
ngraph.make_axis() Create an Axis.

We also provide several helper functions for retrieving information from tensors.

Method Description
ngraph.batch_size() Returns the batch size.
ngraph.is_constant() Returns true if the tensor is constant.
ngraph.is_constant_scalar() Returns true if the tensor is a constant scalar.
ngraph.constant_value() Returns the value of a constant tensor.
ngraph.tensor_size() Returns the total size of the tensor.
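As a sketch of what tensor_size computes: the total element count is the product of the axis lengths. A plain-Python analogue (illustration of the arithmetic only, not ngraph code):

```python
from functools import reduce
import operator

def total_size(axis_lengths):
    # Plain-Python analogue of ngraph.tensor_size: the total number
    # of elements is the product of the axis lengths.
    return reduce(operator.mul, axis_lengths, 1)

print(total_size((128, 3, 32, 32)))  # 393216
```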

To compose a computational graph, we support the following operations:

Method Description
ngraph.absolute() $$\operatorname{abs}(a)$$
ngraph.negative() $$-a$$
ngraph.sign() if $$x<0$$, $$-1$$; if $$x=0$$, $$0$$; if $$x>0$$, $$1$$
ngraph.add() $$a+b$$
ngraph.reciprocal() $$1/a$$
ngraph.square() $$a^2$$
ngraph.sqrt() $$\sqrt{a}$$
ngraph.cos() $$\cos(a)$$
ngraph.sin() $$\sin(a)$$
ngraph.tanh() $$\tanh(a)$$
ngraph.sigmoid() $$1/(1+\exp(-a))$$
ngraph.exp() $$\exp(a)$$
ngraph.log() $$\log(a)$$
ngraph.safelog() $$\log(a)$$, clamped below to avoid $$-\infty$$
ngraph.one_hot() Convert to one-hot
ngraph.variance() Compute variance
ngraph.stack() Stack tensors along an axis
ngraph.convolution() Convolution operation
ngraph.pad() Pad a tensor with zeros along each dimension
ngraph.pooling() Pooling operation
ngraph.squared_L2() Dot x with itself (sum of squared elements)
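The formula rows above can be read as elementwise scalar functions. A minimal plain-Python illustration of two of them, sign and sigmoid (this illustrates the math only and is not ngraph code; the real ops operate on tensors):

```python
import math

def sign(x):
    # Piecewise rule from the table: -1 if x < 0, 0 if x == 0, 1 if x > 0.
    return (x > 0) - (x < 0)

def sigmoid(x):
    # 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

print([sign(v) for v in (-3.5, 0.0, 2.0)])  # [-1, 0, 1]
print(sigmoid(0.0))                         # 0.5
```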

Note

Additional operations are supported that are not currently documented, and so are not included in the list above. We will continue to populate this list as the documentation is updated.

## ngraph.transformers

Method Description
ngraph.transformers.allocate_transformer() Allocate a transformer.
ngraph.transformers.make_transformer() Generates a transformer using the factory.
ngraph.transformers.make_transformer_factory() Creates a new factory with cpu default.
ngraph.transformers.set_transformer_factory() Sets the Transformer factory used by make_transformer.
ngraph.transformers.transformer_choices() Return the list of available transformers.
ngraph.transformers.Transformer() Produce an executable version of op-graphs.

## ngraph.types

Method Description
ngraph.types.AssignableTensorOp() A tensor whose value comes directly from storage. Used by ng.placeholder and more.
ngraph.types.Axis() An Axis labels a dimension of a tensor.
ngraph.types.Axes() Axes represent multiple axis dimensions.
ngraph.types.Computation() Computations to attach to transformers.
ngraph.types.NameableValue() Objects that can derive name from the name scope.
ngraph.types.NameScope() Name scope for objects.
ngraph.types.Op() Basic class for ops.
ngraph.types.TensorOp() Base class for ops related to Tensors.

### ngraph

Graph construction.

ngraph.absolute(x)[source]

Returns the absolute value of x.

Parameters: x (TensorOp) – A tensor. Returns: The absolute value of x. TensorOp
ngraph.add(x, y, dtype=None)[source]

Parameters: x – A tensor. y – A tensor. dtype – The type of the result. Returns: An Op for x + y.
ngraph.as_op(x)[source]

Finds an Op appropriate for x.

If x is an Op, it returns x. Otherwise, constant(x) is returned.

Parameters: x – Some value. Op
ngraph.as_ops(xs)[source]

Converts an iterable of values to a tuple of Ops using as_op.

Parameters: xs – An iterable of values. A tuple of Ops.
ngraph.axes_with_order(x, axes)[source]

Return a tensor with a different axes order.

Parameters: x (TensorOp) – The tensor. axes (Axes) – A permutation of the axes of the tensor. The new tensor. TensorOp
ngraph.batch_size(x)[source]
Parameters: x – A tensor. Returns: The size of the batch axis in x.
ngraph.broadcast(x, axes)[source]

Parameters: x (TensorOp) – The tensor. axes – New axes. Tensor with additional axes. TensorOp
ngraph.cast_axes(tensor, axes)[source]

Cast the axes of a tensor to new axes.

Parameters: tensor (TensorOp) – The tensor. axes (Axes) – The new axes. The tensor with new axes. TensorOp
ngraph.computation(returns, *args)[source]

Defines a host-callable graph computation.

Parameters: returns – Values returned by the computation; a list, set, or op. *args – Inputs to the computation. Returns: A computation op.
ngraph.constant(const, axes=None, dtype=None, **kwargs)[source]
Makes a constant scalar/tensor. For a tensor, constant provides the opportunity
to supply axes. Scalar/NumPy tensor arguments are usually automatically converted to tensors, but constant may be used to supply axes or in the rare cases where constant is not automatically provided.
Parameters: const – The constant, a scalar or a NumPy array. axes – The axes for the constant. dtype (optional) – The dtype to use. An AssignableTensorOp for the constant.
ngraph.convolution(conv_params, inputs, filters, axes, docstring=None)[source]
Parameters: conv_params – Dimensions. inputs (TensorOp) – The input tensor. filters (TensorOp) – Filter/kernel tensor. docstring (String, optional) – Documentation for the op. The result of the convolution. TensorOp
ngraph.cos(x)[source]

Returns the cos of x.

Parameters: x (TensorOp) – A tensor. The cos of x. TensorOp
ngraph.deconvolution(conv_params, inputs, filters, axes, docstring=None)[source]
Parameters: conv_params – Dimensions. inputs (TensorOp) – The input tensor. filters (TensorOp) – Filter/kernel tensor. docstring (String, optional) – Documentation for the op. The result of the deconvolution. TensorOp
ngraph.exp(x)[source]

Returns the exp of x.

Parameters: x (TensorOp) – A tensor. The exp of x. TensorOp
ngraph.fill(x, scalar)[source]
ngraph.log(x)[source]

Returns the log of x.

Parameters: x (TensorOp) – A tensor. The log of x. TensorOp
ngraph.lookuptable(lut, idx, axes, update=True, pad_idx=None, docstring=None)[source]

An operation that performs a lookup into lut using the indices in idx. The output axes are given explicitly, so together with the axes of lut and idx they determine which axis the lookup is performed over.

Parameters: lut (TensorOp) – The lookup table. idx (TensorOp) – The indices to do the lookup. axes (Axes) – The output axes. pad_idx (int) – The padding index. docstring (String, optional) – Documentation for the op. Returns: The result of the lookup. TensorOp
ngraph.ctc(activations, labels, activation_lens, label_lens, axes=None)[source]

Computes the CTC cost using warp-ctc.

Parameters: activations (TensorOp) – The network output to compare against the transcripts. labels (TensorOp) – One-hot encoded transcript labels. activation_lens (TensorOp) – Length of activations for each example in the batch. label_lens (TensorOp) – Transcript length for each example in the batch. axes (Axes, optional) – Output axes for the cost tensor. Defaults to batch axis.

Returns: The result of the CTC op. TensorOp

References

Graves, A., et al. (2006). https://doi.org/10.1145/1143844.1143891
warp-ctc: https://github.com/baidu-research/warp-ctc

ngraph.make_axes(axes=())[source]

Makes an Axes object.

Parameters: axes – A list of Axis. An Axes. Axes
ngraph.make_axis(length=None, name=None, docstring=None)[source]

Returns a new Axis.

Parameters: length (int, optional) – Length of the axis. name (String, optional) – Name of the axis. batch (bool, optional) – This is a batch axis. Defaults to False. recurrent (bool, optional) – This is a recurrent axis. Defaults to False. docstring (String, optional) – A docstring for the axis. A new Axis. Axis
ngraph.negative(x)[source]

Returns the negative of x.

Parameters: x (TensorOp) – tensor. The negative of x. (TensorOp)
ngraph.one_hot(x, axis)[source]
Parameters: x – The tensor to convert to one-hot form. axis – The hot axis. Returns: The op. OneHotOp
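To make the one-hot conversion concrete, here is a plain-Python sketch (illustration only, not ngraph code; the depth parameter is our naming and stands in for the length of the hot axis):

```python
def one_hot(indices, depth):
    # Each integer index becomes a vector of length `depth` with a 1
    # at that position and 0 elsewhere; in ngraph the corresponding
    # output dimension is the supplied hot axis.
    return [[1 if j == i else 0 for j in range(depth)] for i in indices]

print(one_hot([2, 0], 3))  # [[0, 0, 1], [1, 0, 0]]
```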
ngraph.pad(x, paddings, axes=None)[source]

Pads a tensor with zeroes along each of its dimensions.

Parameters: x – The tensor to be padded. paddings – The length of the padding along each dimension; an array with the same length as x.axes. Each element of the array should be either an integer, in which case the padding will be symmetrical, or a tuple of the form (before, after). axes – The axes to be given to the padded tensor. If unsupplied, new axes of the correct lengths are created. Returns: A symbolic expression for the padded tensor. TensorOp
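The padding rule (an integer pads symmetrically, a (before, after) tuple pads asymmetrically) can be sketched for one dimension in plain Python (illustration only, not ngraph code):

```python
def pad1d(values, padding):
    # An int pads both sides equally; a (before, after) tuple pads
    # each side by the given amount. Padding values are zeroes.
    if isinstance(padding, int):
        before = after = padding
    else:
        before, after = padding
    return [0] * before + list(values) + [0] * after

print(pad1d([1, 2], 1))       # [0, 1, 2, 0]
print(pad1d([1, 2], (0, 2)))  # [1, 2, 0, 0]
```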
ngraph.persistent_tensor(axes, dtype=None, initial_value=None, **kwargs)[source]

Persistent storage, not trainable.

Storage that will retain its value from computation to computation.

Parameters: axes (Axes) – The axes of the persistent storage. dtype (optional) – The dtype of the persistent storage. initial_value (optional) – A host constant or callable. If callable, will be called to generate an initial value. The persistent storage. AssignableTensorOp
ngraph.placeholder(axes, dtype=None, initial_value=None, **kwargs)[source]

A place for a tensor to be supplied; typically used for computation arguments.

Parameters: axes (Axes) – The axes of the placeholder. dtype (optional) – The dtype of the placeholder. initial_value (optional) – Deprecated. A host constant or callable. If callable, will be called to generate an initial value. The placeholder. AssignableTensorOp
ngraph.pooling(poolparams, inputs, axes, docstring=None)[source]
Parameters: poolparams – Dimensions. inputs (TensorOp) – Input to pooling. docstring (String, optional) – Documentation for the computation. Returns: The pooling computation. TensorOp
ngraph.reciprocal(x)[source]

Returns the reciprocal of x.

Parameters: x (TensorOp) – A tensor. The reciprocal of x. TensorOp
ngraph.safelog(x, limit=-50.0)[source]
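safelog takes a limit argument; a plausible reading (our assumption, inferred only from the signature safelog(x, limit=-50.0), not from ngraph source) is a log whose output is clamped below at limit, so inputs at or near zero do not produce -inf:

```python
import math

def safelog(x, limit=-50.0):
    # Assumed semantics: log(x), but never smaller than `limit`.
    # This keeps an input of 0 from producing -inf.
    value = math.log(x) if x > 0 else float('-inf')
    return max(value, limit)

print(safelog(1.0))  # 0.0
print(safelog(0.0))  # -50.0
```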
ngraph.sequential(ops=None)[source]

Compute every op in order, compatible with existing dependencies, returning the last value.

Ops will only be executed once, so to return the value of an earlier op, just add it again at the end of the list.

Parameters: ops – Sequence of ops to compute.
ngraph.sigmoid(x)[source]

Computes the sigmoid of x.

Parameters: x – A tensor. Returns: The sigmoid computation.
ngraph.sign(x)[source]

Returns the sign of x.

Parameters: x (TensorOp) – A tensor. The sign of x. TensorOp
ngraph.sin(x)[source]

Returns the sin of x.

Parameters: x (TensorOp) – A tensor. sin of x. TensorOp
ngraph.slice_along_axis(x, axis, idx)[source]

Returns a slice of a tensor constructed by indexing into a single axis at a single position. If the axis occurs multiple times in the dimensions of the input tensor, we select only on the first occurrence.

Parameters: x – input tensor. axis – axis along which to slice. idx – index to select from the axis.

Returns: A slice of x.
ngraph.sqrt(x)[source]

Returns the square root of x.

Parameters: x (TensorOp) – A tensor. The square root of x. TensorOp
ngraph.square(x)[source]

Returns the square of x.

Parameters: x (TensorOp) – A tensor. The square of x. TensorOp
ngraph.squared_L2(x, out_axes=None, reduction_axes=None)[source]
Parameters: x (TensorOp) – The tensor. Returns: The result (x dotted with itself). TensorOp
ngraph.stack(x_list, axis, pos=0)[source]
Parameters: x_list – A list of identically-axed tensors to join. axis – The axis to select joined tensors. pos – The position within the axes of the x_list tensors to insert axis in the result. The joined tensors. TensorOp
ngraph.tanh(x)[source]

Returns the tanh of x.

Parameters: x (TensorOp) – A tensor. The tanh of x. TensorOp
ngraph.temporary(axes, dtype=None, **kwargs)[source]

Temporary storage.

Statically allocates storage that may be reused outside of the scope of the values.

Parameters: axes (Axes) – The axes of the storage. dtype (optional) – The dtype of the storage. constant (optional) – Once initialization is complete, this tensor should not change. Returns: The temporary tensor. AssignableTensorOp
ngraph.tensor_size(x, reduction_axes=None, out_axes=None)[source]

A scalar returning the total size of a tensor in elements.

Parameters: x – The tensor whose axes we are measuring. reduction_axes – if supplied, return the size of these axes instead.
ngraph.tensor_slice(x, slices, axes=None)[source]

Creates a sliced version of a tensor.

Parameters: x – The tensor. slices – One slice for each dimension in x. axes – Axes for the result. If not specified, axes will be generated. A sliced view of the tensor.
ngraph.value_of(tensor)[source]

Capture the value of a tensor.

Parameters: tensor – The value to be captured. A copy of the value.
ngraph.variable(axes, dtype=None, initial_value=None, **kwargs)[source]

A trainable tensor.

Parameters: axes (Axes) – Axes for the variable. dtype (optional) – The dtype for the tensor. initial_value – A constant or callable. If a callable, the callable will be called to provide an initial value. The variable. AssignableTensorOp
ngraph.variance(x, out_axes=None, reduction_axes=None)[source]

### ngraph.transformers

Transformer manipulation.

ngraph.transformers.allocate_transformer(name, **kargs)[source]

Allocate a named backend.

ngraph.transformers.make_transformer()[source]

Generates a Transformer using the factory in this module, which defaults to CPU.

Returns: Transformer

ngraph.transformers.make_transformer_factory(name, **kargs)[source]
ngraph.transformers.set_transformer_factory(factory)[source]

Sets the Transformer factory used by make_transformer

Parameters: factory (object) – Callable object which generates a Transformer
ngraph.transformers.transformer_choices()[source]

Return the list of available transformers.

class ngraph.transformers.Transformer(**kwargs)[source]

Produce an executable version of op-graphs.

Computations are subsets of Ops to compute. The transformer determines storage allocation and transforms the computations and allocations into functions.

Parameters: fusion (bool) – Whether to combine sequences of operations into one operation. **kwargs – Args for related classes.
computations

set of Computation – The set of requested computations.

all_results

set of ngraph.op_graph.op_graph.Op – A root set of Ops that need to be computed.

finalized

bool – True when transformation has been performed.

initialized

bool – True when variables have been initialized/restored.

fusion

bool – True when fusion was enabled.

device_buffers

set – Set of handles for storage allocations.

add_computation(computation)[source]
close()[source]
computation(results, *parameters)[source]

Adds a computation to the transformer. If parameters are not provided explicitly, the computation keeps using the old values for the parameters.

Parameters: results – Values to be computed. *parameters – Values to be set as arguments to evaluate. Returns: Callable.
device_to_host(computation, op, tensor=None)[source]

Copy a computation result from the device back to the host.

Parameters: computation – The computation. op – The op associated with the value. tensor – Optional tensor for returned value. The value of op.
classmethod get_default_tolerance(desired)[source]
get_layout_change_cost_function(op, arg)[source]

Returns a BinaryLayoutConstraint which computes the cost of a layout change between the specified op and its specified arg (if any cost).

Parameters: op – graph op to get cost function for arg – argument to the op to generate cost function for An object that inherits from BinaryLayoutConstraint and can be used to calculate any layout change cost.
get_layout_cost_function(op)[source]

Returns a UnaryLayoutConstraint which computes the cost of an op given an assigned data layout for that op.

Parameters: op – graph op to get cost function for An object that inherits from UnaryLayoutConstraint and can be used to calculate the layout assignment cost.
get_layouts(op)[source]

Returns a list of possible axis layouts for the op. The default layout must be the first item in the returned list.

Parameters: op – graph op to get possible layouts for A list of objects that inherit from LayoutAssignment. The first item in the list must be the default layout for this op.
get_tensor_view_value(op, host_tensor=None)[source]

Returns the contents of the tensor view for op.

Parameters: op – The computation graph op. host_tensor – Optional tensor to copy value into. A NumPy tensor with the elements associated with op.
host_to_device(computation, parameters, args)[source]

Copy args to parameters in computation.

Parameters: computation – The computation. parameters – Parameters of the computation. args – Values for the parameters.
initialize()[source]

Initialize storage. Will allocate if not already performed.

initialize_allocations()[source]

Initializes allocation caches.

make_computation(computation)[source]

Wrap in Computation or a transformer-specific subclass.

Parameters: computation – Computation or a subclass.
register_graph_pass(graph_pass, position=None)[source]

Register a graph pass to be run.

Parameters: graph_pass – The pass to register. position (int) – Insert index in the list of passes; append by default.
save_output_statistics_file()[source]

Save collected statistics data to a file.

set_output_statistics_file(statistics_file)[source]

Collects data for transformer

transformers = {'hetr': <class 'ngraph.transformers.hetrtransform.HetrTransformer'>, 'cpu': <class 'ngraph.transformers.cputransform.CPUTransformer'>}
use_exop

Returns – True if this transformer uses the execution graph.

### ngraph.types

class ngraph.types.AssignableTensorOp(initial_value=None, is_constant=False, is_input=False, is_persistent=False, is_trainable=False, is_placeholder=False, const=None, **kwargs)[source]

Value comes directly from storage.

Parameters: is_input – The storage is used as an input from the CPU. Implies persistent. is_persistent – The storage value persists from computation to computation. is_constant – The storage value does not change once initialized. is_placeholder – This is a placeholder. const – The host value of the constant for constant storage. initial_value – If callable, a function that generates an Op whose tensor should be used as the initial value. Otherwise an Op that should be used as the initial value.
input

bool – The storage is used as an input.

add_control_dep(op)[source]

Allocations happen before executed ops, so all_deps are ignored.

Parameters: op – The op.

adjoints(error)

Returns a map containing the adjoints of this op with respect to other ops.

Creates the map if it does not already exist.

Parameters: error (TensorOp, optional) – The tensor holding the error value the derivative will be computed at. Must have the same axes as dependent. Map from Op to dSelf/dOp.
all_deps

Returns – All dependencies of the op, including args and control_deps. x.all_deps == OrderedSet(x.args) | x.control_deps; setter functions are used to maintain this invariant. However, users outside of the Op class should still avoid changing x._all_deps, x._control_deps and x._args directly.

all_op_references(ops)

Currently ops can have references to other ops anywhere in their __dict__ (not just args, but the other typical places handled in serialization's add_edges). This function iterates through an op's __dict__ attributes and tests if any of them are subclasses of Op.

This is ‘greedier’ than the ordered_ops method which only traverses the graph using the args and control_deps keys of an ops __dict__. In addition, the order of ops returned by this method is not guaranteed to be in a valid linear execution ordering.

all_ops(ops=None, isolate=False)

Collects all Ops created within the context. Does not hide ops created in this context from parent contexts unless isolate is True.

append_axis(axis)
args

All the inputs to this node.

axes

Returns – The axes of the tensor.

call_info()

Creates the TensorDescriptions (of this op or its arguments) required to evaluate it.

The list is used to allocate buffers (in the transformers) and supply values to the transform method (in the transform_call_info) method.

Only TensorDescriptions of the arguments are necessary. A TensorDescription of the output is generated by calling self.tensor_description()

const
control_deps

Returns – Control dependency of the op.

copy_with_new_args(args)

This method creates a new op given an original op and new args. The purpose here is to replace args for an op with layout conversions as needed but keep the op the same otherwise.

defs

Returns – AssignableTensorOp is not executed, so its appearance in the instruction stream does not affect liveness of its value.

deriv_handler

Overrides processing of this op for this derivative.

Returns: The op that should be used to process this op. If no deriv_handler has been set, self is returned.
effective_tensor_op
forward

If not None, self has been replaced with forward.

When set, invalidates cached tensor descriptions.

Returns: None or the replacement.
forwarded

Finds the op that handles this op.

Returns: Follows forwarding to the op that should handle this op.
generate_add_delta(adjoints, delta)

Adds delta to the backprop contribution.

Parameters: adjoints – dy/dOp for all Ops used to compute y. delta – The backprop contribution.
generate_adjoints(adjoints, delta, *args)

With delta as the computation for the adjoint of this Op, incorporates delta into the adjoints for the args.

Parameters: adjoints – dy/dOp for all ops involved in computing y. delta – Backprop amount for this Op. *args – The args of this Op.
get_all_ops()
get_object_by_name(name)

Returns the object with the given name, if it hasn’t been garbage collected.

Parameters: name (str) – Unique object name instance of NameableValue
graph_label

The label used for drawings of the graph.

has_axes

Returns – True if axes have been set.

has_side_effects

Returns – True if this Op has side-effects. This will prevent the Op from being eliminated during dead code elimination.

insert_axis(index, axis)

Inserts an axis. Parameters: index – Index to insert at. axis – The Axis object to insert.

invalidate_property_cache(property_name)

Invalidates self.all_deps cache

is_commutative

Returns – True if the Op is commutative.

is_constant
is_device_op

Returns – False, because this is handled by the transformer.

is_input
is_persistent
is_placeholder
is_scalar

Returns – True if this op is a scalar.

is_sequencing_op

Returns – True if this op’s sole purpose is to influence the sequencing of other ops.

is_state_op

Returns – True if this op is state.

is_tensor_op
is_trainable
mean(reduction_axes=None, out_axes=None)

Used in Neon front end.

Returns: mean(self)

name
named(name)
one
Returns a singleton constant 1 for this Op. Used by DerivOp to ensure that we don't build unique backprop graphs for every variable. Returns: A unique constant 1 associated with this TensorOp.
ordered_ops(roots)

Topological sort of ops reachable from roots. Note that ngraph uses dependency edges rather than dataflow edges; for example, top_sort(a -> b -> c) => [c, b, a].

Parameters: roots – List of ops. A list of sorted ops.
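The dependency-edge ordering can be sketched with a small depth-first topological sort (illustration only, not ngraph code; deps maps an op to the ops it depends on):

```python
def ordered_ops(roots, deps):
    # Each op is emitted after everything it depends on, matching the
    # example above: top_sort(a -> b -> c) => [c, b, a].
    result, seen = [], set()

    def visit(op):
        if op in seen:
            return
        seen.add(op)
        for d in deps.get(op, ()):
            visit(d)
        result.append(op)

    for root in roots:
        visit(root)
    return result

print(ordered_ops(['a'], {'a': ['b'], 'b': ['c']}))  # ['c', 'b', 'a']
```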
placeholders()

Return all placeholder Ops used in computing this node.

Returns: Set of placeholder Ops.
remove_control_dep(dep)

Remove an op from the list of ops that need to run before this op.

Parameters: dep – The op.
replace_self(rep)
safe_name
scalar_op

Returns the scalar op version of this op. Will be overridden by subclasses

scope
shape

This is required for parameter initializers in legacy neon code. It expects layers to implement a shape that it can use to pass through layers.

Returns: self.axes

shape_dict()

Returns: shape of this tensor as a dictionary

short_name
states_read

Returns – All state read by this op.

states_written

Returns – All state written by this op.

tensor

Deprecated. See effective_tensor_op.

Returns: The op providing the value.

tensor_description()

Returns a TensorDescription describing the output of this TensorOp

Returns: TensorDescription for this op.
unscoped_name
update_forwards()

Replaces internal op references with their forwarded versions.

Any subclass that uses ops stored outside of args and all_deps needs to override this method to update those additional ops.

This is mainly to reduce the number of places that need to explicitly check for forwarding.

variables()

Return all trainable Ops used in computing this node.

Returns: Set of trainable Ops.
visit_input_closure(roots, fun)

Apply function fun in the topological sorted order of roots.

Parameters: roots – List of ops. None
class ngraph.types.Axis(length=None, name=None, **kwargs)[source]

An Axis labels a dimension of a tensor. The op-graph uses the identity of Axis objects to pair and specify dimensions in symbolic expressions. This system has several advantages over using the length and position of the axis as in other frameworks:

1) Convenience. The dimensions of tensors, which may be nested deep in a computation graph, can be specified without having to calculate their lengths.

2) Safety. Axis labels are analogous to types in general-purpose programming languages, allowing objects to interact only when they are permitted to do so in advance. In symbolic computation, this prevents interference between axes that happen to have the same lengths but are logically distinct, e.g. if the number of training examples and the number of input features are both 50.

Parameters: length – The length of the axis. batch – Whether the axis is a batch axis. recurrent – Whether the axis is a recurrent axis.
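The safety point can be illustrated with a minimal stand-in class (not ngraph code): two axes of equal length remain logically distinct because pairing is by object identity, not by length.

```python
class Axis:
    # Minimal stand-in for ngraph.types.Axis: dimensions are paired
    # by the identity of the Axis object, not by its length.
    def __init__(self, name, length):
        self.name = name
        self.length = length

batch = Axis('N', 50)     # 50 training examples
features = Axis('F', 50)  # 50 input features

print(batch.length == features.length)  # True: same length...
print(batch is features)                # False: ...but distinct axes
```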
axes
is_batch

Tests if an axis is a batch axis.

Returns: True if the axis is a batch axis. bool
is_channel

Tests if an axis is a channel axis.

Returns: True if the axis is a channel axis. bool
is_flattened

Returns – True if this is a flattened axis.

is_recurrent

Tests if an axis is a recurrent axis.

Returns: True if the axis is a recurrent axis. bool
length

Returns – The length of the axis.

named(name)[source]
class ngraph.types.Axes(axes=None)[source]

An Axes is a tuple of Axis objects used as a label for a tensor’s dimensions.

T
static as_flattened_list(axes)[source]

Converts Axes to a list of axes with flattened axes expanded recursively.

Returns: List of Axis objects
static as_nested_list(axes)[source]

Converts Axes to a list of axes with flattened axes expressed as nested lists

Returns: Nested list of Axis objects
static assert_valid_broadcast(axes, new_axes)[source]

Checks whether axes can be broadcasted to new_axes. We require that the components of axes be laid out in the same order in new_axes.

Parameters: axes – The original axes. new_axes – The broadcasted axes.
Returns: True if axes can be broadcasted to new_axes, False otherwise.
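The same-order requirement amounts to an order-preserving subsequence test; a plain-Python sketch (illustration only; axis names stand in for Axis objects):

```python
def is_valid_broadcast(axes, new_axes):
    # Every entry of `axes` must occur in `new_axes`, in the same
    # relative order: an order-preserving subsequence test.
    it = iter(new_axes)
    return all(a in it for a in axes)

print(is_valid_broadcast(['H', 'W'], ['N', 'H', 'W']))  # True
print(is_valid_broadcast(['W', 'H'], ['N', 'H', 'W']))  # False
```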
static assert_valid_flatten(unflattend_axes, flattened_axes)[source]

Checks whether axes can safely be flattened to produce new_axes. The requirements are that the components of axes should all be present in new_axes and that they should be laid out in the same order.

Parameters: unflattend_axes – The original axes. flattened_axes – The flattened axes. Returns: True if axes can be safely flattened to new_axes, False otherwise.
static assert_valid_unflatten(flattened_axes, unflattend_axes)[source]

Checks whether axes can safely be unflattened to produce new_axes. The requirements are that the components of axes should all be present in new_axes and that they should be laid out in the same order.

Parameters: flattened_axes – The original axes. unflattend_axes – The unflattened axes. True if axes can be safely unflattened to new_axes, False otherwise.
batch_axes()[source]
Returns: The tensor’s batch Axis wrapped in an Axes object if there is one on this tensor, otherwise returns None
batch_axis()[source]
Returns: The tensor’s batch Axis or None if there isn’t one.
channel_axis()[source]
Returns: The tensor’s channel Axis or None if there isn’t one.
feature_axes()[source]
Returns: The Axes subset that are not batch or recurrent axes.
find_by_name(name)[source]
flatten(force=False)[source]

Produces flattened form of axes

Parameters: force – Add a FlattenedAxis even when the axis is already flat. This is needed when the flatten is balanced by a later unflatten, as in dot. A flat axis.
full_lengths

Returns all information about the lengths of the axis objects in this Axes in the form of a nested tuple. An element of the outer tuple that is itself a tuple contains the restored lengths of axes that have been flattened in this Axis object.

Returns: A nested tuple with the axis lengths. tuple
get_by_names(*names)[source]

Get multiple axis objects by their names

Parameters: *names – One name for each axis to return. Returns: The requested axis, or a tuple if multiple are requested. Axis or tuple. Raises: KeyError – If a name is not found.
index(axis)[source]

Returns the index of an axis

Parameters: axis – The axis to search for. The index.
is_equal_set(other)[source]

Returns true if other has the same set of Axis names as self

Parameters: other – the right-hand side operator axes bool, true if other has the same set of Axis names as self
is_not_equal_set(other)[source]

Returns true if other does not have the same set of Axis names as self

Parameters: other – the right-hand side operator axes. Returns: bool, true if other does not have the same set of Axis names as self.
is_sub_set(other)[source]

Returns true if other is subset of self, i.e. <=

Parameters: other – the right-hand side operator axes bool, true if other is subset of self
is_super_set(other)[source]

Returns true if other is superset of self, i.e. >=

Parameters: other – the right-hand side operator axes bool, true if other is superset of self
static is_valid_flatten_or_unflatten(src_axes, dst_axes)[source]

Checks whether we can flatten OR unflatten from src_axes to dst_axes.

The requirements are that the components of axes should all be present in new_axes and that they should be laid out in the same order. This check is symmetric.

lengths

Returns – tuple: The lengths of the outer axes.

names

Returns – tuple: The names of the outer axes.

recurrent_axis()[source]
Returns: The tensor’s recurrent Axis or None if there isn’t one.
sample_axes()[source]
Returns: The Axes subset that are not batch axes.
set_shape(shape)[source]

Set shape of Axes

Parameters: shape – tuple or list of shapes, must be the same length as the axes
size

TODO – delete this method, the size should come from the tensor

spatial_axes()[source]
Returns: The Axes subset that are not batch, recurrent, or channel axes.
class ngraph.types.Computation(transformer, computation_op, **kwargs)[source]

A handle for a computation function.

Parameters: transformer (Transformer) – The associated transformer. returns – If an Op, return the value of the Op; if a sequence of Ops, return the sequence of values; if a set, return a map; if None, return None. *args – AllocationOps marked input will be arguments to the function. **kwargs – Args for related classes.
generate_profile(profiler_start, profiler_stop)[source]
get_object_by_name(name)

Returns the object with the given name, if it hasn’t been garbage collected.

Parameters: name (str) – Unique object name instance of NameableValue
graph_label

The label used for drawings of the graph.

name

The name.

named(name)
safe_name
short_name
unpack_args_or_feed_dict(args, kwargs)[source]
class ngraph.types.NameableValue(name=None, graph_label_type=None, docstring=None, **kwargs)[source]

An object that can be named.

Parameters: graph_label_type – A label that should be used when drawing the graph. Defaults to the class name. name (str) – The name of the object. **kwargs – Parameters for related classes.
graph_label_type

A label that should be used when drawing the graph.

id

Unique id for this object.

static get_object_by_name(name)[source]

Returns the object with the given name, if it hasn’t been garbage collected.

Parameters: name (str) – Unique object name. Returns: Instance of NameableValue.
graph_label

The label used for drawings of the graph.

name

The name.

named(name)[source]
safe_name
short_name
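The phrase "if it hasn't been garbage collected" in get_object_by_name suggests a weak-reference registry. A minimal sketch of that pattern, using a hypothetical `Named` class rather than ngraph's actual NameableValue implementation:

```python
import weakref

class Named:
    # The registry holds weak references, so a lookup returns None
    # once the named object has been garbage collected.
    _registry = weakref.WeakValueDictionary()

    def __init__(self, name):
        self.name = name
        Named._registry[name] = self

    @staticmethod
    def get_object_by_name(name):
        return Named._registry.get(name)

obj = Named("w1")
assert Named.get_object_by_name("w1") is obj
del obj  # drop the only strong reference; CPython collects immediately
assert Named.get_object_by_name("w1") is None
```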
class ngraph.types.Op(args=(), metadata=None, const=None, constant=False, persistent=False, trainable=False, **kwargs)[source]

Any operation that can be in an AST.

Parameters: args – Values used by this node. const – The value of a constant Op, or None, constant (bool) – The Op is constant. Default False. forward – If not None, the node to use instead of this node. metadata – String key value dictionary for frontend metadata. kwargs – Args defined in related classes.
const

The value of a constant.

constant

bool – The value is constant.

control_deps

OrderedSet – Ops in addition to args that must run before this op.

persistent

bool – The value will be retained from computation to computation and not shared. Always True if reference is set.

metadata

Dictionary of string keys and values used for attaching arbitrary metadata to nodes.

trainable

The value is trainable.

add_control_dep(dep)[source]

Add an op that needs to run before this op.

Parameters: dep – The op.
all_deps

Returns – All dependencies of the op, including args and control_deps. x.all_deps == OrderedSet(x.args) | x.control_deps; setter functions are used to maintain this invariant. However, users outside of the Op class should still avoid changing x._all_deps, x._control_deps and x._args directly.

static all_op_references(ops)[source]

Currently ops can have references to other ops anywhere in their __dict__ (not just args, but the other typical places handled in serialization’s add_edges). This function iterates through an op’s __dict__ attributes and tests whether any of them are subclasses of Op.

This is ‘greedier’ than the ordered_ops method, which only traverses the graph using the args and control_deps keys of an op’s __dict__. In addition, the order of ops returned by this method is not guaranteed to be a valid linear execution ordering.

static all_ops(ops=None, isolate=False)[source]

Collects all Ops created within the context. Does not hide ops created in this context from parent contexts unless isolate is True.
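The context-capture behavior described for all_ops (ops created inside a context are also visible to enclosing contexts unless isolate is True) can be sketched with a plain-Python context manager. This is a hypothetical model of the semantics, not ngraph's implementation:

```python
from contextlib import contextmanager

_contexts = []  # stack of active capture lists

@contextmanager
def all_ops(isolate=False):
    captured = []
    _contexts.append(captured)
    try:
        yield captured
    finally:
        _contexts.pop()
        # Unless isolated, captured ops stay visible to the parent context.
        if not isolate and _contexts:
            _contexts[-1].extend(captured)

def record_op(op):
    # Stand-in for Op.__init__ registering the new op.
    if _contexts:
        _contexts[-1].append(op)

with all_ops() as outer:
    record_op("a")
    with all_ops() as inner:
        record_op("b")
print(outer)  # ['a', 'b'] -- inner ops propagate to the parent context
```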

args

All the inputs to this node.

call_info()[source]

Creates the TensorDescriptions (of this op or its arguments) required to evaluate it.

The list is used to allocate buffers (in the transformers) and supply values to the transform method (in the transform_call_info) method.

Only TensorDescriptions of the arguments are necessary. A TensorDescription of the output is generated by calling self.tensor_description()

const

Returns – For a constant, returns the constant value.

control_deps

Returns – Control dependency of the op.

copy_with_new_args(args)[source]

This method creates a new op given an original op and new args. The purpose here is to replace args for an op with layout conversions as needed but keep the op the same otherwise.

defs

Returns – For liveness analysis. The storage associated with everything in the returned list is modified when the Op is executed.

deriv_handler

Overrides processing of this op for this derivative.

Returns: The op that should be used to process this op. If no deriv_handler has been set, self is returned.
effective_tensor_op

The op that provides the value for this op.

For example, for a TensorValueOp, the op itself provides the value of the state, while for a SequenceOp, the value comes from the effective op of the last op in the sequence.

This op deprecates tensor, which does some strange things that require isinstance checks in a number of callers.

Returns: The op used for the value of this op.
forward

If not None, self has been replaced with forward.

When set, invalidates cached tensor descriptions.

Returns: None or the replacement.
forwarded

Finds the op that handles this op.

Returns: Follows forwarding to the op that should handle this op.
static get_all_ops()[source]
get_object_by_name(name)

Returns the object with the given name, if it hasn’t been garbage collected.

Parameters: name (str) – Unique object name. Returns: Instance of NameableValue.
graph_label

The label used for drawings of the graph.

has_side_effects

Returns – True if this Op has side-effects. This will prevent the Op from being eliminated during dead code elimination.

invalidate_property_cache(property_name)[source]

Invalidates the cached value of the named property (for example, all_deps).

is_commutative

Returns – True if the Op is commutative.

is_constant

Returns – True if this op is a constant tensor.

is_device_op

Returns – True if the Op executes on the device.

is_input

Returns – True if this op is a tensor that the host can write to.

is_persistent

Returns – True if this op is a tensor whose value is preserved from computation to computation.

is_placeholder

Returns – True if this op is a placeholder, i.e. a place to attach a tensor.

is_scalar

Returns – True if this op is a scalar.

is_sequencing_op

Returns – True if this op’s sole purpose is to influence the sequencing of other ops.

is_state_op

Returns – True if this op is state.

is_tensor_op

Returns – True if this op is a tensor.

is_trainable

Returns – True if this op is a tensor that is trainable, i.e. Op.variables will return it.

name
named(name)
static ordered_ops(roots)[source]

Topological sort of ops reachable from roots. Note that ngraph uses dependency edges rather than dataflow edges; for example, top_sort(a -> b -> c) => [c, b, a].

Parameters: roots – List of ops. Returns: A list of sorted ops.
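The dependency-edge topological sort that ordered_ops describes can be sketched in plain Python. `SimpleOp` here is a hypothetical stand-in for the real Op class; the point is that every op appears after all of its dependencies, so a -> b -> c yields [c, b, a]:

```python
# Hypothetical stand-in for Op: just a name plus dependency edges.
class SimpleOp:
    def __init__(self, name, deps=()):
        self.name = name
        self.all_deps = list(deps)

def ordered_ops(roots):
    # Depth-first post-order: dependencies are emitted before the
    # ops that depend on them, giving a valid execution ordering.
    ordered, visited = [], set()
    def visit(op):
        if op in visited:
            return
        visited.add(op)
        for dep in op.all_deps:
            visit(dep)  # dependencies first
        ordered.append(op)
    for root in roots:
        visit(root)
    return ordered

c = SimpleOp("c")
b = SimpleOp("b", deps=[c])
a = SimpleOp("a", deps=[b])
print([op.name for op in ordered_ops([a])])  # ['c', 'b', 'a']
```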
placeholders()[source]

Return all placeholder Ops used in computing this node.

Returns: Set of placeholder Ops.
remove_control_dep(dep)[source]

Remove an op from the list of ops that need to run before this op.

Parameters: dep – The op.
replace_self(rep)[source]
safe_name
scalar_op

Returns the scalar op version of this op. Will be overridden by subclasses

scope
short_name
states_read

Returns – All state read by this op.

states_written

Returns – All state written by this op.

tensor

Deprecated. See effective_tensor_op.

Returns: The op providing the value.

tensor_description()[source]
unscoped_name
update_forwards()[source]

Replaces internal op references with their forwarded versions.

Any subclass that uses ops stored outside of args and all_deps needs to override this method to update those additional ops.

This is mainly to reduce the number of places that need to explicitly check for forwarding.

variables()[source]

Return all trainable Ops used in computing this node.

Returns: Set of trainable Ops.
static visit_input_closure(roots, fun)[source]

Apply function fun in the topological sorted order of roots.

Parameters: roots – List of ops. fun – Function to apply to each op. Returns: None.
class ngraph.types.TensorOp(dtype=None, axes=None, scale=None, is_value_op=None, **kwargs)[source]

Super class for all Ops whose value is a Tensor.

Parameters: axes – The axes of the tensor. dtype – The element type of the tensor. scale – If specified, a scaling factor applied during updates. is_value_op – If specified, the normal dtype/axes/scale defaulting is disabled since those values will be supplied by a subclass, such as ValueOp. **kwargs – Arguments for related classes.
add_control_dep(dep)

Add an op that needs to run before this op.

Parameters: dep – The op.
adjoints(error)[source]

Returns a map containing the adjoints of this op with respect to other ops.

Creates the map if it does not already exist.

Parameters: error (TensorOp, optional) – The tensor holding the error value at which the derivative will be computed. Must have the same axes as dependent. Returns: Map from Op to dSelf/dOp.
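The adjoint bookkeeping that adjoints and generate_add_delta describe can be sketched as a tiny reverse-mode pass. `Leaf` and `Mul` are hypothetical op classes, not the real TensorOp machinery; the accumulation into the adjoints map mirrors the generate_add_delta semantics:

```python
# Hypothetical ops: leaves hold values, Mul multiplies two leaves.
class Leaf:
    def __init__(self, value):
        self.value = value
        self.args = ()

class Mul:
    def __init__(self, a, b):
        self.args = (a, b)

def adjoints(y, error=1.0):
    # Topologically order the graph, then walk it in reverse,
    # accumulating dY/dOp per op (generate_add_delta semantics).
    order, seen = [], set()
    def topo(op):
        if id(op) in seen:
            return
        seen.add(id(op))
        for arg in op.args:
            topo(arg)
        order.append(op)
    topo(y)
    adj = {id(y): error}  # seed: dY/dY = error
    for op in reversed(order):
        delta = adj.get(id(op), 0.0)
        if isinstance(op, Mul):
            a, b = op.args
            # d(a*b)/da = b, d(a*b)/db = a; contributions accumulate.
            adj[id(a)] = adj.get(id(a), 0.0) + delta * b.value
            adj[id(b)] = adj.get(id(b), 0.0) + delta * a.value
    return adj

a, b = Leaf(3.0), Leaf(4.0)
y = Mul(a, b)
d = adjoints(y)
print(d[id(a)], d[id(b)])  # 4.0 3.0
```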
all_deps

Returns – All dependencies of the op, including args and control_deps. x.all_deps == OrderedSet(x.args) | x.control_deps; setter functions are used to maintain this invariant. However, users outside of the Op class should still avoid changing x._all_deps, x._control_deps and x._args directly.

all_op_references(ops)

Currently ops can have references to other ops anywhere in their __dict__ (not just args, but the other typical places handled in serialization’s add_edges). This function iterates through an op’s __dict__ attributes and tests whether any of them are subclasses of Op.

This is ‘greedier’ than the ordered_ops method, which only traverses the graph using the args and control_deps keys of an op’s __dict__. In addition, the order of ops returned by this method is not guaranteed to be a valid linear execution ordering.

all_ops(ops=None, isolate=False)

Collects all Ops created within the context. Does not hide ops created in this context from parent contexts unless isolate is True.

append_axis(axis)[source]
args

All the inputs to this node.

axes

Returns – The axes of the tensor.

call_info()

Creates the TensorDescriptions (of this op or its arguments) required to evaluate it.

The list is used to allocate buffers (in the transformers) and supply values to the transform method (in the transform_call_info) method.

Only TensorDescriptions of the arguments are necessary. A TensorDescription of the output is generated by calling self.tensor_description()

const

Returns – For a constant, returns the constant value.

control_deps

Returns – Control dependency of the op.

copy_with_new_args(args)

This method creates a new op given an original op and new args. The purpose here is to replace args for an op with layout conversions as needed but keep the op the same otherwise.

defs

Returns – For liveness analysis. The storage associated with everything in the returned list is modified when the Op is executed.

deriv_handler

Overrides processing of this op for this derivative.

Returns: The op that should be used to process this op. If no deriv_handler has been set, self is returned.
effective_tensor_op

The op that provides the value for this op.

For example, for a TensorValueOp, the op itself provides the value of the state, while for a SequenceOp, the value comes from the effective op of the last op in the sequence.

This op deprecates tensor, which does some strange things that require isinstance checks in a number of callers.

Returns: The op used for the value of this op.
forward

If not None, self has been replaced with forward.

When set, invalidates cached tensor descriptions.

Returns: None or the replacement.
forwarded

Finds the op that handles this op.

Returns: Follows forwarding to the op that should handle this op.
generate_add_delta(adjoints, delta)[source]

Adds delta to the backprop contribution.

Parameters: adjoints – dy/dOp for all Ops used to compute y. delta – Backprop contribution.
generate_adjoints(adjoints, delta, *args)[source]

With delta as the computation for the adjoint of this Op, incorporates delta into the adjoints for the args.

Parameters: adjoints – dy/dOp for all ops involved in computing y. delta – Backprop amount for this Op. *args – The args of this Op.
get_all_ops()
get_object_by_name(name)

Returns the object with the given name, if it hasn’t been garbage collected.

Parameters: name (str) – Unique object name. Returns: Instance of NameableValue.
graph_label

The label used for drawings of the graph.

has_axes

Returns – True if axes have been set.

has_side_effects

Returns – True if this Op has side-effects. This will prevent the Op from being eliminated during dead code elimination.

insert_axis(index, axis)[source]

Inserts an axis.

Parameters: index – Index to insert at. axis – The Axis object to insert.

invalidate_property_cache(property_name)

Invalidates the cached value of the named property (for example, all_deps).

is_commutative

Returns – True if the Op is commutative.

is_constant

Returns – True if this op is a constant tensor.

is_device_op

Returns – True if the Op executes on the device.

is_input

Returns – True if this op is a tensor that the host can write to.

is_persistent

Returns – True if this op is a tensor whose value is preserved from computation to computation.

is_placeholder

Returns – True if this op is a placeholder, i.e. a place to attach a tensor.

is_scalar

Returns – True if this op is a scalar.

is_sequencing_op

Returns – True if this op’s sole purpose is to influence the sequencing of other ops.

is_state_op

Returns – True if this op is state.

is_tensor_op
is_trainable

Returns – True if this op is a tensor that is trainable, i.e. Op.variables will return it.

mean(reduction_axes=None, out_axes=None)[source]

Used in Neon front end.

Returns: mean(self)

name
named(name)
one
Returns a singleton constant 1 for this Op. Used by DerivOp to ensure that we don’t build unique backprop graphs for every variable.
Returns: A unique constant 1 associated with this TensorOp.
ordered_ops(roots)

Topological sort of ops reachable from roots. Note that ngraph uses dependency edges rather than dataflow edges; for example, top_sort(a -> b -> c) => [c, b, a].

Parameters: roots – List of ops. Returns: A list of sorted ops.
placeholders()

Return all placeholder Ops used in computing this node.

Returns: Set of placeholder Ops.
remove_control_dep(dep)

Remove an op from the list of ops that need to run before this op.

Parameters: dep – The op.
replace_self(rep)
safe_name
scalar_op

Returns the scalar op version of this op. Will be overridden by subclasses

scope
shape

This is required for parameter initializers in legacy neon code, which expects layers to implement a shape attribute that it can use when passing data through layers.

Returns: self.axes

shape_dict()[source]

Returns: The shape of this tensor as a dictionary.

short_name
states_read

Returns – All state read by this op.

states_written

Returns – All state written by this op.

tensor

Deprecated. See effective_tensor_op.

Returns: The op providing the value.

tensor_description()[source]

Returns a TensorDescription describing the output of this TensorOp

Returns: TensorDescription for this op.
unscoped_name
update_forwards()

Replaces internal op references with their forwarded versions.

Any subclass that uses ops stored outside of args and all_deps needs to override this method to update those additional ops.

This is mainly to reduce the number of places that need to explicitly check for forwarding.

variables()

Return all trainable Ops used in computing this node.

Returns: Set of trainable Ops.
visit_input_closure(roots, fun)

Apply function fun in the topological sorted order of roots.

Parameters: roots – List of ops. fun – Function to apply to each op. Returns: None.
class ngraph.types.Transformer(**kwargs)[source]

Produce an executable version of op-graphs.

Computations are subsets of Ops to compute. The transformer determines storage allocation and transforms the computations and allocations into functions.

Parameters: fusion (bool) – Whether to combine sequences of operations into one operation. **kwargs – Args for related classes.
computations

set of Computation – The set of requested computations.

all_results

set of ngraph.op_graph.op_graph.Op – A root set of Ops that need to be computed.

finalized

bool – True when transformation has been performed.

initialized

bool – True when variables have been initialized/restored.

fusion

bool – True when fusion was enabled.

device_buffers

set – Set of handles for storage allocations.

add_computation(computation)[source]
close()[source]
computation(results, *parameters)[source]

Adds a computation to the transformer. If parameters are not provided explicitly, the computation keeps using the previous values for those parameters.

Parameters: results – Values to be computed. *parameters – Values to be set as arguments for evaluation. Returns: A Callable.
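The shape of the computation workflow (bind result ops and parameter ops, get back a callable) can be modeled with a minimal evaluator. `Param`, `Add`, and `computation` here are hypothetical stand-ins, not the real Transformer API, which builds computations from ngraph ops:

```python
# Hypothetical miniature of Transformer.computation: given a result
# node and parameter nodes, return a callable that binds arguments
# to the parameters and evaluates the graph.
class Param:
    pass

class Add:
    def __init__(self, x, y):
        self.args = (x, y)

def evaluate(node, env):
    if isinstance(node, Param):
        return env[node]            # look up the bound argument
    if isinstance(node, Add):
        return sum(evaluate(arg, env) for arg in node.args)
    return node                     # plain Python constants

def computation(result, *parameters):
    def run(*args):
        env = dict(zip(parameters, args))
        return evaluate(result, env)
    return run

x = Param()
f = computation(Add(x, 2.0), x)     # analogous to transformer.computation(result, x)
print(f(3.0))  # 5.0
```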
device_to_host(computation, op, tensor=None)[source]

Copy a computation result from the device back to the host.

Parameters: computation – The computation. op – The op associated with the value. tensor – Optional tensor for the returned value. Returns: The value of op.
classmethod get_default_tolerance(desired)[source]
get_layout_change_cost_function(op, arg)[source]

Returns a BinaryLayoutConstraint which computes the cost of a layout change between the specified op and its specified arg (if any cost).

Parameters: op – Graph op to get the cost function for. arg – Argument to the op to generate the cost function for. Returns: An object that inherits from BinaryLayoutConstraint and can be used to calculate any layout change cost.
get_layout_cost_function(op)[source]

Returns a UnaryLayoutConstraint which computes the cost of an op given an assigned data layout for that op.

Parameters: op – Graph op to get the cost function for. Returns: An object that inherits from UnaryLayoutConstraint and can be used to calculate the layout assignment cost.
get_layouts(op)[source]

Returns a list of possible axis layouts for the op. The default layout must be the first item in the returned list.

Parameters: op – Graph op to get possible layouts for. Returns: A list of objects that inherit from LayoutAssignment. The first item in the list must be the default layout for this op.
get_tensor_view_value(op, host_tensor=None)[source]

Returns the contents of the tensor view for op.

Parameters: op – The computation graph op. host_tensor – Optional tensor to copy the value into. Returns: A NumPy tensor with the elements associated with op.
host_to_device(computation, parameters, args)[source]

Copy args to parameters in computation.

Parameters: computation – The computation. parameters – Parameters of the computation. args – Values for the parameters.
initialize()[source]

Initialize storage. Will allocate if not already performed.

initialize_allocations()[source]

Initializes allocation caches.

make_computation(computation)[source]

Wrap in Computation or a transformer-specific subclass.

Parameters: computation – Computation or a subclass.
register_graph_pass(graph_pass, position=None)[source]

Register a graph pass to be run.

Parameters: graph_pass – The pass to register. position (int) – Insert index in the list of passes; append by default.
save_output_statistics_file()[source]

Save collected statistics data to a file.

set_output_statistics_file(statistics_file)[source]

Sets the file used to collect statistics data for the transformer.

transformers = {'hetr': <class 'ngraph.transformers.hetrtransform.HetrTransformer'>, 'cpu': <class 'ngraph.transformers.cputransform.CPUTransformer'>}
use_exop

Returns – True if this transformer uses the execution graph.