This document is relevant for: Inf1

PyTorch Neuron (torch-neuron) Supported operators#

The current list of supported operators can be generated with the following commands inside Python:

import torch.neuron
print(*torch.neuron.get_supported_operations(), sep='\n')
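
For example, the returned list can be used to check whether a specific operator is supported before tracing a model. This is a minimal sketch; the operator name aten::linear is chosen only for illustration.

import torch.neuron

# Build a set of supported operator names and test membership of one operator.
supported = set(torch.neuron.get_supported_operations())
print('aten::linear' in supported)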

PyTorch Neuron release [package version 1.*.*.2.9.1.0, SDK 2.13.0]#

Date: 08/28/2023

Added support for new operators:

  • aten::clamp_min

  • aten::clamp_max

PyTorch Neuron release [2.9.0.0]#

Date: 03/28/2023

Added support for new operators:

  • aten::tensordot

  • aten::adaptive_avg_pool1d

  • aten::prelu

  • aten::reflection_pad2d

  • aten::baddbmm

  • aten::repeat

PyTorch Neuron release [2.5.0.0]#

Date: 11/23/2022

Added support for new operators:

  • aten::threshold

  • aten::roll

  • aten::instance_norm

  • aten::amin

  • aten::amax

  • aten::new_empty

  • aten::new_ones

  • aten::tril

  • aten::triu

  • aten::zero_

  • aten::all

  • aten::broadcast_tensors

  • aten::broadcast_to

  • aten::logical_and

  • aten::logical_not

  • aten::logical_or

  • aten::logical_xor

  • aten::_convolution_mode

PyTorch Neuron release [2.2.0.0]#

Date: 03/25/2022

Added support for new operators:

  • aten::max_pool2d_with_indices: Fully supported (previously supported only when the indices output was unused); see the sketch below.
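
As a brief illustration of what full support enables, a model that consumes the indices output can now be traced. This is a minimal sketch, assuming an Inf1 environment with the Neuron compiler installed; the module and input shape are illustrative, not part of the release notes.

import torch
import torch.neuron

class PoolWithIndices(torch.nn.Module):
    def forward(self, x):
        # Both outputs are returned and used, which previously blocked
        # compilation of aten::max_pool2d_with_indices to Neuron.
        values, indices = torch.nn.functional.max_pool2d(
            x, kernel_size=2, return_indices=True)
        return values, indices

example = torch.rand(1, 3, 8, 8)
neuron_model = torch.neuron.trace(PoolWithIndices(), example)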

PyTorch Neuron release [2.1.7.0]#

Date: 01/20/2022

Added support for new operators:

  • aten::bucketize

  • aten::any

  • aten::remainder

  • aten::clip

  • aten::repeat_interleave

  • aten::tensor_split

  • aten::split_with_sizes

  • aten::isnan

  • aten::embedding_renorm_

  • aten::dot

  • aten::mv

  • aten::hardsigmoid

  • aten::hardswish

  • aten::trunc

  • aten::one_hot: Supported when num_classes is known at trace time (see the sketch after this list).

    The dynamic variant of this operation, where num_classes = -1 and the class count is inferred from the data at runtime, is not supported.

  • aten::adaptive_max_pool1d

  • aten::adaptive_max_pool2d
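
A minimal sketch of the aten::one_hot condition noted above, assuming an Inf1 environment with the Neuron compiler available; the module, class count, and input shape are illustrative assumptions.

import torch
import torch.neuron

class OneHot(torch.nn.Module):
    def forward(self, x):
        # num_classes is a fixed constant, so it is known at trace time.
        # Passing num_classes=-1 would instead infer the class count from
        # the data at runtime, which is the unsupported dynamic variant.
        return torch.nn.functional.one_hot(x, num_classes=10)

example = torch.randint(0, 10, (8,))
neuron_model = torch.neuron.trace(OneHot(), example)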

PyTorch Neuron Release [2.0.536.0]#

  • The following operators have limited support on Neuron. Unlike fully supported operators, they are not returned by torch_neuron.get_supported_operations(); see each operator description for the conditions under which it is supported:

    • aten::max_pool2d_with_indices - Supported when indices outputs are not used by a downstream operation. This allows the operation to be compiled to Neuron when it is equivalent to an aten::max_pool2d.

    • aten::max_pool3d_with_indices - Supported when indices outputs are not used by a downstream operation. This allows the operation to be compiled to Neuron when it is equivalent to an aten::max_pool3d.

    • aten::where - Supported when used as a conditional selection (3-argument variant). Unsupported when used to generate a dynamic list of indices (1-argument variant). See torch.where() and the sketch after this list.
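
The distinction for aten::where can be seen in plain PyTorch, independent of Neuron. This is a minimal sketch with illustrative tensors.

import torch

cond = torch.tensor([True, False, True])
a = torch.tensor([1.0, 2.0, 3.0])
b = torch.zeros(3)

# 3-argument form: elementwise selection with a static output shape
# (the supported usage).
selected = torch.where(cond, a, b)

# 1-argument form: produces index tensors whose size depends on how many
# elements are True, i.e. a dynamic output shape (the unsupported usage).
indices = torch.where(cond)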

PyTorch Neuron Release [2.0.318.0]#

Added support for new operators:

  • aten::empty_like

  • aten::log

  • aten::type_as

  • aten::movedim

  • aten::einsum

  • aten::argmax

  • aten::min

  • aten::argmin

  • aten::abs

  • aten::cos

  • aten::sin

  • aten::linear

  • aten::pixel_shuffle

  • aten::group_norm

  • aten::_weight_norm

PyTorch Neuron Release [1.5.21.0]#

No change

PyTorch Neuron Release [1.5.7.0]#

Added support for new operators:

  • aten::erf

  • prim::DictConstruct

PyTorch Neuron Release [1.4.1.0]#

No change

PyTorch Neuron Release [1.3.5.0]#

Added support for new operators:

  • aten::numel

  • aten::ones_like

  • aten::reciprocal

  • aten::topk

PyTorch Neuron Release [1.2.16.0]#

No change

PyTorch Neuron Release [1.2.15.0]#

No change

PyTorch Neuron Release [1.2.3.0]#

Added support for new operators:

  • aten::silu

  • aten::zeros_like

PyTorch Neuron Release [1.1.7.0]#

Added support for new operators:

  • aten::_shape_as_tensor

  • aten::chunk

  • aten::empty

  • aten::masked_fill

PyTorch Neuron Release [1.0.24045.0]#

Added support for new operators:

  • aten::__and__

  • aten::bmm

  • aten::clone

  • aten::expand_as

  • aten::fill_

  • aten::floor_divide

  • aten::full

  • aten::hardtanh

  • aten::hardtanh_

  • aten::le

  • aten::leaky_relu

  • aten::lt

  • aten::mean

  • aten::ne

  • aten::softplus

  • aten::unbind

  • aten::upsample_bilinear2d

PyTorch Neuron Release [1.0.1720.0]#

Added support for new operators:

  • aten::constant_pad_nd

  • aten::meshgrid

PyTorch Neuron Release [1.0.1532.0]#

Added support for new operators:

  • aten::ones

PyTorch Neuron Release [1.0.1522.0]#

No change

PyTorch Neuron Release [1.0.1386.0]#

Added support for new operators:

  • aten::ceil

  • aten::clamp

  • aten::eq

  • aten::exp

  • aten::expand_as

  • aten::flip

  • aten::full_like

  • aten::ge

  • aten::gt

  • aten::log2

  • aten::log_softmax

  • aten::max

  • aten::neg

  • aten::relu

  • aten::rsqrt

  • aten::scalarImplicit

  • aten::sqrt

  • aten::squeeze

  • aten::stack

  • aten::sub

  • aten::sum

  • aten::true_divide

  • aten::upsample_nearest2d

  • prim::Constant

  • prim::GetAttr

  • prim::ImplicitTensorToNum

  • prim::ListConstruct

  • prim::ListUnpack

  • prim::NumToTensor

  • prim::TupleConstruct

  • prim::TupleUnpack

Note that prim:: operators (primitives) are included in this list beginning with this release.

PyTorch Neuron Release [1.0.1168.0]#

Added support for new operators:

  • aten::ScalarImplicit

PyTorch Neuron Release [1.0.1001.0]#

Added support for new operators:

  • aten::detach

  • aten::floor

  • aten::gelu

  • aten::pow

  • aten::sigmoid

  • aten::split

Removed support for operators:

  • aten::embedding: Does not meet performance criteria

  • aten::erf: Error function does not meet accuracy criteria

  • aten::tf_dtype_from_torch: Internal support function, not an operator

PyTorch Neuron Release [1.0.825.0]#

No change

PyTorch Neuron Release [1.0.763.0]#

Added support for new operators:

  • aten::Int

  • aten::arange

  • aten::contiguous

  • aten::div

  • aten::embedding

  • aten::erf

  • aten::expand

  • aten::eye

  • aten::index_select

  • aten::layer_norm

  • aten::matmul

  • aten::mm

  • aten::permute

  • aten::reshape

  • aten::rsub

  • aten::select

  • aten::size

  • aten::slice

  • aten::softmax

  • aten::tf_dtype_from_torch

  • aten::to

  • aten::transpose

  • aten::unsqueeze

  • aten::view

  • aten::zeros

Removed support for operators:

  • aten::tf_broadcastable_slice: Internal support function, not an operator

  • aten::tf_padding: Internal support function, not an operator

The following operators were already supported in previous releases:

  • aten::_convolution

  • aten::adaptive_avg_pool2d

  • aten::add

  • aten::add_

  • aten::addmm

  • aten::avg_pool2d

  • aten::batch_norm

  • aten::cat

  • aten::dimension_value

  • aten::dropout

  • aten::flatten

  • aten::max_pool2d

  • aten::mul

  • aten::relu_

  • aten::t

  • aten::tanh

  • aten::values

  • prim::Constant

  • prim::GetAttr

  • prim::ListConstruct

  • prim::ListUnpack

  • prim::TupleConstruct

  • prim::TupleUnpack

PyTorch Neuron Release [1.0.672.0]#

No change

PyTorch Neuron Release [1.0.552.0]#

Added support for new operators:

  • aten::_convolution

  • aten::adaptive_avg_pool2d

  • aten::add

  • aten::add_

  • aten::addmm

  • aten::avg_pool2d

  • aten::batch_norm

  • aten::cat

  • aten::dimension_value

  • aten::dropout

  • aten::flatten

  • aten::max_pool2d

  • aten::mul

  • aten::relu_

  • aten::t

  • aten::tanh

  • aten::tf_broadcastable_slice

  • aten::tf_padding

  • aten::values

  • prim::Constant

  • prim::GetAttr

  • prim::ListConstruct

  • prim::ListUnpack

  • prim::TupleConstruct

  • prim::TupleUnpack

This document is relevant for: Inf1