This document is relevant for: Inf1, Inf2, Trn1, Trn1n

Neuron 2.x Introduction at Trn1 GA - FAQ#

What Instances are supported with this release?#

This release supports Trn1 and Inf1.

What ML frameworks support Trn1 in this release?#

In this release, PyTorch Neuron (torch-neuronx) supports Trn1. Future Neuron releases will add support for additional ML frameworks to Trn1.
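As an illustration, the following is a minimal sketch of a single training step with torch-neuronx on Trn1. It assumes torch-neuronx and its torch-xla dependency are installed; the model and tensor shapes are placeholders:

    import torch
    import torch_xla.core.xla_model as xm

    # torch-neuronx builds on PyTorch/XLA: the XLA device maps to the NeuronCores.
    device = xm.xla_device()

    model = torch.nn.Linear(10, 2).to(device)   # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    inputs = torch.randn(4, 10).to(device)
    labels = torch.randn(4, 2).to(device)

    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), labels)
    loss.backward()
    optimizer.step()
    xm.mark_step()   # compile and execute the accumulated graph on the NeuronCores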

What ML frameworks support Inf1 in this release?#

In this release, the following ML frameworks support Inf1:

  • PyTorch Neuron (torch-neuron) - the same version as in Neuron 1.19.2.

  • TensorFlow Neuron (tensorflow-neuron) - the same version as released in Neuron 1.19.2.

  • MXNet Neuron (mxnet-neuron) - the same version as in Neuron 1.19.2.

Note

Inf1 supports inference only.

What are the common Neuron packages that are shared between Trn1 and Inf1?#

Common Neuron packages between Inf1 and Trn1#

Package                     Description
------------------------    -------------------------------
aws-neuronx-dkms            Neuron Driver
aws-neuronx-k8-plugin       Neuron Plugin for Kubernetes
aws-neuronx-k8-scheduler    Neuron Scheduler for Kubernetes
aws-neuronx-oci-hooks       Neuron OCI Hooks support

What additional Neuron packages support Trn1 only?#

Neuron packages supporting Trn1 only#

Package                     Description
------------------------    -------------------------------------------
neuronx-cc                  Neuron Compiler with XLA frontend
torch-neuronx               Neuron PyTorch with PyTorch XLA backend
aws-neuronx-collective      Collective Communication Operation library
aws-neuronx-tools           Neuron System Tools
aws-neuronx-runtime-lib     Neuron Runtime

Note

In upcoming releases, aws-neuronx-tools and aws-neuronx-runtime-lib will also support Inf1.

What additional Neuron packages support Inf1 only?#

Neuron packages supporting Inf1 only#

Package                Description
------------------     -------------------------------------
neuron-cc              Neuron Compiler (Inference only)
torch-neuron           Neuron PyTorch (Inference only)
tensorflow-neuron      TensorFlow Neuron (Inference only)
mxnet-neuron           MXNet Neuron (Inference only)
neuronperf             NeuronPerf

If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?#

Yes. You can deploy the model on Inf1 or on any other platform (for example, CPU or GPU), as long as the operators and data types supported by the source platform are also supported by the target platform.

Can a Neuron model binary (NEFF) that was compiled on Trn1 run on Inf1?#

No, the model must be re-compiled for Inf1. This can be done directly using our CLI or via a framework such as PyTorch.
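For example, a re-compile through PyTorch might look like the following minimal sketch. It assumes torch-neuron is installed on an Inf1 instance; the model definition and checkpoint file name are placeholders:

    import torch
    import torch_neuron

    # Placeholder model; substitute your own architecture and checkpoint.
    model = torch.nn.Sequential(torch.nn.Linear(10, 2))
    model.load_state_dict(torch.load("checkpoint.pt"))   # checkpoint saved on Trn1
    model.eval()

    example = torch.randn(1, 10)                         # example input for tracing
    model_inf1 = torch.neuron.trace(model, example_inputs=(example,))  # compiles a NEFF for Inf1
    torch.jit.save(model_inf1, "model_inf1.pt")          # TorchScript artifact for deployment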

Can a Neuron model binary (NEFF) that was compiled on Inf1 run on Trn1?#

No. The model must be re-compiled for Trn1 using PyTorch.
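As a sketch, assuming a torch-neuronx version that provides torch_neuronx.trace for inference compilation on Trn1 (the model and file names are placeholders):

    import torch
    import torch_neuronx

    # Placeholder model; substitute your own architecture and checkpoint.
    model = torch.nn.Sequential(torch.nn.Linear(10, 2))
    model.load_state_dict(torch.load("checkpoint.pt"))   # checkpoint saved on Inf1
    model.eval()

    example = torch.randn(1, 10)
    model_trn1 = torch_neuronx.trace(model, (example,))  # compiles a NEFF for Trn1
    torch.jit.save(model_trn1, "model_trn1.pt")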

If I have trained a model on Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on CPU, GPU or other platforms?#

Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform.

The XLA operators supported on Trn1 are listed in the Neuron documentation.
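To illustrate the checkpoint portability, a checkpoint written on Trn1 with torch-xla's xm.save loads on any platform, because xm.save moves tensors to host memory before serializing them (the model and file name below are placeholders):

    import torch
    import torch_xla.core.xla_model as xm

    # On Trn1: xm.save copies XLA tensors to CPU before writing the file.
    model = torch.nn.Linear(10, 2).to(xm.xla_device())   # placeholder model
    xm.save(model.state_dict(), "checkpoint.pt")

    # Later, on CPU (or GPU with an appropriate map_location):
    cpu_model = torch.nn.Linear(10, 2)
    cpu_model.load_state_dict(torch.load("checkpoint.pt", map_location="cpu"))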

If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on Trn1?#

Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform.

The XLA operators supported on Trn1 are listed in the Neuron documentation.
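For example, a checkpoint produced on CPU or GPU can be loaded and moved to the XLA device for fine-tuning on Trn1 (the model and file name are placeholders):

    import torch
    import torch_xla.core.xla_model as xm

    model = torch.nn.Linear(10, 2)                       # placeholder model
    model.load_state_dict(torch.load("checkpoint.pt", map_location="cpu"))
    model = model.to(xm.xla_device())   # place on the NeuronCores for fine-tuning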

What distributed ML frameworks/libraries are supported by Neuron?#

PyTorch Neuron provides support for distributed training. See the Megatron-LM GPT Pretraining Tutorial for an example.
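A minimal sketch of the distributed setup, assuming the process is launched with torchrun so the usual rank and world-size environment variables are set:

    import torch
    import torch_xla.core.xla_model as xm
    import torch_xla.distributed.xla_backend   # registers the "xla" process-group backend

    torch.distributed.init_process_group("xla")
    device = xm.xla_device()

    # Build the model and optimizer on `device`, then in the training loop
    # replace optimizer.step() with:
    #   xm.optimizer_step(optimizer)   # all-reduces gradients across workers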

What happened to releases 2.0-2.2?#

These releases correspond to earlier private-preview releases.

This document is relevant for: Inf1, Inf2, Trn1, Trn1n