Neuron 2.x Introduction at Trn1 GA - FAQ
This document is relevant for: Inf1, Inf2, Trn1, Trn1n
Table of contents
- What are the common Neuron packages that are shared between Trn1 and Inf1?
- What are the changes in Neuron packages and installation instructions introduced in this release?
- If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?
- Can a Neuron model binary (NEFF) that was compiled on Trn1, run on Inf1?
- Can a Neuron model binary (NEFF) that was compiled on Inf1, run on Trn1?
- What distributed ML frameworks/libraries are supported by Neuron?
What instances are supported with this release?
This release supports Trn1 and Inf1.
What ML frameworks support Trn1 in this release?
In this release, PyTorch Neuron (torch-neuronx) supports Trn1. Future Neuron releases will add support for additional ML frameworks on Trn1.
What ML frameworks support Inf1 in this release?
In this release, the following ML frameworks support Inf1:
- PyTorch Neuron (torch-neuron) - the same version as released in Neuron 1.19.2
- TensorFlow Neuron (tensorflow-neuron) - the same version as released in Neuron 1.19.2
- MXNet Neuron (mxnet-neuron) - the same version as released in Neuron 1.19.2
Note
Inf1 supports inference only.
What additional Neuron packages support Trn1 only?

Package | Description
---|---
neuronx-cc | Neuron Compiler with XLA frontend
torch-neuronx | Neuron PyTorch with PyTorch XLA backend
aws-neuronx-collectives | Collective Communication Operations library
aws-neuronx-tools | Neuron System Tools
aws-neuronx-runtime-lib | Neuron Runtime
Note
In upcoming releases, aws-neuronx-tools and aws-neuronx-runtime-lib will also support Inf1.
What additional Neuron packages support Inf1 only?

Package | Description
---|---
neuron-cc | Neuron Compiler (inference only)
torch-neuron | Neuron PyTorch (inference only)
tensorflow-neuron | TensorFlow Neuron (inference only)
mxnet-neuron | MXNet Neuron (inference only)
neuronperf | NeuronPerf
What are the changes in Neuron packages and installation instructions introduced in this release?
For full details, please see the Introducing Packaging and installation changes application note.
If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?
Yes. You can deploy the model on Inf1 or on any other platform, such as CPU or GPU, as long as the operators and data types supported by the source platform are also supported by the target platform.
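The portability rule above can be sketched as a simple set check. This is a hypothetical illustration only: the operator and data-type sets below are invented examples, not the real Trn1/Inf1 support matrices.

```python
# Hypothetical sketch of the portability rule: a checkpoint moves between
# platforms only if every operator and data type it uses is supported on
# the target. The support sets here are made up for illustration.

def unsupported_on_target(model_ops, model_dtypes, target_ops, target_dtypes):
    """Return the operators and data types the target platform is missing."""
    return {
        "ops": sorted(set(model_ops) - set(target_ops)),
        "dtypes": sorted(set(model_dtypes) - set(target_dtypes)),
    }

# Fictional example: a model trained on one platform, checked against a target.
model = {"ops": ["matmul", "softmax", "layernorm"], "dtypes": ["fp32", "bf16"]}
target = {"ops": ["matmul", "softmax"], "dtypes": ["fp32", "fp16"]}

missing = unsupported_on_target(
    model["ops"], model["dtypes"], target["ops"], target["dtypes"]
)
# Any entry in `missing` would block deployment on this target.
print(missing)
```

An empty result in both sets means the model can be deployed on the target as-is; otherwise the unsupported operators or data types must be avoided or replaced first.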
Can a Neuron model binary (NEFF) that was compiled on Trn1, run on Inf1?
No, the model must be re-compiled for Inf1. This can be done directly using our CLI or via a framework such as PyTorch.
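The reason a compiled binary is not portable is that a NEFF is built for one target architecture. The sketch below is purely illustrative: the `Neff` class, the target names, and the error handling are invented for this FAQ and do not reflect the real NEFF format or the Neuron Runtime's actual checks.

```python
from dataclasses import dataclass


@dataclass
class Neff:
    # Invented stand-in for a compiled Neuron artifact: it records the
    # target architecture it was compiled for (e.g. "trn1" or "inf1").
    target: str
    payload: bytes


def load_neff(neff: Neff, runtime_target: str) -> bytes:
    # A runtime only accepts artifacts compiled for its own target;
    # for any other target the model must be re-compiled.
    if neff.target != runtime_target:
        raise ValueError(
            f"NEFF compiled for {neff.target!r} cannot run on "
            f"{runtime_target!r}; re-compile the model for {runtime_target!r}."
        )
    return neff.payload


trn1_neff = Neff(target="trn1", payload=b"...")
load_neff(trn1_neff, "trn1")    # loads fine on the matching target
# load_neff(trn1_neff, "inf1")  # would raise ValueError: re-compile for 'inf1'
```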
Can a Neuron model binary (NEFF) that was compiled on Inf1, run on Trn1?
No. The model must be re-compiled for Trn1 using PyTorch.
If I have trained a model on Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on CPU, GPU or other platforms?
Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform.
XLA operators supported by Trn1 can be found here.
If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on Trn1?
Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform.
XLA operators supported by Trn1 can be found here.
What distributed ML frameworks/libraries are supported by Neuron?
PyTorch Neuron provides support for distributed training. See the Megatron-LM GPT Pretraining Tutorial for an example.
What happened to releases 2.0-2.2?
These releases correspond to prior private-preview releases.