This document is relevant for: Inf1, Trn1

PyTorch Neuron

PyTorch Neuron unlocks high-performance and cost-effective deep learning acceleration on AWS Trainium-based and Inferentia-based Amazon EC2 instances.

The PyTorch Neuron plugin architecture enables native PyTorch models to be accelerated on Neuron devices, so you can keep your existing framework application and get started with minimal code changes.
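
For example, with torch-neuronx the typical inference workflow is to compile (trace) the model once and then use the result like any TorchScript module. The following is a minimal sketch; the model definition, input shape, and file name are placeholders chosen for illustration:

    import torch
    import torch.nn as nn
    import torch_neuronx

    # A small example model; any torch.nn.Module can be compiled the same way
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    model.eval()

    # Example input used during compilation (shape chosen for illustration)
    example_input = torch.rand(1, 128)

    # Compile the model into a Neuron-optimized TorchScript module;
    # the surrounding application code does not need to change
    model_neuron = torch_neuronx.trace(model, example_input)

    # Save the compiled model; it can be reloaded later with torch.jit.load
    torch.jit.save(model_neuron, 'model_neuron.pt')

On Inf1 instances the equivalent step uses the torch-neuron package and its trace API; see the API Reference Guides listed below for the exact interfaces.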

Setup Guide for Trn1

Important

Amazon EC2 Auto Scaling groups are currently not supported on Trn1; support will be added soon.

To launch a Trn1 cluster, you can use AWS ParallelCluster; see the example for details.

Note

The Neuron driver installed on the Deep Learning AMI (DLAMI) with Conda does not support Trn1.

If you want to use the DLAMI with Conda, make sure to uninstall aws-neuron-dkms and install aws-neuronx-dkms before using Neuron on that AMI.

Tutorials (torch-neuronx)
Additional Examples (torch-neuronx)
API Reference Guide (torch-neuronx)
Developer Guide (torch-neuronx)
Setup Guide for Inf1
Tutorials (torch-neuron)
Additional Examples (torch-neuron)
API Reference Guide (torch-neuron)
Developer Guide (torch-neuron)

This document is relevant for: Inf1, Trn1