This document is relevant for: Inf1, Inf2, Trn1, Trn1n

Previous Releases Notes (Neuron 2.x)#

Neuron 2.8.0 (02/24/2023)#

What’s New#

This release adds support for EC2 Inf2 instances, introduces initial inference support with TensorFlow 2.x Neuron (tensorflow-neuronx) on Trn1 and Inf2, and introduces minor enhancements and bug fixes.

This release introduces the following:

Support for EC2 Inf2 instances

  • Inference support for Inf2 instances in PyTorch Neuron (torch-neuronx)

  • Inference support for Inf2 instances in TensorFlow 2.x Neuron (tensorflow-neuronx)

  • Overall documentation update to include Inf2 instances

TensorFlow 2.x Neuron (tensorflow-neuronx) support

  • This release introduces initial inference support with TensorFlow 2.x Neuron (tensorflow-neuronx) on Trn1 and Inf2

New Neuron GitHub samples

Minor enhancements and bug fixes.

Release included packages

For more detailed release notes of the new features and resolved issues, see Neuron Components Release Notes.

Neuron 2.7.0 (02/08/2023)#

What’s New#

This release introduces new capabilities and libraries, as well as features and tools that improve usability. This release introduces the following:

PyTorch 1.13

Support for PyTorch 1.13 in PyTorch Neuron (torch-neuronx). For resources, see PyTorch Neuron.

PyTorch DistributedDataParallel (DDP) API

Support for the PyTorch DistributedDataParallel (DDP) API in PyTorch Neuron (torch-neuronx). For resources on how to use the PyTorch DDP API with Neuron, please check the Distributed Data Parallel Training Tutorial.

Inference support in torch-neuronx

For more details, please visit the pytorch-neuronx-main page. You can also try the Neuron inference samples at https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx in the aws-neuron-samples GitHub repo.

Neuron Custom C++ Operators[Experimental]

Initial support for Neuron Custom C++ Operators [Experimental]. With Neuron Custom C++ Operators (“CustomOps”) you can now write CustomOps that run on NeuronCore-v2. For more resources, please check the Neuron Custom C++ Operators [Experimental] section.

transformers-neuronx [Experimental]

transformers-neuronx is a new library that enables LLM inference. It contains models that are checkpoint-compatible with HuggingFace Transformers and currently supports transformer decoder models such as GPT2, GPT-J, and OPT. Please check the aws-neuron-samples repository.

Neuron sysfs filesystem

The Neuron sysfs filesystem exposes Neuron devices under /sys/devices/virtual/neuron_device, providing visibility into the Neuron driver and runtime at the system level. With simple operations such as reading or writing a sysfs file, you can retrieve information such as Neuron Runtime status, memory usage, and driver info. For resources about the Neuron sysfs filesystem, visit the Neuron Sysfs User Guide.
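As an illustration, here is a minimal Python sketch of reading such a sysfs tree. This is an assumption-laden sketch, not part of the Neuron tooling: the exact metric file names under neuron_device vary by driver version, so the helper simply collects every readable leaf file.

```python
import os

def read_sysfs_tree(base_dir="/sys/devices/virtual/neuron_device"):
    """Walk a sysfs-style directory tree and return {relative_path: value}
    for every readable leaf file. The default path matches the Neuron
    sysfs root; the files found under it depend on the driver version."""
    metrics = {}
    for root, _dirs, files in os.walk(base_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path) as f:
                    value = f.read().strip()
            except OSError:
                # Some sysfs nodes are write-only or access-restricted.
                continue
            metrics[os.path.relpath(path, base_dir)] = value
    return metrics
```

Pointing the helper at the real neuron_device root on a Neuron instance yields a flat snapshot of every readable counter; consult the Neuron Sysfs User Guide for what each file means.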

TFLOPS support in Neuron System Tools

Neuron System Tools now also report a model's actual TFLOPS rate in both neuron-monitor and neuron-top. More details can be found in the Neuron Tools documentation.
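Since neuron-monitor emits its reports as JSON on stdout, one way to consume the new counter is to scan a report for a numeric field. The sketch below is hypothetical: the key name "flops" and the report layout are placeholders, so check the neuron-monitor user guide for the actual schema before relying on it.

```python
import json

def extract_metric(report_line, key="flops"):
    """Collect every numeric value stored under `key` anywhere in one
    JSON report line. The key name 'flops' is a placeholder; the real
    neuron-monitor report schema is documented in the Neuron Tools docs."""
    found = []

    def walk(node):
        if isinstance(node, dict):
            for k, v in node.items():
                if k == key and isinstance(v, (int, float)):
                    found.append(v)
                else:
                    walk(v)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(json.loads(report_line))
    return found
```

For example, piping one neuron-monitor report line into this helper would return the per-core values of whichever counter name you pass as `key`.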

New sample scripts for training

This release adds multiple new sample scripts for training models with torch-neuronx. Please check the aws-neuron-samples repository.

New sample scripts for inference

This release adds multiple new sample scripts for deploying models with torch-neuronx. Please check the aws-neuron-samples repository.

Neuron GitHub samples repository for Amazon EKS

A new AWS Neuron GitHub samples repository for Amazon EKS is now available. Please check the aws-neuron-samples repository.

For more detailed release notes of the new features and resolved issues, see Neuron Components Release Notes.

Neuron 2.6.0 (12/12/2022)#

This release introduces support for PyTorch 1.12, and introduces PyTorch Neuron (torch-neuronx) profiling through the Neuron Plugin for TensorBoard. PyTorch Neuron (torch-neuronx) users can now profile their models through the following TensorBoard views:

  • Operator Framework View

  • Operator HLO View

  • Operator Trace View

This release introduces the support of LAMB optimizer for FP32 mode, and adds support for capturing snapshots of inputs, outputs and graph HLO for debugging.

In addition, this release introduces the support of new operators and resolves issues that improve stability for Trn1 customers.

For more detailed release notes of the new features and resolved issues, see Neuron Components Release Notes.

Neuron 2.5.0 (11/23/2022)#

Neuron 2.5.0 is a major release that introduces new features and resolves issues that improve stability for Inf1 customers.

Component

New in this release

PyTorch Neuron (torch-neuron)

TensorFlow Neuron (tensorflow-neuron)

  • tf-neuron-auto-multicore tool to enable automatic data parallel on multiple NeuronCores.

  • Experimental support for tracing models larger than 2 GB using the extract-weights flag (TF2.x only), see TensorFlow 2.x (tensorflow-neuron) Tracing API

  • tfn.auto_multicore Python API to enable automatic data parallel (TF2.x only)

This Neuron release is the last to include torch-neuron versions 1.7 and 1.8, and tensorflow-neuron versions 2.5 and 2.6.

In addition, this release introduces changes to the Neuron packaging and installation instructions for Inf1 customers, see Introducing Neuron packaging and installation changes for Inf1 customers for more information.

For more detailed release notes of the new features and resolved issues, see Neuron Components Release Notes.

Neuron 2.4.0 (10/27/2022)#

This release introduces new features and resolves issues that improve stability. It adds a “memory utilization breakdown” feature to both the Neuron Monitor and Neuron Top system tools, introduces support for the “NeuronCore Based Scheduling” capability in the Neuron Kubernetes Scheduler, and adds support for new operators in the Neuron Compiler and PyTorch Neuron. This release also adds eight (8) new samples of model fine-tuning using PyTorch Neuron. The new samples can be found in the AWS Neuron Samples GitHub repository.

Neuron 2.3.0 (10/10/2022)#

This Neuron 2.3.0 release extends Neuron 1.x and adds support for the new AWS Trainium-powered Amazon EC2 Trn1 instances. With this release, you can now run deep learning training workloads on Trn1 instances to save up to 50% in training costs over equivalent GPU-based EC2 instances, while getting the highest training performance in the AWS cloud for popular NLP models.

What’s New

Tested workloads and known issues

  • rn2.3.0_tested

  • rn2.3.0-known-issues

  • Neural-networks training support

  • PyTorch Neuron (torch-neuronx)

  • Neuron Runtime, Drivers and Networking Components

  • Neuron Tools

  • Developer Flows

The following workloads were tested in this release:

  • Distributed data-parallel pre-training of Hugging Face BERT model on single Trn1.32xl instance (32 NeuronCores).

  • Distributed data-parallel pre-training of Hugging Face BERT model on multiple Trn1.32xl instances.

  • Hugging Face BERT MRPC task fine-tuning on a single NeuronCore or multiple NeuronCores (data-parallel).

  • Megatron-LM GPT3 (6.7B parameters) pre-training on single Trn1.32xl instance.

  • Megatron-LM GPT3 (6.7B parameters) pre-training on multiple Trn1.32xl instances.

  • Multi-Layer Perceptron (MLP) model training on a single NeuronCore or multiple NeuronCores (data-parallel).

  • For maximum training performance, please set the environment variable XLA_USE_BF16=1 to enable full BF16 and stochastic rounding.
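The performance note above amounts to a one-line environment setting before launching training; in this sketch, train.py is a placeholder for your own torch-neuronx training script, not a file shipped with Neuron.

```shell
# Enable full BF16 casting plus stochastic rounding for XLA-based training,
# per the performance note above.
export XLA_USE_BF16=1

# train.py is a placeholder for your torch-neuronx training script.
python train.py
```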
