This document is relevant for: Inf1

Neuron Hardware

AWS Neuron hardware consists of custom-designed accelerators optimized for deep learning workloads. This section covers the architecture and capabilities of the AWS Inferentia and Trainium chips, their NeuronCore processing units, and the EC2 instances that host them.

AWS Inferentia

First-generation inference accelerator chip

AWS Inferentia2

Second-generation inference accelerator chip

AWS Trainium

First-generation training accelerator chip

AWS Trainium2

Second-generation training accelerator chip

NeuronCore v1

Processing unit architecture for Inferentia

NeuronCore v2

Processing unit architecture for Inferentia2 and Trainium

NeuronCore v3

Processing unit architecture for Trainium2
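The chip-to-NeuronCore pairings listed above can be captured in a small lookup table. This is an illustrative sketch only; the dictionary and function names are this example's own, not part of the Neuron SDK:

```python
# NeuronCore version per Neuron chip, as listed in this section.
NEURONCORE_VERSION = {
    "Inferentia": "v1",
    "Inferentia2": "v2",
    "Trainium": "v2",
    "Trainium2": "v3",
}

def neuroncore_version(chip: str) -> str:
    """Return the NeuronCore version for a given Neuron chip name."""
    try:
        return NEURONCORE_VERSION[chip]
    except KeyError:
        raise ValueError(f"Unknown Neuron chip: {chip}") from None
```

For example, `neuroncore_version("Trainium")` returns `"v2"`, reflecting that Inferentia2 and Trainium share the same NeuronCore v2 architecture.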

NeuronCores Architecture

Overview of NeuronCore processing units

Neuron Devices

Device management and configuration
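As a quick sanity check that Neuron devices are present on a host, they are exposed as `/dev/neuron*` device nodes on Inf/Trn instances. A minimal sketch, assuming that device-node naming; on hosts without Neuron hardware it simply returns an empty list:

```python
import glob

def list_neuron_devices() -> list:
    """Return the Neuron device nodes visible on this host.

    On Neuron-equipped instances these appear as /dev/neuron0,
    /dev/neuron1, ...; elsewhere the result is an empty list.
    """
    return sorted(glob.glob("/dev/neuron*"))
```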

Neuron Instances

EC2 instance types with Neuron accelerators

Inf1 Architecture

Inf1 instance architecture and specifications

Inf2 Architecture

Inf2 instance architecture and specifications

Trn1 Architecture

Trn1 instance architecture and specifications

Trn2 Architecture

Trn2 instance architecture and specifications
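The instance-family-to-chip pairings covered in the sections above can be sketched as a lookup from an EC2 instance type string to its accelerator. The parsing helper here is illustrative, not part of any AWS SDK, and only covers the base families named in this section:

```python
# Accelerator chip per EC2 instance family, per the sections above.
CHIP_BY_FAMILY = {
    "inf1": "AWS Inferentia",
    "inf2": "AWS Inferentia2",
    "trn1": "AWS Trainium",
    "trn2": "AWS Trainium2",
}

def accelerator_for(instance_type: str):
    """Return the Neuron chip for an instance type like 'inf2.xlarge',
    or None when the family is not a Neuron family listed above."""
    family = instance_type.lower().split(".", 1)[0]
    return CHIP_BY_FAMILY.get(family)
```

For example, `accelerator_for("trn2.48xlarge")` returns `"AWS Trainium2"`, while a non-Neuron type such as `"m5.large"` maps to `None`.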
