This document is relevant for: Inf2, Trn1, Trn2
NxD Core for Inference#
NeuronX Distributed Core (NxD Core) is a package that supports distributed inference mechanisms on Neuron devices. It provides XLA-friendly implementations of popular distributed inference techniques. As models scale, they no longer fit on a single device, so model sharding techniques are required to partition a model across multiple devices.
The library currently supports the Tensor Parallelism sharding technique, with support for other distributed techniques to be added in the future. A sketch of sharding a module with NxD Core's parallel layers follows below.
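As an illustration, the following sketch shows how a feed-forward block can be sharded with the parallel linear layers that NxD Core provides. It assumes a Neuron environment in which torch.distributed and NxD's model-parallel state have already been initialized (for example, via parallel_state.initialize_model_parallel); the ParallelMLP module and its dimensions are illustrative placeholders, not part of the library.

import torch
import torch.nn as nn
from neuronx_distributed.parallel_layers import ColumnParallelLinear, RowParallelLinear


class ParallelMLP(nn.Module):
    """Illustrative feed-forward block sharded across tensor-parallel ranks."""

    def __init__(self, hidden_size=1024, intermediate_size=4096):
        super().__init__()
        # Column-parallel: the weight matrix is split along its output
        # dimension, so each rank computes a slice of the activations.
        # gather_output=False keeps the activations sharded for the next layer.
        self.up_proj = ColumnParallelLinear(
            hidden_size, intermediate_size, gather_output=False
        )
        # Row-parallel: the weight matrix is split along its input dimension.
        # input_is_parallel=True tells the layer its input is already sharded,
        # and the partial results are all-reduced across ranks.
        self.down_proj = RowParallelLinear(
            intermediate_size, hidden_size, input_is_parallel=True
        )

    def forward(self, x):
        return self.down_proj(torch.relu(self.up_proj(x)))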
About NeuronX-Distributed (NxD) Inference#
NeuronX Distributed Core (NxD Core) provides fundamental building blocks that enable you to run advanced inference workloads on AWS Inferentia and Trainium instances. These building blocks include parallel linear layers that enable distributed inference, a model builder that compiles PyTorch modules into Neuron models, and more.
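A minimal sketch of the compile-and-run workflow follows. This is a hedged example rather than the canonical recipe: it reuses the illustrative ParallelMLP module from the sketch above, and the "parallel_mlp_tp2" directory name is a placeholder. neuronx_distributed.trace.parallel_model_trace launches one worker per tensor-parallel rank and compiles the traced module for Neuron; exact argument names can vary between releases, so consult the API Reference Guide.

import torch
import neuronx_distributed


def load_model():
    # Called inside each tensor-parallel worker, so the parallel layers are
    # created after that worker's model-parallel state is initialized.
    return ParallelMLP()


sample_input = torch.rand(1, 1024)

# Trace and compile the module for Neuron with a tensor-parallel degree of 2.
traced = neuronx_distributed.trace.parallel_model_trace(
    load_model, sample_input, tp_degree=2
)

# Persist the sharded artifacts and load them back for inference.
neuronx_distributed.trace.parallel_model_save(traced, "parallel_mlp_tp2")
restored = neuronx_distributed.trace.parallel_model_load("parallel_mlp_tp2")
output = restored(sample_input)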
Building on NxD Core, Neuron offers NxD Inference, a library that provides optimized model and module implementations on top of these building blocks. For more information about NxD Inference, see NxD Inference Overview.
For examples of how to build directly on NxD Core, see the following:
T5 3B inference tutorial [html] [notebook]
NxD Core for Inference Documentation#
Setup
Install PyTorch Neuron on Trn1 to create a PyTorch environment. It is recommended to work in a Python virtual environment (such as venv) to avoid package installation issues.
You can install the neuronx-distributed package using the following command:
python -m pip install neuronx_distributed --extra-index-url https://pip.repos.neuron.amazonaws.com
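After installation, a quick smoke test (an assumed check, not an official verification step) is to confirm that the package and its parallel layers import cleanly:

# Assumed smoke test: verifies the package installed and its core
# tensor-parallel building blocks are importable.
import neuronx_distributed
from neuronx_distributed.parallel_layers import ColumnParallelLinear, RowParallelLinear

print("neuronx_distributed imported successfully")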
App Notes
API Reference Guide
Developer Guide
Tutorials
This document is relevant for: Inf2, Trn1, Trn2