This document is relevant for: Inf1, Inf2, Trn1, Trn2
Introducing PyTorch 2.6 Support#
What are we introducing?#
Starting with the Neuron 2.23 release, customers can upgrade to PyTorch NeuronX (torch-neuronx) supporting PyTorch 2.6.
PyTorch Neuron (torch-neuronx) Setup is updated to include installation instructions for PyTorch NeuronX 2.6 for Amazon Linux 2023 and Ubuntu 22. Note that PyTorch NeuronX 2.6 is supported on Python 3.9, 3.10, and 3.11.
Please review the migration guide for possible changes to training scripts. No code changes are required for inference scripts.
How is PyTorch NeuronX 2.6 different compared to PyTorch NeuronX 2.5?#
PyTorch NeuronX 2.6 uses Torch-XLA 2.6 which has improved support for Automatic Mixed Precision and buffer aliasing. Additionally:
Reintroduced XLA_USE_32BIT_LONG to give customers the flexibility to use INT32 for their workloads. This flag was removed in v2.5.
Added xm.xla_device_kind() to return the XLA device kind string ('NC_v2' for Trainium1; 'NC_v3' and 'NC_v3d' for Trainium2). See Logical NeuronCore configuration for more info.
See Torch-XLA 2.6 release for a full list.
See Migrate your application to PyTorch 2.6 for changes needed to use PyTorch NeuronX 2.6.
Note
GSPMD and Torch Dynamo (torch.compile) support in Neuron will be available in a future release.
How can I install PyTorch NeuronX 2.6?#
To install PyTorch NeuronX 2.6 please follow the PyTorch Neuron (torch-neuronx) Setup guides for Amazon Linux 2023 and Ubuntu 22 AMI. Please also refer to the Neuron multi-framework DLAMI setup guide for Ubuntu 22 with a pre-installed virtual environment for PyTorch NeuronX 2.6 that you can use to easily get started. PyTorch NeuronX 2.6 can be installed using the following:
python -m pip install --upgrade neuronx-cc==2.* torch-neuronx==2.6.* torchvision
Note
PyTorch NeuronX 2.6 is currently available for Python 3.9, 3.10, 3.11.
Migrate your application to PyTorch 2.6#
Please make sure you have first installed PyTorch NeuronX 2.6 as described above in the installation guide.
Migrating training scripts#
To migrate the training scripts from PyTorch NeuronX 2.5 to PyTorch NeuronX 2.6, implement the following changes:
Note
xm below refers to torch_xla.core.xla_model and xr refers to torch_xla.runtime
The environment variables XLA_DOWNCAST_BF16 and XLA_USE_BF16 are deprecated (warning when used) and will be removed in an upcoming release. Please switch to automatic mixed precision or use model.to(torch.bfloat16) to convert the model to BF16 format. (see Migration From XLA_USE_BF16/XLA_DOWNCAST_BF16)
The functions xm.xrt_world_size(), xm.get_ordinal(), and xm.get_local_ordinal() are deprecated (warning when used). Please switch to xr.world_size(), xr.global_ordinal(), and xr.local_ordinal() respectively as replacements.
The default value of the torch.load parameter weights_only has changed from False to True. Leaving weights_only as True can cause issues with pickling.
If using xmp.spawn, the nprocs argument has been limited to 1 or None since v2.1. Previously, passing a value > 1 would result in a warning. In torch-xla 2.6, passing a value > 1 results in an error with an actionable message to use NEURON_NUM_DEVICES to set the number of NeuronCores to use.
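As a quick reference, the API renames above can be captured in a small lookup. This is an illustrative sketch only; the `replacement_for` helper and the `RENAMES` table are hypothetical, not part of torch-neuronx or torch-xla.

```python
# Deprecated torch_xla calls and their replacements, as listed above.
# xm = torch_xla.core.xla_model, xr = torch_xla.runtime.
RENAMES = {
    "xm.xrt_world_size": "xr.world_size",
    "xm.get_ordinal": "xr.global_ordinal",
    "xm.get_local_ordinal": "xr.local_ordinal",
}

def replacement_for(deprecated: str) -> str:
    """Return the replacement API name for a deprecated call, if known."""
    return RENAMES.get(deprecated, deprecated)
```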
See v2.5 migration guide for additional changes needed if you are migrating from PyTorch NeuronX 2.1.
Migrating inference scripts#
There are no code changes required in the inference scripts.
Troubleshooting and Known Issues#
Tensor split on second dimension of 2D array not working#
Currently, when using the tensor split operation on the second dimension of a 2D array, the resulting tensors don't have the expected data (pytorch/xla#8640). The work-around is to set XLA_DISABLE_FUNCTIONALIZATION=0. Another work-around is to use torch.tensor_split.
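For example, the first work-around can be applied per invocation, assuming a hypothetical training entry point named train.py:

```shell
# Work around pytorch/xla#8640 by setting XLA_DISABLE_FUNCTIONALIZATION=0
# for this run only (train.py is a placeholder for your own script).
XLA_DISABLE_FUNCTIONALIZATION=0 python train.py
```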
Lower BERT pretraining performance with torch-neuronx 2.6 compared to torch-neuronx 2.5#
Currently, BERT pretraining performance is ~10% lower with torch-neuronx 2.6 compared to torch-neuronx 2.5. This is due to a known regression in torch-xla (pytorch/xla#9037) and can affect other models with high graph tracing overhead. To work around this issue, please build the r2.6_aws_neuron branch of torch-xla as follows (see Install with support for C++11 ABI for the C++11 ABI version):
# Setup build env (make sure you are in a python virtual env). Replace "apt" with "yum" on AL2023.
sudo apt install cmake
pip install yapf==0.30.0
wget https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-amd64
sudo cp bazelisk-linux-amd64 /usr/local/bin/bazel
sudo chmod +x /usr/local/bin/bazel
# Clone repos
git clone --recursive https://github.com/pytorch/pytorch --branch v2.6.0
cd pytorch/
git clone --recursive https://github.com/pytorch/xla.git --branch r2.6_aws_neuron
_GLIBCXX_USE_CXX11_ABI=0 python setup.py bdist_wheel
# pip wheel will be present in ./dist
cd xla/
CXX_ABI=0 python setup.py bdist_wheel
# pip wheel will be present in ./dist and can be installed instead of the torch-xla released in pypi.org
Lower BERT pretraining performance when switching to model.to(torch.bfloat16)#
Currently, BERT pretraining performance is ~11% lower when switching to model.to(torch.bfloat16) as part of the migration away from the deprecated environment variable XLA_DOWNCAST_BF16, due to pytorch/xla#8545. As a work-around to recover the performance, you can set XLA_DOWNCAST_BF16=1, which still works in torch-neuronx 2.5 and 2.6, although there will be deprecation warnings (as noted below).
Warning “XLA_DOWNCAST_BF16 will be deprecated after the 2.6 release, please downcast your model directly”#
Environment variables XLA_DOWNCAST_BF16 and XLA_USE_BF16 are deprecated (warning when used). Please switch to automatic mixed precision or use model.to(torch.bfloat16) to cast the model to BF16. (see Migration From XLA_USE_BF16/XLA_DOWNCAST_BF16)
AttributeError: <module ‘torch_xla.core.xla_model’ … does not have the attribute ‘xrt_world_size’#
This error occurs because torch_xla.core.xla_model.xrt_world_size() is removed in torch-xla version 2.7. Please switch to torch_xla.runtime.world_size() instead.
AttributeError: <module ‘torch_xla.core.xla_model’ … does not have the attribute ‘get_ordinal’#
This error occurs because torch_xla.core.xla_model.get_ordinal() is removed in torch-xla version 2.7. Please switch to torch_xla.runtime.global_ordinal() instead.
AttributeError: <module ‘torch_xla.core.xla_model’ … does not have the attribute ‘get_local_ordinal’#
This error occurs because torch_xla.core.xla_model.get_local_ordinal() is removed in torch-xla version 2.7. Please switch to torch_xla.runtime.local_ordinal() instead.
Socket Error: Socket failed to bind#
In PyTorch 2.6, both torchrun and init_process_group need an available port to bind to. By default, both are assigned unused ports. If you set the MASTER_PORT environment variable to a port that is already in use, this error may occur.
[W socket.cpp:426] [c10d] The server socket has failed to bind to [::]:2.600 (errno: 98 - Address already in use).
[W socket.cpp:426] [c10d] The server socket has failed to bind to ?UNKNOWN? (errno: 98 - Address already in use).
[E socket.cpp:462] [c10d] The server socket has failed to listen on any local network address.
RuntimeError: The server socket has failed to listen on any local network address.
The server socket has failed to bind to ?UNKNOWN? (errno: 98 - Address already in use).
To resolve the issue, if you are setting MASTER_PORT, ensure that the port you set it to is not used anywhere else in your scripts. Otherwise, you can leave MASTER_PORT unset, and torchrun will set the default port for you.
AttributeError: module 'torch' has no attribute 'xla' Failure#
In PyTorch 2.6, training scripts might fail during activation checkpointing with the error shown below.
AttributeError: module 'torch' has no attribute 'xla'
The solution is to use torch_xla.utils.checkpoint.checkpoint instead of torch.utils.checkpoint.checkpoint as the checkpoint function when wrapping PyTorch modules for activation checkpointing. Refer to the pytorch/xla discussion regarding this issue.
Also set use_reentrant=True when calling the torch_xla checkpoint function. Failure to do so will lead to an "XLA currently does not support use_reentrant==False" error.
For more details on checkpointing, refer to the documentation.
Error Attempted to access the data pointer on an invalid python storage when using HF Trainer API#
While using the HuggingFace Transformers Trainer API to train (e.g., the HuggingFace Trainer API fine-tuning tutorial), you may see the error "Attempted to access the data pointer on an invalid python storage". This is a known issue that has been fixed in version 4.37.3 of HuggingFace Transformers.
FileNotFoundError: [Errno 2] No such file or directory: 'libneuronpjrt-path' Failure#
In PyTorch 2.6, users might face the error shown below due to incompatible libneuronxla and torch-neuronx versions being installed.
FileNotFoundError: [Errno 2] No such file or directory: 'libneuronpjrt-path'
Check that the version of libneuronxla that supports PyTorch NeuronX 2.6 is 2.2.*. If not, uninstall libneuronxla using pip uninstall libneuronxla and then reinstall the packages following the installation guide.
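A minimal sketch of that version check, assuming the 2.2.* requirement stated above; both helper names are hypothetical, not part of any Neuron package:

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional

def is_compatible_libneuronxla(installed: str) -> bool:
    # PyTorch NeuronX 2.6 requires libneuronxla 2.2.* per the note above.
    return installed.split(".")[:2] == ["2", "2"]

def installed_libneuronxla() -> Optional[str]:
    # Returns the installed version string, or None if the package is absent.
    try:
        return version("libneuronxla")
    except PackageNotFoundError:
        return None
```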
Input dimension should be either 1 or equal to the output dimension it is broadcasting into or IndexError: index out of range error during Neuron Parallel Compile#
When running Neuron Parallel Compile with the HF Trainer API, you may see the error Status: INVALID_ARGUMENT: Input dimension should be either 1 or equal to the output dimension it is broadcasting into or IndexError: index out of range in Accelerator's pad_across_processes function. This is due to a data-dependent operation in the evaluation metrics computation. Data-dependent operations result in undefined behavior with Neuron Parallel Compile trial execution (which executes empty graphs with zero outputs). To work around this error, please disable compute_metrics when NEURON_EXTRACT_GRAPHS_ONLY is set to 1:
compute_metrics=None if os.environ.get("NEURON_EXTRACT_GRAPHS_ONLY") else compute_metrics
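The same guard can be factored into a small helper before constructing the Trainer. This is a sketch; select_compute_metrics is a hypothetical name, not a Transformers API. Like the one-liner above, it checks only whether the variable is set.

```python
import os

def select_compute_metrics(compute_metrics):
    # During Neuron Parallel Compile trial execution, graphs run with zero
    # outputs, so data-dependent metric computation must be skipped.
    if os.environ.get("NEURON_EXTRACT_GRAPHS_ONLY"):
        return None
    return compute_metrics
```

The returned value can then be passed as the compute_metrics argument to Trainer.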
Compiler assertion error when running Stable Diffusion training#
Currently, with PyTorch 2.6 (torch-neuronx), we are seeing the following compiler assertion error with Stable Diffusion training when gradient accumulation is enabled. This will be fixed in an upcoming release. For now, if you would like to run Stable Diffusion training with Neuron SDK release 2.23, please disable gradient accumulation in torch-neuronx 2.6.
ERROR 222163 [NeuronAssert]: Assertion failure in usr/lib/python3.9/concurrent/futures/process.py at line 239 with exception:
too many partition dims! {{0,+,960}[10],+,10560}[10]
Frequently Asked Questions (FAQ)#
Do I need to recompile my models with PyTorch 2.6?#
Yes.
Do I need to update my scripts for PyTorch 2.6?#
Please see the migration guide.
What environment variables will be changed with PyTorch NeuronX 2.6?#
The environment variables XLA_DOWNCAST_BF16 and XLA_USE_BF16 are deprecated (warning when used). Please switch to automatic mixed precision or use model.to(torch.bfloat16) to cast the model to BF16. (see Migration From XLA_USE_BF16/XLA_DOWNCAST_BF16)
What features will be missing with PyTorch NeuronX 2.6?#
PyTorch NeuronX 2.6 has all of the supported features in PyTorch NeuronX 2.5, with known issues listed above, and unsupported features as listed in PyTorch Neuron (torch-neuronx) release notes.
Can I use Neuron Distributed and Transformers Neuron libraries with PyTorch NeuronX 2.6?#
Yes, the NeuronX Distributed, Transformers NeuronX, and AWS Neuron Reference for NeMo Megatron libraries will work with PyTorch NeuronX 2.6.
Can I still use PyTorch 2.5 version?#
PyTorch 2.5 is supported for releases 2.21/2.22/2.23 and will reach end-of-life in a future release. Additionally, the CVE CVE-2025-32434 affects PyTorch version 2.5. We recommend upgrading to the new version of Torch-NeuronX by following PyTorch Neuron (torch-neuronx) Setup.
Can I still use PyTorch 2.1 version?#
PyTorch 2.1 is supported for release 2.21 and has reached end-of-life in release 2.22. Additionally, the CVEs CVE-2024-31583 and CVE-2024-31580 affect PyTorch versions 2.1 and earlier. We recommend upgrading to the new version of Torch-NeuronX by following PyTorch Neuron (torch-neuronx) Setup.