This document is relevant for: Inf1

Run inference in a PyTorch Neuron container#


Overview#

This tutorial demonstrates how to run a PyTorch Deep Learning Container (DLC) on an Inferentia instance.

By the end of this tutorial you will be able to run inference inside the container.

You will use an inf1.2xlarge instance to test your Docker configuration for Inferentia.

To list the Neuron devices available on your instance, run the command ls /dev/neuron*.
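The same device check can be done from Python with only the standard library; this is a minimal sketch, assuming a Linux host where the Neuron driver exposes device nodes as /dev/neuron0, /dev/neuron1, and so on:

```python
import glob

# List the Neuron device nodes exposed by the driver (e.g. /dev/neuron0).
# An inf1.2xlarge has a single Inferentia device, so on that instance type
# this typically reports one entry.
neuron_devices = sorted(glob.glob("/dev/neuron*"))
print(f"Found {len(neuron_devices)} Neuron device(s): {neuron_devices}")
```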

Setup Environment#

  1. Launch an Inf1 instance.

  2. Set up the Docker environment according to Tutorial Docker environment setup.

  3. A sample Dockerfile for torch-neuron can be found here: DLC sample Dockerfile for Application Container. This Dockerfile needs the TorchServe entrypoint found here: Torchserve Example, and the TorchServe config.properties found here: Torchserve config.properties example.
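For orientation, a TorchServe config.properties for this setup typically looks like the sketch below. Treat the linked config.properties example as the source of truth; the ports must match the Dockerfile and entrypoint, and the model-store path here is an illustrative assumption:

```properties
# Illustrative sketch only; the linked config.properties example is authoritative.
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
# Directory inside the container where .mar model archives live (assumed path)
model_store=/home/model-server/model-store
load_models=ALL
```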

With the files in a directory, build the image with the following command:

docker build . -f Dockerfile.pt -t neuron-container:pytorch

Run the following command to start the container:

docker run -itd --name pt-cont -p 80:8080 -p 8081:8081 --device=/dev/neuron0 neuron-container:pytorch /usr/local/bin/entrypoint.sh -m 'pytorch-resnet-neuron=https://aws-dlc-sample-models.s3.amazonaws.com/pytorch/Resnet50-neuron.mar' -t /home/model-server/config.properties
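Once the container is running and the model has loaded, you can exercise TorchServe's inference API (POST /predictions/&lt;model_name&gt;) through host port 80, which the docker run command above maps to the container's port 8080. The sketch below builds such a request with the Python standard library; the image path is a placeholder, and the model name comes from the -m flag above:

```python
import urllib.request

# TorchServe inference endpoint: host port 80 is mapped to container port 8080
# by `docker run -p 80:8080 ...`, and the model was registered as
# "pytorch-resnet-neuron" via the -m flag.
URL = "http://localhost:80/predictions/pytorch-resnet-neuron"

def build_request(image_path: str) -> urllib.request.Request:
    """Build a POST request carrying raw image bytes for TorchServe."""
    with open(image_path, "rb") as f:
        payload = f.read()
    return urllib.request.Request(
        URL,
        data=payload,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

# To actually run inference (requires the container to be serving):
# response = urllib.request.urlopen(build_request("kitten.jpg"))
# print(response.read().decode())
```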
