This document is relevant for: Inf1
Deploy Neuron Container on EC2#
Description#
You can use the Neuron version of the AWS Deep Learning Containers to run inference on Inf1 instances. In this developer flow, you provision an EC2 Inf1 instance using a Deep Learning AMI (DLAMI), pull the container image with the Neuron version of the desired framework, and run the container as a server for the already compiled model. This developer flow assumes the model has already been compiled through a compilation developer flow.
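The flow described above can be sketched with the following commands. This is an illustrative outline only: the exact image URI, tag, region, and exposed port depend on the framework and are listed in the AWS Deep Learning Containers release notes; the repository name and tag below are placeholders.

```shell
# Authenticate to the AWS Deep Learning Containers registry
# (region and registry account shown are examples; use your region's values)
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com

# Pull the Neuron version of the desired framework image (tag is a placeholder)
docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference-neuron:<tag>

# Run the container as a server for the already compiled model,
# passing the Inferentia device through to the container
docker run -d --device=/dev/neuron0 -p 8500:8500 \
    -v /path/to/compiled_model:/models/model \
    763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference-neuron:<tag>
```

The `--device=/dev/neuron0` flag makes the first Inferentia device visible inside the container; instances with multiple NeuronDevices expose additional `/dev/neuronN` entries.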
Setup Environment#
- Launch an Inf1 Instance
Please follow the instructions at launch an Amazon EC2 Instance to launch an Inf1 instance. When choosing the instance type in the EC2 console, make sure to select an Inf1 instance type. For more information about Inf1 instance sizes and pricing, see the Inf1 web page.
Select your Amazon Machine Image (AMI) of choice. Note that Neuron supports the Ubuntu 18 and Amazon Linux 2 AMIs; you can also choose the Ubuntu 18 or Amazon Linux 2 Deep Learning AMI (DLAMI).
After launching the instance, follow the instructions in Connect to your instance to connect to the instance.
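Connecting typically uses SSH with the key pair chosen at launch. The key file name and public DNS below are placeholders; substitute your own values.

```shell
# The default user is "ubuntu" on Ubuntu AMIs and "ec2-user" on Amazon Linux 2
ssh -i my-key-pair.pem ubuntu@<instance-public-dns>
```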
Once you have set up your EC2 environment according to the Tutorial Docker environment setup, you can build and run a Neuron container using the Tutorial How to Build and Run a Neuron Container section above.
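As a minimal sketch of the run step (the image name `neuron-container` is a placeholder for whatever you built in the tutorial above), the key point is passing the Inferentia device through to the container:

```shell
# Verify the Neuron device is visible on the host
ls /dev/neuron*

# Run the locally built image, exposing the Inferentia device to the container
docker run -it --device=/dev/neuron0 neuron-container:latest
```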
Note
Prior to running the container, make sure that the Neuron runtime on the instance is stopped by running the command:
sudo service neuron-rtd stop
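To confirm the runtime daemon is actually stopped before starting the container, you can check its status (output format varies by OS):

```shell
sudo service neuron-rtd status
```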