Deploy Neuron Container on EC2
You can use the Neuron version of the AWS Deep Learning Containers to run inference on Inf1 instances. In this developer flow, you provision an EC2 Inf1 instance using a Deep Learning AMI (DLAMI), pull the container image with the Neuron version of the desired framework, and run the container as a server for the already compiled model. This developer flow assumes the model has already been compiled through a compilation developer flow.
- Launch an Inf1 Instance
Follow the instructions in Launch an Amazon EC2 Instance to launch an Inf1 instance, making sure to select the correct instance type when choosing it in the EC2 console. For more information about Inf1 instance sizes and pricing, see the Inf1 web page.
Select your Amazon Machine Image (AMI) of choice. Note that Neuron supports the Ubuntu 18 AMI and the Amazon Linux 2 AMI; you can also choose the Ubuntu 18 or Amazon Linux 2 Deep Learning AMI (DLAMI).
After launching the instance, follow the instructions in Connect to your instance to connect to the instance.
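If you prefer to launch the instance from the command line, the steps above can be sketched with the AWS CLI. The AMI ID, key pair name, and security group ID below are placeholders, not real values; substitute the DLAMI ID for your region and your own key and security group.

```shell
# Launch a single inf1.xlarge instance (sketch).
# ami-xxxxxxxx, my-key-pair, and sg-xxxxxxxx are placeholders.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type inf1.xlarge \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx \
    --count 1

# Once the instance is running, connect over SSH.
# The login user is "ubuntu" on Ubuntu AMIs and "ec2-user"
# on Amazon Linux 2 AMIs.
ssh -i my-key-pair.pem ubuntu@<instance-public-dns>
```

The same launch can of course be done through the EC2 console as described above; the CLI form is convenient for scripting repeated deployments.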
Prior to running the container, make sure that the Neuron runtime daemon (neuron-rtd) on the instance is stopped by running the command:
```
sudo service neuron-rtd stop
```
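With the host runtime stopped, you can pull and run the container. The following is a minimal sketch, assuming a Neuron Deep Learning Container image hosted in ECR; the region, image name, tag, and port numbers are illustrative placeholders, and the exact serving ports depend on the framework image you choose.

```shell
# Authenticate Docker to the ECR registry hosting the
# Deep Learning Containers (region shown is an example).
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS \
      --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com

# Run the container as an inference server, passing the Inferentia
# device through to the container. <neuron-image>:<tag> is a
# placeholder for the Neuron framework image of your choice.
docker run -it --name neuron-serve \
    --device=/dev/neuron0 \
    -p 8080:8080 -p 8081:8081 \
    763104351884.dkr.ecr.us-east-1.amazonaws.com/<neuron-image>:<tag>
```

The `--device=/dev/neuron0` flag exposes the first Inferentia device to the container; on larger Inf1 sizes with multiple devices, pass one `--device` flag per device you want the container to use.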