Deploy DLC on EC2

Table of Contents

  • Description
  • Setup Environment

Description

[Figure: Neuron developer flow for DLC on EC2]

You can use the Neuron version of the AWS Deep Learning Containers (DLC) to run inference on Inf1 instances. In this developer flow, you provision an EC2 Inf1 instance using a Deep Learning AMI (DLAMI), pull the container image with the Neuron version of the desired framework, and run the container as a server for the already compiled model. This developer flow assumes the model has already been compiled through a compilation developer flow.
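
For orientation, authenticating to the DLC registry and pulling a Neuron inference image might look like the following sketch. The registry account (763104351884) hosts the AWS Deep Learning Containers; the region, repository name, and <tag> placeholder are assumptions, so take the exact image URI for your framework and Neuron SDK version from the DLC release notes:

# Log Docker in to the DLC registry (us-east-1 is an assumption; use your region).
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com

# Pull the Neuron version of the framework image; <tag> is a placeholder.
docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference-neuron:<tag>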

Setup Environment

  1. Launch an Inf1 Instance
    • Follow the instructions at Launch an Amazon EC2 Instance to launch an Inf1 instance. When choosing the instance type at the EC2 console, make sure to select the correct instance type. For more information about Inf1 instance sizes and pricing, see the Inf1 web page.

    • When choosing an Amazon Machine Image (AMI), make sure to select a Deep Learning AMI with Conda options; the sketch after this list shows one way to look up a current AMI ID. Note that Neuron Conda environments are supported only in the Ubuntu 18 DLAMI and the Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.

    • After launching the instance, follow the instructions in Connect to your instance to connect to the instance.
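
    One way to find a current DLAMI ID in your region is with the AWS CLI. This is a sketch; the name filter is an assumption, so check the DLAMI release notes for the exact AMI name:

    aws ec2 describe-images --owners amazon \
        --filters "Name=name,Values=Deep Learning AMI (Ubuntu 18.04)*" \
        --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
        --output text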

    Note

    You can also launch the instance from the AWS CLI; see AWS CLI commands to launch Inf1 instances and the sketch after this note.

    To deploy your container using a Jupyter Notebook, see Jupyter Notebook QuickStart.
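
    For example, a CLI launch might look like the following sketch; every ID below is a placeholder for your own AMI, key pair, and security group:

    # All IDs are placeholders; substitute values from your account and region.
    aws ec2 run-instances \
        --instance-type inf1.xlarge \
        --image-id ami-0123456789abcdef0 \
        --key-name my-key-pair \
        --security-group-ids sg-0123456789abcdef0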

  2. Deploy an inference container on your Inf1 instance:

    Follow Getting Started with Deep Learning Containers for Inference on EC2; a sketch of a typical docker run invocation follows this item.
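
    For orientation, starting the serving container might look like the sketch below. The image URI, tag, ports, volume path, and MODEL_NAME variable are assumptions for a TensorFlow Serving image; the authoritative command is in the linked guide. The --device flag exposes the first Neuron device to the container:

    # <tag>, the host model path, and MODEL_NAME are placeholders for this sketch.
    docker run -d --device=/dev/neuron0 \
        -p 8500:8500 -p 8501:8501 \
        -v /home/ubuntu/models:/models \
        -e MODEL_NAME=my_compiled_model \
        763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference-neuron:<tag>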

Note

Prior to running the container, make sure that the Neuron runtime daemon (neuron-rtd) on the instance is stopped by running:

sudo service neuron-rtd stop
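
You can confirm the daemon is no longer running before starting the container (output varies by init system):

sudo service neuron-rtd status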