Compile with Framework API and Deploy on EC2 Inf1

Description

Neuron developer flow on EC2

You can use a single Inf1 instance as a development environment to compile and deploy Neuron models. In this developer flow, you provision an EC2 Inf1 instance using a Deep Learning AMI (DLAMI) and execute both steps of the development flow on the same instance. The DLAMI comes pre-packaged with the Neuron frameworks, compiler, and required runtimes to complete the flow. Development happens through Jupyter notebooks or a secure shell (SSH) connection in a terminal. Follow the steps below to set up your environment.

Note

Model compilation can be executed on a non-Inf1 instance for later deployment. Follow the same EC2 Developer Flow Setup using other instance families and leverage Amazon Simple Storage Service (S3) to share the compiled models between instances.
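For example, the hand-off between a compilation instance and an Inf1 deployment instance might look like the sketch below. The bucket name, key prefix, and model file name are placeholders, not values from this guide; the commands assume the AWS CLI is configured with credentials that can access the bucket.

    # On the compilation instance (any EC2 instance family):
    # upload the compiled model artifact to S3.
    # Bucket and file names below are hypothetical -- substitute your own.
    aws s3 cp model_neuron.pt s3://my-neuron-artifacts/models/model_neuron.pt

    # On the Inf1 deployment instance:
    # download the compiled model so it can be loaded for inference.
    aws s3 cp s3://my-neuron-artifacts/models/model_neuron.pt ./model_neuron.pt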

Setup Environment

  1. Launch an Inf1 Instance:
    • Follow the instructions at Launch an Amazon EC2 Instance to launch an Inf1 instance. When choosing the instance type at the EC2 console, make sure to select the correct Inf1 instance size. For more information about Inf1 instance sizes and pricing, see the Inf1 web page.

    • When choosing an Amazon Machine Image (AMI), make sure to select a Deep Learning AMI with Conda Options. Note that Neuron Conda packages are supported only in the Ubuntu 16 DLAMI, Ubuntu 18 DLAMI, and Amazon Linux 2 DLAMI; they are not supported in the Amazon Linux DLAMI.

    • After launching the instance, follow the instructions in Connect to your instance to connect to it.

    Note

    You can also launch the instance from the AWS CLI; see AWS CLI commands to launch Inf1 instances.

  2. Set up a development environment:

    To compile and run inference from the instance terminal, first enable the ML framework conda environment of your choice by running one of the following from the terminal:

    • Enable the PyTorch-Neuron Conda environment:

    source activate aws_neuron_pytorch_p36
    
    • Enable the TensorFlow-Neuron Conda environment:

    source activate aws_neuron_tensorflow_p36
    
    • Enable the MXNet-Neuron Conda environment:

    source activate aws_neuron_mxnet_p36
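
    With one of the environments above activated, a typical compile step traces the model through the framework's Neuron API. The sketch below assumes the PyTorch-Neuron environment (aws_neuron_pytorch_p36); the model choice and input shape are illustrative, and running it requires the Neuron packages pre-installed on the DLAMI.

    # Sketch: compile a torchvision model with PyTorch-Neuron.
    # Assumes the aws_neuron_pytorch_p36 Conda environment is active;
    # the model and input shape are illustrative examples.
    import torch
    import torch_neuron  # part of the Neuron SDK; registers torch.neuron
    from torchvision import models

    model = models.resnet50(pretrained=True)
    model.eval()

    # Example input matching the model's expected shape (one 224x224 image)
    example = torch.rand(1, 3, 224, 224)

    # Compile for Inferentia. The saved artifact can be deployed on an
    # Inf1 instance, or shared via S3 if compiled elsewhere.
    model_neuron = torch.neuron.trace(model, example_inputs=[example])
    model_neuron.save("model_neuron.pt")

    # On the Inf1 instance, load the compiled model and run inference:
    # loaded = torch.jit.load("model_neuron.pt")
    # output = loaded(example)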
    

    To develop from a Jupyter notebook, see Jupyter Notebook QuickStart.

    You can also run a Jupyter notebook as a script: first enable the ML framework Conda environment of your choice, then see Running Jupyter Notebook as script for instructions.