This document is relevant for: Inf1

Neuron Apache MXNet Tutorials

Before running a tutorial

You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI) so that both compilation and deployment (inference) can be done on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.

Follow the instructions in MXNet Tutorial Setup before running an MXNet tutorial on Inferentia.
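
The tutorials generally follow the same compile-then-deploy pattern. The sketch below is a minimal illustration of that flow, assuming the MXNet 1.5 flavor of MXNet-Neuron (the `mx.contrib.neuron` module and the `mx.neuron()` context) and a ResNet-50 checkpoint (`resnet-50-symbol.json` / `resnet-50-0000.params`) already present in the working directory; names and shapes are illustrative only, and the individual tutorials remain the authoritative reference.

```python
# Minimal sketch: compile a checkpoint for Inferentia, then run inference on
# the same instance. Assumes the MXNet 1.5 flavor of MXNet-Neuron and a
# local 'resnet-50' checkpoint; adapt names/shapes to your own model.
import mxnet as mx
import numpy as np

# Load a pre-trained checkpoint (symbol + parameters).
sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)

# Compilation step: declare the input name/shape and compile for Inferentia,
# then save the compiled artifacts for later deployment.
inputs = {'data': mx.nd.ones((1, 3, 224, 224), dtype='float32')}
csym, cargs, caux = mx.contrib.neuron.compile(sym, args, aux, inputs)
mx.model.save_checkpoint('resnet-50_compiled', 0, csym, cargs, caux)

# Deployment (inference) step: bind the compiled symbol to the Neuron context
# and run a forward pass on a dummy image.
img = mx.nd.random.uniform(shape=(1, 3, 224, 224))
cargs['data'] = img
cargs['softmax_label'] = mx.nd.zeros((1,))  # dummy label for SoftmaxOutput head
exe = csym.bind(ctx=mx.neuron(), args=cargs, aux_states=caux, grad_req='null')
exe.forward(is_train=False)
prob = exe.outputs[0].asnumpy()
print('top-1 class index:', int(np.argmax(prob)))
```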

Computer Vision

Natural Language Processing

Utilizing Neuron Capabilities
