This document is relevant for: Inf1
Neuron Apache MXNet Tutorials
Before running a tutorial
You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI), which enables both compilation and deployment (inference) on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow the instructions in MXNet Tutorial Setup before running an MXNet tutorial on Inferentia.
Computer Vision
ResNet-50 tutorial [html] [notebook]
Model Serving tutorial [html]
Getting started with Gluon tutorial [html] [notebook]
Natural Language Processing
MXNet 1.8: Using data parallel mode tutorial [html] [notebook]
Utilizing Neuron Capabilities
NeuronCore Groups tutorial [html] [notebook]