Neuron Apache MXNet (Incubating) Tutorials
Before running a tutorial
You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI), which enables both compilation and deployment (inference) on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow the instructions in MXNet Tutorials Setup before running an MXNet tutorial on Inferentia.
Model Serving tutorial [html]
Getting started with Gluon tutorial [html]