Before running a tutorial#
You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI), so that both compilation and deployment (inference) happen on the same instance. In a production environment, we encourage you to try different instance sizes to find the best fit for your specific deployment needs.
Follow the instructions in TensorFlow Tutorial Setup before running a TensorFlow tutorial on Inferentia. We recommend that new users start with the ResNet-50 tutorial.
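The setup step above can be sketched as follows. This is a minimal sketch, not the official setup procedure: the Conda environment name `aws_neuron_tensorflow_p36` is the one historically shipped on the Conda-based DLAMI for TensorFlow 1.x with Neuron, but names vary across DLAMI releases, so verify it on your instance first.

```shell
# List the Conda environments that ship with the DLAMI and look for a
# Neuron-enabled TensorFlow environment (name varies by DLAMI release).
conda env list

# Activate the Neuron-enabled TensorFlow 1.x environment
# (environment name assumed; adjust to what `conda env list` shows).
source activate aws_neuron_tensorflow_p36

# Confirm that TensorFlow imports cleanly before starting a tutorial.
python -c "import tensorflow as tf; print(tf.__version__)"
```

Running compilation and inference in the same environment, as the paragraph above describes, avoids version mismatches between the compiler and the runtime.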
Computer Vision#
TensorFlow 1.x - SSD300 tutorial [html]
Natural Language Processing#
TensorFlow 1.x - Running TensorFlow BERT-Large with AWS Neuron [html]