Before running a tutorial¶
You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI), which enables both compilation and deployment (inference) on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow the instructions in PyTorch Tutorial Setup before running a PyTorch tutorial on Inferentia. We recommend that new users start with the ResNet-50 tutorial.
Natural Language Processing¶
LibTorch C++ tutorial [html]