TensorFlow 1.x FAQ

How do I get started with TensorFlow?

The easiest entry point is the tutorials offered by the AWS Neuron team. For beginners, the ResNet50 tutorial is a good place to start.

What operators are supported?

Running neuron-cc list-operators --framework TENSORFLOW prints the list of supported TensorFlow 1.x operators; these are the operators that run on the machine learning accelerator. Operators not on this list are still expected to work alongside the supported operators in native TensorFlow, but they are not accelerated by the hardware.
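A quick way to check whether a particular operator is supported is to filter the list from the shell. This is a minimal sketch assuming neuron-cc is installed; Conv2D is just an example operator name.

```shell
# Print all TensorFlow 1.x operators that run on the Neuron accelerator
neuron-cc list-operators --framework TENSORFLOW

# Check whether a specific operator (Conv2D here, as an example) is supported
if neuron-cc list-operators --framework TENSORFLOW | grep -qx 'Conv2D'; then
  echo "Conv2D runs on Neuron"
else
  echo "Conv2D will fall back to native TensorFlow on the host"
fi
```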

How do I compile my model?

tensorflow-neuron includes a public-facing compilation API called tfn.saved_model.compile. More details can be found in the TensorFlow-Neuron 1.x Compilation API documentation.

How do I deploy my model?

The same way as deploying any TensorFlow SavedModel. In Python TensorFlow, the easiest way is through the tf.contrib.predictor module. If a Python-free deployment is preferred for performance or other reasons, tensorflow-serving is a great choice, and the AWS Neuron team provides pre-built model server apt/yum packages named tensorflow-model-server-neuron.

How to debug or profile my model?

At the TensorFlow level, the v1 profiler is a great tool that provides an operator-level breakdown of inference execution time. Additionally, the AWS Neuron TensorBoard integration provides visibility into what is happening inside the Neuron runtime, and allows more fine-grained (but also more hardware-aware) reasoning about where to improve the performance of machine learning applications.