{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Parallel HuggingFace Pretrained BERT with Weight Sharing (Deduplication)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introduction\n", "\n", "In this tutorial we will compile and deploy BERT-base version of HuggingFace 🤗 Transformers BERT for Inferentia, with additional demonstration of using Weight Sharing (Deduplication) feature.\n", "\n", "To use the [Weight Sharing (Deduplication) feature](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-runtime/nrt-configurable-parameters.html#shared-weights-neuron-rt-multi-instance-shared-weights), you must set the Neuron Runtime environmental variable NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS to \"TRUE\" together with the [core placement API](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuron/api-core-placement.html) (``torch_neuron.experimental.neuron_cores_context()``).\n", "\n", "This Jupyter notebook should be run on an instance which is inf1.6xlarge or larger. The compile part of this tutorial requires inf1.6xlarge and not the inference itself. For simplicity we will run this tutorial on inf1.6xlarge but in real life scenario the compilation should be done on a compute instance and the deployment on inf1 instance to save costs.\n", "\n", "Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option on the top of this Jupyter notebook page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install Dependencies:\n", "This tutorial requires the following pip packages:\n", "\n", "- `torch-neuron`\n", "- `neuron-cc[tensorflow]`\n", "- `transformers`\n", "\n", "Most of these packages will be installed when configuring your environment using the Neuron PyTorch setup guide. The additional dependencies must be installed here." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%env TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect\n", "!pip install --upgrade \"transformers==4.6.0\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Compile the model into an AWS Neuron optimized TorchScript\n", "\n", "This step compiles the model into an AWS Neuron optimized TorchScript, and saves it in the filed ``bert_neuron.pt``. This step is the same as the pretrained BERT tutorial without Shared Weights feature. We use batch 1 for simplicity." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import tensorflow # to workaround a protobuf version conflict issue\n", "import torch\n", "import torch.neuron\n", "from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig\n", "import transformers\n", "import os\n", "import warnings\n", "\n", "\n", "# Build tokenizer and model\n", "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\n", "model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mrpc\", return_dict=False)\n", "\n", "# Setup some example inputs\n", "sequence_0 = \"The company HuggingFace is based in New York City\"\n", "sequence_1 = \"Apples are especially bad for your health\"\n", "sequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\n", "\n", "max_length=128\n", "paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "not_paraphrase = tokenizer.encode_plus(sequence_0, sequence_1, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "\n", "# Run the original PyTorch model on compilation exaple\n", "paraphrase_classification_logits = model(**paraphrase)[0]\n", "\n", "# Convert example inputs to a format that is compatible with TorchScript tracing\n", "example_inputs_paraphrase = paraphrase['input_ids'], paraphrase['attention_mask'], paraphrase['token_type_ids']\n", "example_inputs_not_paraphrase = not_paraphrase['input_ids'], not_paraphrase['attention_mask'], not_paraphrase['token_type_ids']\n", "\n", "# Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron\n", "model_neuron = torch.neuron.trace(model, example_inputs_paraphrase)\n", "\n", "# Verify the TorchScript works on both example inputs\n", "paraphrase_classification_logits_neuron = model_neuron(*example_inputs_paraphrase)\n", "not_paraphrase_classification_logits_neuron = model_neuron(*example_inputs_not_paraphrase)\n", "\n", "# Save the TorchScript for later use\n", "model_neuron.save('bert_neuron.pt')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Deploy the AWS Neuron optimized TorchScript\n", "\n", "To deploy the AWS Neuron optimized TorchScript, you may choose to load the saved TorchScript from disk and skip the slow compilation. This step is the same as the pretrained BERT tutorial without Shared Weights feature" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# Load TorchScript back\n", "model_neuron = torch.jit.load('bert_neuron.pt')\n", "# Verify the TorchScript works on both example inputs\n", "paraphrase_classification_logits_neuron = model_neuron(*example_inputs_paraphrase)\n", "not_paraphrase_classification_logits_neuron = model_neuron(*example_inputs_not_paraphrase)\n", "classes = ['not paraphrase', 'paraphrase']\n", "paraphrase_prediction = paraphrase_classification_logits_neuron[0][0].argmax().item()\n", "not_paraphrase_prediction = not_paraphrase_classification_logits_neuron[0][0].argmax().item()\n", "print('BERT says that \"{}\" and \"{}\" are {}'.format(sequence_0, sequence_2, classes[paraphrase_prediction]))\n", "print('BERT says that \"{}\" and \"{}\" are {}'.format(sequence_0, sequence_1, classes[not_paraphrase_prediction]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We define two helper functions to pad input and to count correct results." 
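, "\n", "\n", "The padding helper exists because the Neuron-compiled TorchScript only accepts the exact ``[batch_size, max_length]`` input shape it was traced with, so a final, shorter batch from the ``DataLoader`` has to be padded back up to a full batch. A small illustration of the idea with hypothetical shapes (with the batch size of 1 used in this tutorial, the padding branch will normally not be taken):\n", "\n", "```python\n", "# Illustration only, using hypothetical shapes: the compiled model expects\n", "# exactly [batch_size, max_length] inputs, so a short final batch of 3 rows\n", "# is padded with zero rows up to the full batch size before inference.\n", "batch_size, max_length = 8, 128\n", "short_batch = torch.ones([3, max_length], dtype=torch.long)\n", "zeros = torch.zeros([batch_size - short_batch.size(0), max_length], dtype=torch.long)\n", "padded = torch.cat([short_batch, zeros])  # shape [8, 128], matching the traced shape\n", "```\n"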
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def get_input_with_padding(batch, batch_size, max_length):\n", " ## Reformulate the batch into three batch tensors - default batch size batches the outer dimension\n", " encoded = batch['encoded']\n", " inputs = torch.squeeze(encoded['input_ids'], 1)\n", " attention = torch.squeeze(encoded['attention_mask'], 1)\n", " token_type = torch.squeeze(encoded['token_type_ids'], 1)\n", " quality = list(map(int, batch['quality']))\n", "\n", " if inputs.size()[0] != batch_size:\n", " print(\"Input size = {} - padding\".format(inputs.size()))\n", " remainder = batch_size - inputs.size()[0]\n", " zeros = torch.zeros( [remainder, max_length], dtype=torch.long )\n", " inputs = torch.cat( [inputs, zeros] )\n", " attention = torch.cat( [attention, zeros] )\n", " token_type = torch.cat( [token_type, zeros] )\n", "\n", " assert(inputs.size()[0] == batch_size and inputs.size()[1] == max_length)\n", " assert(attention.size()[0] == batch_size and attention.size()[1] == max_length)\n", " assert(token_type.size()[0] == batch_size and token_type.size()[1] == max_length)\n", "\n", " return (inputs, attention, token_type), quality\n", "\n", "def count(output, quality):\n", " assert output.size(0) >= len(quality)\n", " correct_count = 0\n", " count = len(quality)\n", " \n", " batch_predictions = [ row.argmax().item() for row in output ]\n", "\n", " for a, b in zip(batch_predictions, quality):\n", " if int(a)==int(b):\n", " correct_count += 1\n", "\n", " return correct_count, count" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data parallel inference\n", "In the below cell, we use the data parallel approach for inference. In this approach, we load multiple models, all of them running in parallel. Each model is loaded onto a single NeuronCore via the core placement API (``torch_neuron.experimental.neuron_cores_context()``). We also set Neuron Runtime environment variable ``NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS`` to \"TRUE\" as required to use the Weight Sharing feature.\n", "\n", "In the below implementation, we launch 16 models, thereby utilizing all the 16 cores on an inf1.6xlarge.\n", "\n", "> Note: Now if you try to decrease the num_cores in the below cells, please restart the notebook and run `!sudo rmmod neuron; sudo modprobe neuron` step in cell 2 to clear the Neuron cores.\n", "\n", "Since, we can run more than 1 model concurrently, the throughput for the system goes up. To achieve maximum gain in throughput, we need to efficiently feed the models so as to keep them busy at all times. In the below setup, we use parallel threads to feed data continuously to the models.\n", "\n", "When running the cell below, you can monitor the Inferentia device activities by running ``neuron-top`` in another terminal. You will see that \"Device Used Memory\" is 1.6GB total, and the model instance loaded onto NeuronDevice 0 NeuronCore 0 uses the most device memory (272MB) while the other model instances loaded onto other NeuronCores use less device memory (92MB). This shows the effect of using Shared Weights as the device memory usage is lower. If you change ``NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS`` to \"FALSE\" you will see that \"Device Used Memory\" is 3.2GB, and the model instances loaded onto NeuronDevice 0 NeuronCore 0 and 1 use the most device memory (360MB) while the other model instances now use 180MB each." 
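, "\n", "\n", "The two ingredients that enable weight sharing are easy to miss inside the full benchmarking cell that follows, so here is the bare pattern in isolation (a sketch only; the complete, runnable version is in the next cell):\n", "\n", "```python\n", "import os\n", "import torch\n", "import torch.neuron as torch_neuron\n", "\n", "# 1. Enable weight sharing; this must be set before the model is loaded.\n", "os.environ['NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS'] = 'TRUE'\n", "\n", "# 2. Load the compiled model onto a range of NeuronCores via the core\n", "#    placement API, so one instance runs on each core in the range.\n", "with torch_neuron.experimental.neuron_cores_context(start_nc=0, nc_count=16):\n", "    shared_model = torch.jit.load('bert_neuron.pt')\n", "```\n"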
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from bert_benchmark_utils import BertTestDataset, BertResults\n", "import time\n", "import functools\n", "import os\n", "import torch.neuron as torch_neuron\n", "from concurrent import futures\n", "\n", "# Setting up NeuronCore groups for inf1.6xlarge with 16 cores\n", "num_cores = 16 # This value should be 4 on inf1.xlarge and inf1.2xlarge\n", "os.environ['NEURON_RT_NUM_CORES'] = str(num_cores)\n", "os.environ['NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS'] = 'TRUE'\n", "#os.environ['NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS'] = 'FALSE'\n", "\n", "max_length = 128\n", "num_cores = 16\n", "batch_size = 1\n", "\n", "tsv_file=\"glue_mrpc_dev.tsv\"\n", "\n", "data_set = BertTestDataset( tsv_file=tsv_file, tokenizer=tokenizer, max_length=max_length )\n", "data_loader = torch.utils.data.DataLoader(data_set, batch_size=batch_size, shuffle=True)\n", "\n", "#Result aggregation class (code in bert_benchmark_utils.py)\n", "results = BertResults(batch_size, num_cores)\n", "def result_handler(output, result_id, start, end, input_dict):\n", " correct_count, inference_count = count(output[0], input_dict.pop(result_id))\n", " elapsed = end - start\n", " results.add_result(correct_count, inference_count, [elapsed], [end], [start])\n", "\n", "with torch_neuron.experimental.neuron_cores_context(start_nc=0, nc_count=num_cores):\n", " model = torch.jit.load('bert_neuron.pt')\n", "\n", "# Warm up the cores\n", "z = torch.zeros( [batch_size, max_length], dtype=torch.long )\n", "batch = (z, z, z)\n", "for _ in range(num_cores*4):\n", " model(*batch)\n", "\n", "# Prepare the input data\n", "batch_list = []\n", "for batch in data_loader:\n", " batch, quality = get_input_with_padding(batch, batch_size, max_length)\n", " batch_list.append((batch, quality))\n", "\n", "# One thread running a model on one core\n", "def one_thread(feed_data, quality):\n", " start = time.time()\n", " result = model(*feed_data)\n", " end = time.time() \n", " return result[0], quality, start, end\n", "\n", "# Launch more threads than models/cores to keep them busy\n", "processes = []\n", "with futures.ThreadPoolExecutor(max_workers=num_cores*2) as executor:\n", " # extra loops to help you see activities in neuron-top\n", " for _ in range(10):\n", " for input_id, (batch, quality) in enumerate(batch_list):\n", " processes.append(executor.submit(one_thread, batch, quality))\n", "\n", "results = BertResults(batch_size, num_cores)\n", "for _ in futures.as_completed(processes): \n", " (output, quality, start, end) = _.result() \n", " correct_count, inference_count = count(output, quality)\n", " results.add_result(correct_count, inference_count, [start - end], [start], [end])\n", "\n", "with open(\"benchmark.txt\", \"w\") as f:\n", " results.report(f, window_size=1)\n", "\n", "with open(\"benchmark.txt\", \"r\") as f:\n", " for line in f:\n", " print(line)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python (torch-neuron)", "language": "python", "name": "aws_neuron_venv_pytorch_inf1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 
4 }