{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "ZSEVn7GoyCWq" }, "source": [ "# Probing BERT models with part-of-speech tagging \n", "\n", "This tutorial explores how much of part-of-speech tagging is learned by BERT models (transformers pre-trained with language modeling tasks on large quantities of texts).\n", "\n", "Part-of-speech (POS) tagging is a natural language processing task which consists in labelling words in context with their grammatical category, such as noun, verb, preposition... The standard benchmark for this task is the universal dependency treebank, a corpus of texts in various languages annotated with syntactic trees in the dependency frame, morphological features and word-level part of speech tags. We are interested in the English section of this corpus which was created from the English Web Treebank (about 16k sentences and 340k words). The dataset is described in details here: https://universaldependencies.org/treebanks/en_ewt/index.html.\n", "\n", "We will download three files: the training set, the validation set and the test set. Each file is stored in the CoNLL-U format, a textual format specific to the Universal Dependencies corpora where each line represents a word in a sentence, and columns represent features or labels of that word. In particular, column 2 is the word form, and column 5 is the English-specific POS tags. \n", "\n", "Questions:\n", "* What is the information included in each column of the conll-u format?\n", "* What is the difference between columns 4 and 5?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 211 }, "id": "BFFwK_2yvoMX", "outputId": "20255b78-836b-4ccd-9657-55782a9509dc" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "# newdoc id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000\n", "# sent_id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000-0001\n", "# newpar id = weblog-juancole.com_juancole_20051126063000_ENG_20051126_063000-p0001\n", "# text = Al-Zaman : American forces killed Shaikh Abdullah al-Ani, the preacher at the mosque in the town of Qaim, near the Syrian border.\n", "1\tAl\tAl\tPROPN\tNNP\tNumber=Sing\t0\troot\t0:root\tSpaceAfter=No\n", "2\t-\t-\tPUNCT\tHYPH\t_\t1\tpunct\t1:punct\tSpaceAfter=No\n", "3\tZaman\tZaman\tPROPN\tNNP\tNumber=Sing\t1\tflat\t1:flat\t_\n", "4\t:\t:\tPUNCT\t:\t_\t1\tpunct\t1:punct\t_\n", "5\tAmerican\tamerican\tADJ\tJJ\tDegree=Pos\t6\tamod\t6:amod\t_\n", "6\tforces\tforce\tNOUN\tNNS\tNumber=Plur\t7\tnsubj\t7:nsubj\t_\n" ] } ], "source": [ "import urllib.request\n", "\n", "for filename in ['en_ewt-ud-train.conllu', 'en_ewt-ud-dev.conllu', 'en_ewt-ud-test.conllu']:\n", " urllib.request.urlretrieve('https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/' + filename, filename)\n", "\n", "with open('en_ewt-ud-train.conllu') as fp:\n", " for line in fp.readlines()[:10]:\n", " print(line, end='')" ] }, { "cell_type": "markdown", "metadata": { "id": "5Uq4e4uZhEkr" }, "source": [ "Reading CoNLL-U files is a bit of work, so we will use the conllu module that can do that for us. Once parsed, a file is presented as a list of sentences, each containing tokens. The tokens are dictionaries with keys associated with the columns for that word. 
Words and POS can be loaded from the `'form'` and `'xpos'` columns.\n", "\n", "Questions:\n", "* How many sentences are there in each subset?\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 54 }, "id": "voty0Uj100az", "outputId": "3f2d3a7f-d061-4c1d-a453-285f89c516ea" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Al', '-', 'Zaman', ':', 'American', 'forces', 'killed', 'Shaikh', 'Abdullah', 'al', '-', 'Ani', ',', 'the', 'preacher', 'at', 'the', 'mosque', 'in', 'the', 'town', 'of', 'Qaim', ',', 'near', 'the', 'Syrian', 'border', '.']\n" ] } ], "source": [ "!pip -q install conllu\n", "\n", "import conllu\n", "\n", "def load_conllu(filename):\n", "    with open(filename) as fp:\n", "        data = conllu.parse(fp.read())\n", "    sentences = [[token['form'] for token in sentence] for sentence in data]\n", "    taggings = [[token['xpos'] for token in sentence] for sentence in data]\n", "    return sentences, taggings\n", "\n", "train_sentences, train_taggings = load_conllu('en_ewt-ud-train.conllu')\n", "valid_sentences, valid_taggings = load_conllu('en_ewt-ud-dev.conllu')\n", "test_sentences, test_taggings = load_conllu('en_ewt-ud-test.conllu')\n", "\n", "print(train_sentences[0])\n", "#print(list(zip(train_sentences[42], train_taggings[42])))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 54 }, "id": "jg_2s99hczWV", "outputId": "a4e92882-0b8f-4421-d2b4-b4143930bf27" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('Gore', 'NNP'), ('released', 'VBD'), ('a', 'DT'), ('statement', 'NN'), ('Friday', 'NNP'), ('taking', 'VBG'), ('Bush', 'NNP'), ('to', 'IN'), ('task', 'NN'), ('for', 'IN'), ('his', 'PRP$'), ('comments', 'NNS'), ('on', 'IN'), ('Pakistan', 'NNP'), (\"'s\", 'POS'), ('recent', 'JJ'), ('coup', 'NN'), ('.', '.')]\n" ] } ], "source": [ "print(list(zip(train_sentences[182], 
train_taggings[182])))" ] }, { "cell_type": "markdown", "metadata": { "id": "zrpAqwsR2uWI" }, "source": [ "The meaning of the tags is briefly described in https://www.sketchengine.eu/tagsets/penn-treebank-tagset/. If you compute a few statistics you will see that nouns, prepositions and determiners are the most frequent tags.\n", "\n", "Questions:\n", "* What is the meaning of DT?\n", "* What is the meaning of FW?\n", "* What tags are associated with word \"move\"?\n", "* What words can be tagged as IN?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 903 }, "id": "_70_2Up01uyj", "outputId": "fbfda35f-1797-4eb6-8cac-f5d05dee3c70" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "number of different tags: 50\n", "26923 NN\n", "20718 IN\n", "16818 DT\n", "12448 NNP\n", "12195 PRP\n", "11575 JJ\n", "10830 RB\n", "10317 .\n", "9476 VB\n", "8446 NNS\n", "8062 ,\n", "6709 CC\n", "5403 VBD\n", "5374 VBP\n", "4580 VBZ\n", "3995 CD\n", "3968 VBN\n", "3329 VBG\n", "3294 MD\n", "3286 TO\n", "3065 PRP$\n", "1007 -RRB-\n", "973 -LRB-\n", "948 WDT\n", "869 WRB\n", "866 :\n", "813 ``\n", "785 ''\n", "760 WP\n", "755 RP\n", "690 UH\n", "684 POS\n", "664 HYPH\n", "502 JJR\n", "498 NNPS\n", "383 JJS\n", "364 EX\n", "338 NFP\n", "294 GW\n", "292 ADD\n", "276 RBR\n", "258 $\n", "175 PDT\n", "169 RBS\n", "161 SYM\n", "117 LS\n", "93 FW\n", "48 AFX\n", "15 WP$\n", "1 XX\n" ] } ], "source": [ "# use a defaultdict to count the number of occurrences of each tag\n", "import collections\n", "tagset = collections.defaultdict(int)\n", "\n", "for tagging in train_taggings:\n", " for tag in tagging:\n", " tagset[tag] += 1\n", "\n", "print('number of different tags:', len(tagset))\n", "\n", "# print count and tag sorted by decreasing count\n", "for tag, count in sorted(tagset.items(), reverse=True, key=lambda x: x[1]):\n", " print(count, tag)" ] }, { "cell_type": "markdown", "metadata": { "id": 
"h-tHoUc54BOv" }, "source": [ "You can also look at the distribution of sentence length, which will be important when processing the corpus with a model that crunches whole sentences. That distribution is highly skewed towards short sentences, with very few long sentences. It will not be a problem here, but might become one when working with large datasets and large models.\n", "\n", "Questions:\n", "* What is the average sentence length?\n", "* Print the longest sentence." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 282 }, "id": "Tm4hyJjp3rLT", "outputId": "95265e7d-826d-4d85-859f-adef9066ff00" }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAX0AAAD4CAYAAAAAczaOAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAUTklEQVR4nO3df6zd9X3f8eer5kfSJKpNuWGubc1u5rQik2KQC47STSk0YEhVE6mNQFVxMyZ3E0zJFrUziTSaZEjQtWFFSunc4sbpaKhHSLEIHXMJWpU/AlxSYzCEcgtm2DL4piakGRoq9L0/zsfNwb3X94ePzznx9/mQju73+/7+OO/zsc/rnPv9fs+5qSokSd3wQ6NuQJI0PIa+JHWIoS9JHWLoS1KHGPqS1CGnjbqB4zn77LNr9erVo25Dkn6gPProo9+uqomZlo116K9evZrJyclRtyFJP1CSPD/bMg/vSFKHGPqS1CGGviR1iKEvSR1i6EtShxj6ktQhhr4kdYihL0kdYuhLUoeM9SdyT9TqrV9d9Lb7b/rQADuRpPHgO31J6hBDX5I6xNCXpA4x9CWpQwx9SeoQQ1+SOuSUvmTzRHi5p6RTke/0JalDDH1J6pA5Qz/JW5I8nOSxJPuSfLrVv5DkuSR72m1dqyfJrUmmkuxNcn7fvjYneabdNp+8hyVJmsl8jum/BlxUVd9Lcjrw9SR/1pb9WlXddcz6lwFr2+1C4DbgwiRnATcA64ECHk2yq6peHsQDkSTNbc53+tXzvTZ7ervVcTbZBHyxbfcNYGmS5cClwO6qOtKCfjew8cTalyQtxLyO6SdZkmQPcJhecD/UFt3YDuHckuTMVlsBvNC3+YFWm61+7H1tSTKZZHJ6enqBD0eSdDzzCv2qeqOq1gErgQuS/HPgeuAngZ8CzgL+4yAaqqptVbW+qtZPTEwMYpeSpGZBV+9U1XeAB4GNVXWoHcJ5DfhD4IK22kFgVd9mK1tttrokaUjmc/XORJKlbfqtwAeBb7Xj9CQJcAXwRNtkF3B1u4pnA/BKVR0C7gcuSbIsyTLgklaTJA3JfK7eWQ7sSLKE3ovEzqq6N8nXkkwAAfYA/6atfx9wOTAFvAp8FKCqjiT5LPBIW+8zVXVkcA9FkjSXOUO/qvYC581Qv2iW9Qu4dpZl24HtC+xRkjQgfiJXkjrE0JekDjH0JalDDH1J6hBDX5I6xNCXpA4x9CWpQwx9SeoQQ1+SOsTQl6QOMfQlqUMMfUnq
EENfkjrE0JekDjH0JalDDH1J6hBDX5I6xNCXpA6Zzx9Gf0uSh5M8lmRfkk+3+pokDyWZSvInSc5o9TPb/FRbvrpvX9e3+tNJLj1ZD0qSNLP5vNN/Dbioqt4LrAM2JtkA3AzcUlX/DHgZuKatfw3wcqvf0tYjybnAlcB7gI3A77Y/ti5JGpI5Q796vtdmT2+3Ai4C7mr1HcAVbXpTm6ctvzhJWv3Oqnqtqp4DpoALBvIoJEnzMq9j+kmWJNkDHAZ2A38NfKeqXm+rHABWtOkVwAsAbfkrwI/212fYpv++tiSZTDI5PT298EckSZrVvEK/qt6oqnXASnrvzn/yZDVUVduqan1VrZ+YmDhZdyNJnbSgq3eq6jvAg8D7gKVJTmuLVgIH2/RBYBVAW/4jwN/012fYRpI0BPO5emciydI2/Vbgg8BT9ML/F9pqm4F72vSuNk9b/rWqqla/sl3dswZYCzw8qAciSZrbaXOvwnJgR7vS5oeAnVV1b5IngTuT/GfgL4Hb2/q3A3+UZAo4Qu+KHapqX5KdwJPA68C1VfXGYB+OJOl45gz9qtoLnDdD/VlmuPqmqv4f8Iuz7OtG4MaFtylJGgQ/kStJHWLoS1KHGPqS1CGGviR1iKEvSR1i6EtShxj6ktQhhr4kdYihL0kdYuhLUocY+pLUIYa+JHWIoS9JHWLoS1KHzOf79LVAq7d+ddHb7r/pQwPsRJLezHf6ktQhhr4kdYihL0kdYuhLUofMGfpJViV5MMmTSfYl+Vir/0aSg0n2tNvlfdtcn2QqydNJLu2rb2y1qSRbT85DkiTNZj5X77wOfKKqvpnkHcCjSXa3ZbdU1W/1r5zkXOBK4D3AjwF/nuTdbfHngQ8CB4BHkuyqqicH8UAkSXObM/Sr6hBwqE3/bZKngBXH2WQTcGdVvQY8l2QKuKAtm6qqZwGS3NnWNfQlaUgWdEw/yWrgPOChVrouyd4k25Msa7UVwAt9mx1otdnqx97HliSTSSanp6cX0p4kaQ7zDv0kbwe+DHy8qr4L3Aa8C1hH7zeB3x5EQ1W1rarWV9X6iYmJQexSktTM6xO5SU6nF/h3VNXdAFX1Ut/y3wfubbMHgVV9m69sNY5TlyQNwXyu3glwO/BUVX2ur768b7UPA0+06V3AlUnOTLIGWAs8DDwCrE2yJskZ9E727hrMw5Akzcd83um/H/hl4PEke1rtk8BVSdYBBewHfhWgqvYl2UnvBO3rwLVV9QZAkuuA+4ElwPaq2jfAxyJJmsN8rt75OpAZFt13nG1uBG6coX7f8baTJJ1cfiJXkjrE0JekDjH0JalDDH1J6hBDX5I6xNCXpA4x9CWpQwx9SeoQQ1+SOsTQl6QOMfQlqUMMfUnqEENfkjrE0JekDjH0JalDDH1J6hBDX5I6xNCXpA4x9CWpQ+YM/SSrkjyY5Mkk+5J8rNXPSrI7yTPt57JWT5Jbk0wl2Zvk/L59bW7rP5Nk88l7WJKkmcznnf7rwCeq6lxgA3BtknOBrcADVbUWeKDNA1wGrG23LcBt0HuRAG4ALgQuAG44+kIhSRqOOUO/qg5V1Tfb9N8CTwErgE3AjrbaDuCKNr0J+GL1fANYmmQ5cCmwu6qOVNXLwG5g40AfjSTpuBZ0TD/JauA84CHgnKo61Ba9CJzTplcAL/RtdqDVZqsfex9bkkwmmZyenl5Ie5KkOcw79JO8Hfgy8PGq+m7/sqoqoAbRUFVtq6r1VbV+YmJiELuUJDXzCv0kp9ML/Duq6u5WfqkdtqH9PNzqB4FVfZuvbLXZ6pKkIZnP1TsBbgeeqqrP9S3aBRy9AmczcE9f/ep2Fc8G4JV2GOh+4JIky9oJ3EtaTZI0JKfNY533A78MPJ5kT6t9ErgJ2JnkGuB54CNt2X3A5cAU8CrwUYCqOpLks8Ajbb3PVNWRgTwKSdK8zBn6VfV1ILMsvniG9Qu4dpZ9bQe2L6RBSdLg+IlcSeoQQ1+SOsTQl6QOMfQlqUMMfUnq
EENfkjrE0JekDjH0JalDDH1J6hBDX5I6xNCXpA4x9CWpQwx9SeoQQ1+SOsTQl6QOMfQlqUMMfUnqEENfkjpkPn8YfXuSw0me6Kv9RpKDSfa02+V9y65PMpXk6SSX9tU3ttpUkq2DfyiSpLnM553+F4CNM9Rvqap17XYfQJJzgSuB97RtfjfJkiRLgM8DlwHnAle1dSVJQzSfP4z+F0lWz3N/m4A7q+o14LkkU8AFbdlUVT0LkOTOtu6TC+5YkrRoJ3JM/7oke9vhn2WttgJ4oW+dA602W12SNESLDf3bgHcB64BDwG8PqqEkW5JMJpmcnp4e1G4lSSwy9Kvqpap6o6r+Hvh9vn8I5yCwqm/Vla02W32mfW+rqvVVtX5iYmIx7UmSZrGo0E+yvG/2w8DRK3t2AVcmOTPJGmAt8DDwCLA2yZokZ9A72btr8W1LkhZjzhO5Sb4EfAA4O8kB4AbgA0nWAQXsB34VoKr2JdlJ7wTt68C1VfVG2891wP3AEmB7Ve0b+KORJB3XfK7euWqG8u3HWf9G4MYZ6vcB9y2oO0nSQPmJXEnqEENfkjrE0JekDpnzmL6Ga/XWr57Q9vtv+tCAOpF0KvKdviR1iKEvSR1i6EtShxj6ktQhhr4kdYihL0kdYuhLUocY+pLUIYa+JHWIoS9JHWLoS1KHGPqS1CGGviR1iKEvSR1i6EtSh8wZ+km2Jzmc5Im+2llJdid5pv1c1upJcmuSqSR7k5zft83mtv4zSTafnIcjSTqe+bzT/wKw8ZjaVuCBqloLPNDmAS4D1rbbFuA26L1IADcAFwIXADccfaGQJA3PnKFfVX8BHDmmvAnY0aZ3AFf01b9YPd8AliZZDlwK7K6qI1X1MrCbf/xCIkk6yRZ7TP+cqjrUpl8EzmnTK4AX+tY70Gqz1f+RJFuSTCaZnJ6eXmR7kqSZnPCJ3KoqoAbQy9H9bauq9VW1fmJiYlC7lSSx+NB/qR22of083OoHgVV9661stdnqkqQhWmzo7wKOXoGzGbinr351u4pnA/BKOwx0P3BJkmXtBO4lrSZJGqLT5lohyZeADwBnJzlA7yqcm4CdSa4Bngc+0la/D7gcmAJeBT4KUFVHknwWeKSt95mqOvbksCTpJJsz9KvqqlkWXTzDugVcO8t+tgPbF9SdJGmg/ESuJHWIoS9JHWLoS1KHGPqS1CGGviR1iKEvSR1i6EtShxj6ktQhhr4kdYihL0kdYuhLUocY+pLUIYa+JHXInN+yqR8sq7d+ddHb7r/pQwPsRNI48p2+JHWIoS9JHWLoS1KHGPqS1CGGviR1yAmFfpL9SR5PsifJZKudlWR3kmfaz2WtniS3JplKsjfJ+YN4AJKk+RvEO/2fqap1VbW+zW8FHqiqtcADbR7gMmBtu20BbhvAfUuSFuBkHN7ZBOxo0zuAK/rqX6yebwBLkyw/CfcvSZrFiYZ+Af8ryaNJtrTaOVV1qE2/CJzTplcAL/Rte6DV3iTJliSTSSanp6dPsD1JUr8T/UTuT1fVwSTvBHYn+Vb/wqqqJLWQHVbVNmAbwPr16xe0rSTp+E7onX5VHWw/DwNfAS4AXjp62Kb9PNxWPwis6tt8ZatJkoZk0aGf5G1J3nF0GrgEeALYBWxuq20G7mnTu4Cr21U8G4BX+g4DSZKG4EQO75wDfCXJ0f38cVX9zySPADuTXAM8D3ykrX8fcDkwBbwKfPQE7luStAiLDv2qehZ47wz1vwEunqFewLWLvT9J0onzE7mS1CGGviR1iKEvSR1i6EtShxj6ktQhhr4kdYh/GF3/wD+qLp36fKcvSR1i6EtShxj6ktQhhr4kdYihL0kdYuhLUocY+pLUIYa+JHWIH87SQPjBLukHg+/0JalDDH1J6hBDX5I6ZOjH9JNsBH4HWAL8QVXdNOweNF48HyANz1BDP8kS4PPAB4EDwCNJdlXVk8PsQ6eOE3nBAF801D3Dfqd/ATBVVc8CJLkT2AQY+hqJUf2W4W83
GpVhh/4K4IW++QPAhf0rJNkCbGmz30vy9CLu52zg24vq8OQa175gfHsb177IzaPpLTfPucrYjhnj29u49gWL6+2fzrZg7K7Tr6ptwLYT2UeSyapaP6CWBmZc+4Lx7W1c+4Lx7W1c+4Lx7W1c+4LB9zbsq3cOAqv65le2miRpCIYd+o8Aa5OsSXIGcCWwa8g9SFJnDfXwTlW9nuQ64H56l2xur6p9J+GuTujw0Ek0rn3B+PY2rn3B+PY2rn3B+PY2rn3BgHtLVQ1yf5KkMeYnciWpQwx9SeqQUyr0k2xM8nSSqSRbR9zLqiQPJnkyyb4kH2v1s5LsTvJM+7lsRP0tSfKXSe5t82uSPNTG7k/aifZR9LU0yV1JvpXkqSTvG4cxS/Lv27/jE0m+lOQtoxqzJNuTHE7yRF9txjFKz62tx71Jzh9Bb/+l/XvuTfKVJEv7ll3fens6yaXD7Ktv2SeSVJKz2/zIx6zV/10bt31JfrOvfmJjVlWnxI3eieG/Bn4cOAN4DDh3hP0sB85v0+8A/go4F/hNYGurbwVuHlF//wH4Y+DeNr8TuLJN/x7wb0fU1w7gX7fpM4Clox4zeh8qfA54a99Y/cqoxgz4l8D5wBN9tRnHCLgc+DMgwAbgoRH0dglwWpu+ua+3c9vz9ExgTXv+LhlWX62+it6FJc8DZ4/RmP0M8OfAmW3+nYMas6E8aYZxA94H3N83fz1w/aj76uvnHnrfOfQ0sLzVlgNPj6CXlcADwEXAve0/97f7nphvGssh9vUjLVxzTH2kY8b3P0l+Fr0r3u4FLh3lmAGrjwmJGccI+G/AVTOtN6zejln2YeCONv2m52gL3/cNsy/gLuC9wP6+0B/5mNF7Q/GzM6x3wmN2Kh3emekrHlaMqJc3SbIaOA94CDinqg61RS8C54ygpf8K/Drw923+R4HvVNXrbX5UY7cGmAb+sB16+oMkb2PEY1ZVB4HfAv4PcAh4BXiU8Rizo2Ybo3F7Xvwreu+iYcS9JdkEHKyqx45ZNA5j9m7gX7TDh/87yU8NqrdTKfTHUpK3A18GPl5V3+1fVr2X6qFeM5vk54DDVfXoMO93nk6j92vubVV1HvB/6R2q+AcjGrNl9L4YcA3wY8DbgI3D7GEhRjFG85HkU8DrwB1j0MsPA58E/tOoe5nFafR+s9wA/BqwM0kGseNTKfTH7isekpxOL/DvqKq7W/mlJMvb8uXA4SG39X7g55PsB+6kd4jnd4ClSY5+WG9UY3cAOFBVD7X5u+i9CIx6zH4WeK6qpqvq74C76Y3jOIzZUbON0Vg8L5L8CvBzwC+1FyUYbW/vovci/lh7LqwEvpnkn4y4r6MOAHdXz8P0fis/exC9nUqhP1Zf8dBelW8Hnqqqz/Ut2gVsbtOb6R3rH5qqur6qVlbVanpj9LWq+iXgQeAXRtVX6+1F4IUkP9FKF9P72u2Rjhm9wzobkvxw+3c92tfIx6zPbGO0C7i6XZGyAXil7zDQUKT3h5N+Hfj5qnq1b9Eu4MokZyZZA6wFHh5GT1X1eFW9s6pWt+fCAXoXXrzIGIwZ8Kf0TuaS5N30Lmr4NoMYs5N5cmLYN3pn3f+K3hntT424l5+m9yv2XmBPu11O7/j5A8Az9M7OnzXCHj/A96/e+fH2n2cK+B+0qwZG0NM6YLKN258Cy8ZhzIBPA98CngD+iN7VEyMZM+BL9M4t/B29sLpmtjGid5L+8+058TiwfgS9TdE7Dn30efB7fet/qvX2NHDZMPs6Zvl+vn8idxzG7Azgv7f/b98ELhrUmPk1DJLUIafS4R1J0hwMfUnqEENfkjrE0JekDjH0JalDDH1J6hBDX5I65P8DxSdLbSeEKSEAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light", "tags": [] }, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "max length: 159\n" ] } ], "source": [ "from matplotlib import pyplot as plt\n", "\n", "# compute and show histogram for sentence length\n", "plt.hist([len(sentence) for sentence in train_sentences], 20)\n", "plt.show()\n", "\n", "# compute max sentence length\n", "print('max length:', max([len(sentence) for sentence in train_sentences]))" ] }, { "cell_type": "markdown", "metadata": { "id": "BKvuTIK20m3b" }, "source": [ "A Pytorch implementation of the BERT model will be provided by the `transformers` package (see https://github.com/huggingface/transformers). It comes with pretrained models for a range of variants of the BERT architecture and trained on different datasets. The list can be found at https://huggingface.co/transformers/pretrained_models.html. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 104 }, "id": "HDi3zx3Y0enb", "outputId": "65acd416-198d-4d28-ac99-624cb0796489" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[K |████████████████████████████████| 890kB 7.4MB/s \n", "\u001b[K |████████████████████████████████| 890kB 23.0MB/s \n", "\u001b[K |████████████████████████████████| 1.1MB 39.5MB/s \n", "\u001b[K |████████████████████████████████| 3.0MB 41.0MB/s \n", "\u001b[?25h Building wheel for sacremoses (setup.py) ... \u001b[?25l\u001b[?25hdone\n" ] } ], "source": [ "# install transformers package\n", "!pip -q install transformers\n", "\n", "# import relevant classes for pretrained tokenizer and model\n", "from transformers import AutoTokenizer, AutoModel" ] }, { "cell_type": "markdown", "metadata": { "id": "_UDyP2y1ySau" }, "source": [ "Tokenization is a tricky part of natural language processing. 
It aims at chopping the stream of characters at word boundaries, which, while it looks straightforward, involves many subtle decisions. Of course, different tools / corpora will assume different tokenization rules.\n", "\n", "The UD tokenization cuts the character stream at spaces and punctuation, and uses some special rules, for instance \\"can't\\" is split as \\"ca n't\\". Such tokenization results in an unbounded number of different words as corpus size gets larger. This problem is exacerbated for languages with a very productive morphology such as Finnish.\n", "\n", "The tokenization for BERT models works differently, in particular because the model is trained to predict tokens, which would result in a prohibitively large model with UD tokenization. Instead, these models tokenize in subword units. Frequent words are kept whole, but infrequent words are split into smaller pieces which often correspond to affixes. In addition, punctuation is split at the character level. This way, the number of different tokens is kept low (around 30k for BERT) while preserving linguistic information.\n", "\n", "Here is the result of tokenization for a simple sentence. Note how word pieces are prefixed with `'##'`.\n", "\n", "Questions:\n", "* Create a sentence which contains at least 5 partial word pieces (starting with ##)."
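, "\n", "The subword splitting can be sketched as a greedy longest-match-first search over a vocabulary. Below is a toy illustration of the WordPiece idea with a made-up vocabulary (`wordpiece` is a hypothetical helper, not the actual algorithm used to build or apply BERT's vocabulary):\n", "\n", "```python\n", "def wordpiece(word, vocab):\n", "    # greedily match the longest vocabulary entry at each position;\n", "    # continuation pieces are looked up with a '##' prefix\n", "    pieces, start = [], 0\n", "    while start < len(word):\n", "        for end in range(len(word), start, -1):\n", "            piece = ('##' if start > 0 else '') + word[start:end]\n", "            if piece in vocab:\n", "                pieces.append(piece)\n", "                start = end\n", "                break\n", "        else:\n", "            return ['[UNK]']  # no piece matches the rest of the word\n", "    return pieces\n", "\n", "vocab = {'so', '##oo', 'token', '##izer', 'awesome'}\n", "wordpiece('sooooo', vocab)  # ['so', '##oo', '##oo']\n", "```\n"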
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 133, "referenced_widgets": [ "b3365fd4a4764f5fa0dfc117db6a7b0b", "7d500d2638c641cdaa0cf666cd927c4b", "d48f5241bf054d25a3bd6e768814faf1", "ebc6a62525d34786a4f83b72bf6b931b", "2bff6fed557c4aa59f8b344c1d63aa4d", "09ba4edd35e84e39b069b902e2670a63", "436490e07f3748cfb1adedad564d21b0", "ea0a3e6d7902405298d2f2920cdb240b", "45ce5977384a4fe98578a19507149f32", "5b4839dce34840beb4dbc10dbf2747b0", "4287d910e74849c2adbe1cc1a5890a8e", "dd17cfeaf12f401ca1eb6fe1fc449c4f", "3598fb00f2c4493fa10157fadbc5282a", "b3a23469f3bc4affa42858b17ec59856", "4ecda7d9fd4b47e199c32593a52e85c2", "37b2cb34a6eb4e20bcd17748f000e3e2" ] }, "id": "siRU-r0ly8On", "outputId": "9328f4c0-e5d0-4311-f529-57f2c9eddbad" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b3365fd4a4764f5fa0dfc117db6a7b0b", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_…" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "45ce5977384a4fe98578a19507149f32", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=0.0, description='Downloading', max=213450.0, style=ProgressStyle(descripti…" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n" ] }, { "data": { "text/plain": [ "['This', 'token', '##izer', 'is', 'so', '##oo', '##oo', 'awesome', '.']" ] }, "execution_count": 13, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "# load tokenizer for a specific bert model (bert-base-cased)\n", "tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\n", "\n", "# tokenize 
an example sentence\n", "tokenizer.tokenize('This tokenizer is sooooo awesome.')" ] }, { "cell_type": "markdown", "metadata": { "id": "UiGKIajTFwMq" }, "source": [ "BERT will deliver representation vectors at the token level, so if we want to probe them with the tagging task, we need to align the two different tokenizations. Fortunately, the BERT tokenization is a sub-tokenization of the UD tokenization, so a token on the UD side will be composed of one to many subtokens on the BERT side, but not the converse. For example, the token `sooooo` for UD is split as `so ##oo ##oo` by BERT.\n", "\n", "How can multiple representation vectors generated by BERT for a word be used to predict a single tag (such as `so ##oo ##oo => RB`)? We choose to align the tag with the last subtoken of the word and convert the rest to the prediction of a special `<pad>` tag. The `<pad>` tag will have another role when batching sentences together, and we will ignore it when evaluating the accuracy of the predicted tags. Other approaches have been proposed, such as using the representation of the first subtoken, or averaging the representation vectors of the subtokens. So for the `sooooo => RB` example, the prediction problem becomes `so => <pad>`, `##oo => <pad>`, `##oo => RB`. \n", "\n", "Questions:\n", "* What is the difference in average length between original sentences and tokenized sentences?\n", "* How many `<pad>` tokens are added to the aligned taggings?"
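, "\n", "On a single word, the chosen alignment rule can be sketched as follows (`align_word` is a hypothetical helper; the padding tag is written `<pad>` here as an assumption about the tag name):\n", "\n", "```python\n", "def align_word(pieces, tag, pad='<pad>'):\n", "    # the tag goes on the last subword piece; earlier pieces get the padding tag\n", "    return [pad] * (len(pieces) - 1) + [tag]\n", "\n", "align_word(['so', '##oo', '##oo'], 'RB')  # ['<pad>', '<pad>', 'RB']\n", "```\n"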
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 52 }, "id": "1LswqrD0-U3L", "outputId": "0209beb8-a328-4dde-c250-a43b77a7a8c1" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['There', 'has', 'been', 'talk', 'that', 'the', 'night', 'cu', '##rf', '##ew', 'might', 'be', 'implemented', 'again', '.']\n", "['EX', 'VBZ', 'VBN', 'NN', 'IN', 'DT', 'NN', '', '', 'NN', 'MD', 'VB', 'VBN', 'RB', '.']\n" ] } ], "source": [ "import re\n", "\n", "def align_tokenizations(sentences, taggings):\n", " bert_tokenized_sentences = []\n", " aligned_taggings = []\n", "\n", " for sentence, tagging in zip(sentences, taggings):\n", " # first generate BERT-tokenization\n", " bert_tokenized_sentence = tokenizer.tokenize(' '.join(sentence))\n", "\n", " aligned_tagging = []\n", " current_word = ''\n", " index = 0 # index of current word in sentence and tagging\n", " for token in bert_tokenized_sentence:\n", " current_word += re.sub(r'^##', '', token) # recompose word with subtoken\n", " sentence[index] = sentence[index].replace('\\xad', '') # fix bug in data\n", "\n", " # note that some word factors correspond to unknown words in BERT\n", " assert token == '[UNK]' or sentence[index].startswith(current_word)\n", "\n", " if token == '[UNK]' or sentence[index] == current_word: # if we completed a word\n", " current_word = ''\n", " aligned_tagging.append(tagging[index])\n", " index += 1\n", " else: # otherwise insert padding\n", " aligned_tagging.append('')\n", "\n", " assert len(bert_tokenized_sentence) == len(aligned_tagging)\n", "\n", " bert_tokenized_sentences.append(bert_tokenized_sentence)\n", " aligned_taggings.append(aligned_tagging)\n", "\n", " return bert_tokenized_sentences, aligned_taggings\n", "\n", "train_bert_tokenized_sentences, train_aligned_taggings = align_tokenizations(train_sentences, train_taggings)\n", "valid_bert_tokenized_sentences, valid_aligned_taggings = 
align_tokenizations(valid_sentences, valid_taggings)\n", "test_bert_tokenized_sentences, test_aligned_taggings = align_tokenizations(test_sentences, test_taggings)\n", "\n", "print(train_bert_tokenized_sentences[42])\n", "print(train_aligned_taggings[42])" ] }, { "cell_type": "markdown", "metadata": { "id": "yWWP-MVhCeWd" }, "source": [ "The next stage consists in converting tokens and tags to ids so that they can be crunched by the neural networks. For BERT tokens, the tokenizer comes with its own mapping from tokens to integers. For tags, we will create a dictionary that assigns each tag to an integer, starting from 0, which we use for the special `<pad>` tag.\n", "\n", "An additional step is specific to BERT: the text is prefixed with `[CLS]` and postfixed with `[SEP]`. These special tokens tell the model where the sentence starts and where it ends, because it is trained on full documents.\n", "\n", "Note that we also convert lists of ids to pytorch tensors at that step. The tensors are sent to the GPU prior to returning. There is limited memory on the GPU (about 10G), and for large datasets, especially those containing images, it is better to send the data to the GPU only once instances have been batched together (see the `collate_fn` function below). \n", "\n", "Questions:\n", "* What is the identifier, according to BERT's tokenizer, for the word \"food\"?"
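, "\n", "The tag-to-id mapping relies on a `defaultdict` whose default factory returns the current size of the dictionary, so each unseen tag receives the next free integer. A small self-contained sketch of this trick (with made-up tags):\n", "\n", "```python\n", "import collections\n", "\n", "# looking up an unseen key inserts it with id = current size of the dict\n", "vocab = collections.defaultdict(lambda: len(vocab))\n", "vocab['<pad>'] = 0  # reserve id 0 for padding\n", "\n", "ids = [vocab[tag] for tag in ['NN', 'IN', 'NN']]\n", "# 'NN' gets id 1, 'IN' gets id 2, the second 'NN' reuses id 1\n", "```\n"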
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 104 }, "id": "9BH7SMlbJfua", "outputId": "73510442-6962-4d1d-d61a-4b67833f0c7f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([ 101, 1247, 1144, 1151, 2037, 1115, 1103, 1480, 16408, 11931,\n", " 5773, 1547, 1129, 7042, 1254, 119, 100], device='cuda:0')\n", "tensor([ 0, 29, 22, 19, 9, 10, 8, 9, 0, 0, 9, 13, 14, 19, 23, 11, 0],\n", " device='cuda:0')\n", "num labels: 51\n" ] } ], "source": [ "import torch\n", "device = torch.device('cuda' if torch.cuda.is_available else 'cpu')\n", "\n", "import collections\n", "\n", "label_vocab = collections.defaultdict(lambda: len(label_vocab))\n", "label_vocab[''] = 0\n", "\n", "def convert_to_ids(sentences, taggings):\n", " sentences_ids = []\n", " taggings_ids = []\n", " for sentence, tagging in zip(sentences, taggings):\n", " sentence_tensor = torch.tensor(tokenizer.convert_tokens_to_ids(['[CLS]'] + sentence + ['SEP'])).long()\n", " tagging_tensor = torch.tensor([0] + [label_vocab[tag] for tag in tagging] + [0]).long()\n", "\n", " sentences_ids.append(sentence_tensor.to(device))\n", " taggings_ids.append(tagging_tensor.to(device))\n", " return sentences_ids, taggings_ids\n", "\n", "train_sentences_ids, train_taggings_ids = convert_to_ids(train_bert_tokenized_sentences, train_aligned_taggings)\n", "valid_sentences_ids, valid_taggings_ids = convert_to_ids(valid_bert_tokenized_sentences, valid_aligned_taggings)\n", "test_sentences_ids, test_taggings_ids = convert_to_ids(test_bert_tokenized_sentences, test_aligned_taggings)\n", "\n", "print(train_sentences_ids[42])\n", "print(train_taggings_ids[42])\n", "print('num labels:', len(label_vocab))" ] }, { "cell_type": "markdown", "metadata": { "id": "fdv91SwjGVdZ" }, "source": [ "Torch batching is much easier if data is presented through the Dataset class. 
As per the [documentation](https://pytorch.org/docs/stable/data.html), our class implements `__getitem__` which returns a pair of sentence and corresponding tagging, and `__len__` which returns the number of instances in the dataset. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_Uc345K5GSqO" }, "outputs": [], "source": [ "from torch.utils.data import Dataset\n", "\n", "class PosTaggingDataset(Dataset):\n", "    def __init__(self, sentences, taggings):\n", "        assert len(sentences) == len(taggings)\n", "        self.sentences = sentences\n", "        self.taggings = taggings\n", "\n", "    def __getitem__(self, i):\n", "        return self.sentences[i], self.taggings[i]\n", "\n", "    def __len__(self):\n", "        return len(self.sentences)" ] }, { "cell_type": "markdown", "metadata": { "id": "pETHIYJyHGxF" }, "source": [ "Now, we need a function to create a batch from a list of instances. Each instance is a pair of same-sized tensors containing token ids and tag ids. We must return two tensors, one with all sentences, and one with all taggings. Since sentences can be of different lengths, we will pad them to the longest sentence in the batch. Conveniently, the padding token for BERT and our tagging has id 0, so we can create zero tensors and fill them with data."
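, "\n", "With plain Python lists, the padding step amounts to the following sketch (`pad_batch` is a hypothetical helper; the real implementation fills zero tensors instead):\n", "\n", "```python\n", "def pad_batch(sequences, pad_id=0):\n", "    # right-pad every sequence to the length of the longest one\n", "    max_len = max(len(seq) for seq in sequences)\n", "    return [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]\n", "\n", "pad_batch([[101, 7, 102], [101, 102]])  # [[101, 7, 102], [101, 102, 0]]\n", "```\n"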
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "id": "hLD1ct3S-uNh", "outputId": "8424620f-cf2c-426f-85f2-33d4137c9063" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([2, 3]) torch.Size([2, 3])\n" ] } ], "source": [ "def collate_fn(items):\n", " max_len = max(len(item[0]) for item in items)\n", "\n", " sentences = torch.zeros((len(items), max_len), device=items[0][0].device).long().to(device)\n", " taggings = torch.zeros((len(items), max_len)).long().to(device)\n", "\n", " for i, (sentence, tagging) in enumerate(items):\n", " sentences[i][0:len(sentence)] = sentence\n", " taggings[i][0:len(tagging)] = tagging\n", "\n", " return sentences, taggings\n", "\n", "\n", "x, y = collate_fn([[torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6])], [torch.tensor([1, 2]), torch.tensor([3, 4])]])\n", "print(x.shape, y.shape)" ] }, { "cell_type": "markdown", "metadata": { "id": "rRm76NynMXAQ" }, "source": [ "The torch `DataLoader` class handles batching and shuffling instances from a torch `Dataset`. We create one for each subset and only shuffle the training set. A larger `batch_size` leads to faster processing at from parallelization at the cost of higher memory usage, but it also acts as regularization, making convergence slower." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YTmnv5OQcfU9" }, "outputs": [], "source": [ "from torch.utils.data import DataLoader\n", "\n", "batch_size = 64\n", "\n", "train_loader = DataLoader(PosTaggingDataset(train_sentences_ids, train_taggings_ids), batch_size=batch_size, collate_fn=collate_fn, shuffle=True)\n", "valid_loader = DataLoader(PosTaggingDataset(valid_sentences_ids, valid_taggings_ids), batch_size=batch_size, collate_fn=collate_fn)\n", "test_loader = DataLoader(PosTaggingDataset(test_sentences_ids, test_taggings_ids), batch_size=batch_size, collate_fn=collate_fn)" ] }, { "cell_type": "markdown", "metadata": { "id": "92NNBEUjNRsa" }, "source": [ "For the sake of comparison, let's start with training a fully supervised classifier from the data. This classifier will be a simple RNN with an embedding layer which projects token ids in a vector space, a bidirectional GRU as recurrent layer, and a decision layer which projects hidden representations from the RNN to the space of POS tags. The model uses [GELU](https://arxiv.org/abs/1606.08415) non linearity and a dropout of 30\\%.\n", "\n", "Don't expect good performance from this model. It is very simple and POS tagging is known for two difficulties that it cannot handle well, ambiguity and generalization. In order to get state-of-the-art performance (about 97\\% accuracy on this dataset), the model would need to account for character sequences (to learn relevant morphemes for classifying words not seen in training), and a label sequence model such as conditional random fields.\n", "\n", "Questions:\n", "* Create a batch containing the first 10 sentences of the validation corpus and pass them to the model. What is the shape of the returned sensor?\n", "* What would change in the model if the GRU was not bidirectional?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "id": "_727qYKlvWXt", "outputId": "3068f140-58cd-4460-8897-8c08728b6c94" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([2, 3, 51])\n" ] } ], "source": [ "import torch.nn as nn\n", "import torch.nn.functional as F\n", "\n", "class RNNClassifier(nn.Module):\n", " def __init__(self, num_labels, embed_size=128, hidden_size=128):\n", " super().__init__()\n", " self.embedding = nn.Embedding(tokenizer.vocab_size, embed_size, padding_idx=tokenizer.pad_token_id)\n", " self.rnn = nn.GRU(embed_size, hidden_size, num_layers=1, bidirectional=True, batch_first=True)\n", " self.decision = nn.Linear(1 * 2 * hidden_size, num_labels) # size output by GRU is number of layers * number of directions * hidden size\n", " self.to(device)\n", " \n", " def forward(self, sentences):\n", " embed_rep = self.embedding(sentences)\n", " word_rep, sentence_rep = self.rnn(embed_rep)\n", " return self.decision(F.dropout(F.gelu(word_rep), 0.3))\n", "\n", "# check that model works on an arbitrary batch that contains two sentences of length 3\n", "rnn_model = RNNClassifier(len(label_vocab))\n", "with torch.no_grad():\n", " y = rnn_model(torch.tensor([[0, 1, 2], [3, 4, 5]]).to(device))\n", "\n", "# the expected shape is (batch size, max sentence length, number of labels)\n", "print(y.shape)" ] }, { "cell_type": "markdown", "metadata": { "id": "FEreoj5MRkwa" }, "source": [ "The following function computes the performance of a model for a given data loader. It returns two values: average batch-level loss, and token-level accuracy. For that, it needs to compute inference on all instances of the dataset provded by the loader. 
The main loop to perform inference looks like this:\n", "\n", "```\n", "for x, y in loader:\n", " y_scores = model(x)\n", " loss = criterion(y_scores, y)\n", "```\n", "\n", "`x` and `y` are tensors containing the sentences and corresponding tags for a batch (as provided by `collate_fn`). `model(x)` calls `model.forward(x)` and returns a tensor of shape (batch-size, sequence-length, num-labels) containing a score for each possible tag, for each word, in each sequence of the batch. These scores are also called logits because they have not yet been through the softmax, which is computed by the criterion. `criterion(y_scores, y)` computes the loss between the predictions represented by `y_scores` and the reference tags in `y`.\n", "\n", "While the average loss is easy to compute, accuracy requires more care. First, we compute the argmax of `y_scores` to get the highest-scoring tag for each prediction. Then we create a mask from the non-zero entries of `y`, which can be used to ignore padding in computations.\n", "\n", "Questions:\n", "* What is the accuracy of the model on the test set?\n", "* What is the expected accuracy of a random model given the number of different tags?"
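The masking step described above can be illustrated on a toy batch with made-up tags (label id 0 standing for padding, as in this notebook):

```python
# Toy illustration of masked accuracy: padded positions (label 0) must not
# count as correct or incorrect predictions.
import torch

y = torch.tensor([[3, 1, 0], [2, 2, 2]])       # reference tags, 0 = padding
y_pred = torch.tensor([[3, 4, 3], [2, 2, 1]])  # argmax of some y_scores

mask = (y != 0)                                   # ignore padded positions
correct = torch.sum((y_pred == y) * mask).item()  # 3 correct real tokens
total = torch.sum(mask).item()                    # 5 real tokens in the batch
print(correct / total)                            # 0.6
```

Without the mask, the prediction at the padded position would spuriously count against (or for) the model.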
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "id": "v5__p0uHDXvE", "outputId": "05b2c02b-5b35-4781-dda2-15073228845f" }, "outputs": [ { "data": { "text/plain": [ "(3.8916915878653526, 0.03785288270377734)" ] }, "execution_count": 24, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "def perf(model, loader):\n", " criterion = nn.CrossEntropyLoss()\n", " model.eval() # do not apply training-specific steps such as dropout\n", " total_loss = correct = num_loss = num_perf = 0\n", " for x, y in loader:\n", " with torch.no_grad(): # no need to store the computation graph for gradients\n", " # perform inference and compute loss\n", " y_scores = model(x)\n", " loss = criterion(y_scores.view(-1, len(label_vocab)), y.view(-1)) # requires tensors of shape (num-instances, num-labels) and (num-instances)\n", "\n", " # gather loss statistics\n", " total_loss += loss.item()\n", " num_loss += 1\n", "\n", " # gather accuracy statistics\n", " y_pred = torch.max(y_scores, 2)[1] # compute the highest-scoring tag\n", " mask = (y != 0) # ignore padding tags\n", " correct += torch.sum((y_pred == y) * mask) # compute the number of correct predictions\n", " num_perf += torch.sum(mask).item()\n", " return total_loss / num_loss, correct.item() / num_perf\n", "\n", "# without training, accuracy should be a bit less than 2% (the chance of getting a label correct)\n", "perf(rnn_model, valid_loader)" ] }, { "cell_type": "markdown", "metadata": { "id": "z4Cxten9-pYk" }, "source": [ "Training is very similar to evaluation as it also performs inference. In addition, it uses an optimizer which modifies the parameters of the neural network to minimize the `criterion`, using gradients computed by backpropagation through the model. 
At each epoch, we perform inference, modify model weights after each batch, and finally use `perf` to compute the loss and accuracy on the validation data.\n", "\n", "Note that training is successful when the training loss gets lower after every epoch. It might fluctuate on validation data because of overfitting or generalization noise." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "CU7yXFdZDQU9" }, "outputs": [], "source": [ "import torch.optim as optim\n", "\n", "def fit(model, epochs):\n", " criterion = nn.CrossEntropyLoss()\n", " optimizer = optim.Adam(model.parameters(), lr=1e-2)\n", " for epoch in range(epochs):\n", " model.train()\n", " total_loss = num = 0\n", " for x, y in train_loader:\n", " optimizer.zero_grad() # reset gradients before accumulating new ones\n", " y_scores = model(x)\n", " loss = criterion(y_scores.view(-1, len(label_vocab)), y.view(-1))\n", " loss.backward() # compute gradients through the computation graph\n", " optimizer.step() # modify model parameters\n", " total_loss += loss.item()\n", " num += 1\n", " print(1 + epoch, total_loss / num, *perf(model, valid_loader))\n" ] }, { "cell_type": "markdown", "metadata": { "id": "-lZzxlVfAzoo" }, "source": [ "Let's train the RNN classifier. It is fully supervised, so accuracy should rise quickly. Due to the factors already mentioned, it is probably not going to reach state-of-the-art accuracy, but it should be good enough given the effort we put into the model, and surely better than chance. Running more epochs with a learning rate schedule might give a more accurate model, but accuracy will not improve dramatically just by tuning the learning setup.\n", "\n", "Questions:\n", "* What happens if you train the model for 10 epochs? 
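As an aside on the `criterion` call inside `fit`: `nn.CrossEntropyLoss` expects scores of shape (num-instances, num-labels) and integer targets of shape (num-instances), which is why both tensors are flattened with `view`. A minimal sketch with arbitrary sizes:

```python
# Why the tensors are flattened before the loss: batch and sequence dimensions
# are merged so each token position becomes one classification instance.
import torch
import torch.nn as nn

num_labels = 51                            # size of the tag vocabulary here
y_scores = torch.randn(4, 7, num_labels)   # (batch, sequence length, labels)
y = torch.randint(0, num_labels, (4, 7))   # (batch, sequence length)

criterion = nn.CrossEntropyLoss()
loss = criterion(y_scores.view(-1, num_labels), y.view(-1))  # 28 instances
print(loss.shape)  # torch.Size([]) -- a scalar
```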
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 104 }, "id": "iMrImhAdcXMs", "outputId": "2a995b69-e440-4e40-a551-bfb093212f67" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1 0.306389457660214 0.140928921289742 0.834910536779324\n", "2 0.0908740681158949 0.11043157370295376 0.8741948310139165\n", "3 0.05610049066456909 0.10764240776188672 0.8817892644135189\n", "4 0.04206655440585954 0.11061940481886268 0.8841749502982107\n", "5 0.0353463868353972 0.11211540910881013 0.8899005964214711\n" ] } ], "source": [ "rnn_model = RNNClassifier(len(label_vocab))\n", "fit(rnn_model, 5)" ] }, { "cell_type": "markdown", "metadata": { "id": "hJs77NwuBhTI" }, "source": [ "We will now explore how BERT performs at the same POS tagging task except that BERT was not trained explicitly to do it. The principle consists in training a linear model on top of representation vectors generated by BERT for each token. It is important that although we will run BERT inference, we will not touch BERT parameters and only train the linear model.\n", "\n", "Before starting, we will define a baseline which inputs random vectors, of the same size as those generated by BERT, to the linear classifier. So if a word is always labelled with the same POS tag, or if a tag has a particularly high prior, the linear model should be able to learn it. We will call this baseline `LinearProbeRandom`. Another popular baseline consists in generating an alternate tagging task with the same number of labels but with all occurences of a given word associated with a single random label. It helps understanding how the model exploits word identities but is also more involved to implement.\n", "\n", "The `LinearProbeRandom` consists of an embedding layer which projects words to vectors of size 768 (same as BERT) and a linear model which predicts POS tags from those vectors. 
The embedding layer is not included in the model parameters so that only the probe is trained. Every instantiation of this model will yield a different random embedding. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "id": "nzms_zk9b2XJ", "outputId": "28d50e46-e013-4ee6-8138-e2c852a099fe" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([2, 3, 51])\n" ] } ], "source": [ "class LinearProbeRandom(nn.Module):\n", " def __init__(self, num_labels):\n", " super().__init__()\n", " self.embedding = nn.Embedding(tokenizer.vocab_size, 768)\n", " self.probe = nn.Linear(768, num_labels)\n", " self.to(device)\n", "\n", " def parameters(self):\n", " return self.probe.parameters()\n", " \n", " def forward(self, sentences):\n", " with torch.no_grad(): # embeddings are not trained\n", " word_rep = self.embedding(sentences)\n", " return self.probe(word_rep)\n", "\n", "# the model should return a tensor of shape (batch size, sequence length, number of labels)\n", "random_model = LinearProbeRandom(len(label_vocab))\n", "with torch.no_grad():\n", " y = random_model(torch.tensor([[0, 1, 2], [3, 4, 5]]).to(device))\n", "print(y.shape)" ] }, { "cell_type": "markdown", "metadata": { "id": "kf9JATnAHeyS" }, "source": [ "Training the baseline leads to non-trivial performance compared to randomly outputting tags, which means that there is already a lot of regularity in the tag distribution. The question is now whether BERT representations contain more linguistic knowledge directly accessible to a linear classifier.\n", "\n", "Questions:\n", "* What happens if you train the model for 10 epochs?\n", "* How do you explain the difference in training time between the RNN and the probe?"
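For reference, the "more involved" control-task baseline mentioned earlier, where every occurrence of a given word receives the same randomly chosen label, could be sketched as follows (function and variable names are illustrative, not from the tutorial):

```python
# Hedged sketch of a control-task labelling: each word id is mapped once to a
# fixed random label, producing an alternate task with the same label inventory.
import random

def make_control_taggings(sentences_ids, num_labels, seed=0):
    rng = random.Random(seed)
    word_to_label = {}  # each word id always receives the same random label
    control = []
    for sentence in sentences_ids:
        tags = []
        for word in sentence:
            if word not in word_to_label:
                # label 0 is assumed reserved for padding, so draw from 1..num_labels-1
                word_to_label[word] = rng.randrange(1, num_labels)
            tags.append(word_to_label[word])
        control.append(tags)
    return control

# toy usage: word id 7 receives the same label wherever it occurs
tagged = make_control_taggings([[7, 2, 7], [7, 5]], num_labels=51)
print(tagged[0][0] == tagged[0][2] == tagged[1][0])  # True
```

A probe that does well on this control task is exploiting word identity rather than contextual linguistic structure.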
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 104 }, "id": "olJAt0JJcAGC", "outputId": "aa5e97d6-73cd-44db-be75-813d492779b4" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1 0.4851789587006277 0.40942691499367356 0.6808349900596421\n", "2 0.3683222181030682 0.3855127966962755 0.6943538767395626\n", "3 0.33951936472131283 0.34710615081712604 0.6848906560636183\n", "4 0.3297648745379886 0.34240866359323263 0.6805566600397615\n", "5 0.32912805486394436 0.3386711673811078 0.6791252485089463\n" ] } ], "source": [ "random_model = LinearProbeRandom(len(label_vocab))\n", "fit(random_model, 5)" ] }, { "cell_type": "markdown", "metadata": { "id": "J62HgiBUIMhh" }, "source": [ "The linear probe for the BERT model follows exactly the same architecture as the baseline probe, except that the pretrained BERT model provided by the transformers package is used. Again, we do not train the BERT parameters and only use the generated representations. Given enough memory, they could be pre-computed before training the linear probe."
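The pre-computation idea can be sketched as follows. A small frozen `nn.Embedding` stands in for the BERT encoder here to keep the example light, but the caching logic would be the same: since the encoder is frozen, its output for each batch never changes and can be computed once.

```python
# Sketch of caching a frozen encoder's representations so the probe can be
# trained on stored tensors instead of re-running the encoder every epoch.
import torch
import torch.nn as nn

def precompute_representations(encoder, batches):
    cache = []
    encoder.eval()
    with torch.no_grad():  # frozen encoder: no gradients needed
        for x in batches:
            cache.append(encoder(x))
    return cache           # train the probe directly on these tensors

frozen_encoder = nn.Embedding(100, 768)  # stand-in for the BERT forward pass
batches = [torch.randint(0, 100, (2, 5)) for _ in range(3)]
reps = precompute_representations(frozen_encoder, batches)
print(len(reps), reps[0].shape)  # 3 torch.Size([2, 5, 768])
```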
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 84, "referenced_widgets": [ "1a1162e79fb74c6d91020d21f27a96b0", "c8ee3faac13b43399294c06b13a34965", "3f396f3016554b38a61739b6c8f75a07", "501839f4a4324a5ab78130170beaa7a9", "e4edd2f141df45ac93a2e6b5bdbbeff9", "333656f4a59d4fd5a39bd88982e62e18", "f6cb83cba5134788a89ce2979294c991", "fcf5e12f5cae4275ad5fb0a16680522c" ] }, "id": "WcPM8iBJcDtr", "outputId": "4a2d8a3f-7edc-4f45-8731-6a6203e137ca" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "1a1162e79fb74c6d91020d21f27a96b0", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=0.0, description='Downloading', max=435779157.0, style=ProgressStyle(descri…" ] }, "metadata": { "tags": [] }, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "torch.Size([2, 3, 51])\n" ] } ], "source": [ "class LinearProbeBert(nn.Module):\n", " def __init__(self, num_labels):\n", " super().__init__()\n", " self.bert = AutoModel.from_pretrained('bert-base-cased')\n", " self.probe = nn.Linear(self.bert.config.hidden_size, num_labels)\n", " self.to(device)\n", "\n", " def parameters(self):\n", " return self.probe.parameters()\n", " \n", " def forward(self, sentences):\n", " with torch.no_grad(): # no training of BERT parameters\n", " word_rep, sentence_rep = self.bert(sentences, return_dict=False)\n", " return self.probe(word_rep)\n", "\n", "# the model should return a tensor of shape (batch size, sequence length, number of labels)\n", "bert_model = LinearProbeBert(len(label_vocab))\n", "y = bert_model(torch.tensor([[0, 1, 2], [3, 4, 5]]).to(device))\n", "print(y.shape)" ] }, { "cell_type": "markdown", "metadata": { "id": "MvURZQzvJHHp" }, "source": [ "Training the BERT probe is much slower than the baseline probe because BERT representations are recomputed every time a sentence is presented to the 
model. \n", "\n", "Questions:\n", "* How does accuracy on the development set compare with the random probe?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 104 }, "id": "QudYFSOUSRXy", "outputId": "d535e589-c849-4e51-ce7d-9f9226e1a38c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1 0.24444428475915778 0.09562666842248291 0.8998011928429424\n", "2 0.09677041964415385 0.078243094147183 0.9149105367793241\n", "3 0.08461419627906716 0.07228884345386177 0.9227435387673957\n", "4 0.08005621008650989 0.0706025876570493 0.9218290258449304\n", "5 0.076806691000048 0.06890617276076227 0.9247316103379721\n" ] } ], "source": [ "bert_model = LinearProbeBert(len(label_vocab))\n", "fit(bert_model, 5)" ] }, { "cell_type": "markdown", "metadata": { "id": "mnCXeSoEJpMP" }, "source": [ "Recap of performance on the test set. The good accuracy of the probe trained on BERT representations suggests that pretraining on large quantities of text with language modeling tasks (masking words, guessing whether two sentences follow eachother) leads to representations that embed some sort of linguistic knowledge. \n", "\n", "It is difficult to compare the performance of the BERT probe to the RNN but others have shown that ELMO, a RNN pre-trained in a similar fashion than BERT exhibit good linguistic probing accuracy, suggesting that the neural network architecture is not as important as the pre-training.\n", "\n", "Questions:\n", "* What can you conclude from this set of results?\n", "* What confounding factors might hinder these conclusions?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 69 }, "id": "32W8KCDSREvM", "outputId": "e7f147f9-d5d3-4f91-a186-07e88ceeb249" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNN representation (supervised) 0.10681300356306812 0.886799219030163\n", "RANDOM representation (unsupervised) 0.327771899262161 0.6761764354305295\n", "BERT representation (unsupervised) 0.06551418730029554 0.915966051719329\n" ] } ], "source": [ "print('RNN representation (supervised)', *perf(rnn_model, test_loader))\n", "print('RANDOM representation (unsupervised)', *perf(random_model, test_loader))\n", "print('BERT representation (unsupervised)', *perf(bert_model, test_loader))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Exercise 1\n", "--------\n", "Download and probe GloVe embeddings for the POS tagging task (https://nlp.stanford.edu/projects/glove/)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Exercise 2\n", "-----\n", "Probe the `bert-base-multilingual-cased` model from huggingface for French POS tagging (pageperso.lis-lab.fr/benoit.favre/files/ftb.tgz)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "n_KDqqNbLDwq" }, "source": [ "Ideas to go further:\n", "* Implement the alternative baseline probe which replaces the taggings with a random labelling where each word is always associated with the same label. 
\n", "* Train a BERT probe with an untrained BERT model (is there anything to the transformer architecture itself?)\n", "* Fine-tune a BERT model for POS-tagging in the fully supervised setting (use a learning rate of 2e-5, and use a linear warmup [learning rate schedule](https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.LambdaLR) on 10% of the updates as suggested by BERT authors)" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "bert-probing.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.0" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "09ba4edd35e84e39b069b902e2670a63": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, 
"width": null } }, "1a1162e79fb74c6d91020d21f27a96b0": { "model_module": "@jupyter-widgets/controls", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_3f396f3016554b38a61739b6c8f75a07", "IPY_MODEL_501839f4a4324a5ab78130170beaa7a9" ], "layout": "IPY_MODEL_c8ee3faac13b43399294c06b13a34965" } }, "2bff6fed557c4aa59f8b344c1d63aa4d": { "model_module": "@jupyter-widgets/controls", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "initial" } }, "333656f4a59d4fd5a39bd88982e62e18": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": 
null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "3598fb00f2c4493fa10157fadbc5282a": { "model_module": "@jupyter-widgets/controls", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "initial" } }, "37b2cb34a6eb4e20bcd17748f000e3e2": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "3f396f3016554b38a61739b6c8f75a07": { "model_module": "@jupyter-widgets/controls", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": 
"1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "Downloading: 100%", "description_tooltip": null, "layout": "IPY_MODEL_333656f4a59d4fd5a39bd88982e62e18", "max": 435779157, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_e4edd2f141df45ac93a2e6b5bdbbeff9", "value": 435779157 } }, "4287d910e74849c2adbe1cc1a5890a8e": { "model_module": "@jupyter-widgets/controls", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "Downloading: 100%", "description_tooltip": null, "layout": "IPY_MODEL_b3a23469f3bc4affa42858b17ec59856", "max": 213450, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_3598fb00f2c4493fa10157fadbc5282a", "value": 213450 } }, "436490e07f3748cfb1adedad564d21b0": { "model_module": "@jupyter-widgets/controls", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "45ce5977384a4fe98578a19507149f32": { "model_module": "@jupyter-widgets/controls", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_4287d910e74849c2adbe1cc1a5890a8e", "IPY_MODEL_dd17cfeaf12f401ca1eb6fe1fc449c4f" ], "layout": "IPY_MODEL_5b4839dce34840beb4dbc10dbf2747b0" } }, "4ecda7d9fd4b47e199c32593a52e85c2": { 
"model_module": "@jupyter-widgets/controls", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "501839f4a4324a5ab78130170beaa7a9": { "model_module": "@jupyter-widgets/controls", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_fcf5e12f5cae4275ad5fb0a16680522c", "placeholder": "​", "style": "IPY_MODEL_f6cb83cba5134788a89ce2979294c991", "value": " 436M/436M [00:12<00:00, 34.4MB/s]" } }, "5b4839dce34840beb4dbc10dbf2747b0": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, 
"padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "7d500d2638c641cdaa0cf666cd927c4b": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "b3365fd4a4764f5fa0dfc117db6a7b0b": { "model_module": "@jupyter-widgets/controls", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_d48f5241bf054d25a3bd6e768814faf1", "IPY_MODEL_ebc6a62525d34786a4f83b72bf6b931b" ], "layout": "IPY_MODEL_7d500d2638c641cdaa0cf666cd927c4b" } }, "b3a23469f3bc4affa42858b17ec59856": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, 
"_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "c8ee3faac13b43399294c06b13a34965": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, 
"d48f5241bf054d25a3bd6e768814faf1": { "model_module": "@jupyter-widgets/controls", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "Downloading: 100%", "description_tooltip": null, "layout": "IPY_MODEL_09ba4edd35e84e39b069b902e2670a63", "max": 433, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_2bff6fed557c4aa59f8b344c1d63aa4d", "value": 433 } }, "dd17cfeaf12f401ca1eb6fe1fc449c4f": { "model_module": "@jupyter-widgets/controls", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_37b2cb34a6eb4e20bcd17748f000e3e2", "placeholder": "​", "style": "IPY_MODEL_4ecda7d9fd4b47e199c32593a52e85c2", "value": " 213k/213k [00:00<00:00, 571kB/s]" } }, "e4edd2f141df45ac93a2e6b5bdbbeff9": { "model_module": "@jupyter-widgets/controls", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "initial" } }, "ea0a3e6d7902405298d2f2920cdb240b": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", 
"_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "ebc6a62525d34786a4f83b72bf6b931b": { "model_module": "@jupyter-widgets/controls", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_ea0a3e6d7902405298d2f2920cdb240b", "placeholder": "​", "style": "IPY_MODEL_436490e07f3748cfb1adedad564d21b0", "value": " 433/433 [04:35<00:00, 1.57B/s]" } }, "f6cb83cba5134788a89ce2979294c991": { "model_module": "@jupyter-widgets/controls", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "fcf5e12f5cae4275ad5fb0a16680522c": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", 
"_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } } } } }, "nbformat": 4, "nbformat_minor": 1 }