GPT-2

Short description: 2019 text-generating language model
Generative Pre-trained Transformer 2 (GPT-2)
GPT-2 completion using the Hugging Face Write With Transformer website, prompted with text from this Wikipedia article (All highlighted text after the initial prompt is machine-generated from the first suggested completion, without further editing.)
Original author(s): OpenAI
Initial release: 14 February 2019
Repository: https://github.com/openai/gpt-2
Type: Transformer language model
Website: openai.com/blog/gpt-2-1-5b-release/

Generative Pre-trained Transformer 2 (GPT-2) is an open-source artificial intelligence model created by OpenAI in February 2019.[1][2][3][4] GPT-2 translates text, answers questions, summarizes passages,[5] and generates text output on a level that, while sometimes indistinguishable from that of humans,[6] can become repetitive or nonsensical when generating long passages.[7] It is a general-purpose learner; it was not specifically trained to do any of these tasks, and its ability to perform them is an extension of its general ability to accurately synthesize the next item in an arbitrary sequence.[8][5] GPT-2 was created as a "direct scale-up" of OpenAI's 2018 GPT model,[9] with a ten-fold increase in both its parameter count and the size of its training dataset.[4]

The GPT architecture implements a deep neural network, specifically a transformer model,[9] which uses attention in place of previous recurrence- and convolution-based architectures.[10][11] Attention mechanisms allow the model to selectively focus on segments of input text it predicts to be the most relevant.[12][13] This model allows for greatly increased parallelization, and outperforms previous benchmarks for RNN/CNN/LSTM-based models.[9]

OpenAI released the complete version of the GPT-2 language model (with 1.5 billion parameters) in November 2019.[14] GPT-2 was to be followed by the 175-billion-parameter GPT-3,[15] revealed to the public in 2020[16] (whose source code has never been made available). Access to GPT-3 is provided exclusively through APIs offered by OpenAI and Microsoft.[17]

Background

Since the origins of computing, artificial intelligence has been an object of study; the "imitation game", proposed by Alan Turing in 1950 (and often called the "Turing test"), sought to establish an electronic or mechanical system's capacity for intelligent action through an evaluator's ability to distinguish its behavior from that of a human.[18] The term "machine learning" was first used to describe a possible approach to artificial intelligence as early as 1959 by IBM researcher Arthur Samuel;[19] current use of the term encompasses a broad variety of statistical learning, data science and neural network approaches to computational problems (often falling under the aegis of artificial intelligence).

Computational linguistics

Natural language processing using computers, a task originally conceived as a subfield of computational linguistics, was attempted as soon as computing hardware had the capacity; the first application of a dictionary look-up table was developed at Birkbeck College in London in 1948.[20] The 1954 Georgetown Experiment was a demonstration of fully automated machine translation, in which sixty Russian sentences were translated into English (mostly by replacement of words with their English synonyms).[21][22] The translations were often crude; the system had only 6 grammar rules and a 250-word vocabulary,[23] and no attempt was made to analyze or translate syntactic structure.[24] However, the experiment proved to the public that computers could interpret and process natural language,[25] and secured CIA funding for further research.[21] Direct substitution remains a standard against which machine translation programs are evaluated.

Systems for using natural language in human-computer interaction (HCI) also began to emerge in the mid-20th century. SHRDLU, a program developed at MIT in 1968–1970, consisted of a virtual environment of several objects which a user interacted with through commands in natural language (e.g. "Find a block which is taller than the one you are holding and put it into the box").[26][27] ELIZA, a chatterbot written in 1966, analyzed a human interlocutor's text for keywords and provided conversationally appropriate responses.[28] While many subjects claimed an inability to distinguish ELIZA's conversation from that of a human, the question of whether this constituted intelligence proved contentious (the most famous script parodied a psychotherapist by, largely, repeating what the user had said back to them).[29]

While initial attempts at machine translation had been purely computational, by the 1950s the dominant approach to computational linguistics had come to emphasize Noam Chomsky's concept of universal grammar;[20] NLP research in that era, accordingly, consisted largely of attempts to reduce statements in arbitrary languages to putative underlying language-agnostic logical structures. In the 1970s, semantic NLP systems would begin to eschew syntactic encodings in favor of more general semantic encodings.[30] However, until the advent of neural networks, most systems continued to rely on large (and increasingly unwieldy) sets of manually programmed rules, which failed to scale up as initially predicted.[20]

The field of artificial intelligence continued to develop in the late 20th century, but occasional periods of stagnation known as "AI winters" occurred. Various sources posit AI winters as having occurred at different times; in 1994, Howe described one as having started in 1973 and lasting a decade,[31] while Russell & Norvig in 2003 described another as starting soon after 1988.[32]

Neural networks

An early concept in artificial intelligence, connectionism, sought to produce intelligent behavior through artificial neural networks designed to simulate the behavior of neurons in biological brains. The first example of an artificial neural network was the SNARC, built in 1951. The perceptron (a type of binary classifier) was introduced in 1957 by psychologist Frank Rosenblatt;[33] his machine was designed for image recognition using 400 photocells connected to "neurons", with weightings determined by potentiometers (and adjusted with electric motors during its learning process).[34] Perceptron systems became the subject of great interest; an article in The New York Times described the perceptron as "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence".[35] Perceptron systems, however, fell out of favor for decades following a 1969 book by Marvin Minsky and Seymour Papert (Perceptrons: an introduction to computational geometry),[36] which pointed out several shortcomings of the then-present state of the art (single-layer perceptrons), including an inability to encode the exclusive or (XOR) function. The book was considered, at the time, to discredit the perceptron approach (as well as neural networks in general) as a promising area of research.[35]

Neural networks become capable of classifying different inputs (i.e. sorting them into distinct categories) through a process known as "learning". This begins with the network's weights (the amount by which each neuron's "activation" influences the activation of each specific neuron in the subsequent layer) being initialized to random quantities; in this state, the output of the network is similarly random. An objective function, such as a loss function, is defined, which quantitatively measures how close the output of the network is to its desired performance (for example, how often an input consisting of a handwritten number results in the sole activation of the output neuron corresponding to that number).[37] Based on this measure, the weights can then be adjusted to improve the network's performance.[38]

Backpropagation, a supervised algorithm first applied to machine learning systems in Paul Werbos' 1974 dissertation,[39] efficiently calculates "gradients", vectors of partial derivatives that describe how every weight in the network should be adjusted to better fit a given input/output example.[38][37] The use of these gradients to train neural networks, a practice known as gradient descent, enabled the creation of much more complex systems, and wide-scale application of neural networks to natural language processing would occur in the 1980s.[40][32] In 1985, D.B. Parker would rediscover Werbos' method;[41] in 1986, Rumelhart, Hinton and Williams would apply it to generate internal representations of incoming data in neural networks with hidden layers,[42] referred to as "deep learning" networks; this research would later form the basis for recurrent neural networks.
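
The mechanics can be illustrated with a toy example. The sketch below (illustrative only, not Werbos' formulation) fits a single weight to the target function y = 3x by repeatedly stepping the weight against the gradient of a squared-error loss:

```python
# A minimal sketch of gradient descent: a single "neuron" y = w * x
# is fitted to the target function y = 3x by stepping the weight
# against the gradient of a squared-error loss.
def train(steps=100, lr=0.1):
    w = 0.0                        # arbitrary initialization
    data = [(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)]
    for _ in range(steps):
        for x, target in data:
            y = w * x              # forward pass
            error = y - target     # loss = error ** 2
            grad = 2 * error * x   # d(loss)/dw via the chain rule
            w -= lr * grad         # step opposite the gradient
    return w

print(train())  # converges to ~3.0
```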

Traditional feed-forward neural networks (FFNNs) are so named because each layer takes in output from the previous layer, and feeds it into the next; an FFNN's structure contains no "cycles" where information flows backwards. In contrast, a recurrent neural network (RNN) has at least one cycle of activation flow.[37] RNNs are often used for processing sequences of data (and predicting future sequence items), since the network can process each item using both the item itself and its own output from processing the previous item.[37]
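
A minimal recurrent cell (with random, untrained weights) makes this structure concrete: the hidden state carried forward through the loop is what lets identical inputs produce different outputs depending on what preceded them.

```python
import numpy as np

# Sketch of a recurrent cell with random, untrained weights: each
# step combines the current input with the hidden state carried over
# from the previous step (the "cycle" in the network).
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 3))            # input -> hidden
W_hh = rng.normal(size=(4, 4))            # hidden -> hidden
h = np.zeros(4)                           # initial hidden state

for x in rng.normal(size=(5, 3)):         # a sequence of 5 items
    h = np.tanh(W_xh @ x + W_hh @ h)      # new state depends on old
    print(h)
```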

The neocognitron, proposed by Kunihiko Fukushima in 1979[43] based on models of neural architecture in the mammalian visual cortex, provided the basis for convolutional neural networks (CNNs),[44] often used in image processing. By "sliding" a small layer over a larger input, a CNN can perform deeper processing with less computation. For example, a 100×100 image has 10,000 pixels, which would require 10,000 weights per neuron to process with a fully connected layer; a convolutional layer consisting of a 5×5 "window" sliding over the image can perform edge detection using only 25 learnable parameters. The output of convolutional layers is downsampled by "pooling layers" and then processed by "fully connected" layers (which are typically multilayer perceptrons).
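
The parameter-count arithmetic above can be made concrete in a few lines (the 5×5 kernel here is random rather than a trained edge detector):

```python
import numpy as np

# A fully connected neuron reading a 100x100 image needs one weight
# per pixel (10,000); a sliding 5x5 convolutional window reuses the
# same 25 weights at every position.
image = np.random.rand(100, 100)
kernel = np.random.rand(5, 5)             # 25 shared weights
print(image.size, kernel.size)            # 10000 25

# "Sliding" the window over the image (valid positions only)
out = np.empty((96, 96))
for i in range(96):
    for j in range(96):
        out[i, j] = np.sum(image[i:i+5, j:j+5] * kernel)
```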

Machine learning for natural language processing

Due to their ability to process sequential information, recurrent neural networks have seen use in many NLP applications; unlike FFNNs, they can produce different outputs (and internal states) for identical items depending on their surroundings in a sequence—that is to say, an RNN system that parsed one word at a time could still associate a "black dog" with fuzzy paws, a "corn dog" with ketchup, and a "sun dog" with refraction. Moreover, since the retention of information from previous sequence items can be performed recursively, RNN systems can be designed that recall items arbitrarily far back in a sequence: for example, being able to continue the sequences "Tom looked at the black dog", "Tom looked at the corn dog", and "Tom looked at the sun dog" with "fondly", "hungrily", and "indirectly", respectively.[45][11]

While capable of impressive results, many-layered FFNNs and RNNs both proved vulnerable to the vanishing gradient problem: since gradients (encoded as finite-precision numbers) are required to backpropagate across all layers of a model, they can "vanish" to zero (or "explode" to infinity) over a sufficiently large number of layers. The long short-term memory network (LSTM), first proposed by Sepp Hochreiter and Jürgen Schmidhuber in 1995–1997,[46][47][48] sought to resolve this issue by introducing a novel architecture consisting of multiple distinct "cells" with "input", "output" and "forget" gates. In 2009, an LSTM-based model submitted by Alex Graves' team won the ICDAR competition for handwriting recognition;[49] another of the team's models was the most accurate in the competition and a third was the fastest.[50]
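
The gating structure can be sketched as a simplified, untrained cell (this follows the now-standard LSTM formulation rather than the exact 1995–1997 variants):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simplified LSTM step: the forget gate f decides how much old cell
# state to keep, the input gate i how much new information to write,
# and the output gate o how much of the cell state to expose.
def lstm_step(x, h, c, W, U, b):
    z = W @ x + U @ h + b                 # all four projections at once
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    c = f * c + i * np.tanh(g)            # updated cell state
    h = o * np.tanh(c)                    # updated hidden state
    return h, c

H, X = 4, 3
rng = np.random.default_rng(0)
h, c = np.zeros(H), np.zeros(H)
W, U, b = rng.normal(size=(4*H, X)), rng.normal(size=(4*H, H)), np.zeros(4*H)
for x in rng.normal(size=(5, X)):         # run over a short sequence
    h, c = lstm_step(x, h, c, W, U, b)
```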

Another issue RNNs and LSTMs encounter is that they can only take into account the context of previous sequence items.[45][51] This can create issues when parsing sentences like "Tom rode his bike to the store, put out the kickstand, and turned off the engine", in which the necessary context of the "bike" being a motorcycle is revealed only at the end. One method of solving problems like this is the bidirectional LSTM, which proceeds in both directions simultaneously, giving access to both "past" and "future" input features.[45] Conditional random fields use tags to connect inputs directly to outputs.[45] There exist combinations of the above approaches, like the LSTM-CRF network and the BI-LSTM-CRF network.[45] Other improvements on the RNN model include neural Turing machines, adaptive computation time, neural programmers, and attention mechanisms, the latter of which form the basis for GPT-2 and related technologies.[11]

Selective focusing

By the early 2010s, the best performance in neural machine translation was achieved with the encoder–decoder model, in which an RNN or LSTM "encoder network" encoded source sentences into vectors, and a "decoder network" of similar architecture processed these vectors into translated output.[12] 2014 saw the introduction of significantly more complex "attention" mechanisms, which substantially improved these models' performance. Attention mechanisms gave these models the ability to adaptively focus their decoder networks' "attention" on specific aspects of the source text, rather than forcing them to parse the entire text as one vector.[12][13]

2017 then saw the introduction of "transformer" models, which went a step further by using attention mechanisms to replace the RNN/LSTM architecture entirely.[10][11]

Attention mechanisms

Main page: Attention (machine learning)

One constraint of encoder–decoder models was the difficulty of compressing the encodings of longer sentences into fixed-length vectors; performance often deteriorated on longer inputs. In 2014, Bahdanau et al.[12] introduced an extension to the encoder–decoder model that could "align and translate jointly".[13] For each word of the source sentence that was translated, the Bahdanau model's encoder (a bidirectional RNN with 1000 hidden units in each direction) searched the entire rest of that sentence for the positions of relevant information. Rather than giving the decoder a fixed-length vector encoding of the entire input sequence (like previous models), it produced "context vectors", associated with those positions as well as previously generated target words.[12] The decoder (which also had 1000 hidden units) then used these context vectors to decide where to focus its "attention".[12][13][11]
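
Schematically, the alignment computation looks like the following sketch (toy dimensions and random weights; an illustration of additive attention, not the exact Bahdanau et al. network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# For one decoder state, score every encoder state, softmax the
# scores into weights, and mix the encoder states into a context
# vector for this output step.
rng = np.random.default_rng(0)
enc_states = rng.normal(size=(6, 8))      # one vector per source word
dec_state = rng.normal(size=8)
v, W = rng.normal(size=8), rng.normal(size=(8, 16))

scores = np.array([v @ np.tanh(W @ np.concatenate([dec_state, h]))
                   for h in enc_states])
weights = softmax(scores)                 # where to "attend"
context = weights @ enc_states            # context vector
```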

Research into "attention" mechanisms was continued by Luong et al. in a 2015 paper.[13] A "global" approach based on the Bahdanau paper was attempted, as well as a "local" approach wherein only a subset of source words were "considered" at a time; the local approach, while more architecturally complicated, was less computationally expensive and easier to train.[13] It took 7–10 days to fully train an English–German translation model, which was specifically designed to be capable of translating 1,000 target words per second; its accuracy was tested against the 2014 ACL Workshop on Machine Translation (WMT'14) task for English–German sentence pairs, and achieved a result of 23.0 BLEU—a 2.1 BLEU improvement on the previous best result achieved by previous attempts, a phrase-based language model from Buck et al. 2014.[52][13]

Transformers

Main page: Transformer (machine learning model)

While attention mechanisms were effective in improving performance when used to augment existing convolutional and recurrent neural network architectures, it was soon discovered that performant models could be built using attention mechanisms on their own, without anything else underlying them.[10]

In June 2017, the transformer architecture was first introduced, in a paper released by researchers from Google Brain, Google Research, and the University of Toronto.[10] Transformers are a type of model based solely on attention mechanisms, discarding convolution and recurrence altogether. Unlike previous RNN-based models, transformers can process sequential input without needing to perform computation on each item in sequence, which means they can be massively parallelized.[10] On the WMT'14 English-to-French task, a specifically trained English-to-French translation model using the transformer architecture was able to establish a new single-model benchmark of 41.8 BLEU.[10] Since their introduction, transformers have seen use in many NLP applications.[53]
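
The core operation can be sketched as a minimal scaled dot-product self-attention (single head, toy dimensions, no masking), in which every position attends to every other position in one matrix product rather than token by token:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Minimal scaled dot-product self-attention: all pairwise
# interactions are computed as matrix products, so the whole
# sequence is processed in parallel.
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 16))                 # 10 tokens, 16 dims
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (10, 16)
```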

Generative Pre-trained Transformer

The GPT models
Model | Architecture | Parameter count | Training data
GPT-1 | 12-level, 12-headed transformer decoder (no encoder), followed by linear-softmax | 0.12 billion | BookCorpus:[54] 4.5 GB of text from 7,000 unpublished books of various genres
GPT-2 | GPT-1, but with modified normalization | 1.5 billion | WebText: 40 GB of text (8 million documents) from 45 million webpages upvoted on Reddit
GPT-3 | GPT-2, but with modifications to allow larger scaling | 175 billion | 570 GB of plaintext (0.4 trillion tokens); mostly CommonCrawl, WebText, English Wikipedia, and two book corpora (Books1 and Books2)

On June 11, 2018, OpenAI released a paper entitled "Improving Language Understanding by Generative Pre-Training", in which they introduced the Generative Pre-trained Transformer (GPT).[9] At this point, the best-performing neural NLP models primarily employed supervised learning from large amounts of manually labeled data. This reliance on supervised learning limited their use on datasets that were not well-annotated, in addition to making it prohibitively expensive and time-consuming to train extremely large models;[9][55] many languages (such as Swahili or Haitian Creole) are difficult to translate and interpret using such models due to a lack of available text for corpus-building.[55] In contrast, GPT's "semi-supervised" approach involved two stages: an unsupervised generative "pre-training" stage in which a language modeling objective was used to set initial parameters, and a supervised discriminative "fine-tuning" stage in which these parameters were adapted to a target task.[9]

The use of a transformer architecture, as opposed to previous techniques involving attention-augmented RNNs, provided GPT with a more structured memory than could be achieved through recurrent mechanisms; this resulted in "robust transfer performance across diverse tasks".[9]

During transfer, we utilize task-specific input adaptations derived from traversal-style approaches, which process structured text input as a single contiguous sequence of tokens.[9]

Corpus

The unsupervised pre-training was performed using BooksCorpus,[56] a dataset of over 7,000 unpublished fiction books from various genres; this dataset was chosen in part because its long passages of continuous text conditioned the model to handle long-range information. Other available datasets, while larger, were rejected on the basis that they lacked this long-range structure (being "shuffled" at a sentence level).[9] The ftfy library was used to clean the BooksCorpus text (standardize punctuation and whitespace); it was tokenized using spaCy.[9]
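
That preprocessing might look roughly like the sketch below, assuming the current ftfy and spaCy APIs (the paper names the tools used but does not give code):

```python
import ftfy
import spacy

# Clean mojibake, curly quotes and odd whitespace, then tokenize.
raw = "“GPT\u00a0was pre-trained on BooksCorpus…”"
text = ftfy.fix_text(raw)

nlp = spacy.blank("en")                  # tokenizer only, no pipeline
tokens = [t.text for t in nlp(text)]
print(tokens)
```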

Architecture

GPT's architecture itself was a twelve-layer decoder-only transformer, using twelve masked self-attention heads with 64-dimensional states each (for a total of 768). Rather than simple stochastic gradient descent, the Adam optimization algorithm was used; the learning rate was increased linearly from zero over the first 2,000 updates to a maximum of 2.5×10⁻⁴, and annealed to 0 using a cosine schedule.[9]
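
That warmup-and-anneal schedule can be written as a small function (a sketch; the exact annealing formula is an assumption):

```python
import math

# Linear warmup from zero over the first 2,000 updates to 2.5e-4,
# then cosine annealing back to zero over the remaining updates.
def learning_rate(step, total_steps, max_lr=2.5e-4, warmup=2000):
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    return max_lr * 0.5 * (1 + math.cos(math.pi * progress))
```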

We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm is used extensively throughout the model, a simple weight initialization of N(0,0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in Loshchilov et al. 2017, with w = 0.01 on all non bias or gain weights.

[...]
We used learned position embeddings instead of the sinusoidal version proposed in the original work.

[...]
Unless specified, we reuse the hyperparameter settings from unsupervised pre-training. We add dropout to the classifier with a rate of 0.1. For most tasks, we use a learning rate of 6.25e-5 and a batchsize of 32. Our model finetunes quickly and 3 epochs of training was sufficient for most cases. We use a linear learning rate decay schedule with warmup over 0.2% of training. λ was set to 0.5.[9]

While GPT's fine-tuning was adapted to specific tasks, its pre-training was not; to perform the various tasks, minimal changes were made to its underlying task-agnostic model architecture.[9] Despite this, GPT still improved on previous benchmarks in several language processing tasks, outperforming discriminatively-trained models with task-oriented architectures on a number of diverse tasks.[9]

Performance

On natural language inference (also known as textual entailment) tasks, models are evaluated on their ability to interpret pairs of sentences from various datasets and classify the relationship between them as "entailment", "contradiction" or "neutral".[9] Examples of such datasets include QNLI (Wikipedia articles) and MultiNLI (transcribed speech, popular fiction and government reports, among other sources);[57] on these GPT achieved, respectively, a 5.8% and 1.5% improvement over previous best results.[9] It similarly outperformed previous models on two tasks related to question answering and commonsense reasoning—by 5.7% on RACE,[58] a dataset of written question–answer pairs from middle and high school exams, and by 8.9% on the Story Cloze Test.[59]

Another task, semantic similarity (or paraphrase detection), requires a model to predict whether two sentences are paraphrases of one another; on the Quora Question Pairs (QQP) dataset, GPT improved on previous best-performing models by 4.2%.[9] In a text classification task using the Corpus of Linguistic Acceptability (CoLA), GPT achieved a score of 45.4, versus a previous best of 35.0. Finally, on GLUE, a multi-task test,[60] GPT achieved an overall score of 72.8 (compared to a previous record of 68.9).[9]

Scale-up

GPT-2 was created as a direct scale-up of GPT, with both its parameter count and dataset size increased by a factor of 10.[8][9][4] Both are unsupervised transformer models trained to generate text by predicting the next word in a sequence of tokens. The GPT-2 model has 1.5 billion parameters, and was trained on a dataset of 8 million web pages.[8] While GPT-2 was trained on a very simple objective (interpreting a sequence of words in a text sample and predicting the most likely next word), it produces full sentences and paragraphs by continuing to predict additional words, generating fully comprehensible (and semantically meaningful) statements in natural language.[8] Notably, GPT-2 was evaluated on these tasks in a zero-shot setting.
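
This next-word objective can be observed directly by sampling from the released weights, for example via the Hugging Face transformers library (a sketch; the generation settings are illustrative, not those used by OpenAI):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The model simply keeps predicting likely next tokens.
ids = tokenizer.encode("GPT-2 was trained to", return_tensors="pt")
out = model.generate(ids, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(out[0]))
```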

Training

Since the transformer architecture enabled massive parallelization, GPT-series models could be trained on larger corpora than previous NLP models. While the initial GPT model demonstrated that the approach was viable, GPT-2 would further explore the emergent properties of networks trained on extremely large corpora. CommonCrawl, a large corpus produced by web crawling and previously used in training NLP systems,[61] was considered due to its large size, but was rejected after further review revealed large amounts of unintelligible content.[8][61] Instead, OpenAI developed a new corpus, known as WebText; rather than scraping content indiscriminately from the World Wide Web, WebText was generated by scraping only pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The corpus was subsequently cleaned; HTML documents were parsed into plain text, duplicate pages were eliminated, and Wikipedia pages were removed (since their presence in many other datasets could have induced overfitting).[8]
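
The selection rule can be sketched schematically as below; the Submission fields and the extract_text stub are hypothetical stand-ins, since OpenAI's actual scraping and cleaning code is not public:

```python
from dataclasses import dataclass

@dataclass
class Submission:          # hypothetical stand-in for a Reddit post
    karma: int
    linked_url: str

def extract_text(url):     # stub: the real pipeline parsed HTML to text
    return f"plain text scraped from {url}"

def build_webtext(submissions):
    seen = set()
    for post in submissions:                    # posts up to Dec 2017
        if post.karma < 3:                      # need >= 3 upvotes
            continue
        if "wikipedia.org" in post.linked_url:  # avoid overlap with
            continue                            # other datasets
        text = extract_text(post.linked_url)
        if text not in seen:                    # de-duplicate
            seen.add(text)
            yield text

docs = list(build_webtext([Submission(5, "https://example.com/a")]))
```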

While the cost of training GPT-2 is known to have been $256 per hour,[62][63] the number of hours it took to complete training is unknown; therefore, the overall training cost cannot be estimated accurately.[64] However, comparable large language models using transformer architectures have had their costs documented in more detail; the training processes for BERT and XLNet consumed, respectively, $6,912 and $245,000 of resources.[63]

Performance

GPT-2 writing a fictional news article about Edward Snowden's actions after winning the 2020 United States presidential election (all highlighted text is machine-generated). While Snowden had (at the time of generation) never been elected to public office, the generated sample is grammatically and stylistically valid.

Due to the broadness of its dataset, and the broadness of its approach, GPT-2 became capable of performing a diverse range of tasks beyond simple text generation: answering questions, summarizing, and even translating between languages in a variety of specific domains, without being instructed in anything beyond how to predict the next word in a sequence.[65][66]

One example of generalized learning is GPT-2's ability to perform machine translation between French and English, a task on which its performance was assessed using WMT-14 translation tasks. GPT-2's training corpus included virtually no French text; non-English text was deliberately removed while cleaning the dataset prior to training, and as a consequence, only 10 MB of French text (out of the remaining 40,000 MB) was available for the model to learn from (mostly from foreign-language quotations in English posts and articles).[8] Despite this, GPT-2 achieved 5 BLEU on the WMT-14 English-to-French test set (slightly below the score of a translation via word-for-word substitution). It was also able to outperform several contemporary (2017) unsupervised machine translation baselines on the French-to-English test set, where GPT-2 achieved 11.5 BLEU. This remained below the highest-performing contemporary unsupervised approach (2019), which had achieved 33.5 BLEU.[8] However, other models used large amounts of French text to achieve these results; GPT-2 was estimated to have used a monolingual French corpus approximately 1/500 the size of comparable approaches.[8]

Release

GPT-2 was first announced on 14 February 2019. A February 2019 article in The Verge by James Vincent said that, while "[the] writing it produces is usually easily identifiable as non-human", it remained "one of the most exciting examples yet" of language generation programs:[65]

Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.[65]

The Guardian described this output as "plausible newspaper prose";[7] Kelsey Piper of Vox said "one of the coolest AI systems I’ve ever seen may also be the one that will kick me out of my job".[66] GPT-2's flexibility was described as "impressive" by The Verge; specifically, its ability to translate text between languages, summarize long articles, and answer trivia questions were noted.[65]

A study by the University of Amsterdam employing a modified Turing test found that at least in some scenarios, participants were unable to distinguish poems generated by GPT-2 from those written by humans.[67]

Restrictions and partial release

While "Skub" is not a real product, even the reduced-size model used in DistilGPT2 is capable of creating plausible arguments both for and against it.

While previous OpenAI models had been made immediately available to the public, OpenAI initially refused to make a public release of GPT-2's source code when announcing it in February, citing the risk of malicious use;[7] limited access to the model (i.e. an interface that allowed input and provided output, not the source code itself) was allowed for selected press outlets on announcement.[7] One commonly-cited justification was that, since generated text was usually completely novel, it could be used by spammers to evade automated filters; OpenAI demonstrated a version of GPT-2 fine-tuned to "generate infinite positive – or negative – reviews of products".[7] Another was that GPT-2 could be used to generate text that was obscene or racist. Researchers such as Jeremy Howard warned of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter".[65] The Allen Institute for Artificial Intelligence, in response to GPT-2, announced a tool to detect "neural fake news".[68]

However, opinion was divided. A February 2019 article in The Verge argued that the threat posed by GPT-2 had been exaggerated;[69] Anima Anandkumar, a professor at Caltech and director of machine learning research at Nvidia, said that there was no evidence that GPT-2 had the capabilities to pose the threats described by OpenAI, and that what they did was the "opposite of open", characterizing their refusal to release the full model as "malicious BS".[69] The Gradient published an open letter to OpenAI requesting that they release the model publicly, comparing the threat posed by text-generation AI to the threat posed by the printing press, and giving Photoshop as an example of "a technology that has (thankfully) not destroyed modern society despite its potential for chaos":[70]

Thirty years later, society has emerged relatively unscathed despite Photoshop being simple enough for high school students to use and ubiquitous enough to commandeer its own verb. Why? Precisely because everyone knows about Photoshop.[70]

774M release

While OpenAI did not release the fully-trained model or the corpora it was trained on, description of their methods in prior publications (and the free availability of underlying technology) made it possible for GPT-2 to be replicated by others as free software; one such replication, OpenGPT-2, was released in August 2019, in conjunction with a freely licensed version of WebText called OpenWebText. The cloud compute costs for OpenGPT-2 were given as approximately $50,000.[71]

On August 20, 2019, OpenAI released a partial version of GPT-2, with 774 million parameters (roughly half the size of the full 1.5 billion parameter model).[2]

Full 1.5B release

Initial concerns that GPT-2 would lend itself to widespread misuse did not come to pass; The Verge said that "there are reasons to be skeptical about claims that AI technology will usher in some sort of ‘infopocalypse.’ For a start, we already have programs that can generate plausible text at high volume for little cost: humans."[72] By November 2019, OpenAI said that they had "seen no strong evidence of misuse so far", and the full version, with 1.5 billion parameters, was released on November 5, 2019.[3][14]

Limitations

GPT-2 can generate thematically-appropriate text for a range of scenarios, even surreal ones like a CNN article about Donald Trump giving a speech praising the anime character Asuka Langley Soryu. Here, the tendency to generate nonsensical and repetitive text with increasing output length (even in the full 1.5B model) can be seen; in the second paragraph, grammar begins to deteriorate, and the output eventually becomes one incoherent sentence repeated over and over.

While GPT-2's ability to generate plausible passages of natural language text was generally remarked on positively, its shortcomings were noted as well, especially when generating texts longer than a couple of paragraphs; Vox said "the prose is pretty rough, there’s the occasional non-sequitur, and the articles get less coherent the longer they get".[66] The Verge similarly noted that longer samples of GPT-2 writing tended to "stray off topic" and lack overall coherence;[65] The Register opined that "a human reading it should, after a short while, realize something's up", and noted that "GPT-2 doesn't answer questions as well as other systems that rely on algorithms to extract and retrieve information."[62]

GPT-2 deployment is resource-intensive; the full version of the model is larger than five gigabytes, making it difficult to embed locally into applications, and consumes large amounts of RAM. In addition, performing a single prediction "can occupy a CPU at 100% utilization for several minutes", and even with GPU processing, "a single prediction can take seconds".[6] To alleviate these issues, the company Hugging Face created DistilGPT2, using knowledge distillation to produce a smaller model that "scores a few points lower on some quality benchmarks", but is "33% smaller and twice as fast".[6]
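
The distilled model can be run locally in a few lines via the transformers library (a sketch; the output is sampled and will vary):

```python
from transformers import pipeline

# DistilGPT2: smaller and faster than the full 1.5B-parameter model.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("GPT-2 is", max_length=30)[0]["generated_text"])
```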

Implementations and subsequent research

Possible applications of GPT-2 described by journalists included aiding humans in writing text like news articles.[7] Even before the release of the full version, GPT-2 was used for a variety of applications and services, as well as for entertainment. In June 2019, a subreddit named r/SubSimulatorGPT2 was created in which a variety of GPT-2 instances trained on different subreddits made posts and replied to each other's comments, creating a situation where one could observe "an AI personification of r/Bitcoin argue with the machine learning-derived spirit of r/ShittyFoodPorn";[72] by July of that year, a GPT-2-based software program released to autocomplete lines of code in a variety of programming languages was described by users as a "game-changer".[73]

In 2019, AI Dungeon was launched, which used GPT-2 to generate dynamic text adventures based on user input.[74] AI Dungeon now offers access to the largest release of GPT-3 through its API as an optional paid upgrade; the free version of the site uses the second-largest release of GPT-3.[75] Latitude, the company formed around AI Dungeon, raised $3.3 million in seed funding in 2021.[76] Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models.[77][78][79]

In February 2021, a crisis center for troubled teens announced that they would begin using a GPT-2-derived chatbot to help train counselors by allowing them to have conversations with simulated teens (this use was purely for internal purposes, and did not involve having GPT-2 communicate with the teens themselves).[80]

References

  1. Piper, Kelsey (15 May 2019). "A poetry-writing AI has just been unveiled. It's ... pretty good.". Vox. https://www.vox.com/2019/5/15/18623134/openai-language-ai-gpt2-poetry-try-it. 
  2. 2.0 2.1 Johnson, Khari (20 August 2019). "OpenAI releases curtailed version of GPT-2 language model". VentureBeat. https://venturebeat.com/2019/08/20/openai-releases-curtailed-version-of-gpt-2-language-model/. 
  3. 3.0 3.1 Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters. 
  4. 4.0 4.1 4.2 "Better Language Models and Their Implications". OpenAI. 14 February 2019. https://openai.com/blog/better-language-models/. 
  5. 5.0 5.1 Hegde, Chaitra; Patil, Shrikumar (9 June 2020). "Unsupervised Paraphrase Generation using Pre-trained Language Models". arXiv:2006.05477 [cs.CL].
  6. 6.0 6.1 6.2 Kaiser, Caleb (31 January 2020). "Too big to deploy: How GPT-2 is breaking servers". https://towardsdatascience.com/too-big-to-deploy-how-gpt-2-is-breaking-production-63ab29f0897c. 
  7. 7.0 7.1 7.2 7.3 7.4 7.5 Hern, Alex (14 February 2019). "New AI fake text generator may be too dangerous to release, say creators". The Guardian. https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction. 
  8. 8.0 8.1 8.2 8.3 8.4 8.5 8.6 8.7 8.8 Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (14 February 2019). Language models are unsupervised multitask learners. 1. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf. Retrieved 19 December 2020. 
  9. 9.00 9.01 9.02 9.03 9.04 9.05 9.06 9.07 9.08 9.09 9.10 9.11 9.12 9.13 9.14 9.15 9.16 9.17 9.18 Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training". OpenAI. pp. 12. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf. 
  10. 10.0 10.1 10.2 10.3 10.4 10.5 Polosukhin, Illia; Kaiser, Lukasz; Gomez, Aidan N.; Jones, Llion; Uszkoreit, Jakob; Parmar, Niki; Shazeer, Noam; Vaswani, Ashish (2017-06-12). "Attention Is All You Need". arXiv:1706.03762 [cs.CL].
  11. 11.0 11.1 11.2 11.3 11.4 Olah, Chris; Carter, Shan (8 September 2016). "Attention and Augmented Recurrent Neural Networks". Distill 1 (9). doi:10.23915/distill.00001. https://distill.pub/2016/augmented-rnns/. Retrieved 22 January 2021. 
  12. 12.0 12.1 12.2 12.3 12.4 12.5 Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (1 September 2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
  13. 13.0 13.1 13.2 13.3 13.4 13.5 13.6 Luong, Minh-Thang; Pham, Hieu; Manning, Christopher D. (17 August 2015). "Effective Approaches to Attention-based Neural Machine Translation". arXiv:1508.04025 [cs.CL].
  14. 14.0 14.1 "GPT-2: 1.5B Release" (in en). 2019-11-05. https://openai.com/blog/gpt-2-1-5b-release/. 
  15. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (July 22, 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
  16. Arram (July 9, 2020). "GPT-3: An AI that's eerily good at writing almost anything". Arram Sabeti. https://arr.am/2020/07/09/gpt-3-an-ai-thats-eerily-good-at-writing-almost-anything/. 
  17. Hao, Karen (September 23, 2020). "OpenAI is giving Microsoft exclusive access to its GPT-3 language model" (in en). MIT Technology Review. https://www.technologyreview.com/2020/09/23/1008729/openai-is-giving-microsoft-exclusive-access-to-its-gpt-3-language-model/. Retrieved 2020-09-25. ""The companies say OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI’s other models and receive its output. Only Microsoft, however, will have access to GPT-3’s underlying code, allowing it to embed, repurpose, and modify the model as it pleases."". 
  18. Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423 
  19. Samuel, Arthur (1959). "Some Studies in Machine Learning Using the Game of Checkers". IBM Journal of Research and Development 3 (3): 210–229. doi:10.1147/rd.33.0210. 
  20. 20.0 20.1 20.2 Hancox, P.J. (26 January 1996). "SEM1A5 – Part 1 – A brief history of NLP". University of Birmingham. https://www.cs.bham.ac.uk/~pjh/sem1a5/pt1/pt1_history.html. 
  21. 21.0 21.1 Nye, Mary Jo (2016). "Speaking in Tongues: Science's centuries-long hunt for a common language". Distillations 2 (1): 40–43. https://www.sciencehistory.org/distillations/magazine/speaking-in-tongues. Retrieved 22 March 2018. 
  22. Gordin, Michael D. (2015). Scientific Babel: How Science Was Done Before and After Global English. Chicago, Illinois: University of Chicago Press. ISBN 9780226000299. 
  23. John Hutchins. The first public demonstration of machine translation: the Georgetown-IBM system, 7th January 1954. 
  24. Reifler, Erwin (February 2–5, 1960). "The solution of MT linguistic problems through lexicography.". Proceedings of the National Symposium on Machine Translation. 
  25. Hutchins, John (1997). "From first conception to first demonstration: the nascent years of machine translation, 1947–1954. A chronology.". Machine Translation 12, 195–252 12 (3): 195–252. doi:10.1023/A:1007969630568. 
  26. Winograd, Terry (1971-01-01) (in en-US). Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. https://dspace.mit.edu/handle/1721.1/7095. Retrieved 2021-01-12. 
  27. "SHRDLU". http://hci.stanford.edu/winograd/shrdlu/. 
  28. Weizenbaum, Joseph (January 1966), "ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine", Communications of the ACM 9 (1): 36–45, doi:10.1145/365153.365168 
  29. Bassett, Caroline (2019). "The computational therapeutic: exploring Weizenbaum's ELIZA as a history of the present". AI & Society 34 (4): 803–812. doi:10.1007/s00146-018-0825-9. 
  30. Hancox, P.J. (26 January 1996). "SEM1A5 – Part 1 – The state-of-the-art". University of Birmingham. https://www.cs.bham.ac.uk/~pjh/sem1a5/pt1/pt1_art.html. 
  31. Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University : a Perspective". http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html. "Lighthill's [1973] report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade ― the so-called 'AI Winter'" 
  32. 32.0 32.1 Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, p. 24, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/, retrieved 2021-01-12, ""Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after that came a period called the 'AI Winter'"" 
  33. Rosenblatt, Frank (1957). "The Perceptron—a perceiving and recognizing automaton". Report 85-460-1 (Cornell Aeronautical Laboratory). 
  34. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 0-387-31073-8. 
  35. 35.0 35.1 Olazaran, Mikel (1996). "A Sociological Study of the Official History of the Perceptrons Controversy". Social Studies of Science 26 (3): 611–659. doi:10.1177/030631296026003005. 
  36. Minsky, Marvin; Papert, Seymour (1969), Perceptrons: An Introduction to Computational Geometry, MIT Press, ISBN 0-262-63022-2 
  37. 37.0 37.1 37.2 37.3 Wilson, Bill (24 June 2012). "The Machine Learning Dictionary". http://www.cse.unsw.edu.au/~billw/mldict.html. 
  38. 38.0 38.1 Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. MIT Press. pp. 200–220. ISBN 9780262035613. http://www.deeplearningbook.org. Retrieved 2021-03-14. 
  39. Werbos, Paul J. (1994). The Roots of Backpropagation : From Ordered Derivatives to Neural Networks and Political Forecasting. New York: John Wiley & Sons. ISBN 0-471-59897-6. 
  40. Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3 
  41. Parker, D.B. (1985). Learning Logic. Cambridge MA: Massachusetts Institute of Technology. 
  42. Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986a). "Learning representations by back-propagating errors". Nature 323 (6088): 533–536. doi:10.1038/323533a0. Bibcode1986Natur.323..533R. 
  43. Fukushima, Kunihiko (October 1979). "位置ずれに影響されないパターン認識機構の神経回路のモデル --- ネオコグニトロン ---" (in ja). Trans. IECE J62-A (10): 658–665. https://search.ieice.org/bin/summary.php?id=j62-a_10_658. Retrieved 2021-01-20. 
  44. LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature 521 (7553): 436–444. doi:10.1038/nature14539. PMID 26017442. Bibcode2015Natur.521..436L. 
  45. 45.0 45.1 45.2 45.3 45.4 Bajpai, Akash (23 February 2019). "Recurrent Neural Networks: Deep Learning for NLP". https://towardsdatascience.com/recurrent-neural-networks-deep-learning-for-nlp-37baa188aef5. 
  46. Wikidata Q98967430
  47. Wikidata Q77698282
  48. Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. https://www.researchgate.net/publication/13853244. Retrieved 2021-01-20. 
  49. Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (5): 855–868. doi:10.1109/tpami.2008.137. ISSN 0162-8828. PMID 19299860. 
  50. Märgner, Volker; Abed, Haikal El (July 2009). "ICDAR 2009 Arabic Handwriting Recognition Competition". 2009 10th International Conference on Document Analysis and Recognition: 1383–1387. doi:10.1109/ICDAR.2009.256. ISBN 978-1-4244-4500-4. 
  51. Olah, Chris (27 August 2015). "Understanding LSTM Networks". https://colah.github.io/posts/2015-08-Understanding-LSTMs/. 
  52. Buck, Christian; Heafield, Kenneth; van Ooyen, Bas (May 2014). "N-gram Counts and Language Models from the Common Crawl". pp. 3579–3584. https://www.aclweb.org/anthology/L14-1074/. 
  53. Wolf, Thomas; Debut, Lysandre; Sanh, Victor; Chaumond, Julien; Delangue, Clement; Moi, Anthony; Cistac, Pierric; Rault, Tim et al. (2020). "Transformers: State-of-the-Art Natural Language Processing". Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 38–45. doi:10.18653/v1/2020.emnlp-demos.6. 
  54. Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. pp. 19–27. https://www.cv-foundation.org/openaccess/content_iccv_2015/html/Zhu_Aligning_Books_and_ICCV_2015_paper.html. 
  55. 55.0 55.1 Tsvetkov, Yulia (22 June 2017). "Opportunities and Challenges in Working with Low-Resource Languages". Carnegie Mellon University. http://www.cs.cmu.edu/~ytsvetko/jsalt-part1.pdf. 
  56. Zhu, Yukun; Kiros, Ryan; Zemel, Richard; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (22 June 2015). "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books". arXiv:1506.06724 [cs.CV]. # of books: 11,038 / # of sentences: 74,004,228 / # of words: 984,846,357 / mean # of words per sentence: 13 / median # of words per sentence: 11
  57. Williams, Adina; Nangia, Nikita; Bowman, Samuel (1 June 2018). "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference". Association for Computational Linguistics. https://www.aclweb.org/anthology/N18-1101.pdf. "At 433k examples, this resource is one of the largest corpora available for natural language inference (a.k.a. recognizing textual entailment), [...] offering data from ten distinct genres of written and spoken English [...] while supplying an explicit setting for evaluating cross-genre domain adaptation." 
  58. Lai, Guokun; Xie, Qizhe; Hanxiao, Liu; Yang, Yiming; Hovy, Eduard (15 April 2017). "RACE: Large-scale ReAding Comprehension Dataset From Examinations". arXiv:1704.04683 [cs.CL].
  59. Mostafazadeh, Nasrin; Roth, Michael; Louis, Annie; Chambers, Nathanael; Allen, James F. (3 April 2017). "LSDSem 2017 Shared Task: The Story Cloze Test". Association for Computational Linguistics. https://www.aclweb.org/anthology/W17-0906.pdf. "The LSDSem’17 shared task is the Story Cloze Test, a new evaluation for story understanding and script learning. This test provides a system with a four-sentence story and two possible endings, and the system must choose the correct ending to the story. Successful narrative understanding (getting closer to human performance of 100%) requires systems to link various levels of semantics to commonsense knowledge." 
  60. Wang, Alex; Singh, Amanpreet; Michael, Julian; Hill, Felix; Levy, Omar; Bowman, Samuel R. (20 April 2018). "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding". arXiv:1804.07461 [cs.CL].
  61. 61.0 61.1 Trinh, Trieu H.; Le, Quoc V. (7 Jun 2018). "A Simple Method for Commonsense Reasoning". arXiv:1806.02847 [cs.CL].
  62. 62.0 62.1 Quach, Katyanna (14 February 2019). "Roses are red, this is sublime: We fed OpenAI's latest chat bot a classic Reg headline". https://www.theregister.com/2019/02/14/open_ai_language_bot/. 
  63. 63.0 63.1 "The Staggering Cost of Training SOTA AI Models". 27 June 2019. https://syncedreview.com/2019/06/27/the-staggering-cost-of-training-sota-ai-models/. 
  64. Wiggers, Kyle (23 March 2020). "Google open-sources framework that reduces AI training costs by up to 80%". https://venturebeat.com/2020/03/23/google-open-sources-framework-that-reduces-ai-training-costs-by-up-to-80/. 
  65. 65.0 65.1 65.2 65.3 65.4 65.5 Vincent, James (14 February 2019). "OpenAI's new multitalented AI writes, translates, and slanders". The Verge. https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2. 
  66. 66.0 66.1 66.2 Piper, Kelsey (14 February 2019). "An AI helped us write this article". Vox. https://www.vox.com/future-perfect/2019/2/14/18222270/artificial-intelligence-open-ai-natural-language-processing. 
  67. Köbis, Nils; Mossink, Luca D. (1 January 2021). "Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry". Computers in Human Behavior 114: 106553. doi:10.1016/j.chb.2020.106553. 
  68. Schwartz, Oscar (4 July 2019). "Could 'fake text' be the next global political threat?". The Guardian. https://www.theguardian.com/technology/2019/jul/04/ai-fake-text-gpt-2-concerns-false-information. 
  69. 69.0 69.1 Vincent, James (21 February 2019). "AI researchers debate the ethics of sharing potentially harmful programs". The Verge. https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai. 
  70. 70.0 70.1 Zhang, Hugh (19 February 2019). "OpenAI: Please Open Source Your Language Model". The Gradient. https://thegradient.pub/openai-please-open-source-your-language-model/. 
  71. Gokaslan, Aaron; Cohen, Vanya; Pavlick, Ellie; Tellex, Stefanie (22 August 2019). "OpenGPT-2: We Replicated GPT-2 Because You Can Too". Noteworthy. https://blog.usejournal.com/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc?gi=4c998b75b4da. 
  72. 72.0 72.1 Vincent, James (6 June 2019). "There's a subreddit populated entirely by AI personifications of other subreddits". https://www.theverge.com/2019/6/6/18655212/reddit-ai-bots-gpt2-openai-text-artificial-intelligence-subreddit. 
  73. Vincent, James (24 July 2019). "This AI-powered autocompletion software is Gmail's Smart Compose for coders". https://www.theverge.com/2019/7/24/20708542/coding-autocompleter-deep-tabnine-ai-deep-learning-smart-compose. 
  74. Olson, Mathew (17 December 2019). "AI Dungeon 2, the Text Adventure Where You Can do Nearly Anything, Is Now on Mobile". https://www.usgamer.net/articles/ai-dungeon-2-the-text-adventure-where-you-can-do-nearly-anything-is-now-on-mobile. 
  75. Nelius, Joanna (3 August 2020). "This AI-Powered Choose-Your-Own-Adventure Text Game Is Super Fun and Makes No Sense". https://gizmodo.com/this-ai-powered-choose-your-own-adventure-text-game-is-1844593111. 
  76. Ha, Anthony (4 February 2021). "AI Dungeon-maker Latitude raises $3.3M to build games with 'infinite' story possibilities". TechCrunch. https://techcrunch.com/2021/02/04/latitude-seed-funding/. 
  77. "Write With Transformer". https://transformer.huggingface.co/. 
  78. "Talk to Transformer". https://talktotransformer.com/. 
  79. "CreativeEngines". https://creativeengines.ai/. 
  80. Ohlheiser, Abby; Hao, Karen (26 February 2021). "An AI is training counselors to deal with teens in crisis". MIT Technology Review. https://www.technologyreview.com/2021/02/26/1020010/trevor-project-ai-suicide-hotline-training/.