Attention (machine learning)

From HandWiki
Short description: Machine learning technique

In neural networks, attention is a technique that mimics cognitive attention. The effect enhances some parts of the input data while diminishing other parts — the motivation being that the network should devote more focus to the small, but important, parts of the data. Learning which part of the data is more important than another depends on the context, and this is trained by gradient descent.

Attention-like mechanisms were introduced in the 1990s under names like multiplicative modules, sigma pi units, and hypernetworks.[1] Their flexibility comes from their role as "soft weights" that can change during runtime, in contrast to standard weights, which must remain fixed at runtime. Uses of attention include memory in neural Turing machines, reasoning tasks in differentiable neural computers,[2] language processing in transformers, and multi-sensory data processing (sound, images, video, and text) in perceivers.[3][4][5][6]

General idea

Given a sequence of tokens labeled by the index [math]\displaystyle{ i }[/math], a neural network computes a soft weight [math]\displaystyle{ w_i }[/math] for each token [math]\displaystyle{ i }[/math] with the property that [math]\displaystyle{ w_i }[/math] is nonnegative and [math]\displaystyle{ \sum_i w_i=1 }[/math]. Each token is assigned a value vector [math]\displaystyle{ v_i }[/math], which is computed from the word embedding of the [math]\displaystyle{ i }[/math]th token. The weighted average [math]\displaystyle{ \sum_i w_i v_i }[/math] is the output of the attention mechanism.

The query-key mechanism computes the soft weights. From the word embedding of each token it computes a corresponding query vector [math]\displaystyle{ q_i }[/math] and key vector [math]\displaystyle{ k_i }[/math]. The weights are obtained by taking the softmax function of the dot products [math]\displaystyle{ q_i \cdot k_j }[/math], where [math]\displaystyle{ i }[/math] indexes the current token and [math]\displaystyle{ j }[/math] indexes the token being attended to.

In some architectures, there are multiple heads of attention, each operating independently with its own queries, keys, and values.
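
The computation just described can be sketched in a few lines of NumPy: each token's query is compared against every key by a dot product, the scores are passed through a softmax to give nonnegative weights that sum to 1, and the output is the corresponding weighted average of value vectors. The sizes and random weight matrices below are illustrative assumptions, not taken from any particular model.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    """Single-head dot-product attention over token embeddings X (n_tokens x d_model)."""
    Q = X @ Wq                     # one query vector q_i per token
    K = X @ Wk                     # one key vector k_j per token
    V = X @ Wv                     # one value vector v_j per token
    scores = Q @ K.T               # scores[i, j] = q_i . k_j
    w = softmax(scores, axis=-1)   # soft weights: each row is nonnegative and sums to 1
    return w @ V                   # row i is the weighted average sum_j w_ij * v_j

# illustrative sizes (assumptions): 5 tokens, 16-dim embeddings, 8-dim head
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)     # shape (5, 8)

A multi-head version would simply repeat this computation with separate Wq, Wk, Wv matrices per head and concatenate the outputs.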

A language translation example

To build a machine that translates English to French, one grafts an attention unit onto a basic encoder-decoder (diagram below). In the simplest case, the attention unit consists of dot products of the recurrent encoder states and does not need training; a code sketch of this case follows the legend below. In practice, the attention unit consists of three fully connected neural network layers, called query-key-value, that need to be trained. See the Variants section below.

Encoder-Decoder with attention. The left part (black lines) is the Encoder-Decoder, the middle part (orange lines) is the attention unit, and the right part (in grey & colors) is the computed data. Grey regions in H matrix and w vector are zero values. Numerical subscripts indicate vector sizes while lettered subscripts i and i-1 indicate time steps.

Legend
label description
100 max sentence length
300 embedding size (word dimension)
500 length of hidden vector
9k, 10k dictionary size of input & output languages respectively.
x, Y 9k and 10k 1-hot dictionary vectors. The mapping x → x (1-hot vector to word embedding) is implemented as a lookup table rather than a vector multiplication. Y is the 1-hot maximizer of the linear decoder layer D; that is, it takes the argmax of D's linear layer output.
x 300-long word embedding vector. The vectors are usually pre-calculated from other projects such as GloVe or Word2Vec.
h 500-long encoder hidden vector. At each point in time, this vector summarizes all the words preceding it. The final h can be viewed as a "sentence" vector, or a thought vector as Hinton calls it.
s 500-long decoder hidden state vector.
E 500 neuron RNN encoder. 500 outputs. Input count is 800: 300 from the source embedding + 500 from the recurrent connections. The encoder feeds directly into the decoder only to initialize it, but not thereafter; hence, that direct connection is shown very faintly.
D 2-layer decoder. The recurrent layer has 500 neurons and the fully connected linear layer has 10k neurons (the size of the target vocabulary).[7] The linear layer alone has 5 million (500 * 10k) weights, about 10 times more than the recurrent layer.
score 100-long alignment score
w 100-long attention weight vector. These are "soft" weights which change during the forward pass, in contrast to "hard" neuronal weights that change during the learning phase.
A Attention module — this can be a dot product of recurrent states, or the Query-Key-Value fully connected layers. The output is a 100-long vector w.
H 500x100 matrix: the 100 encoder hidden vectors h concatenated into a matrix, one per column.
c 500-long context vector = H * w. c is a linear combination of h vectors weighted by w.
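
As a sketch of how the legend quantities above fit together, assuming the simplest untrained variant in which the alignment score is a plain dot product between the decoder state s and each encoder hidden vector in H (the random values here are purely illustrative):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# sizes from the legend: 100 = max sentence length, 500 = hidden vector length
rng = np.random.default_rng(0)
H = rng.normal(size=(500, 100))   # encoder hidden vectors h, one per column
s = rng.normal(size=500)          # current decoder hidden state s

score = H.T @ s                   # 100-long alignment score, score_j = h_j . s
w = softmax(score)                # 100-long attention weight w, nonnegative, sums to 1
c = H @ w                         # 500-long context vector c = H * w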

Viewed as a matrix, the attention weights show how the network adjusts its focus according to context.

        I     love   you
je      .94   .02    .04
t'      .11   .01    .88
aime    .03   .95    .02

This view of the attention weights addresses the "explainability" problem that neural networks are criticized for. Networks that perform verbatim translation without regard to word order would have a diagonally dominant matrix if they were analyzable in these terms. The off-diagonal dominance shows that the attention mechanism is more nuanced. On the first pass through the decoder, 94% of the attention weight is on the first English word "I", so the network offers the word "je". On the second pass of the decoder, 88% of the attention weight is on the third English word "you", so it offers "t'". On the last pass, 95% of the attention weight is on the second English word "love", so it offers "aime".
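
A quick numerical check of the table above (with the three rows hard-coded rather than produced by a trained network): each row sums to approximately 1, and the argmax of each row recovers the alignment described in the text.

import numpy as np

english = ["I", "love", "you"]
weights = np.array([[.94, .02, .04],   # je
                    [.11, .01, .88],   # t'
                    [.03, .95, .02]])  # aime

print(weights.sum(axis=1))                            # each row sums to (approximately) 1
print([english[j] for j in weights.argmax(axis=1)])   # ['I', 'you', 'love']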

Variants

There are many variants of attention: dot product, query-key-value,[3] hard, soft, self, cross, Luong,[8] and Bahdanau[9] to name a few. These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients (see legend).

  1. Encoder-decoder dot product. Both the encoder and the decoder are needed to calculate attention.[8]
  2. Encoder-decoder QKV. Both the encoder and the decoder are needed to calculate attention.[10]
  3. Encoder-only dot product. The decoder is not used to calculate attention. With only one input into corr, W is an auto-correlation of dot products: w_ij = x_i * x_j.[11]
  4. Encoder-only QKV. The decoder is not used to calculate attention.[12]
  5. Pytorch tutorial. A fully connected layer is used to calculate attention instead of a dot-product correlation.[13]
Legend
label description
variables X,H,S,T upper case variables represent the entire sentence, and not just the current word. For example, H is a matrix of the encoder hidden state—one word per column.
S, T S = decoder hidden state, T = target word embedding. In the Pytorch Tutorial variant training phase, T alternates between 2 sources depending on the level of teacher forcing used. T could be the embedding of the network's output word, i.e. embedding(argmax(FC output)). Alternatively with teacher forcing, T could be the embedding of the known correct word which can occur with a constant forcing probability, say 1/2.
X, H H = encoder hidden state, X = input word embeddings
W attention coefficients
Qw, Kw, Vw, FC weight matrices for query, key, value respectively. FC is a fully connected weight matrix.
circled +, circled x circled + = vector concatenation. circled x = matrix multiplication
corr column-wise softmax of the matrix of all combinations of dot products. The dot products are x_i * x_j in variant 3, h_i * s_j in variant 1, column i (Kw * H) * column j (Qw * S) in variant 2, and column i (Kw * X) * column j (Qw * X) in variant 4. Variant 5 uses a fully connected layer to determine the coefficients. If the variant is QKV, the dot products are normalized by sqrt(d), where d is the height of the QKV matrices.
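
The scoring rules in the legend can be sketched as follows: a column-wise softmax over a matrix of dot products, with the QKV variants scaled by sqrt(d) as described. The shapes and random values are illustrative assumptions only.

import numpy as np

def softmax_cols(M):
    e = np.exp(M - M.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)   # each column sums to 1

# illustrative sizes (assumptions): d = hidden size, n = source length, m = target length
d, n, m = 64, 7, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(d, n))   # input word embeddings, one column per source word
H = rng.normal(size=(d, n))   # encoder hidden states
S = rng.normal(size=(d, m))   # decoder hidden states
Kw, Qw = rng.normal(size=(d, d)), rng.normal(size=(d, d))

W1 = softmax_cols(H.T @ S)                               # variant 1: h_i . s_j
W2 = softmax_cols((Kw @ H).T @ (Qw @ S) / np.sqrt(d))    # variant 2: QKV, scaled by sqrt(d)
W3 = softmax_cols(X.T @ X)                               # variant 3: auto-correlation x_i . x_j
W4 = softmax_cols((Kw @ X).T @ (Qw @ X) / np.sqrt(d))    # variant 4: encoder-only QKV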

References

  1. Yann Lecun (2020). Deep Learning course at NYU, Spring 2020, video lecture Week 6. Event occurs at 53:00. Retrieved 2022-03-08.
  2. Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward et al. (2016-10-12). "Hybrid computing using a neural network with dynamic external memory" (in en). Nature 538 (7626): 471–476. doi:10.1038/nature20101. ISSN 1476-4687. PMID 27732574. Bibcode2016Natur.538..471G. https://ora.ox.ac.uk/objects/uuid:dd8473bd-2d70-424d-881b-86d9c9c66b51. 
  3. Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Lukasz; Polosukhin, Illia (2017-12-05). "Attention Is All You Need". arXiv:1706.03762 [cs.CL].
  4. Ramachandran, Prajit; Parmar, Niki; Vaswani, Ashish; Bello, Irwan; Levskaya, Anselm; Shlens, Jonathon (2019-06-13). "Stand-Alone Self-Attention in Vision Models". arXiv:1906.05909 [cs.CV].
  5. Jaegle, Andrew; Gimeno, Felix; Brock, Andrew; Zisserman, Andrew; Vinyals, Oriol; Carreira, Joao (2021-06-22). "Perceiver: General Perception with Iterative Attention". arXiv:2103.03206 [cs.CV].
  6. Ray, Tiernan. "Google's Supermodel: DeepMind Perceiver is a step on the road to an AI machine that could process anything and everything" (in en). https://www.zdnet.com/article/googles-supermodel-deepmind-perceiver-is-a-step-on-the-road-to-an-ai-machine-that-could-process-everything/. 
  7. "Pytorch.org seq2seq tutorial". https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html. 
  8. Luong, Minh-Thang (2015-09-20). "Effective Approaches to Attention-based Neural Machine Translation". arXiv:1508.04025v5 [cs.CL].
  9. Bahdanau, Dzmitry (2016-05-19). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473.
  10. Neil Rhodes (2021). CS 152 NN—27: Attention: Keys, Queries, & Values. Event occurs at 06:30. Retrieved 2021-12-22.
  11. Alfredo Canziani & Yann Lecun (2021). NYU Deep Learning course, Spring 2020. Event occurs at 05:30. Retrieved 2021-12-22.
  12. Alfredo Canziani & Yann Lecun (2021). NYU Deep Learning course, Spring 2020. Event occurs at 20:15. Retrieved 2021-12-22.
  13. Robertson, Sean. "NLP From Scratch: Translation With a Sequence To Sequence Network and Attention". https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html. 
