## onmt.GlobalAttention

Global attention takes a matrix and a query vector. It then computes a parameterized convex combination of the rows of the matrix based on the input query.

```
H_1 H_2 H_3 ... H_n
 q   q   q       q
 |   |   |       |
  \  |   |      /
      .....
    \  |  /
       a
```


Constructs a unit mapping $(H, q) \mapsto a$, where `H` is of size `batch x n x dim` and `q` is of size `batch x dim`.

The full function is $$\tanh(W_2 [(\mathrm{softmax}((W_1 q + b_1) H)\, H),\ q] + b_2).$$
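The computation above can be sketched in numpy. This is a minimal illustration of the attention mechanism this module describes, not the module's actual Lua/Torch implementation; the function name `global_attention` and the explicit weight arguments `W1, b1, W2, b2` are assumptions for the sake of the sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def global_attention(H, q, W1, b1, W2, b2):
    """Sketch of global attention.

    H:  (batch, n, dim)   matrix of context vectors
    q:  (batch, dim)      query vector
    W1: (dim, dim), b1: (dim,)      query transform
    W2: (dim, 2*dim), b2: (dim,)    output transform
    """
    # Score each row H_i against the transformed query: (batch, n).
    scores = np.einsum('bnd,bd->bn', H, q @ W1.T + b1)
    # Softmax turns scores into convex-combination weights.
    a = softmax(scores)
    # Weighted sum of the rows of H: (batch, dim).
    c = np.einsum('bn,bnd->bd', a, H)
    # Combine context and query, then squash: (batch, dim).
    out = np.tanh(np.concatenate([c, q], axis=-1) @ W2.T + b2)
    return out, a
```

Note that the attention weights `a` are non-negative and sum to one over `n`, which is what makes the result a convex combination of the rows of `H`.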

### onmt.GlobalAttention(dim)

An nn-style module computing attention.

Parameters:

• dim - dimension of the context vectors.