Generalized Attention Mechanism and Relative Position for Transformer
In this paper, we propose a generalized attention mechanism (GAM) by first suggesting a new interpretation of the self-attention mechanism of Vaswani et al. Following this interpretation, we describe different variants of the attention mechanism which together form GAM. Further, we propose a new relative position representation within the framework of GAM. This representation can be readily used in cases where elements adjacent to each other in the input sequence may appear at random locations in the actual dataset/corpus.
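For context, the baseline that GAM reinterprets is the scaled dot-product self-attention of Vaswani et al. (2017). As a reference point only (the notation below follows that paper, not necessarily the definitions used in the body of this work), the standard mechanism computes, for query, key, and value matrices $Q$, $K$, $V$ with key dimension $d_k$:
\[
  \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
GAM's variants and its relative position representation are defined in the body of the paper and are not reproduced here.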
Copyright (c) 2022 R. V. R. Pandya
This work is licensed under a Creative Commons Attribution 4.0 International License.