Having spent some time on early attempts to bring together neural networks and symbol-oriented knowledge representation, I am intrigued by the more recent work on deep learning and knowledge graphs. While many of the approaches appear conceptually interesting, I have some reservations about their use on large, noisy real-world graphs. This survey looks at attention as a way to tame some of these issues. The authors review publications that apply attention to graph-based machine learning. Their review is structured around three taxonomies: the type of attention, the task, and the problem setting.
The authors provide rigorous definitions of different types of graphs and of graph attention. Graph attention identifies nodes that are of particular interest and ranks them according to a relevance function; these could be nodes in the vicinity of a given node, or distant nodes with properties similar to the current one. Multiple relevance functions can be used to shift the perspective of attention. The notion of attention has been applied in a variety of contexts, including computer vision, natural language processing, and information retrieval; the particular graph tasks covered include node and link classification, link prediction, node ranking and alignment, graph classification, and the generation of sequences from graphs.
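To make the mechanism concrete, here is a minimal sketch of attention over a node's neighbors: a relevance function scores each neighbor, a softmax turns the scores into weights, and the neighbors' features are aggregated under those weights. The function names (`attend`, `dot_relevance`) and the dot-product relevance are my own illustrative choices, not notation from the survey.

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over a 1-D array of relevance scores."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attend(node_feat, neighbor_feats, relevance):
    """Weight each neighbor by a relevance function, then aggregate.

    Returns the attention weights and the weighted sum of neighbor features.
    """
    scores = np.array([relevance(node_feat, nf) for nf in neighbor_feats])
    weights = softmax(scores)            # attention distribution over neighbors
    return weights, weights @ neighbor_feats

# Hypothetical relevance function: dot-product similarity.
def dot_relevance(u, v):
    return float(u @ v)

node = np.array([1.0, 0.0])
neighbors = np.array([[1.0, 0.0],   # similar to the node -> higher weight
                      [0.0, 1.0]])  # orthogonal -> lower weight
weights, aggregated = attend(node, neighbors, dot_relevance)
```

Swapping in a different relevance function (say, one based on structural similarity rather than feature similarity) changes which nodes the model attends to, which is exactly the "shift of perspective" the definitions allow for.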
While I had difficulties following the intricacies of all the approaches evaluated under these taxonomies, I found the survey very valuable. It offers a systematic approach to comparing and analyzing attention in graph-based methods. In addition, the authors point out interesting avenues for further work, especially concerning the suitability of such approaches for large-scale real-world problems.