Traditional deep learning methods for event extraction overlook the correlation between word-level features and sequence information, and therefore cannot fully exploit the hidden associations among events or between events and their primary attributes. To address these problems, we developed a new event extraction framework called the masked attention-guided dynamic graph aggregation network (MaskDGNets). On the one hand, to obtain effective word and sequence representations, we establish an interactive, complementary relationship between word vectors and character vectors. At the same time, a squeeze layer is introduced into the bidirectional independent recurrent unit to model the sentence sequence in both the forward and backward directions, retaining local spatial details as far as possible while establishing effective long-term dependencies and rich global context representations. On the other hand, the designed masked attention mechanism effectively balances word-vector features and sequence semantics and refines these features. The designed dynamic graph aggregation module establishes effective connections among events and between events and their essential attributes, strengthens their interactivity and association, and transfers and aggregates features over neighboring graph nodes through a dynamic strategy to improve event extraction performance. We also designed a reconstructed weighted loss function to supervise and adjust each module individually and ensure optimal feature representations. Finally, the proposed MaskDGNets framework is evaluated on two benchmark datasets, DuEE and CCKS2020, demonstrating its robustness and event extraction performance with F1 scores of 81.443% and 87.382%, respectively.
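To make the two core ideas of the abstract concrete, the following is a minimal sketch, under stated assumptions, of (a) a padding-masked attention layer that fuses word-level features with sequence features (e.g., outputs of a bidirectional recurrent encoder) and (b) one round of normalized neighborhood aggregation over event/attribute graph nodes. It is not the authors' implementation; all class, tensor, and variable names (MaskedAttentionFusion, GraphAggregation, adj, pad_mask) are illustrative assumptions.

```python
# Illustrative sketch only, not the paper's code: masked attention fusion of
# word-level and sequence-level features, followed by one graph aggregation step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAttentionFusion(nn.Module):
    """Balance word-vector features and sequence semantics using a padding mask."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, word_feats, seq_feats, pad_mask):
        # word_feats, seq_feats: (batch, seq_len, dim); pad_mask: (batch, seq_len), True = padding
        q, k, v = self.query(word_feats), self.key(seq_feats), self.value(seq_feats)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        scores = scores.masked_fill(pad_mask.unsqueeze(1), float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        return attn @ v + word_feats  # residual connection keeps the original word features

class GraphAggregation(nn.Module):
    """One round of feature transfer and aggregation over graph-node neighborhoods."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # node_feats: (batch, nodes, dim); adj: (batch, nodes, nodes) with self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)   # degree normalization
        return F.relu(self.proj((adj @ node_feats) / deg))

if __name__ == "__main__":
    batch, seq_len, dim = 2, 8, 16
    word_feats = torch.randn(batch, seq_len, dim)
    seq_feats = torch.randn(batch, seq_len, dim)         # e.g., bidirectional RNN outputs
    pad_mask = torch.zeros(batch, seq_len, dtype=torch.bool)
    fused = MaskedAttentionFusion(dim)(word_feats, seq_feats, pad_mask)
    adj = torch.eye(seq_len).expand(batch, -1, -1)       # placeholder adjacency (self-loops only)
    nodes = GraphAggregation(dim)(fused, adj)
    print(fused.shape, nodes.shape)                      # torch.Size([2, 8, 16]) twice
```

The dynamic edge construction, squeeze layer, and reconstructed weighted loss described in the abstract are omitted here; the sketch only shows how masked attention and neighborhood aggregation could plug together in principle.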