Research article

Advancing document-level event extraction: Integration across texts and reciprocal feedback

  • Received: 17 August 2023 Revised: 16 October 2023 Accepted: 22 October 2023 Published: 03 November 2023
  • The primary objective of document-level event extraction is to extract relevant event information from lengthy texts. However, many existing methods fail to fully exploit contextual information that spans sentences. To overcome this limitation, the present study proposes a document-level event extraction model called Integration Across Texts and Reciprocal Feedback (IATRF). The model constructs a heterogeneous graph and applies a graph convolutional network over it to strengthen the connection between document and entity information, yielding semantic representations enriched with document-level context. In addition, a Transformer classifier is introduced to cast the recognition of multiple event types as a multi-label classification task. To tackle the challenge of event argument recognition, the paper further introduces a Reciprocal Feedback Argument Extraction strategy. Experimental results on both our COSM dataset and the publicly available ChFinAnn dataset demonstrate that the proposed model outperforms previous methods in F1 score, confirming its effectiveness. IATRF effectively addresses the problems of long-distance, document-level context-aware representation and cross-sentence argument dispersion.

    Citation: Min Zuo, Jiaqi Li, Di Wu, Yingjun Wang, Wei Dong, Jianlei Kong, Kang Hu. Advancing document-level event extraction: Integration across texts and reciprocal feedback[J]. Mathematical Biosciences and Engineering, 2023, 20(11): 20050-20072. doi: 10.3934/mbe.2023888
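
    To make the two components named in the abstract concrete, the sketch below shows, in rough PyTorch, how a graph convolutional network over a heterogeneous sentence/entity-mention graph can produce document-aware node representations, and how a Transformer encoder can then score multiple event types as independent labels. This is an illustrative reading of the abstract, not the authors' released implementation: all module names, the adjacency construction, and the feature and label sizes are assumptions, and the Reciprocal Feedback Argument Extraction strategy is not covered here.

        import torch
        import torch.nn as nn

        class GCNLayer(nn.Module):
            """One graph convolution step: aggregate neighbor features, then transform."""
            def __init__(self, dim):
                super().__init__()
                self.linear = nn.Linear(dim, dim)

            def forward(self, h, adj):
                # h: [num_nodes, dim]; adj: normalized adjacency with self-loops
                return torch.relu(self.linear(adj @ h))

        class EventTypeClassifier(nn.Module):
            """GCN over a heterogeneous node graph, followed by a Transformer encoder
            that scores each event type independently (multi-label classification)."""
            def __init__(self, dim, num_event_types, gcn_layers=2):
                super().__init__()
                self.gcn = nn.ModuleList([GCNLayer(dim) for _ in range(gcn_layers)])
                enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
                self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
                self.head = nn.Linear(dim, num_event_types)

            def forward(self, node_feats, adj):
                h = node_feats
                for layer in self.gcn:
                    h = layer(h, adj)                # spread sentence/entity context over the graph
                ctx = self.encoder(h.unsqueeze(0))   # jointly re-contextualize all nodes
                logits = self.head(ctx.mean(dim=1))  # pool nodes into one document vector
                return torch.sigmoid(logits)         # one independent probability per event type

        # Toy usage: 12 nodes (sentences plus entity mentions), 768-d features, 5 event types.
        feats = torch.randn(12, 768)
        adj = torch.eye(12)                          # placeholder adjacency; real edges come from the document
        model = EventTypeClassifier(dim=768, num_event_types=5)
        probs = model(feats, adj)                    # shape [1, 5]; train with binary cross-entropy

    Because each event type gets its own sigmoid output, a document can be assigned several event types at once, which is what casting event typing as multi-label classification buys over a single softmax.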

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)