Open Access

A Survey of the Application of Neural Networks to Event Extraction

College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266000, China
Shandong Provincial University Laboratory for Protected Horticulture, Weifang University of Science and Technology, Weifang 261000, China
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210000, China
School of Computer Science, Qufu Normal University, Rizhao 276800, China

Abstract

Event extraction is an important part of natural language information extraction, and it is widely employed in downstream natural language processing tasks such as question answering and machine reading comprehension. However, recent comprehensive surveys of event extraction are lacking. In the past few years, numerous high-quality and innovative event extraction methods have been proposed, making it necessary to consolidate these new developments with earlier work in order to give researchers a clear overview and a reference for future studies. In addition, although event detection is a fundamental sub-task of event extraction, previous surveys have often overlooked the related work on event detection. This paper aims to bridge these gaps by presenting a comprehensive survey of event extraction that covers recent advances and analyzes prior research on event detection. We first introduce the resources for event extraction and then divide the neural network models currently employed in event extraction into four types: word sequence-based methods, graph-based neural network methods, external knowledge-based approaches, and prompt-based approaches. We compare and contrast these methods in depth, pointing out the flaws and difficulties of existing research. Finally, we discuss future directions for event extraction.
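To make the task concrete, the sketch below illustrates the word sequence-based family in its simplest form: event detection framed as token-level trigger classification with a bidirectional LSTM encoder. This is a minimal illustration written in PyTorch, not a model from any of the surveyed papers; the vocabulary size, label count, and hyperparameters are placeholder assumptions.

# Minimal sketch of word sequence-based event detection (trigger classification).
# Assumptions: toy vocabulary/label sizes and hyperparameters; not a surveyed model.
import torch
import torch.nn as nn

class BiLSTMTriggerDetector(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=128, num_event_types=34):
        super().__init__()
        # num_event_types would correspond to the event-type inventory plus one "O" (no trigger) label.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_event_types)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        hidden, _ = self.encoder(embedded)     # (batch, seq_len, 2*hidden_dim)
        return self.classifier(hidden)         # per-token scores over event types

# Usage: score every token in a sentence as a potential event trigger.
model = BiLSTMTriggerDetector()
tokens = torch.randint(0, 5000, (1, 12))      # one 12-token sentence with placeholder word indices
logits = model(tokens)
predicted_types = logits.argmax(dim=-1)       # per-token event-type labels

Argument extraction, graph-based encoders, external knowledge, and prompt-based formulations build on this basic token-classification view, as the survey body details.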

Tsinghua Science and Technology
Pages 748-768
Cite this article:
Xie J, Zhang Y, Kou H, et al. A Survey of the Application of Neural Networks to Event Extraction. Tsinghua Science and Technology, 2025, 30(2): 748-768. https://doi.org/10.26599/TST.2023.9010139

Received: 23 August 2023
Revised: 10 October 2023
Accepted: 24 October 2023
Published: 09 December 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
