Regular Paper

Augmenting Trigger Semantics to Improve Event Coreference Resolution

School of Computer Science and Technology, Soochow University, Suzhou 215000, China

Abstract

Due to the small size of annotated corpora and the sparsity of event trigger words, event coreference resolvers cannot capture enough event semantics, especially trigger semantics, to identify coreferential event mentions. To address these issues, this paper proposes a trigger semantics augmentation mechanism to boost event coreference resolution. First, this mechanism performs a trigger-oriented masking strategy to pre-train a BERT (Bidirectional Encoder Representations from Transformers)-based encoder (Trigger-BERT), which is fine-tuned on the large-scale unlabeled Gigaword corpus. Second, it combines the event semantic relations from the Trigger-BERT encoder with the event interactions from a soft-attention mechanism to resolve event coreference. Experimental results on both the KBP2016 and KBP2017 datasets show that our proposed model outperforms several state-of-the-art baselines.
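To make the trigger-oriented masking idea concrete, below is a minimal sketch of how such a masking step might be implemented with the Hugging Face transformers library. The tokenizer choice, the trigger_masking helper, and the masking probability are illustrative assumptions based on our reading of the abstract, not the authors' exact Trigger-BERT pre-training setup.

```python
# Hypothetical sketch of trigger-oriented masking for MLM pre-training.
# Assumptions: a fast BERT tokenizer and a list of known trigger words;
# the authors' actual Trigger-BERT recipe may differ in detail.
import random
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def trigger_masking(sentence, trigger_words, mask_prob=0.8):
    """Mask word pieces belonging to event triggers instead of random tokens."""
    input_ids = tokenizer(sentence)["input_ids"]
    labels = [-100] * len(input_ids)          # -100: position ignored by the MLM loss
    tokens = tokenizer.convert_ids_to_tokens(input_ids)
    # Collect the word pieces produced by the known trigger words.
    trigger_pieces = {piece for word in trigger_words
                      for piece in tokenizer.tokenize(word)}
    for i, tok in enumerate(tokens):
        if tok in trigger_pieces and random.random() < mask_prob:
            labels[i] = input_ids[i]          # model must recover the trigger piece
            input_ids[i] = tokenizer.mask_token_id
    return input_ids, labels

ids, labels = trigger_masking(
    "Protesters were arrested after the police attacked the crowd.",
    trigger_words=["arrested", "attacked"])
print(tokenizer.convert_ids_to_tokens(ids))
```

Replacing BERT's random token masking with this trigger-focused selection is, as we read the abstract, the step that produces Trigger-BERT before it is further trained on the unlabeled Gigaword corpus.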

Electronic Supplementary Material

JCST-2011-11143-Highlights.pdf (260.6 KB)

Journal of Computer Science and Technology
Pages 600-611
Cite this article:
Huan M, Xu S, Li P-F. Augmenting Trigger Semantics to Improve Event Coreference Resolution. Journal of Computer Science and Technology, 2023, 38(3): 600-611. https://doi.org/10.1007/s11390-022-1143-8


Received: 09 November 2020
Accepted: 11 April 2022
Published: 30 May 2023
© Institute of Computing Technology, Chinese Academy of Sciences 2023