[2]
C. Li, B. Zhang, D. Hong, J. Yao, and J. Chanussot, LRR-net: An interpretable deep unfolding network for hyperspectral anomaly detection, IEEE Trans. Geosci. Remote Sens., vol. 61, pp. 1–12, 2023.
[3]
X. Wu, D. Hong, and J. Chanussot, UIU-net: U-net in U-net for infrared small object detection, IEEE Trans. Image Process., vol. 32, pp. 364–376, 2023.
[6]
X. Du and C. Cardie, Event extraction by answering (almost) natural questions, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, https://doi.org/10.48550/arXiv.2004.13625.
[7]
F. Li, W. Peng, Y. Chen, Q. Wang, L. Pan, Y. Lyu, and Y. Zhu, Event extraction as multi-turn question answering, in Proc. Findings of the Association for Computational Linguistics (EMNLP 2020), Virtual Event, 2020, pp. 829–838.
[8]
S. Wang, M. Yu, S. Chang, L. Sun, and L. Huang, Query and extract: Refining event extraction as type-oriented binary decoding, in Proc. Findings of the Association for Computational Linguistics (ACL 2022), Dublin, Ireland, 2022, pp. 169–182.
[9]
S. Yang, D. Feng, L. Qiao, Z. Kan, and D. Li, Exploring pre-trained language models for event extraction and generation, in Proc. 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019, pp. 5284–5294.
[10]
Y. Lin, H. Ji, F. Huang, and L. Wu, A joint neural model for information extraction with global features, in Proc. 58th Annual Meeting of the Association for Computational Linguistics, Virtual Event, 2020, pp. 7999–8009.
[12]
A. Ramponi, R. van der Goot, R. Lombardo, and B. Plank, Biomedical event extraction as sequence labeling, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, pp. 5357–5367.
[13]
D. Wadden, U. Wennberg, Y. Luan, and H. Hajishirzi, Entity, relation, and event extraction with contextualized span representations, in Proc. 2019 Conf. Empirical Methods in Natural Language Processing and the 9th Int. Joint Conf. Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 2019, pp. 5784–5789.
[15]
Y. Cao, Z. Zhou, C. Chakraborty, M. Wang, Q. M. Jonathan Wu, X. Sun, and K. Yu, Generative steganography based on long readable text generation, IEEE Trans. Comput. Soc. Syst., doi: 10.1109/TCSS.2022.3174013.
[17]
I.-H. Hsu, K.-H. Huang, E. Boschee, S. Miller, P. Natarajan, K.-W. Chang, and N. Peng, DEGREE: A data-efficient generation-based event extraction model, in Proc. 2022 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle, WA, USA, 2022, pp. 1890–1908.
[18]
X. Liu, H. Huang, G. Shi, and B. Wang, Dynamic prefix-tuning for generative template-based event extraction, in Proc. 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 2022, pp. 5216–5228.
[19]
G. Paolini, B. Athiwaratkun, J. Krone, J. Ma, A. Achille, R. Anubhai, C. N. D. Santos, B. Xiang, and S. Soatto, Structured prediction as translation between augmented natural languages, in Proc. International Conference on Learning Representations, Virtual Event, 2021, pp. 1–26.
[20]
Y. Lu, Q. Liu, D. Dai, X. Xiao, H. Lin, X. Han, L. Sun, and H. Wu, Unified structure generation for universal information extraction, in Proc. 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 2022, pp. 5755–5772.
[21]
G. Doddington, A. Mitchell, M. Przybocki, L. Ramshaw, S. Strassel, and R. Weischedel, The automatic content extraction (ACE) program: Tasks, data, and evaluation, in Proc. Fourth International Conference on Language Resources and Evaluation, Lisbon, Portugal, 2004, pp. 1–4.
[22]
Z. Song, A. Bies, S. Strassel, T. Riese, J. Mott, J. Ellis, J. Wright, S. Kulick, N. Ryant, and X. Ma, From light to rich ERE: Annotation of entities, relations, and events, in Proc. 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, Denver, CO, USA, 2015, pp. 89–98.
[23]
H. Cao, J. Li, F. Su, F. Li, H. Fei, S. Wu, B. Li, L. Zhao, and D. Ji, OneEE: A one-stage framework for fast overlapping and nested event extraction, in Proc. 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 2022, pp. 1953–1964.
[24]
Y. Lu, H. Lin, J. Xu, X. Han, J. Tang, A. Li, L. Sun, M. Liao, and S. Chen, Text2Event: Controllable sequence-to-structure generation for end-to-end event extraction, in Proc. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 2021, pp. 2795–2806.
[25]
S. Li, H. Ji, and J. Han, Document-level event argument extraction by conditional generation, in Proc. 2021 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Virtual Event, 2021, pp. 894–908.
[26]
X. L. Li and P. Liang, Prefix-tuning: Optimizing continuous prompts for generation, in Proc. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 2021, pp. 4582–4597.
[27]
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, Attention is all you need, arXiv preprint arXiv:1706.03762, 2017.
[30]
T. Schick and H. Schütze, Exploiting cloze-questions for few-shot text classification and natural language inference, in Proc. 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Virtual Event, 2021, pp. 255–269.
[31]
T. Gao, A. Fisch, and D. Chen, Making pre-trained language models better few-shot learners, in Proc. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 2021, pp. 3816–3830.
[32]
S. Kumar and P. Talukdar, Reordering examples helps during priming-based few-shot learning, in Proc. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Virtual Event, 2021, pp. 4507–4518.
[33]
X. Han, W. Zhao, N. Ding, Z. Liu, and M. Sun, PTR: Prompt tuning with rules for text classification, AI Open, vol. 3, pp. 182–192, 2022.
[34]
L. Cui, Y. Wu, J. Liu, S. Yang, and Y. Zhang, Template-based named entity recognition using BART, in Proc. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Virtual Event, 2021, pp. 1835–1845.
[35]
M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, in Proc. 58th Annual Meeting of the Association for Computational Linguistics, Virtual Event, 2020, pp. 7871–7880.
[36]
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, Exploring the limits of transfer learning with a unified text-to-text transformer, J. Mach. Learn. Res., vol. 21, no. 1, pp. 140:1–140:67, 2020.
[37]
E. Cohen and C. Beck, Empirical analysis of beam search performance degradation in neural sequence models, in Proc. 36th International Conference on Machine Learning, Long Beach, CA, USA, 2019, pp. 1290–1299.
[38]
M. Zhang, Y. Su, Z. Meng, Z. Fu, and N. Collier, COFFEE: A contrastive oracle-free framework for event extraction, arXiv preprint arXiv:2303.14452, 2023.
[39]
T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al., Transformers: State-of-the-art natural language processing, in Proc. 2020 Conf. Empirical Methods in Natural Language Processing: System Demonstrations, Virtual Event, 2020, pp. 38–45.
[40]
I. Loshchilov and F. Hutter, Decoupled weight decay regularization, in Proc. 7th International Conference on Learning Representations, New Orleans, LA, USA, 2019, pp. 1–18.