[1]
Canhasi E. Graph-based models for multi-document summarization [Ph.D. Thesis]. University of Ljubljana, 2014.
[2]
Mihalcea R, Tarau P. TextRank: Bringing order into text. In Proc. the 2004 Conference on Empirical Methods in Natural Language Processing, July 2004, pp.404-411.
[4]
Berg-Kirkpatrick T, Gillick D, Klein D. Jointly learning to extract and compress. In Proc. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, June 2011, pp.481-490.
[9]
Chu E, Liu P. MeanSum: A neural model for unsupervised multi-document abstractive summarization. In Proc. the 36th International Conference on Machine Learning, June 2019, pp.1223-1232.
[11]
Lin C Y. ROUGE: A package for automatic evaluation of summaries. In Proc. the Workshop on Text Summarization Branches Out, July 2004, pp.74-81.
[16]
Socher R, Perelygin A, Wu J, Chuang J, Manning C D, Ng A Y, Potts C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. the 2013 Conference on Empirical Methods in Natural Language Processing, October 2013, pp.1631-1642.
[18]
Zhu X D, Sobihani P, Guo H Y. Long short-term memory over recursive structures. In Proc. the 32nd International Conference on Machine Learning, July 2015, pp.1604-1612.
[19]
Lai S W, Xu L H, Liu K, Zhao J. Recurrent convolutional neural networks for text classification. In Proc. the 29th AAAI Conference on Artificial Intelligence, January 2015, pp.2267-2273.
[20]
Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks. In Proc. the Annual Conference on Neural Information Processing Systems, December 2014, pp.3104-3112.
[21]
Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate. In Proc. the 3rd International Conference on Learning Representations, May 2015.
[23]
Xu K, Ba J, Kiros R, Cho K, Courville A, Salakhutdinov R, Zemel R, Bengio Y. Show, attend and tell: Neural image caption generation with visual attention. In Proc. the 32nd International Conference on Machine Learning, July 2015, pp.2048-2057.
[27]
Vinyals O, Fortunato M, Jaitly N. Pointer networks. In Proc. the Annual Conference on Neural Information Processing Systems, December 2015, pp.2692-2700.
[30]
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, Kaiser L, Polosukhin I. Attention is all you need. In Proc. the Annual Conference on Neural Information Processing Systems, December 2017, pp.5998-6008.
[31]
Hermann K M, Kocisky T, Grefenstette E, Espeholt L, Kay W, Suleyman M, Blunsom P. Teaching machines to read and comprehend. In Proc. the Annual Conference on Neural Information Processing Systems, December 2015, pp.1693-1701.
[32]
Nallapati R, Zhai F F, Zhou B W. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proc. the 31st AAAI Conference on Artificial Intelligence, February 2017, pp.3075-3081.
[37]
Liu P J, Saleh M, Pot E, Goodrich B, Sepassi R, Kaiser L, Shazeer N. Generating Wikipedia by summarizing long sequences. In Proc. the 6th International Conference on Learning Representations, May 2018.
[50]
Goodfellow I, Bengio Y, Courville A. Deep Learning. MIT Press, 2016.
[51]
Chen Q, Zhu X D, Ling Z H, Wei S, Jiang H. Distraction-based neural networks for modeling documents. In Proc. the 25th International Joint Conference on Artificial Intelligence, July 2016, pp.2754-2760.
[70]
Sukhbaatar S, Szlam A, Fergus R. Learning multiagent communication with backpropagation. In Proc. the Annual Conference on Neural Information Processing Systems, December 2016, pp.2244-2252.
[71]
Liu L Q, Lu Y, Yang M, Qu Q, Zhu J, Li H Y. Generative adversarial network for abstractive text summarization. In Proc. the 32nd AAAI Conference on Artificial Intelligence, February 2018, pp.8109-8110.
[73]
Cao Z Q, Wei F R, Li W J, Li S J. Faithful to the original: Fact aware neural abstractive summarization. In Proc. the 32nd AAAI Conference on Artificial Intelligence, February 2018, pp.4784-4791.
[77]
Dong L, Yang N, Wang W H, Wei F R, Liu X D, Wang Y, Gao J F, Zhou M, Hon H W. Unified language model pre-training for natural language understanding and generation. In Proc. the Annual Conference on Neural Information Processing Systems, December 2019, pp.13042-13054.
[78]
Song K T, Tan X, Qin T, Lu J F, Liu T Y. MASS: Masked sequence to sequence pre-training for language generation. In Proc. the 36th International Conference on Machine Learning, June 2019, pp.5926-5936.
[88]
Wu Y X, Hu B T. Learning to extract coherent summary via deep reinforcement learning. In Proc. the 32nd AAAI Conference on Artificial Intelligence, February 2018, pp.5602-5609.
[92]
Sukhbaatar S, Szlam A, Weston J, Fergus R. End-to-end memory networks. In Proc. the Annual Conference on Neural Information Processing Systems, December 2015, pp.2440-2448.
[93]
Peters M E, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L. Deep contextualized word representations. In Proc. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 2018, pp.2227-2237.
[98]
Socher R, Huang E H, Pennington J, Ng A Y, Manning C D. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proc. the 25th Annual Conference on Neural Information Processing Systems, December 2011, pp.801-809.
[99]
Lin H, Bilmes J. A class of submodular functions for document summarization. In Proc. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, June 2011, pp.510-520.
[100]
Yin W P, Pei Y L. Optimizing sentence modeling and selection for document summarization. In Proc. the 24th International Joint Conference on Artificial Intelligence, July 2015, pp.1383-1389.
[102]
Mnih A, Teh Y W. A fast and simple algorithm for training neural probabilistic language models. In Proc. the 29th International Conference on Machine Learning, June 2012.
[103]
Cao Z Q, Wei F R, Dong L, Li S J, Zhou M. Ranking with recursive neural networks and its application to multi-document summarization. In Proc. the 29th AAAI Conference on Artificial Intelligence, January 2015, pp.2153-2159.
[104]
Cao Z Q, Li W J, Li S J, Wei F R. Improving multi-document summarization via text classification. In Proc. the 31st AAAI Conference on Artificial Intelligence, February 2017, pp.3053-3059.
[105]
Christensen J, Soderland S, Etzioni O. Towards coherent multi-document summarization. In Proc. the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 2013, pp.1163-1173.
[107]
Ren P J, Wei F R, Chen Z M, Ma J, Zhou M. A redundancy-aware sentence regression framework for extractive summarization. In Proc. the 26th International Conference on Computational Linguistics, December 2016, pp.33-43.
[109]
Li P J, Wang Z H, Lam W, Ren Z C, Bing L D. Salience estimation via variational auto-encoders for multi-document summarization. In Proc. the 31st AAAI Conference on Artificial Intelligence, February 2017, pp.3497-3503.
[111]
Hinton G E, Sabour S, Frosst N. Matrix capsules with EM routing. In Proc. the 6th International Conference on Learning Representations, April 2018.
[117]
Gehring J, Auli M, Grangier D, Yarats D, Dauphin Y N. Convolutional sequence to sequence learning. In Proc. the 34th International Conference on Machine Learning, August 2017, pp.1243-1252.
[119]
Devlin J, Chang M W, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 2019, pp.4171-4186.