Research Article | Open Access

Trajectory distributions: A new description of movement for trajectory prediction

School of Information Engineering, Zhengzhou University, Zhengzhou 450001, China

Abstract

Trajectory prediction is a fundamental and challenging task for numerous applications, such as autonomous driving and intelligent robots. Current works typically treat a pedestrian trajectory as a series of 2D point coordinates. However, in real scenarios, a trajectory often exhibits randomness and has its own probability distribution. Inspired by this observation and other movement characteristics of pedestrians, we propose a simple and intuitive movement description called the trajectory distribution, which maps the coordinates of the pedestrian trajectory to a 2D Gaussian distribution in space. Based on this novel description, we develop a new trajectory prediction method, which we call the social probability method. The method combines trajectory distributions with powerful convolutional recurrent neural networks. Both the input and output of our method are trajectory distributions, which provide the recurrent neural network with sufficient spatial and stochastic information about moving pedestrians. Furthermore, the social probability method extracts spatio-temporal features directly from the new movement description to generate robust and accurate predictions. Experiments on public benchmark datasets demonstrate the effectiveness of the proposed method.
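As a rough illustration of the core idea in the abstract (mapping 2D trajectory points to 2D Gaussian distributions), the sketch below estimates a per-point Gaussian from a sliding window of neighbouring positions. This is a hypothetical construction for intuition only: the function name `trajectory_to_gaussians`, the window size, and the sliding-window estimator are assumptions, not the paper's actual method.

```python
import numpy as np

def trajectory_to_gaussians(points, window=3):
    """Map each 2D trajectory point to a 2D Gaussian (mean, covariance)
    estimated from a sliding window of neighbouring points.

    Hypothetical illustration of the trajectory-distribution idea,
    not the paper's exact construction.
    """
    points = np.asarray(points, dtype=float)
    half = window // 2
    gaussians = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        win = points[lo:hi]
        mean = win.mean(axis=0)
        # rowvar=False: each row is an observation, columns are x/y
        cov = np.cov(win, rowvar=False) if len(win) > 1 else np.zeros((2, 2))
        gaussians.append((mean, cov))
    return gaussians

# A short straight-line trajectory of 2D coordinates
traj = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]
dists = trajectory_to_gaussians(traj)
```

Each trajectory point is thus replaced by a distribution that carries both position (the mean) and local spread (the covariance), which is the kind of spatial and stochastic information the abstract says is fed to the recurrent network.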

Computational Visual Media
Pages 213-224
Cite this article:
Lv P, Wei H, Gu T, et al. Trajectory distributions: A new description of movement for trajectory prediction. Computational Visual Media, 2022, 8(2): 213-224. https://doi.org/10.1007/s41095-021-0236-6

Received: 21 January 2021
Accepted: 20 April 2021
Published: 06 December 2021
© The Author(s) 2021.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.