Regular Paper

M-LSM: An Improved Multi-Liquid State Machine for Event-Based Vision Recognition

College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China

Abstract

Event-based computation has recently attracted increasing research interest for vision recognition applications due to its intrinsic advantages in efficiency and speed. However, existing event-based models for vision recognition face several issues, such as high network complexity and expensive training cost. In this paper, we propose an improved multi-liquid state machine (M-LSM) method for high-performance vision recognition. Specifically, we introduce two methods, multi-state fusion and multi-liquid search, to optimize the liquid state machine (LSM). Multi-state fusion samples the liquid state at multiple timesteps, preserving richer spatiotemporal information than a single final-state readout. We adapt network architecture search (NAS) to find a potentially optimal architecture for the multi-liquid state machine. We also train the M-LSM with an unsupervised learning rule, spike-timing-dependent plasticity (STDP). Evaluated on two event-based datasets, our M-LSM achieves state-of-the-art recognition performance with substantially lower network complexity and training cost.
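The multi-state fusion idea described above — sampling the reservoir's liquid state at several timesteps and concatenating the samples into one richer readout feature — can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the spiking reservoir is stubbed with a simple random recurrent update, and the names (`run_reservoir`, `multi_state_fusion`, `sample_steps`) are ours.

```python
import numpy as np

def run_reservoir(inputs, n_neurons=100, seed=0):
    """Toy stand-in for an LSM reservoir: a random recurrent
    update driven by input spike trains (not a real spiking net)."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(0.0, 0.5, (n_neurons, inputs.shape[1]))
    W_rec = rng.normal(0.0, 0.1, (n_neurons, n_neurons))
    states = np.zeros((inputs.shape[0], n_neurons))
    x = np.zeros(n_neurons)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W_rec @ x)   # reservoir state at timestep t
        states[t] = x
    return states

def multi_state_fusion(states, sample_steps):
    """Concatenate the liquid state at several timesteps into one
    feature vector, so the readout sees spatiotemporal information
    that a single final-state sample would discard."""
    return np.concatenate([states[t] for t in sample_steps])

# Random binary "event" input: 50 timesteps, 16 input channels.
T, n_in = 50, 16
inputs = (np.random.default_rng(1).random((T, n_in)) < 0.1).astype(float)
states = run_reservoir(inputs)
feat = multi_state_fusion(states, sample_steps=[T // 4, T // 2, T - 1])
print(feat.shape)  # (300,): 3 sampled states x 100 neurons
```

The fused vector `feat` would then be fed to an ordinary readout classifier; only where the states are sampled changes, not the reservoir itself.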

Electronic Supplementary Material

Download File(s)
JCST-2101-11326-Highlights.pdf (149.5 KB)


Journal of Computer Science and Technology
Pages 1288-1299
Cite this article:
Wang L, Guo S-S, Qu L-H, et al. M-LSM: An Improved Multi-Liquid State Machine for Event-Based Vision Recognition. Journal of Computer Science and Technology, 2023, 38(6): 1288-1299. https://doi.org/10.1007/s11390-021-1326-8


Received: 27 January 2021
Accepted: 19 November 2021
Published: 15 November 2023
© Institute of Computing Technology, Chinese Academy of Sciences 2023