This study investigates the effect of presentation rate on pupil dilation during target recognition in the Rapid Serial Visual Presentation (RSVP) paradigm. An RSVP experiment with five presentation rates (50, 80, 100, 150, and 200 ms per item) is designed, and pupillometry data from 15 subjects are collected and analyzed. The pupillometry results reveal that the peak and average amplitudes of pupil size and pupil velocity at the 80-ms presentation rate are considerably higher than those at the other presentation rates, and the average amplitude of pupil acceleration at the 80-ms rate is significantly higher than at the other rates. The latencies at the 50- and 80-ms presentation rates are considerably shorter than those at the 100-, 150-, and 200-ms rates. Additionally, no considerable differences are observed in the peak amplitude, average amplitude, or latency of pupil size, velocity, or acceleration among the 100-, 150-, and 200-ms presentation rates. These results reveal that as the presentation rate increases from 50 to 200 ms, pupil dilation first increases, then decreases, and finally saturates, with the 80-ms presentation rate eliciting the largest pupil dilation. No correlation is observed between pupil dilation and recognition accuracy at any of the five presentation rates.
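As an illustration of the pupillometric measures above, the following minimal Python sketch computes peak amplitude, average amplitude, latency, velocity, and acceleration from a single baseline-corrected pupil trace; the 3-s analysis window and the function and variable names are assumptions for illustration, not parameters taken from the study.

```python
import numpy as np

def pupil_metrics(pupil, fs, stim_onset):
    """Illustrative peak/average amplitude and latency metrics for one
    baseline-corrected pupil trace sampled at fs Hz.
    The 3-s window and derivative-based velocity/acceleration are assumptions."""
    onset = int(stim_onset * fs)
    window = pupil[onset:onset + int(3.0 * fs)]    # assumed 3-s analysis window
    velocity = np.gradient(window) * fs            # first derivative: pupil velocity
    acceleration = np.gradient(velocity) * fs      # second derivative: acceleration

    peak_idx = int(np.argmax(window))
    return {
        "peak_amplitude": float(window[peak_idx]),
        "average_amplitude": float(window.mean()),
        "latency_s": peak_idx / fs,                # time from onset to dilation peak
        "peak_velocity": float(np.max(velocity)),
        "average_velocity": float(velocity.mean()),
        "average_acceleration": float(acceleration.mean()),
    }
```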
Early diagnosis of autism spectrum disorder (ASD) is very important for improving autism treatment. Recent studies have investigated the early diagnosis of children with ASD using machine learning and eye tracking. This paper presents an eye-tracking and pupillary-response feature extraction method combined with a naive Bayes classification model to identify abnormal pupillary responses in children with autism. The method was tested on the Autism Detection Dataset, which comprises 25 children with ASD and 50 children with typical development, aged 3-6 years. Using only pupillary features for modeling, the method achieves an average classification accuracy of 90.67% and an average AUC of 92.24%, outperforming the 82.2% average accuracy of a model based on pupillary and gaze-behavior features and the 78% average accuracy of a model based on gaze-behavior and kinematic features. The method is simple and accurate. The results show its effectiveness and the feasibility of real clinical applications of this kind for early autism diagnosis based on machine learning and eye tracking.
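The following sketch shows how a naive Bayes classifier can be evaluated with cross-validated accuracy and AUC on pupillary features using scikit-learn; the placeholder feature matrix, 5-fold split, and variable names are assumptions for illustration and do not reproduce the paper's feature extraction.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_validate, StratifiedKFold

# X: one row per child, columns are pupillary features (e.g., baseline diameter,
# constriction latency/amplitude); y: 1 = ASD, 0 = typical development.
# Placeholder data standing in for the 25 ASD + 50 TD children.
rng = np.random.default_rng(0)
X = rng.normal(size=(75, 6))
y = np.array([1] * 25 + [0] * 50)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(GaussianNB(), X, y, cv=cv, scoring=["accuracy", "roc_auc"])
print(scores["test_accuracy"].mean(), scores["test_roc_auc"].mean())
```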
In this research, electroencephalography (EEG) was analyzed to investigate the brain's ability to perceive objects moving at different speeds. Six videos of license plates moving at distinct speeds of 0.26, 0.36, 0.46, 0.56, 0.66, and 0.76 m/s were created. Using a semantic priming paradigm, the N400 effect was analyzed for each speed. The ERP results demonstrated that the N400 amplitude gradually decreased with increasing speed. At the three lower speeds, an evident N400 was evoked, distributed mainly over the centro-posterior region; at the three higher speeds, no significant N400 effect was found. These results indicate that the brain's perception ability declines as the object's moving speed increases and that the brain recognizes the detailed information of a moving object when its speed does not exceed 0.46 m/s.
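A minimal sketch of the ERP measurement described above: averaging trials and taking the mean amplitude in an assumed 300-500 ms N400 window. The window, electrode selection, and function names are illustrative assumptions rather than the study's exact parameters.

```python
import numpy as np

def n400_amplitude(epochs, times, window=(0.3, 0.5)):
    """Mean ERP amplitude in an assumed 300-500 ms N400 window.
    epochs: array (n_trials, n_samples) from centro-posterior electrodes,
    baseline-corrected; times: sample times in seconds relative to word onset."""
    erp = epochs.mean(axis=0)                          # average over trials -> ERP
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# N400 effect per speed = amplitude difference between incongruent and congruent
# trials (a more negative difference indicates a stronger N400).
```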
Although notable progress has been made in the study of Steady-State Visual Evoked Potential (SSVEP)-based Brain-Computer Interfaces (BCIs), several factors still limit their practical application. One of these factors is the poor portability of the visual stimulator. In this study, Augmented Reality (AR) technology was introduced to present the visual stimuli of an SSVEP-BCI, and a robot grasping experiment was designed to verify the applicability of the AR-BCI system. An offline experiment was conducted to determine the optimal stimulation time, and an online experiment was used to complete the robot grasping task. The offline experiment revealed that a better information transfer rate could be achieved with a stimulation time of 2 s. Results of the online experiment indicate that all 12 subjects could control the robot to complete the grasping task, demonstrating the applicability of the AR-SSVEP-humanoid robot (NAO) system. Overall, this study verified the reliability of the AR-BCI system and the applicability of the AR-SSVEP-NAO system in robot grasping tasks.
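For context on the information transfer rate used to select the 2-s stimulation time, the sketch below computes the standard Wolpaw ITR; the number of targets and the gaze-shift interval are hypothetical values, not taken from this experiment.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate. selection_time_s should include the
    stimulation time (e.g., the 2 s found here) plus any gaze-shift interval,
    which is an assumption of this sketch."""
    p, n = accuracy, n_targets
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Hypothetical example: 4 grasping commands, 95% accuracy, 2 s stimulation + 0.5 s gaze shift.
print(itr_bits_per_min(n_targets=4, accuracy=0.95, selection_time_s=2.5))
```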
The N400 is an objective electrophysiological index of semantic processing in the brain. This study focuses on the sensitivity of the N400 effect during speech comprehension under unimodal and bimodal conditions. Varying the Signal-to-Noise Ratio (SNR) of the speech signal under Audio-only (A), Visual-only (V, i.e., lip-reading), and Audio-Visual (AV) conditions, a semantic priming paradigm is used to evoke the N400 effect and to measure the speech recognition rate. Under the A and high-SNR AV conditions, the N400 amplitudes are larger in the central region; under the V and low-SNR AV conditions, the N400 amplitudes are larger in the left-frontal region. The N400 amplitudes in the frontal and central regions under the A, AV, and V conditions are consistent with the behavioral speech recognition rates. These results indicate that auditory cognition outperforms visual cognition at high SNR, whereas visual cognition outperforms auditory cognition at low SNR.
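As a side note on the SNR manipulation, the following sketch scales a noise signal so that the speech-to-noise power ratio matches a target SNR in dB; the noise type and the specific SNR levels used in the study are not specified here, so this is only a generic illustration.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`,
    then return the mixture. A generic sketch, not the study's stimulus pipeline."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```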
In general, a large amount of training data can effectively improve the classification performance of a Steady-State Visual Evoked Potential (SSVEP)-based Brain-Computer Interface (BCI) system. However, collecting such data prolongs the training time and considerably restricts the practicality of the system. This study proposed an SSVEP nonlinear signal model based on the Volterra filter, which can reconstruct stable reference signals from a relatively small number of training targets via transfer learning, thereby reducing the training cost of SSVEP-BCI. Moreover, this study designed a transfer-extended Canonical Correlation Analysis (t-eCCA) method based on the model to achieve cross-target transfer. As a result, in a single-target SSVEP experiment with 16 stimulus frequencies, t-eCCA obtained an average accuracy of 86.96%.
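For orientation, the sketch below shows standard CCA-based SSVEP target identification with sine-cosine reference signals; it does not implement the Volterra-filter model or the t-eCCA transfer method proposed in the study, and the harmonic count and function names are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=3):
    """Canonical correlation between multi-channel EEG (n_channels, n_samples)
    and sine-cosine reference signals at `freq` Hz."""
    t = np.arange(eeg.shape[1]) / fs
    ref = np.vstack([f(2 * np.pi * freq * (h + 1) * t)
                     for h in range(n_harmonics) for f in (np.sin, np.cos)])
    cca = CCA(n_components=1)
    x_c, y_c = cca.fit_transform(eeg.T, ref.T)
    return np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]

def classify(eeg, freqs, fs):
    # Pick the stimulus frequency whose reference set correlates best with the EEG.
    return max(freqs, key=lambda f: cca_score(eeg, f, fs))
```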
In the fatigue state, the neural response characteristics of the brain may differ from those in the normal state. Brain functional connectivity analysis is an effective tool for distinguishing between brain states; for example, comparative studies of brain functional connectivity can reveal functional differences between mental states. The purpose of this study was to explore the relationship between human mental states and brain control abilities by analyzing the effect of fatigue on brain response connectivity. The phase-scrambling method was used to generate images with two noise levels, the N-back working memory task was used to induce fatigue in the subjects, and the rapid serial visual presentation (RSVP) paradigm was used to present the visual stimuli. Brain connectivity in the normal and fatigue states was analyzed using the open-source eConnectome toolbox. The results demonstrated that the control areas of the neural responses were mainly distributed over the parietal region in both states. Compared with the normal state, the connectivity power in the parietal region was significantly weakened in the fatigue state, indicating that the brain's control ability is reduced under fatigue.
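The eConnectome toolbox itself is MATLAB-based and provides directed connectivity measures; as a generic stand-in, the sketch below computes a pairwise spectral-coherence connectivity matrix in an assumed alpha band, which is an illustrative simplification rather than the study's actual analysis.

```python
import numpy as np
from scipy.signal import coherence

def connectivity_matrix(eeg, fs, band=(8, 13)):
    """Pairwise spectral coherence over an assumed 8-13 Hz band for
    multi-channel EEG of shape (n_channels, n_samples)."""
    n_ch = eeg.shape[0]
    conn = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, c = coherence(eeg[i], eeg[j], fs=fs, nperseg=int(fs))
            sel = (f >= band[0]) & (f <= band[1])
            conn[i, j] = conn[j, i] = c[sel].mean()
    return conn  # node strength (row sums) can then be compared between states
```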
This study applied a steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) to a patient with amyotrophic lateral sclerosis (ALS) in the locked-in state and validated its feasibility for communication. The developed calibration-free, asynchronous spelling system provided a natural and efficient communication experience for the patient, achieving a maximum free-spelling accuracy above 90% and an information transfer rate of over 22.203 bits/min. A set of standard frequency-scanning and task-spelling data was also acquired to evaluate the patient's SSVEP response and to facilitate further personalized BCI design. The results demonstrated that the proposed SSVEP-based BCI system is practical and efficient enough to support daily communication for ALS patients.
This study explored methods for improving the performance of Steady-State Visual Evoked Potential (SSVEP)-based Brain-Computer Interfaces (BCIs) and introduced a new analytical method to quantitatively characterize SSVEP responses. We focused on the effect of the pre-stimulation paradigm on SSVEP dynamic models and on the dynamic response process of SSVEP, comparing three pre-stimulation paradigms (black, gray, and white). Four dynamic models of different orders (second- and third-order), with and without a zero, were used to fit the SSVEP envelope. The zero-pole analytical method was adopted to analyze the dynamic models quantitatively, and the response characteristics of SSVEP were represented by the zero-pole distributions. The results indicated that the pre-stimulation paradigm affects the characteristics of SSVEP and that the dynamic models fit SSVEPs well under the various pre-stimulation conditions. Furthermore, the zero-pole characteristics of the models effectively capture the damping coefficient, oscillation period, and other SSVEP characteristics. The comparison of zeros and poles indicated that the gray pre-stimulation condition corresponds to a lower damping coefficient, showing its potential to improve the performance of SSVEP-BCIs.
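To make the zero-pole reading concrete, the sketch below recovers the damping coefficient and oscillation period from the poles of a second-order model; the example coefficients are arbitrary illustrations, not fitted SSVEP envelopes from the study.

```python
import numpy as np

def pole_characteristics(den):
    """Given the denominator coefficients of a second-order model
    (s^2 + 2*zeta*wn*s + wn^2), recover the damping coefficient and the
    oscillation period from the pole locations."""
    poles = np.roots(den)
    p = poles[np.argmax(poles.imag)]       # take the upper complex pole
    wn = abs(p)                            # natural frequency (rad/s)
    zeta = -p.real / wn                    # damping coefficient
    period = 2 * np.pi / p.imag            # oscillation period of the envelope (s)
    return zeta, period

# Arbitrary example: s^2 + 4s + 400 -> zeta = 0.1, period ~ 0.316 s
print(pole_characteristics([1.0, 4.0, 400.0]))
```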