Historically, a Brain-Computer Interface (BCI) is a system that measures central nervous system (CNS) activity and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output (Wolpaw et al., 2012). In the past, BCI applications were mainly developed for severely disabled persons, providing new channels of communication and body control, but in recent years BCI research has attracted a much broader community of researchers. This is especially true for passive BCIs: systems that are neither consciously controlled by the user nor driven by external stimulation. Instead, they derive their outputs from ongoing brain activity in order to enrich human–machine interaction with implicit information on the current state of the user. Having access to the user's ongoing brain activity enables applications spanning a variety of domains, such as brain-activity-based gaming, workload assessment, brain-activity-based biometrics, neuromarketing, and neuroergonomics. Our current research focuses on passive BCI systems for healthy users, with two specific applications: first, the investigation of unconscious like/dislike decision making, and second, the development of a neuro-adaptive learning environment that takes the mental state of the user into account. This brain-inspired learning framework links BCI and MR technology to improve language learning by taking the user's fatigue and mental workload into account. This project is in collaboration with the Institute of Computer Graphics and Vision, TU Graz.
Motor Imagery (MI) is a task that has been used to drive brain plasticity and motor learning in several fields, including sports, motor rehabilitation, and brain-computer interface (BCI) research. A BCI is a device that translates brain signals into control signals, providing severely motor-impaired persons with an additional, non-muscular channel for communication and control. Many studies have shown that the brain activity changes associated with MI can serve as useful control signals for BCIs. By using more vivid and engaging MI tasks instead of simple hand/finger tapping tasks, the performance of a BCI can be improved. In several imaging studies we found stronger and more distinctive brain activity in a broader network of brain areas. For example, imagining a complex action requires not only motor-related processing but also visuo-spatial imagery involving a fronto-parietal network. The neural activity during MI of reach-to-grasp movements depends on the type of grasp, which recruits a network including posterior parietal and premotor regions. Furthermore, we found increased activation in parietal and frontal regions during imagery of emotion-laden objects and sports activities. Our results indicate that visuo-spatial cognition and action affordances play a significant role in MI, eliciting distinctive brain patterns, and suggest that they can improve the performance of future BCI systems. To corroborate these first findings, further research on (sports) motor imagery and its neural correlates is ongoing.
EEG neurofeedback is a method for self-regulating one's own brain activity in order to directly alter the neural mechanisms underlying cognition and behavior: people can learn to control some of their brain functions in real time. The method is currently used for many different applications, including experiments (to deduce the role of cognition and behavior in specific neural events), peak-performance training (to enhance cognitive performance in healthy subjects), and therapy (to help people normalize deviating brain activity or to help physically disadvantaged people restore motor functions). Simple 2D EEG neurofeedback has been in use for quite some time, while 3D EEG neurofeedback in the form of virtual reality has gained attention over the last few years. While a vast number of 2D methods and paradigms have already been tested and validated, the 3D counterpart still holds high potential for new treatment methods. This project aims to develop and test novel 3D neurofeedback visualizations, including VR environments. This project is in collaboration with the Game Lab Graz, Institute of Interactive Systems and Data Science.
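The core of any such system is a closed feedback loop: estimate a band power measure from the most recent EEG window, compare it to a baseline, and map the result onto a visual feedback value. The following minimal sketch illustrates this loop for relative alpha power; the sampling rate, window length, and the functions read_eeg_chunk() and update_visualization() are hypothetical placeholders for the amplifier interface and the (2D or VR) rendering side, not the project's actual implementation.

```python
import numpy as np
from scipy.signal import welch

FS = 250          # sampling rate in Hz (assumption)
WINDOW = 2 * FS   # 2 s sliding analysis window

def read_eeg_chunk(n=25):
    """Placeholder for the amplifier interface; returns simulated samples here."""
    return np.random.standard_normal(n)

def update_visualization(value):
    """Placeholder for the feedback display (e.g., a bar in 2D or a VR scene)."""
    print(f"feedback: {value:.2f}")

def relative_alpha(window, fs):
    """Relative alpha (8-12 Hz) band power of a single-channel window."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    return psd[(freqs >= 8) & (freqs <= 12)].sum() / psd.sum()

buffer = np.random.standard_normal(WINDOW)
baseline = relative_alpha(buffer, FS)  # in a real protocol: from a rest period

for _ in range(100):  # one feedback update per incoming chunk
    chunk = read_eeg_chunk()
    # Slide the window: drop the oldest samples, append the new chunk
    buffer = np.r_[buffer[len(chunk):], chunk]
    # Map alpha power relative to baseline onto a bounded feedback value
    feedback = float(np.clip((relative_alpha(buffer, FS) - baseline) / baseline, 0, 1))
    update_visualization(feedback)
```

In a 3D/VR setting, the scalar feedback value would drive a property of the virtual scene (e.g., brightness or object motion) instead of a simple bar; the signal-processing loop itself stays the same.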
With the recent evolution of consumer-level Mixed Reality (MR) devices, MR tutorials have gained popularity in many different areas of application. For example, remote customer support, do-it-yourself tutorials, and industrial manufacturing applications have recently adopted MR visualizations for guiding users through the steps of a tutorial. MR instructions are superior to classical paper- or video-based instructions in terms of both completion time and the number of errors made while following the tutorial. However, existing MR tutorials focus on presenting general instructions that are shown permanently. Thus, the specific needs and mental states of individual users are not considered, which often results in lower learning rates and higher fatigue.
In order to adapt the tutoring system so that it shows only what is required for a specific user and task, one needs to model the user and subsequently adapt the MR instructions accordingly. Mental state monitoring systems allow applications to be adapted based on the mental state of the user and are applied in various fields such as driving or teaching assistance (Davidse et al., 2009; Walter et al., 2017; Zander et al., 2017). In this project, we will conduct research on mental state monitoring for the continuous assessment of a user model that can subsequently be used to intelligently adapt the MR visualization of the tutoring system.

Mental state changes, such as increasing mental workload (MWL) and mental fatigue (MF), affect the electrophysiological signals and, consequently, the performance of a person carrying out a cognitively demanding task. High MF may lead to the inability of a user to complete a task that requires self-motivation, without signs of cognitive failure or motor weakness (Chaudhuri & Behan, 2000). It has been shown that the reduced motivation of a user to perform a task that induces high MF is associated with increased sympathetic and decreased parasympathetic activity (Johnson et al., 2006; Mezzacappa et al., 1998; Tanaka et al., 2009). As MWL and MF increase, task performance decreases: accuracy drops and reaction times increase (Käthner et al., 2014b; Roy et al., 2013b).

In this project we will use brain activity (EEG) and the electrocardiogram (ECG) to detect MWL and MF; more precisely, we will use the amplitude power changes in certain frequency bands of the EEG signal, referred to as band power (BP) changes, and the heart rate variability (HRV), respectively. In a state-of-the-art overview of recent outcomes, Babiloni (2019) showed that BP features extracted from the EEG can successfully detect and distinguish between different mental states of the user, such as MWL and MF: an increase in MWL leads to a BP increase in the theta frequency band over frontal cortical areas with a simultaneous BP decrease in the alpha band over parietal areas (Babiloni, 2019).

Most mental state monitoring systems use classification algorithms based on the extracted EEG features to detect the different mental states of the user. Commonly used classifiers are, for example, the linear discriminant analysis (LDA) classifier and its variants (shrinkage or stepwise LDA), support vector machines (SVM), and k-nearest neighbors (Acı et al., 2019; Lotte & Roy, 2019). To improve classification accuracies, we will use spatial filtering techniques such as the common spatial pattern (CSP) algorithm in combination with a linear classifier such as LDA (Barachant et al., 2013). We will also test deep learning approaches, such as convolutional neural networks (CNNs), to classify MWL and MF. The aim of this project is to detect high MWL and MF in participants performing a cognitively demanding task, such as an MR machine-maintenance or MR piano tutorial. A sketch of the feature pipeline is given below. The intelligent tutoring system will then adapt the presentation by varying the medium and the level of detail used to provide visual instructions. This project is in cooperation with the Institute of Computer Graphics and Vision (ICG, TUG).
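To make the feature pipeline described above concrete, the following minimal sketch extracts frontal theta and parietal alpha band power from epoched EEG, computes RMSSD (a common time-domain HRV measure) from ECG-derived RR intervals, and trains a shrinkage LDA classifier. The sampling rate, band limits, channel indices, and placeholder data are illustrative assumptions, not the project's actual recording configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # EEG sampling rate in Hz (assumption)

def band_power(epochs, fs, band, channels):
    """Mean Welch band power per epoch over the given channels.

    epochs: array of shape (n_epochs, n_channels, n_samples)
    """
    freqs, psd = welch(epochs[:, channels, :], fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Average PSD within the band, then over channels -> one feature per epoch
    return psd[:, :, mask].mean(axis=(1, 2))

def rmssd(rr_intervals_ms):
    """RMSSD from successive RR intervals (ms), derived from ECG R-peaks."""
    diffs = np.diff(rr_intervals_ms)
    return np.sqrt(np.mean(diffs ** 2))

# Placeholder data: 100 epochs, 32 EEG channels, 4 s windows
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 32, 4 * FS))
labels = rng.integers(0, 2, 100)            # 0 = low MWL, 1 = high MWL (placeholder)

frontal, parietal = [0, 1, 2], [20, 21, 22]  # illustrative channel indices
features = np.column_stack([
    band_power(epochs, FS, (4, 7), frontal),    # frontal theta BP (rises with MWL)
    band_power(epochs, FS, (8, 12), parietal),  # parietal alpha BP (drops with MWL)
])

rr = 800 + 50 * rng.standard_normal(60)      # placeholder RR intervals in ms
print("RMSSD (placeholder ECG):", rmssd(rr))

# Shrinkage LDA, as mentioned above (the lsqr solver supports automatic shrinkage)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(features, labels)
print("Training accuracy (placeholder data):", clf.score(features, labels))
```

In the actual pipeline, such hand-crafted features would be complemented by CSP-filtered features (e.g., via mne.decoding.CSP) or replaced by representations learned end-to-end with a CNN, as described above.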
Visuo-spatial complexity and MI
Affective Computing
Neural correlates of emotions induced by music
Motor Imagery training in VR
All topics assigned! Next chance: WS 2023/24.
BSc Florian Maitz (Master's Student and Student Employee)
Institute of Neural Engineering
Stremayrgasse 16/iv
8010 Graz, Austria
+43 316 873 30715
s.wriessnegger@tugraz.at