9 results for MULTIMODAL ELUTION
in Aston University Research Archive
Abstract:
This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time-series. The approach relies on a biomedical case for the coupling of the brain and heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with these events include alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The probabilistic description of arrhythmias observed during temporal lobe epileptic events and the information-theoretic description of the complexity of the state of the brain are integrated in a fusion-of-information framework for temporal lobe epileptic seizure detection. The main contributions of the Thesis include: the introduction of a biomedical case for the coupling of the brain and heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for characterising ictal events from the EEG time-series for integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events for integration in a fusion-of-knowledge framework; and the investigation of the levels of the fusion-of-information architecture at which to combine the information extracted from the EEG and ECG time-series. The method designed in the Thesis for the false-positive-free automated detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
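The abstract does not give the detection algorithm itself, but the idea of fusing an EEG complexity measure with an ECG-derived rhythm feature can be sketched as follows; the spectral-entropy and RMSSD features, the windowing, and the thresholds are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def spectral_entropy(window):
    """Normalised spectral entropy of an EEG window: a simple proxy for the
    'loss of complexity' idea, not the measure used in the thesis."""
    psd = np.abs(np.fft.rfft(window - window.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(len(p))

def rmssd(rr_intervals):
    """Root mean square of successive RR-interval differences, a common
    short-term heart-rate-variability feature derived from the ECG."""
    return np.sqrt(np.mean(np.diff(rr_intervals) ** 2))

def detect_candidate_events(eeg_windows, rr_windows,
                            entropy_drop=0.8, hrv_deviation=0.5):
    """Decision-level fusion sketch: flag windows in which EEG complexity
    falls below a fraction of its baseline AND heart-rate variability
    deviates from its baseline. Thresholds are illustrative only."""
    ent = np.array([spectral_entropy(w) for w in eeg_windows])
    hrv = np.array([rmssd(rr) for rr in rr_windows])
    ent_base, hrv_base = np.median(ent), np.median(hrv)
    flagged = (ent < entropy_drop * ent_base) & \
              (np.abs(hrv - hrv_base) > hrv_deviation * hrv_base)
    return np.where(flagged)[0]
```

Requiring both conditions to hold at once is the toy analogue of the fusion rationale described above: a detection is only raised when the EEG and ECG evidence agree.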
Abstract:
Motion is an important aspect of face perception that has been largely neglected to date. Many established findings are based on studies using static facial images, which do not reflect the unique temporal dynamics available from seeing a moving face. In the present thesis a set of naturalistic dynamic facial emotional expressions was purposely created and used to investigate the neural structures involved in the perception of dynamic facial expressions of emotion, with both functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). Through fMRI and connectivity analysis, a dynamic face perception network was identified that extends the distributed neural system for face perception (Haxby et al., 2000). Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as inferior occipital gyri and superior temporal sulci, along with coupling between superior temporal sulci and amygdalae, as well as with inferior frontal gyri. MEG and Synthetic Aperture Magnetometry (SAM) were used to examine the spatiotemporal profile of neurophysiological activity within this dynamic face perception network. SAM analysis revealed a number of regions in the distributed face network showing differential activation to dynamic versus static faces, characterised by decreases in cortical oscillatory power in the beta band, which were spatially coincident with the regions previously identified with fMRI. These findings support the presence of a distributed network of cortical regions that mediates the perception of dynamic facial expressions, with the fMRI data providing the spatial co-ordinates and the MEG data indicating the temporal dynamics within this network. This integrated multimodal approach offers excellent spatial and temporal resolution, thereby providing an opportunity to explore dynamic brain activity and connectivity during face processing.
Abstract:
The studies in this project have investigated the ongoing neuronal network oscillatory activity found in the sensorimotor cortex using two modalities: magnetoencephalography (MEG) and in vitro slice recordings. The results establish that ongoing sensorimotor oscillations span the mu and beta frequency range both in vitro and in MEG recordings, with distinct frequency profiles for each recorded lamina in vitro, while MI and SI show less difference in humans. In addition, these studies show that connections between MI and SI modulate the ongoing neuronal network activity in these areas. The stimulation studies indicate that specific frequencies of stimulation affect the ongoing activity in the sensorimotor cortex. The continuous theta burst stimulation (cTBS) study demonstrates that cTBS predominantly enhances the power of the local ongoing activity. The stimulation studies in this project allow only limited comparison between modalities, which is itself informative of the role of connectivity in these effects; independently, however, these studies provide novel information on the mechanisms of sensorimotor oscillatory interaction. The pharmacological studies reveal that GABAergic modulation with zolpidem changes the neuronal oscillatory network activity in both healthy and pathological MI. Zolpidem enhances the power of ongoing oscillatory activity both in sensorimotor laminae and in healthy subjects. In contrast, zolpidem attenuates the "abnormal" beta oscillatory activity in the affected hemisphere in Parkinsonian patients, while restoring the hemispheric beta power ratio and frequency variability and thereby improving motor symptomatology. Finally, we show that independent signals from MI laminae can be integrated in silico to resemble the aggregate MEG MI oscillatory signals. This highlights the usefulness of combining these two methods when elucidating neuronal network oscillations in the sensorimotor cortex and the effects of interventions upon them.
A multimodal perspective on the composition of cortical oscillations: Frontiers in Human Neuroscience
Abstract:
An expanding corpus of research details the relationship between functional magnetic resonance imaging (fMRI) measures and neuronal network oscillations. Typically, integrated electroencephalography (EEG) and fMRI, or parallel magnetoencephalography (MEG) and fMRI, are used to draw inference about the consanguinity of BOLD and electrical measurements. However, there is a relative dearth of information about the relationship between E/MEG and the focal networks from which these signals emanate. Consequently, the genesis and composition of E/MEG oscillations require further clarification. Here we aim to contribute to understanding through a series of parallel measurements of primary motor cortex (M1) oscillations, using human MEG and in vitro rodent local field potentials. We compare spontaneous activity in the ~10 Hz mu and 15-30 Hz beta frequency ranges and compare MEG signals with independent and integrated layer III and V (LIII/LV) signals from in vitro recordings. We explore the mechanisms of oscillatory generation using specific pharmacological modulation with the GABA-A alpha-1 subunit modulator zolpidem. Finally, to determine the contribution of cortico-cortical connectivity, we recorded in vitro M1 activity during an incision to sever the lateral connections between M1 and S1 cortices. We demonstrate that the frequency distribution of MEG signals is statistically more similar to signals from integrated than from independent LIII/LV laminae. GABAergic modulation in both modalities elicited comparable changes in the power of the beta band. Finally, cortico-cortical connectivity in the sensorimotor cortex (SMC) appears to directly influence the power of the mu rhythm in LIII. These findings suggest that the MEG signal is an amalgam of outputs from LIII and LV, that multiple frequencies can arise from the same cortical area, and that in vitro and MEG M1 oscillations are driven by comparable mechanisms. Finally, cortico-cortical connectivity is reflected in the power of the SMC mu rhythm. © 2013 Ronnqvist, Mcallister, Woodhall, Stanford and Hall.
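A minimal sketch of the kind of spectral comparison described here, assuming that an equal-weight sum of the two laminar LFPs stands in for the "integrated" signal and that a simple distance between band-normalised Welch spectra stands in for the statistical similarity test; none of these choices is taken from the paper itself.

```python
import numpy as np
from scipy.signal import welch

def band_normalised_psd(x, fs, fmin=5.0, fmax=35.0):
    """Welch PSD restricted to the mu/beta range and normalised to unit area,
    so spectra from different modalities can be compared shape-to-shape."""
    f, p = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (f >= fmin) & (f <= fmax)
    p = p[mask]
    return f[mask], p / p.sum()

def spectral_distance(x, y, fs_x, fs_y):
    """Crude similarity measure: mean absolute difference between two
    band-normalised spectra interpolated onto a common frequency grid."""
    fx, px = band_normalised_psd(x, fs_x)
    fy, py = band_normalised_psd(y, fs_y)
    grid = np.linspace(5.0, 35.0, 200)
    return np.mean(np.abs(np.interp(grid, fx, px) - np.interp(grid, fy, py)))

# Illustrative comparison: is the MEG spectrum closer to the summed
# (integrated) laminar signal than to either lamina alone?
# meg, liii, lv are 1-D sample arrays; fs_meg, fs_lfp their sampling rates.
# d_integrated = spectral_distance(meg, liii + lv, fs_meg, fs_lfp)
# d_liii = spectral_distance(meg, liii, fs_meg, fs_lfp)
# d_lv = spectral_distance(meg, lv, fs_meg, fs_lfp)
```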
Abstract:
Generation of stable dual and/or multiple longitudinal modes emitted from a single quantum dot (QD) laser diode (LD) over a broad wavelength range, using volume Bragg gratings (VBGs) in an external cavity setup, is reported. The LD operates in both the ground and excited states, and the gratings give a dual-mode separation around each emission peak of 5 nm, which is suitable as a continuous wave (CW) optical pump signal for a terahertz (THz) photomixer device. The setup also generates dual modes around both 1180 nm and 1260 nm simultaneously, giving four simultaneous narrow-linewidth modes comprising two simultaneous difference-frequency pump signals. © 2011 American Institute of Physics.
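As a quick check of why a ~5 nm mode separation suits THz photomixing: the beat (difference) frequency of two modes separated by Δλ around a centre wavelength λ is Δν ≈ cΔλ/λ². The short sketch below evaluates this for the two reported emission bands; the arithmetic is illustrative, and only the 5 nm separation and the 1180/1260 nm wavelengths come from the abstract.

```python
C = 299_792_458.0  # speed of light, m/s

def beat_frequency_thz(centre_nm, delta_nm):
    """Difference frequency (in THz) of two modes separated by delta_nm
    around centre_nm, using dnu ~= c * dlambda / lambda**2."""
    lam = centre_nm * 1e-9
    return C * (delta_nm * 1e-9) / lam**2 / 1e12

print(beat_frequency_thz(1180, 5))  # ~1.08 THz
print(beat_frequency_thz(1260, 5))  # ~0.94 THz
```

Both values fall in the terahertz range, consistent with the stated use of the dual-mode output as a photomixer pump.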
Abstract:
Mobile technology has not yet achieved widespread acceptance in the Architectural, Engineering, and Construction (AEC) industry. This paper presents work that is part of an ongoing research project focusing on the development of multimodal mobile applications for use in the AEC industry. It focuses specifically on a context-relevant lab-based evaluation of two input modalities (stylus and soft-keyboard versus speech-based input) for use with a mobile data collection application for concrete test technicians. The manner in which the evaluation was conducted, as well as the results obtained, are discussed in detail.
Abstract:
Mobile technologies have yet to be widely adopted by the Architectural, Engineering, and Construction (AEC) industry despite mobile computing being one of the major growth areas in computing in recent years. This lack of uptake in the AEC industry is likely due, in large part, to the combination of small screen size and inappropriate interaction demands of current mobile technologies. This paper discusses the scope for multimodal interaction design, with a specific focus on speech-based interaction, to enhance the suitability of mobile technology use within the AEC industry by broadening the field data input capabilities of such technologies. To investigate the appropriateness of using multimodal technology for field data collection in the AEC industry, we have developed a prototype Multimodal Field Data Entry (MFDE) application. This application, which allows concrete testing technicians to record quality control data in the field, has been designed to support two different modalities of data input: speech-based data entry and stylus-based data entry. To compare the effectiveness and usability of, and user preference for, the different input options, we have designed a comprehensive lab-based evaluation of the application. To appropriately reflect the anticipated context of use within the study design, careful consideration had to be given to the key elements of a construction site that would potentially influence a test technician's ability to use the input techniques. These considerations and the resultant evaluation design are discussed in detail in this paper.
Abstract:
Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time and perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.
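The paper gives no implementation detail, but the head-gesture selection idea can be illustrated with a hypothetical mapping from head yaw to a pie-menu segment, with a crude stereo pan standing in for the 3D audio cue; the function names, segment count, and panning scheme below are all assumptions, not the design evaluated in the paper.

```python
import math

def pie_item_from_yaw(yaw_deg, n_items=8):
    """Map a head-yaw angle (degrees, 0 = straight ahead) to one of n_items
    segments of a radial pie menu centred on the listener."""
    segment = 360.0 / n_items
    # Offset by half a segment so item 0 is centred straight ahead.
    return int(((yaw_deg % 360.0) + segment / 2) // segment) % n_items

def pan_for_item(item, n_items=8):
    """Stereo pan (-1 = left, +1 = right) for an egocentric audio cue at the
    item's angular position, a crude stand-in for full 3D spatial audio."""
    angle = math.radians(item * 360.0 / n_items)
    return math.sin(angle)

# Example: a 30-degree head turn to the right selects item 1 of 8,
# whose confirmation cue would be panned slightly to the right.
selected = pie_item_from_yaw(30.0)
print(selected, pan_for_item(selected))
```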
Abstract:
In this paper we take seriously the call for strategy-as-practice research to address the material, spatial and bodily aspects of strategic work. Drawing on a video-ethnographic study of strategic episodes in a financial trading context, we develop a conceptual framework that elaborates on strategic work as socially accomplished within particular spaces that are constructed through different orchestrations of material, bodily and discursive resources. Building on the findings, our study identifies three types of strategic work - private work, collaborative work and negotiating work - that are accomplished within three distinct spaces that are constructed through multimodal constellations of semiotic resources. We show that these spaces, and the activities performed within them, are continuously shifting in ways that enable and constrain the particular outcomes of a strategic episode. Our framework contributes to the strategy-as-practice literature by identifying the importance of spaces in conducting strategic work and providing insight into the way that these spaces are constructed.