21 results for Electroencephalography (EEG)
Abstract:
OBJECTIVES: Preterm infants undergo frequent painful procedures in the neonatal intensive care unit. Electroencephalography (EEG) changes in reaction to invasive procedures have been reported in preterm and full-term neonates. Frontal EEG asymmetry as an index of emotion during tactile stimulation shows inconsistent findings in full-term infants, and has not been examined in the context of pain in preterm infants. Our aim was to examine whether heel lance for blood collection induces changes in right-left frontal asymmetry, suggesting a negative emotional response, in preterm neonates at different gestational ages (GA) at birth and different durations of stay in the neonatal intensive care unit. MATERIALS AND METHODS: Three groups of preterm infants were compared: set 1: group 1 (n=24), born and tested at 28 weeks GA; group 2 (n=22), born at 28 weeks GA and tested at 33 weeks; set 2: group 3 (n=25), born and tested at 33 weeks GA. EEG power was calculated for 30-second artifact-free periods, in standard frequency bandwidths, in 3 phases (baseline, up to 5 min after heel lance, 10 min after heel lance). RESULTS: No significant differences were found in right-left frontal asymmetry, or in ipsilateral or contralateral somatosensory response, across phases. In contrast, the Behavioral Indicators of Infant Pain scores changed across phases (P
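The measures described above, band-limited EEG power from 30-second artifact-free segments and a right-left frontal asymmetry index, can be illustrated with a minimal sketch. The sampling rate, band edges, and synthetic channels below are illustrative assumptions, not values from the study.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz (not stated in the abstract)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs=FS):
    """Absolute power in standard EEG bands via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / (fs * segment.size)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def frontal_asymmetry(left, right, band="alpha", fs=FS):
    """Right-minus-left log band power: a common index in which more
    positive values indicate relatively greater right-frontal power."""
    return (np.log(band_powers(right, fs)[band])
            - np.log(band_powers(left, fs)[band]))

# Synthetic 30-second example with a stronger 10 Hz rhythm on the right
rng = np.random.default_rng(0)
t = np.arange(30 * FS) / FS
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(frontal_asymmetry(left, right) > 0)  # right alpha dominates here
```

A phase comparison like the study's would compute these quantities for the baseline, post-lance, and 10-minute segments and test for differences across phases.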
Abstract:
We present the results of exploratory experiments using lexical valence extracted from the brain using electroencephalography (EEG) for sentiment analysis. We selected 78 English words (36 for training and 42 for testing), presented as stimuli to 3 English native speakers. EEG signals were recorded from the subjects while they performed a mental imaging task for each word stimulus. Wavelet decomposition was employed to extract EEG features from the time-frequency domain. The extracted features were used as inputs to a sparse multinomial logistic regression (SMLR) classifier for valence classification, after univariate ANOVA feature selection. After mapping EEG signals to sentiment valences, we exploited the lexical polarity extracted from brain data to predict the valence of 12 sentences taken from the SemEval-2007 shared task, and compared it against existing lexical resources.
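The pipeline described above — wavelet features, univariate ANOVA feature selection, then a sparse (L1-regularized) logistic regression — can be sketched as follows. The Haar transform, feature count, and synthetic trials are illustrative assumptions; the abstract does not specify the wavelet family or the SMLR implementation used.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def haar_features(signal, levels=4):
    """Concatenated Haar detail coefficients plus the final approximation:
    a simple stand-in for time-frequency wavelet features."""
    out, a = [], signal
    for _ in range(levels):
        pairs = a[: a.size // 2 * 2].reshape(-1, 2)
        out.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # detail
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)          # approximation
    out.append(a)
    return np.concatenate(out)

# Synthetic "trials": positive-valence words carry an extra 10 Hz component
rng = np.random.default_rng(1)
fs, n_trials = 128, 36
t = np.arange(fs) / fs
X, y = [], []
for i in range(n_trials):
    pos = i % 2
    sig = rng.standard_normal(fs) + pos * np.sin(2 * np.pi * 10 * t)
    X.append(haar_features(sig))
    y.append(pos)
X, y = np.array(X), np.array(y)

# ANOVA feature selection + L1-penalized ("sparse") logistic regression
clf = make_pipeline(SelectKBest(f_classif, k=20),
                    LogisticRegression(penalty="l1", solver="liblinear"))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

With true multi-class valence labels, a multinomial solver (e.g. `saga` with an L1 penalty) would replace the binary `liblinear` setup used in this toy example.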
Abstract:
Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification is shaped by experience as well as by acoustics.
Abstract:
Event-related potentials (ERPs) and other electroencephalographic (EEG) evidence show that frontal brain areas of higher and lower socioeconomic status (SES) children are recruited differently during selective attention tasks. We assessed whether multiple variables related to self-regulation (perceived mental effort), emotional states (e.g., anxiety, stress, etc.), and motivational states (e.g., boredom, engagement, etc.) may co-occur or interact with frontal attentional processing, probed in two matched samples of fourteen lower-SES and fourteen higher-SES adolescents. ERP and EEG activation were measured during a task probing selective attention to sequences of tones. Pre- and post-task salivary cortisol and self-reported emotional states were also measured. At a similar behavioural performance level, the higher-SES group showed a greater ERP differentiation between attended (relevant) and unattended (irrelevant) tones than the lower-SES group. EEG power analysis revealed a cross-over interaction: lower-SES adolescents showed significantly higher theta power when ignoring rather than attending to tones, whereas higher-SES adolescents showed the opposite pattern. Significant theta asymmetry differences were also found at midfrontal electrodes, indicating left hypo-activity in lower-SES adolescents. The attended vs. unattended difference in right midfrontal theta increased with individual SES rank, and (independently of SES) with lower cortisol task reactivity and higher boredom. Results suggest lower-SES children used additional compensatory resources to monitor/control response inhibition to distracters, also perceiving more mental effort, as compared to their higher-SES counterparts. Nevertheless, stress, boredom, and other task-related perceived states were unrelated to SES.
Ruling out presumed confounds, this study confirms that the midfrontal mechanisms responsible for the SES effects on selective attention reported previously and here reflect genuine cognitive differences.
Abstract:
Cognitive and neurophysiological correlates of arithmetic calculation, concepts, and applications were examined in 41 adolescents, ages 12-15 years. Psychological and task-related EEG measures which correctly distinguished children who scored low vs. high (using a median split) in each arithmetic subarea were interpreted as indicative of processes involved. Calculation was related to visual-motor sequencing, spatial visualization, theta activity measured during visual-perceptual and verbal tasks at right- and left-hemisphere locations, and right-hemisphere alpha activity measured during a verbal task. Performance on arithmetic word problems was related to spatial visualization and perception, vocabulary, and right-hemisphere alpha activity measured during a verbal task. Results suggest a complex interplay of spatial and sequential operations in arithmetic performance, consistent with processing model concepts of lateralized brain function.
Abstract:
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
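The aggregation step described above — assigning each concept to the category predicted most often across its trials — amounts to a majority vote over single-trial classifier outputs. A minimal sketch (the category labels and trial counts are illustrative, not the study's data):

```python
from collections import Counter

def aggregate_category(trial_predictions):
    """Majority vote over single-trial classifier outputs for one concept."""
    return Counter(trial_predictions).most_common(1)[0][0]

# Even with noisy single-trial accuracy, pooling trials sharpens the call:
print(aggregate_category(["mammal", "tool", "mammal", "mammal", "tool"]))
# -> mammal
```

This is why per-concept accuracy (98% here) can far exceed single-trial accuracy: independent trial errors tend to cancel under the vote.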
Abstract:
Music for Sleeping & Waking Minds is an 8-hour composition intended for overnight listening. It features 4 performers who wear custom-designed EEG sensors. The performers rest and fall asleep as they naturally would. Over the course of one night, their brainwave activity generates a spatial audio environment. Audiences are invited to sleep or listen as they wish. Composition & concept by Gascia Ouzounian. Physiological interface and interaction design by R. Benjamin Knapp. Audio interface and interaction design by Eric Lyon.