159 results for auditory EEG
Abstract:
Verbal auditory hallucinations can have a strong impact on the social and professional functioning of individuals diagnosed with schizophrenia. The safety-seeking behaviours used to reduce the threat associated with voices play a significant role in explaining the functional consequences of auditory hallucinations. Nevertheless, these safety-seeking behaviours have been little studied. Twenty-eight patients with schizophrenia and verbal auditory hallucinations were recruited for this study. Hallucinations were evaluated using the Psychotic Symptom Rating Scale and the Beliefs About Voices Questionnaire, and safety behaviours using a modified version of the Safety Behaviour Questionnaire. Our results show that the vast majority of patients rely on safety behaviours to reduce the threat associated with voices. This reliance on safety behaviours is mostly explained by beliefs about the origin of the voices, the omnipotence attributed to the hallucinations, and the behavioural and emotional reactions to the voices. Safety-seeking behaviours play an important role in maintaining dysfunctional beliefs with respect to voices. They should be better targeted within cognitive and behavioural therapies for auditory hallucinations.
Abstract:
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
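The multisensory facilitation of detection speed described above can be made concrete with a minimal sketch: facilitation is expressed relative to the faster of the two unisensory conditions. The reaction-time values, sample sizes, and this particular enhancement formula are invented for illustration and are not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical reaction times (ms) for three conditions
rt_a = rng.normal(450, 40, 200)   # auditory-alone looming
rt_v = rng.normal(430, 40, 200)   # visual-alone looming
rt_av = rng.normal(390, 40, 200)  # audio-visual looming

# Percent facilitation relative to the best (fastest) unisensory condition
best_uni = min(rt_a.mean(), rt_v.mean())
enhancement = 100 * (best_uni - rt_av.mean()) / best_uni
```

A positive value indicates that the multisensory condition was detected faster than either unisensory condition alone, the pattern reported above for looming stimuli.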
Abstract:
Auditory evoked potentials are informative of intact cortical functions of comatose patients. The integrity of auditory functions evaluated using mismatch negativity paradigms has been associated with their chances of survival. However, because auditory discrimination is assessed at various delays after coma onset, it is still unclear whether this impairment depends on the time of the recording. We hypothesized that impairment in auditory discrimination capabilities is indicative of coma progression, rather than of the comatose state itself and that rudimentary auditory discrimination remains intact during acute stages of coma. We studied 30 post-anoxic comatose patients resuscitated from cardiac arrest and five healthy, age-matched controls. Using a mismatch negativity paradigm, we performed two electroencephalography recordings with a standard 19-channel clinical montage: the first within 24 h after coma onset and under mild therapeutic hypothermia, and the second after 1 day and under normothermic conditions. We analysed electroencephalography responses based on a multivariate decoding algorithm that automatically quantifies neural discrimination at the single patient level. Results showed high average decoding accuracy in discriminating sounds both for control subjects and comatose patients. Importantly, accurate decoding was largely independent of patients' chance of survival. However, the progression of auditory discrimination between the first and second recordings was informative of a patient's chance of survival. A deterioration of auditory discrimination was observed in all non-survivors (equivalent to 100% positive predictive value for survivors). We show, for the first time, evidence of intact auditory processing even in comatose patients who do not survive and that progression of sound discrimination over time is informative of a patient's chance of survival. 
Tracking auditory discrimination in comatose patients could provide new insight into the chances of awakening, in a quantitative and automatic fashion, during early stages of coma.
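A single-patient decoding analysis of the kind described, quantifying neural discrimination between sound categories from multichannel EEG, can be sketched as follows. The data are synthetic, and the nearest-class-mean decoder with leave-one-out cross-validation is a simplified stand-in for the authors' multivariate algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: single trials x 19 channels x 50 time points;
# deviants carry a small added deflection on a subset of channels
n_trials, n_chan, n_time = 40, 19, 50
standards = rng.normal(0, 1, (n_trials, n_chan, n_time))
deviants = rng.normal(0, 1, (n_trials, n_chan, n_time))
deviants[:, :5, 20:35] += 1.0  # simulated mismatch response

X = np.concatenate([standards, deviants]).reshape(2 * n_trials, -1)
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_mean_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-class-mean decoder."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = X[mask & (y == 0)].mean(axis=0)  # class means without trial i
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        correct += pred == y[i]
    return correct / len(y)

accuracy = loo_nearest_mean_accuracy(X, y)
```

An accuracy reliably above chance (0.5) would indicate preserved neural discrimination of the two sound categories at the single-patient level.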
Abstract:
Introduction: Discrimination of species-specific vocalizations is fundamental for survival and social interactions. Its unique behavioral relevance has encouraged the identification of circumscribed brain regions exhibiting selective responses (Belin et al., 2004), while the role of network dynamics has received less attention. Those studies that have examined the brain dynamics of vocalization discrimination leave unresolved the timing and the inter-relationship between general categorization, attention, and speech-related processes (Levy et al., 2001, 2003; Charest et al., 2009). Given these discrepancies and the presence of several confounding factors, electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled non-verbal human and animal vocalizations. This revealed which region(s) exhibit voice-sensitive responses and in which sequence. Methods: Subjects (N=10) performed a living vs. man-made 'oddball' auditory discrimination task, such that on a given block of trials 'target' stimuli occurred 10% of the time. Stimuli were complex, meaningful sounds of 500 ms duration. There were 120 different sound files in total, 60 representing sounds of living objects and 60 of man-made objects. The stimuli that were the focus of the present investigation were restricted to those of living objects within blocks where no response was required. These stimuli were further sorted into human non-verbal vocalizations and animal vocalizations. They were also controlled in terms of their spectrograms and formant distributions. Continuous 64-channel EEG was acquired through Neuroscan Synamps referenced to the nose, band-pass filtered 0.05-200 Hz, and digitized at 1000 Hz. Peri-stimulus epochs of continuous EEG (-100 ms to 900 ms) were visually inspected for artifacts, low-pass filtered at 40 Hz, and baseline corrected using the pre-stimulus period. Averages were computed for each subject separately.
AEPs in response to animal and human vocalizations were analyzed with respect to differences in Global Field Power (GFP) and with respect to changes in the voltage configurations at the scalp (reviewed in Murray et al., 2008). The former provides a measure of the strength of the electric field irrespective of topographic differences; the latter identifies changes in the spatial configuration of the underlying sources independently of response strength. In addition, we utilized the local auto-regressive average distributed linear inverse solution (LAURA; Grave de Peralta Menendez et al., 2001) to visualize and statistically contrast the likely underlying sources of the effects identified in the preceding analysis steps. Results: We found differential activity in response to human vocalizations over three periods in the post-stimulus interval, and this response was always stronger than that to animal vocalizations. The first differential response (169-219 ms) was a consequence of a modulation in strength of a common brain network localized to the right superior temporal sulcus (STS; Brodmann's Area (BA) 22) and extending into the superior temporal gyrus (STG; BA 41). A second difference (291-357 ms) also followed from strength modulations of a common network, with statistical differences localized to the left inferior precentral and prefrontal gyrus (BA 6/45). These first two strength modulations were correlated (Spearman's rho(8)=0.770; p=0.009), indicative of functional coupling between temporally segregated stages of vocalization discrimination. A third difference (389-667 ms) followed from strength and topographic modulations and was localized to the left superior frontal gyrus (BA 10), although this third difference did not reach our spatial criterion of 12 contiguous voxels. Conclusions: We show that voice discrimination unfolds over multiple temporal stages, involving a wide network of brain regions.
The initial stages of vocalization discrimination are based on modulations in response strength within a common brain network, with no evidence for a voice-selective module. The latency of this effect parallels that of face discrimination (Bentin et al., 2007), supporting the possibility that voice and face processes can mutually inform one another. Putative underlying sources (localized in the right STS; BA 22) are consistent with prior hemodynamic imaging evidence in humans (Belin et al., 2004). Our effect over the 291-357 ms post-stimulus period overlaps the 'voice-specific response' reported by Levy et al. (2001), and the estimated underlying sources (left BA 6/45) are in agreement with previous findings in humans (Fecteau et al., 2005). These results challenge the idea that circumscribed and selective areas subserve conspecific vocalization processing.
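Global Field Power, used in the analysis above, has a simple closed form: the spatial standard deviation of the average-referenced potential across all electrodes at each time point. A minimal sketch (electrode and sample counts are illustrative):

```python
import numpy as np

def global_field_power(erp):
    """GFP: spatial standard deviation across electrodes at each time point.

    `erp` has shape (n_electrodes, n_timepoints). The data are first
    re-referenced to the average across electrodes, as GFP assumes.
    """
    v = erp - erp.mean(axis=0, keepdims=True)
    return np.sqrt((v ** 2).mean(axis=0))

# Hypothetical example: 64 electrodes, 100 time points
rng = np.random.default_rng(1)
erp = rng.normal(0, 1, (64, 100))
gfp = global_field_power(erp)  # one non-negative value per time point
```

Because GFP is reference-free and topography-blind, comparing conditions on GFP isolates strength modulations, while separate topographic analyses isolate changes in source configuration, exactly the two-step logic of the abstract above.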
Abstract:
Mapping the human auditory cortex with standard functional imaging techniques is difficult because of its small size and angular position along the Sylvian fissure. As a result, the exact number and location of auditory cortex areas in the human remains unknown. In a first experiment, we measured the two largest tonotopic areas of primary auditory cortex (PAC; A1 and R) using high-resolution functional MRI at 7 Tesla relative to the underlying anatomy of Heschl's gyrus (HG). The data reveal a clear anatomical-functional relationship that indicates the location of PAC across the range of common morphological variants of HG (single gyrus, partial duplication, and complete duplication). Human PAC tonotopic areas are oriented along an oblique posterior-to-anterior axis with mirror-symmetric frequency gradients perpendicular to HG, as in the macaque. In a second experiment, we tested whether these primary frequency-tuned units were modulated by selective attention to preferred vs. non-preferred sound frequencies in the dynamic manner needed to account for human listening abilities in noisy environments, such as cocktail parties or busy streets. We used a dual-stream selective attention experiment where subjects attended to one of two competing tonal streams presented simultaneously to different ears. Attention to low-frequency tones (250 Hz) enhanced neural responses within low-frequency-tuned voxels relative to high-frequency tones (4000 Hz), and vice versa when attention switched from high to low. Human PAC is able to tune into attended frequency channels and can switch frequencies on demand, like a radio. In a third experiment, we investigated repetition suppression effects to environmental sounds within primary and non-primary early-stage auditory areas, identified with the tonotopic mapping design.
Repeated presentations of sounds from the same sources, as compared to different sources, gave repetition suppression effects within posterior and medial non-primary areas of the right hemisphere, reflecting their potential involvement in semantic representations. These three studies were conducted at 7 Tesla with high-resolution imaging. However, 7 Tesla scanners are, for the moment, not yet used for clinical diagnosis and mostly reside in institutions external to hospitals. Thus, hospital-based clinical functional and structural studies are mainly performed using lower field systems (1.5 or 3 Tesla). In a fourth experiment, we acquired tonotopic maps at 3 and 7 Tesla and evaluated the consistency of a tonotopic mapping paradigm between scanners. Mirror-symmetric gradients within PAC were highly similar at 7 and 3 Tesla across renderings at different spatial resolutions. We concluded that the tonotopic mapping paradigm is robust and suitable for definition of primary tonotopic areas, also at 3 Tesla. Finally, in a fifth study, we considered whether focal brain lesions alter tonotopic representations in the intact ipsi- and contralesional primary auditory cortex in three patients with hemispheric or cerebellar lesions, with and without auditory complaints. We found evidence for tonotopic reorganisation at the level of the primary auditory cortex in cases of brain lesions, independently of auditory complaints. Overall, these results reflect a certain degree of plasticity within primary auditory cortex in different populations of subjects, assessed at different field strengths.
Abstract:
Background: Amplitude-integrated electroencephalography (aEEG) is increasingly used for neuromonitoring in preterm infants. We aimed to quantify the effects of gestational age (GA), postnatal age (PNA), and other perinatal factors on the development of the aEEG early after birth in very preterm newborns with normal cerebral ultrasounds. Methods: Continuous aEEG was prospectively performed in 96 newborns (mean GA: 29.5 (range: 24.4-31.9) wk, birth weight 1,260 (580-2,120) g) during the first 96 h of life. aEEG tracings were evaluated qualitatively (maturity scores) and quantitatively (amplitudes) using preestablished criteria. Results: A significant increase in all aEEG measures was observed between day 1 and day 4 and with increasing GA (P < 0.001). The effect of PNA on aEEG development was 6.4- to 11.3-fold higher than that of GA. In multivariate regression, GA and PNA were associated with increased qualitative and quantitative aEEG measures, whereas small-for-GA status was independently associated with increased maximum aEEG amplitude (P = 0.003). Morphine administration negatively affected all aEEG measures (P < 0.05), and caffeine administration negatively affected qualitative aEEG measures (P = 0.02). Conclusion: During the first few days after birth, aEEG activity in very preterm infants develops significantly and is strongly influenced by PNA. Perinatal factors may alter the early aEEG tracing and interfere with its interpretation.
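An aEEG trace is derived from raw EEG by filtering, rectification, envelope smoothing, and compression of each epoch into lower and upper amplitude margins. A rough numpy-only sketch of the amplitude extraction follows; it omits the band-pass filter (roughly 2-15 Hz) and semi-logarithmic display used by real aEEG devices, and all parameters are illustrative rather than taken from the study.

```python
import numpy as np

def aeeg_band(eeg, fs, window_s=15.0):
    """Schematic aEEG-style amplitude band.

    Rectifies the signal, smooths it into a crude envelope, then takes the
    min/max of the envelope over consecutive windows of `window_s` seconds,
    yielding the lower and upper margins of an aEEG-like band.
    """
    rectified = np.abs(eeg)
    k = int(0.5 * fs)  # 0.5 s moving-average envelope
    env = np.convolve(rectified, np.ones(k) / k, mode="valid")
    n = int(window_s * fs)
    starts = range(0, len(env) - n + 1, n)
    lower = np.array([env[i:i + n].min() for i in starts])
    upper = np.array([env[i:i + n].max() for i in starts])
    return lower, upper

# One minute of hypothetical noise-like EEG (volts) at 100 Hz
rng = np.random.default_rng(2)
fs = 100
eeg = rng.normal(0, 25e-6, 60 * fs)
lower, upper = aeeg_band(eeg, fs)
```

The qualitative maturity scores mentioned in the abstract are then read off properties of this band, such as the height of the lower margin and band width.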
Abstract:
The tonotopic representations within the primary auditory cortex (PAC) have been successfully mapped with ultra-high field fMRI. Here, we compared the reliability of this tonotopic mapping paradigm at 7 T with 1.5 mm spatial resolution against maps acquired at 3 T with the same stimulation paradigm but with spatial resolutions of 1.8 and 2.4 mm. For all subjects, the mirror-symmetric gradients within PAC were highly similar at 7 T and 3 T and across renderings at different spatial resolutions, albeit with lower percent signal changes at 3 T. In contrast, the frequency maps outside PAC tended to suffer from a reduced BOLD contrast-to-noise ratio at 3 T with a 1.8 mm voxel size, while remaining robust at 2.4 mm at 3 T and at 1.5 mm at 7 T. Overall, our results showed the robustness of the phase-encoding paradigm used here to map tonotopic representations across scanners.
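In a phase-encoded design of this general kind, the tone frequency sweeps cyclically through the session, and each voxel's preferred frequency is read off the phase of its response at the sweep repetition frequency. A hedged sketch with simulated voxel time courses; the voxel counts, cycle count, noise level, and the 250-4000 Hz range are all illustrative, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_vol, n_cycles = 200, 240, 8  # voxels, volumes, sweep cycles

# Each voxel's frequency tuning appears as a response phase in [0, 2*pi)
t = np.arange(n_vol)
true_phase = rng.uniform(0, 2 * np.pi, n_vox)
signal = np.cos(2 * np.pi * n_cycles * t / n_vol - true_phase[:, None])
data = signal + 0.5 * rng.normal(size=(n_vox, n_vol))  # add noise

# Estimate each voxel's phase from the Fourier bin at the sweep frequency:
# for cos(wt - phi), the DFT coefficient at that bin has angle -phi
spectrum = np.fft.rfft(data, axis=1)
est_phase = np.mod(-np.angle(spectrum[:, n_cycles]), 2 * np.pi)

# Map phase onto a log-spaced preferred frequency within the swept range
pref_freq = 250 * (4000 / 250) ** (est_phase / (2 * np.pi))
```

Plotting `pref_freq` on the cortical surface then yields the tonotopic map, with the mirror-symmetric gradients described above appearing as reversals of the frequency progression.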
Abstract:
We consider electroencephalograms (EEGs) of healthy individuals and compare the properties of the brain functional networks found through two methods: unpartialized and partialized cross-correlations. The networks obtained by partial correlations are fundamentally different from those constructed through unpartialized correlations in terms of graph metrics. In particular, they have completely different connection efficiency, clustering coefficient, assortativity, degree variability, and synchronization properties. Unpartialized correlations are simple to compute and can easily be applied to large-scale systems, yet they cannot prevent the prediction of indirect edges. In contrast, partial correlations, which are often expensive to compute, reduce the prediction of such edges. We suggest combining these alternative methods in order to obtain complementary information on brain functional networks.
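The difference between the two methods can be made concrete: partial correlations, obtainable from the inverse covariance (precision) matrix, suppress the indirect edges that plain cross-correlation cannot. A sketch on a simulated three-channel chain (the data and channel count are invented; real EEG analyses operate on many more channels):

```python
import numpy as np

def correlation_matrices(data):
    """Zero-lag correlation and partial correlation between channels.

    `data`: (n_channels, n_samples). The partial correlation between two
    channels conditions on all remaining channels, which removes indirect,
    mediated dependencies that plain correlation retains.
    """
    corr = np.corrcoef(data)
    prec = np.linalg.inv(np.cov(data))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)  # standard precision-to-pcorr formula
    np.fill_diagonal(pcorr, 1.0)
    return corr, pcorr

# Hypothetical chain a -> b -> c: a and c are only indirectly coupled via b
rng = np.random.default_rng(4)
a = rng.normal(size=5000)
b = a + 0.5 * rng.normal(size=5000)
c = b + 0.5 * rng.normal(size=5000)
corr, pcorr = correlation_matrices(np.vstack([a, b, c]))
```

Here the plain correlation between the end channels is large, while their partial correlation given the middle channel is near zero, which is exactly why the two methods yield graphs with different metrics.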
Abstract:
Intrinsic connections in the cat primary auditory field (AI), as revealed by injections of Phaseolus vulgaris leucoagglutinin (PHA-L) or biocytin, had an anisotropic and patchy distribution. Neurons labelled retrogradely with PHA-L were concentrated along a dorsoventral stripe through the injection site and rostral to it; the spread of rostrally located neurons was greater after injections into regions of low rather than high characteristic frequencies. The intensity of retrograde labelling varied from weak and granular to very strong and Golgi-like. Out of 313 Golgi-like retrogradely labelled neurons, 79.6% were pyramidal, 17.2% multipolar, 2.6% bipolar, and 0.6% bitufted; 13.4% were putatively inhibitory, i.e. aspiny or sparsely spiny multipolar, or bitufted. Individual anterogradely labelled intrinsic axons were reconstructed for distances of 2 to 7 mm. Five main types were distinguished on the basis of branching pattern and the location of synaptic specializations. Type 1 axons travelled horizontally within layers II to VI and sent off collaterals at regular intervals; boutons were only present in the terminal arborizations of these collaterals. Type 2 axons also travelled horizontally within layers II to VI and had rather short and thin collateral branches; boutons or spine-like protrusions occurred in most parts of the axon. Type 3 axons travelled obliquely through the cortex and formed a single terminal arborization, the only site where boutons were found. Type 4 axons travelled for some distance in layer I; they formed a heterogeneous group as to their collaterals and synaptic specializations. Type 5 axons travelled at the interface between layer VI and the white matter; boutons en passant, spine-like protrusions, and thin short branches with boutons en passant were frequent all along their trajectory. Thus, only some axonal types sustain the patchy pattern of intrinsic connectivity, whereas others are involved in a more diffuse connectivity.
Abstract:
OBJECTIVES: Recommendations for EEG monitoring in the ICU are lacking. The Neurointensive Care Section of the ESICM assembled a multidisciplinary group to establish consensus recommendations on the use of EEG in the ICU. METHODS: A systematic review was performed and 42 studies were included. Data were extracted using the PICO approach: (a) population, i.e. ICU patients with at least one of the following: traumatic brain injury, subarachnoid hemorrhage, intracerebral hemorrhage, stroke, coma after cardiac arrest, septic and metabolic encephalopathy, encephalitis, and status epilepticus; (b) intervention, i.e. EEG monitoring of at least 30 min duration; (c) control, i.e. intermittent vs. continuous EEG, as no studies compared patients with a specific clinical condition with and without EEG monitoring; (d) outcome endpoints, i.e. seizure detection, ischemia detection, and prognostication. After selection, evidence was classified and recommendations were developed using the GRADE system. RECOMMENDATIONS: The panel recommends EEG in generalized convulsive status epilepticus and to rule out nonconvulsive seizures in brain-injured patients and in comatose ICU patients without primary brain injury who have unexplained and persistent altered consciousness. We suggest EEG to detect ischemia in comatose patients with subarachnoid hemorrhage and to improve prognostication of coma after cardiac arrest. We recommend continuous over intermittent EEG for refractory status epilepticus, and suggest it for patients with status epilepticus and suspected ongoing seizures and for comatose patients with unexplained and persistent altered consciousness. CONCLUSIONS: EEG monitoring is an important diagnostic tool for specific indications. Further data are necessary to understand its potential for ischemia assessment and coma prognostication.
Abstract:
We propose and validate a multivariate classification algorithm for characterizing changes in human intracranial electroencephalographic data (iEEG) after learning of motor sequences. The algorithm is based on a Hidden Markov Model (HMM) that captures spatio-temporal properties of the iEEG at the level of single trials. Continuous iEEG was acquired during two sessions (one before and one after a night of sleep) in two patients with depth electrodes implanted in several brain areas. They performed a visuomotor sequence (serial reaction time task, SRTT) using the fingers of their non-dominant hand. Our results show that the decoding algorithm correctly classified single iEEG trials from the trained sequence as belonging to either the initial training phase (day 1, before sleep) or a later consolidated phase (day 2, after sleep), whereas it failed to do so for trials belonging to a control condition (pseudo-random sequence). Accurate single-trial classification was achieved by taking advantage of the distributed pattern of neural activity. Across all contacts, however, the hippocampus contributed most to the classification accuracy in both patients, together with one fronto-striatal contact in one patient. Together, these human intracranial findings demonstrate that a multivariate decoding approach can detect learning-related changes at the level of single-trial iEEG. Because it allows an unbiased identification of brain sites contributing to a behavioral effect (or experimental condition) at the single-subject level, this approach could be usefully applied to assess the neural correlates of other complex cognitive functions in patients implanted with multiple electrodes.
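Likelihood-based HMM classification of the general kind described can be sketched as follows: fit (or here, simply posit) one HMM per condition and assign each trial to the model under which its likelihood is higher. The two-state Gaussian models, their parameters, and the one-dimensional "trials" are invented for illustration and are not the authors' actual model.

```python
import numpy as np

def log_likelihood(obs, pi, A, means, var):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM,
    computed with the forward algorithm in log space for stability."""
    def log_gauss(x, mu):
        return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    log_alpha = np.log(pi) + log_gauss(obs[0], means)
    for x in obs[1:]:
        # alpha_j(t) = sum_i alpha_i(t-1) * A[i, j] * b_j(x_t), in log space
        log_alpha = (np.logaddexp.reduce(log_alpha[:, None] + np.log(A),
                                         axis=0) + log_gauss(x, means))
    return np.logaddexp.reduce(log_alpha)

# Two hypothetical 2-state HMMs differing in emission means, standing in
# for "day 1 (before sleep)" vs "day 2 (after sleep)" trial dynamics
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
hmm_day1 = dict(means=np.array([0.0, 1.0]), var=1.0)
hmm_day2 = dict(means=np.array([2.0, 3.0]), var=1.0)

rng = np.random.default_rng(5)
trial = 2.0 + rng.normal(0, 1.0, 100)  # a trial resembling the day-2 model

def classify(trial):
    l1 = log_likelihood(trial, pi, A, **hmm_day1)
    l2 = log_likelihood(trial, pi, A, **hmm_day2)
    return "day2" if l2 > l1 else "day1"

label = classify(trial)
```

In the actual study, the emissions would be multichannel iEEG features, and inspecting which contacts drive the likelihood difference gives the unbiased identification of contributing brain sites mentioned above.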
Abstract:
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
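The additive model referred to above indexes multisensory interactions by contrasting the AV response with the sum of the unisensory A and V responses; a subadditive interaction yields a negative difference. A sketch on simulated ERPs, where the trial counts, waveforms, and the 0.8 subadditivity factor are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_time = 60, 120

# Hypothetical single-trial ERPs: noise plus a deterministic waveform
a_erp = rng.normal(0, 1, (n_trials, n_time)) + np.sin(np.linspace(0, np.pi, n_time))
v_erp = rng.normal(0, 1, (n_trials, n_time)) + np.cos(np.linspace(0, np.pi, n_time))

# Simulated subadditive AV response: 80% of the summed unisensory responses
av_erp = 0.8 * (a_erp + v_erp) + rng.normal(0, 1, (n_trials, n_time))

# Additive-model interaction term: AV - (A + V), per time point
interaction = av_erp.mean(axis=0) - (a_erp.mean(axis=0) + v_erp.mean(axis=0))
```

A nonzero `interaction` at some latency indicates nonlinear integration; whether it reflects a pure strength change or a topographic change is then disentangled with the GFP and topographic analyses used in the study.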