897 results for Stimulus onset asynchrony
Abstract:
The topography of the visual evoked magnetic response (VEMR) to a pattern onset stimulus was studied in five normal subjects using a single-channel BTi magnetometer. Topographic distributions were analysed at regular intervals following stimulus onset (chronotopography). Two distinct field distributions were observed with half field stimulation: (1) activity corresponding to the CIIm, which remains stable for an average of 34 msec, and (2) activity corresponding to the CIIIm, which remains stable for about 50 msec. However, the full field topography of the largest peak within the first 130 msec does not have a predictable latency or topography in different subjects. The data suggest that the appearance of this peak is dependent on the amplitude, latency and duration of the half field CIIm peaks and the efficiency of half field summation. Hence, topographic mapping is essential to correctly identify the CIIm peak in a full field response, as waveform morphology, peak latency and polarity are not reliable indicators. © 1993.
Abstract:
The rhythm created by spacing a series of brief tones in a regular pattern can be disguised by interleaving identical distractors at irregular intervals. The disguised rhythm can be unmasked if the distractors are allocated to a separate stream from the rhythm by integration with temporally overlapping captors. Listeners identified which of 2 rhythms was presented, and the accuracy and rated clarity of their judgments were used to estimate the fusion of the distractors and captors. The extent of fusion depended primarily on onset asynchrony and degree of temporal overlap. Harmonic relations had some influence, but only an extreme difference in spatial location was effective (dichotic presentation). Both preattentive and attentionally driven processes governed performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Abstract:
We compared judgements of the simultaneity or asynchrony of visual stimuli in individuals with autism spectrum disorders (ASD) and typically developing controls using magnetoencephalography (MEG). Two vertical bars were presented simultaneously or non-simultaneously with two different stimulus onset delays. Participants with ASD distinguished significantly better between real simultaneity (0 ms delay between two stimuli) and apparent simultaneity (17 ms delay between two stimuli) than controls. In line with the increased sensitivity, event-related MEG activity showed increased differential responses for simultaneity versus apparent simultaneity. The strongest evoked potentials, observed over occipital cortices at about 130 ms, were correlated with performance differences in the ASD group only. Superior access to early visual brain processes in ASD might underlie increased resolution of visual events in perception. © 2012 Springer Science+Business Media New York.
Abstract:
An unresolved goal in face perception is to identify brain areas involved in face processing and simultaneously understand the timing of their involvement. Currently, high spatial resolution imaging techniques identify the fusiform gyrus as subserving processing of invariant face features relating to identity. High temporal resolution imaging techniques localize an early latency evoked component—the N/M170—as having a major generator in the fusiform region; however, this evoked component is not believed to be associated with the processing of identity. To resolve this, we used novel magnetoencephalographic beamformer analyses to spatially localize cortical regions in humans whose trial-by-trial activity differentiated faces and objects, and to interrogate their functional sensitivity by analyzing the effects of stimulus repetition. This demonstrated a temporal sequence of processing that provides category-level and then item-level invariance. The right fusiform gyrus showed adaptation to faces (not objects) at ∼150 ms after stimulus onset regardless of face identity; however, at the later latency of ∼200–300 ms, this area showed greater adaptation to repeated identity faces than to novel identities. This is consistent with an involvement of the fusiform region in both early and midlatency face-processing operations, with only the latter showing sensitivity to invariant face features relating to identity.
Abstract:
The synchronization of neuronal activity, especially in the beta- (14-30 Hz)/gamma- (30-80 Hz) frequency bands, is thought to provide a means for the integration of anatomically distributed processing and for the formation of transient neuronal assemblies. Thus, non-stimulus-locked (i.e. induced) gamma-band oscillations are believed to underlie feature binding and the formation of neuronal object representations. On the other hand, the functional roles of neuronal oscillations in slower theta- (4-8 Hz) and alpha- (8-14 Hz) frequency bands remain controversial. In addition, early stimulus-locked activity has been largely ignored, as it is believed to reflect merely the physical properties of sensory stimuli. With human neuromagnetic recordings, both the functional roles of gamma- and alpha-band oscillations and the significance of early stimulus-locked activity in neuronal processing were examined in this thesis. Study I of this thesis shows that even the stimulus-locked (evoked) gamma oscillations were sensitive to high-level stimulus features for speech and non-speech sounds, suggesting that they may underlie the formation of early neuronal object representations for stimuli with a behavioural relevance. Study II shows that neuronal processing for consciously perceived and unperceived stimuli differed as early as 30 ms after stimulus onset. This study also showed that alpha-band oscillations selectively correlated with conscious perception. Study III, in turn, shows that prestimulus alpha-band oscillations influence the subsequent detection and processing of sensory stimuli. Further, in Study IV, we asked whether phase synchronization between distinct frequency bands is present in cortical circuits. This study revealed prominent task-sensitive phase synchrony between alpha and beta/gamma oscillations. Finally, the implications of Studies II, III, and IV for the broader scientific context are analysed in the last study of this thesis (V). I suggest in this thesis that neuronal processing may be extremely fast and that the evoked response is important for cognitive processes. I also propose that alpha oscillations define the global neuronal workspace of perception, action, and consciousness and, further, that cross-frequency synchronization is required for the integration of neuronal object representations into the global neuronal workspace.
Abstract:
The neural basis of visual perception can be understood only when the sequence of cortical activity underlying successful recognition is known. The early steps in this processing chain, from retina to the primary visual cortex, are highly local, and the perception of more complex shapes requires integration of the local information. In Study I of this thesis, the progression from local to global visual analysis was assessed by recording cortical magnetoencephalographic (MEG) responses to arrays of elements that either did or did not form global contours. The results demonstrated two spatially and temporally distinct stages of processing: the first, emerging 70 ms after stimulus onset around the calcarine sulcus, was sensitive to local features only, whereas the second, starting at 130 ms across the occipital and posterior parietal cortices, reflected the global configuration. To explore the links between cortical activity and visual recognition, Studies II-III presented subjects with recognition tasks of varying levels of difficulty. The occipito-temporal responses from 150 ms onwards were closely linked to recognition performance, in contrast to the 100-ms mid-occipital responses. The averaged responses increased gradually as a function of recognition performance, and further analysis (Study III) showed the single response strengths to be graded as well. Study IV addressed the attention dependence of the different processing stages: occipito-temporal responses peaking around 150 ms depended on the content of the visual field (faces vs. houses), whereas the later and more sustained activity was strongly modulated by the observer's attention. Hemodynamic responses paralleled the pattern of the more sustained electrophysiological responses. Study V assessed the temporal processing capacity of the human object recognition system: once the luminance, contrast and size of the object were sufficient, processing speed was not limited by such low-level factors. Taken together, these studies demonstrate several distinct stages in the cortical activation sequence underlying the object recognition chain, reflecting the level of feature integration, difficulty of recognition, and direction of attention.
Abstract:
The “distractor-frequency effect” refers to the finding that high-frequency (HF) distractor words slow picture naming less than low-frequency distractors in the picture–word interference paradigm. Rival input and output accounts of this effect have been proposed. The former attributes the effect to attentional selection mechanisms operating during distractor recognition, whereas the latter attributes it to monitoring/decision mechanisms operating on distractor and target responses in an articulatory buffer. Using high-density (128-channel) EEG, we tested hypotheses from these rival accounts. In addition to conducting stimulus- and response-locked whole-brain corrected analyses, we investigated the correct-related negativity, an ERP observed on correct trials at fronto-central electrodes proposed to reflect the involvement of domain-general monitoring. The whole-brain ERP analysis revealed a significant effect of distractor frequency at inferior right frontal and temporal sites between 100 and 300 msec post-stimulus onset, during which lexical access is thought to occur. Response-locked, region-of-interest (ROI) analyses of fronto-central electrodes revealed a correct-related negativity starting 121 msec before and peaking 125 msec after vocal onset on the grand averages. Slope analysis of this component revealed a significant difference between HF and low-frequency distractor words, with the former associated with a steeper slope on the time window spanning from 100 msec before to 100 msec after vocal onset. The finding of ERP effects in time windows and components corresponding to both lexical processing and monitoring suggests the distractor-frequency effect is most likely associated with more than one physiological mechanism.
Abstract:
Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or the sudden onset of a stimulus, with durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal-processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary, called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in the local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided.
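The greedy procedure behind matching pursuit is simple enough to sketch directly. The Python sketch below decomposes a toy signal containing a sharp transient plus a sustained 40 Hz rhythm, loosely echoing the example in the abstract, over a small hand-built Gabor dictionary: at each iteration the atom with the largest inner product with the residual is selected and its contribution subtracted. The dictionary size, atom parameters, and test signal are illustrative assumptions, not the data or code provided with the review.

import numpy as np

def gabor_atom(n, center, scale, freq, fs):
    # Unit-norm Gabor atom: Gaussian envelope at `center` (samples) times a cosine at `freq` Hz.
    t = (np.arange(n) - center) / fs
    atom = np.exp(-0.5 * (t / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, dictionary, n_iter=30):
    # Greedy MP: repeatedly project the residual onto the best-matching atom and subtract it.
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        projections = dictionary @ residual          # inner product with every atom
        best = int(np.argmax(np.abs(projections)))
        coeff = projections[best]
        residual -= coeff * dictionary[best]
        decomposition.append((best, coeff))
    return decomposition, residual

# Toy signal: a ~10 ms transient at 0.1 s plus a sustained 40 Hz rhythm after 0.3 s.
fs, n = 1000, 1000
t = np.arange(n) / fs
signal = np.exp(-0.5 * ((t - 0.1) / 0.01) ** 2) + 0.5 * np.sin(2 * np.pi * 40 * t) * (t > 0.3)

# Small over-complete dictionary of Gabor atoms at several centers, scales, and frequencies.
dictionary = np.vstack([gabor_atom(n, c, s, f, fs)
                        for c in range(0, n, 50)
                        for s in (0.01, 0.05, 0.2)
                        for f in (0.0, 10.0, 40.0, 80.0)])

decomp, residual = matching_pursuit(signal, dictionary)
print(f"fraction of signal energy left unexplained: {np.sum(residual**2) / np.sum(signal**2):.3f}")

Because the dictionary mixes short and long atoms, the same greedy loop can assign the transient to a narrow atom and the sustained rhythm to a long 40 Hz atom, which is the property the review exploits.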
Abstract:
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants, represented through multiple tokens of C1VC2 syllables, indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50-T). The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with the speech signal indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for reception of words in sentences, indicating the need for further training on this task.
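To make the signal-processing side concrete, the following Python sketch estimates an envelope-onset asynchrony from a speech waveform using the two bands named above (a lowpass band at 350 Hz and a highpass band at 3000 Hz). The filter order, Hilbert-envelope method, onset criterion, and sign convention are illustrative assumptions rather than the study's actual scheme, and the waveform is assumed to be sampled fast enough (e.g. 16 kHz) for the highpass band to exist.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(x, fs, band):
    # Amplitude envelope of a filtered band, taken as the magnitude of the analytic signal.
    if band == "low":
        sos = butter(4, 350.0, btype="lowpass", fs=fs, output="sos")
    else:
        sos = butter(4, 3000.0, btype="highpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

def onset_time(envelope, fs, frac=0.1):
    # Assumed onset criterion: first sample at which the envelope exceeds `frac` of its peak.
    return int(np.argmax(envelope >= frac * envelope.max())) / fs

def envelope_onset_asynchrony(x, fs):
    # EOA in seconds, defined here (sign convention assumed) as low-band onset minus high-band onset;
    # per the abstract, this onset-timing difference separates voiced from voiceless consonants.
    return onset_time(band_envelope(x, fs, "low"), fs) - onset_time(band_envelope(x, fs, "high"), fs)

In the two-finger display described above, the same two envelopes would then modulate the 250-Hz and 50-Hz carriers delivered to the index finger and thumb, respectively.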
Abstract:
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding: when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source whose location is dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sounds as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.
Abstract:
Whether the somatosensory system, like its visual and auditory counterparts, comprises parallel functional pathways for processing identity and spatial attributes (so-called "what" and "where" pathways, respectively) has hitherto been studied in humans using neuropsychological and hemodynamic methods. Here, electrical neuroimaging of somatosensory evoked potentials (SEPs) identified the spatio-temporal mechanisms subserving vibrotactile processing during two types of blocks of trials. "What" blocks varied stimuli in their frequency (22.5 Hz vs. 110 Hz) independently of their location (left vs. right hand). "Where" blocks varied the same stimuli in their location independently of their frequency. In this way, there was a 2 × 2 within-subjects factorial design, counterbalancing the hand stimulated (left/right) and trial type (what/where). Responses to physically identical somatosensory stimuli differed within 200 ms post-stimulus onset, which is within the same timeframe we previously identified for audition (De Santis, L., Clarke, S., Murray, M.M., 2007. Automatic and intrinsic auditory "what" and "where" processing in humans revealed by electrical neuroimaging. Cereb Cortex 17, 9-17). Initially (100-147 ms), responses to each hand were stronger to the "what" than the "where" condition in a statistically indistinguishable network within the hemisphere contralateral to the stimulated hand, arguing against hemispheric specialization as the principal basis for somatosensory "what" and "where" pathways. Later (149-189 ms), responses differed topographically, indicative of the engagement of distinct configurations of brain networks. A common topography described responses to the "where" condition irrespective of the hand stimulated. By contrast, different topographies accounted for the "what" condition and also varied as a function of the hand stimulated. Parallel, functionally specialized pathways are observed across sensory systems and may be indicative of a computationally advantageous organization for processing spatial and identity information.
Abstract:
Auditory spatial functions, including the ability to discriminate between the positions of nearby sound sources, are subserved by a large temporo-parieto-frontal network. With the aim of determining whether and when the parietal contribution is critical for auditory spatial discrimination, we applied single-pulse transcranial magnetic stimulation (TMS) over the right parietal cortex 20, 80, 90 and 150 ms post-stimulus onset while participants completed a two-alternative forced choice auditory spatial discrimination task in the left or right hemispace. Our results reveal that transient TMS disruption of right parietal activity impairs spatial discrimination when applied at 20 ms post-stimulus onset for sounds presented in the left (contralateral) hemispace and at 80 ms for sounds presented in the right hemispace. We interpret our findings in terms of a critical role for contralateral temporo-parietal cortices over the initial stages of the build-up of auditory spatial representations and a right-hemispheric specialization for integrating the whole auditory space over subsequent, higher-order processing stages.
Abstract:
Accurate perception of the order of occurrence of sensory information is critical for building coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence indicates that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post-stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed that initially bilateral responses within the posterior sylvian region (PSR) at the beginning of the experiment became left-hemisphere dominant by its end. Further supporting the critical role of the left and right PSR in auditory TOJ proficiency, responses in the left and right PSR went from being correlated to uncorrelated as the experiment progressed. These collective findings provide insights into the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike-timing-dependent plasticity.
Abstract:
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses were significantly stronger, but topographically indistinguishable, to human versus animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with those of face discrimination but is not functionally specialized.
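As a point of reference for the distinction drawn above between response strength and response topography, the following Python sketch computes the two measures such electrical neuroimaging analyses typically rest on: global field power (the spatial standard deviation across electrodes at each time point) and global dissimilarity (the root-mean-square difference between strength-normalized maps). The data shapes and variable names are illustrative assumptions; this is not the authors' analysis pipeline.

import numpy as np

def global_field_power(erp):
    # Spatial standard deviation across electrodes at each time point; input shape (electrodes, samples).
    return erp.std(axis=0)

def global_dissimilarity(erp_a, erp_b, eps=1e-12):
    # RMS difference between GFP-normalized maps: sensitive to topographic, not strength, modulations.
    a = erp_a / (global_field_power(erp_a) + eps)
    b = erp_b / (global_field_power(erp_b) + eps)
    return np.sqrt(((a - b) ** 2).mean(axis=0))

A condition difference that merely scales the whole map changes global field power while leaving the dissimilarity near zero, which is the kind of "stronger but topographically indistinguishable" effect the first result above describes.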
Abstract:
The oscillation of neuronal circuits reflected in the EEG gamma frequency may be fundamental to the perceptual process referred to as binding (the integration of various thoughts and perceptions into a coherent picture). The aim of our study was to expand our knowledge of the developmental course of EEG gamma in the auditory modality. We investigated EEG 40 Hz gamma band responses (35.2 to 43.0 Hz) using an auditory novelty oddball paradigm, alone and with a visual number-series distracter task, in 208 participants as a function of age (7 years to adult) at 9 sites across the sagittal and lateral axes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4). Gamma responses were operationally defined as a change in power or a change in phase synchrony level from baseline within two time windows. The evoked gamma response was defined as a significant change from baseline occurring between 0 and 150 ms after stimulus onset; the induced gamma response was measured from 250 to 750 ms after stimulus onset. A significant evoked gamma band response was found when measuring changes in both power and phase synchrony. The increase in both measures was maximal at frontal regions. Decreases in both measures were found when participants were distracted by a secondary task. For neither measure were developmental effects noted. However, evoked gamma power was significantly enhanced with the presentation of a novel stimulus, especially at the right frontal site (F4); frontal evoked gamma phase synchrony also showed enhancement for novel stimuli but only for our two oldest age groups (16-18 year olds and adults). Induced gamma band responses also varied with task-dependent cognitive stimulus properties. In the induced gamma power response in all age groups, target stimuli generated the highest power values at the parietal region, while the novel stimuli were always below baseline. Target stimuli increased induced synchrony in all regions for all participants, but the novel stimulus selectively affected participants depending on their age and gender. Adult participants, for example, exhibited a reduction in gamma power but an increase in synchrony to the novel stimulus within the same region. Induced gamma synchrony was more sensitive to the gender of the participant than was induced gamma power. While induced gamma power showed little effect of age, gamma synchrony did show age effects. These results confirm that the perceptual process which regulates gamma power is distinct from that which governs the synchronization of neuronal firing, and both gamma power and synchrony are important factors to be considered for the "binding" hypothesis. However, there is surprisingly little effect of age on the absolute levels or distribution of EEG gamma in the age range investigated.
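For concreteness, the windowed power measure described above (change from baseline in the 35.2-43.0 Hz band, evaluated at 0-150 ms for the evoked response and 250-750 ms for the induced response) can be sketched in Python as follows. The baseline window, filter order, and Hilbert-based power estimate are assumptions for illustration, and the study's definitions also include phase-synchrony measures not shown here.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def gamma_power_change(epochs, fs, onset_sample, band=(35.2, 43.0), baseline=(-0.2, 0.0),
                       evoked_win=(0.0, 0.15), induced_win=(0.25, 0.75)):
    # `epochs`: array of shape (trials, samples), time-locked to stimulus onset at `onset_sample`.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    analytic = hilbert(sosfiltfilt(sos, epochs, axis=-1), axis=-1)
    power = (np.abs(analytic) ** 2).mean(axis=0)            # mean band power across trials
    times = (np.arange(epochs.shape[-1]) - onset_sample) / fs

    def mean_power(window):
        lo, hi = window
        return power[(times >= lo) & (times < hi)].mean()

    base = mean_power(baseline)
    return mean_power(evoked_win) - base, mean_power(induced_win) - base

Note that this keys the evoked/induced distinction purely on the time window, following the operational definition in the abstract, rather than on phase-locking to the stimulus.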