912 results for Dyck, Murray J


Relevance: 10.00%

Abstract:

Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus, and a set of critical parameters for their occurrence was defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions that are observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear if such effects are due to early sensory-level integration or later perceptual-level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects more rapidly detected multisensory than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
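The core comparison in this design is between the response to the multisensory 'whole' and the sum of the unisensory 'parts'. Below is a minimal sketch of that superadditivity test on hypothetical epoched EEG amplitudes; the array shapes, sampling assumptions, and the simple time-wise t-test are illustrative placeholders rather than the study's actual analysis pipeline.

    # Hypothetical single-trial amplitudes, shape (trials, time points), 1 kHz sampling assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_trials, n_times = 100, 200
    aud = rng.normal(size=(n_trials, n_times))    # auditory-alone epochs
    som = rng.normal(size=(n_trials, n_times))    # somatosensory-alone epochs
    multi = rng.normal(size=(n_trials, n_times))  # simultaneous auditory-somatosensory epochs

    # Summed unisensory 'parts': combine auditory and somatosensory trials pairwise.
    summed = aud + som

    # Compare the multisensory 'whole' with the summed 'parts' at every time point;
    # reliable differences indicate nonlinear (supra- or sub-additive) interactions.
    t_vals, p_vals = stats.ttest_ind(multi, summed, axis=0)
    sig = np.where(p_vals < 0.01)[0]
    print("First sample with a nonlinear interaction:", sig[0] if sig.size else "none")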

Relevance: 10.00%

Abstract:

Recent studies of multisensory integration compel a redefinition of fundamental sensory processes, including, but not limited to, how visual inputs influence the localization of sounds and suppression of their echoes.

Relevance: 10.00%

Abstract:

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations indicate that statistically indistinguishable brain networks respond with differing strength. First, responses to human vocalizations were significantly stronger than, but topographically indistinguishable from, responses to animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination thus transpires at times synchronous with face discrimination but is not functionally specialized.

Relevance: 10.00%

Abstract:

Despite myriad studies, the neurophysiologic mechanisms mediating illusory contour (IC) sensitivity remain controversial. Among the competing models, one favors feed-forward effects within lower-tier cortices (V1/V2). Another situates IC sensitivity first within higher-tier cortices, principally the lateral occipital cortices (LOC), with later feedback effects in V1/V2. Still others postulate that the LOC are sensitive to salient regions demarcated by the inducing stimuli, whereas V1/V2 effects specifically support IC sensitivity. We resolved these discordances by using misaligned line gratings, oriented either horizontally or vertically, to induce ICs. Line orientation provides an established assay of V1/V2 modulations independently of IC presence, and gratings lack salient regions. Electrical neuroimaging analyses of visual evoked potentials (VEPs) disambiguated the relative timing and localization of IC sensitivity with respect to that for grating orientation. Millisecond-by-millisecond analyses of VEPs and distributed source estimations revealed a main effect of grating orientation beginning at 65 ms post-stimulus onset within the calcarine sulcus, followed by a main effect of IC presence beginning at 85 ms post-stimulus onset within the LOC. There was no evidence for differential processing of ICs as a function of grating orientation. These results support models wherein IC sensitivity occurs first within the LOC.

Relevance: 10.00%

Abstract:

Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms duration sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether and when the brain discriminates between sound categories even when this is not evident behaviorally. We applied a single-trial classifier that identified the voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as those of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level. Subjects exhibited differential activity before the sound ended (starting at 112 ms) and during a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpreted the first as being related to an implicit tuning for sound representations and the second as being linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency, and that explicit sound categorization requires a subsequent processing stage.
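A common way to identify when single-trial topographies become discriminative is time-resolved decoding: train and cross-validate a classifier on the scalp map at each time point. The sketch below uses hypothetical epochs and a standard linear discriminant from scikit-learn; it illustrates the general approach rather than the specific single-trial classifier used in the study.

    # Hypothetical 160-channel epochs, shape (trials, channels, time points).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_times = 200, 160, 120
    epochs = rng.normal(size=(n_trials, n_channels, n_times))
    labels = rng.integers(0, 2, size=n_trials)   # 0 = living, 1 = man-made (placeholder labels)

    # Decode the semantic category from the instantaneous scalp topography at each
    # time point; periods of above-chance accuracy mark discriminative topographies.
    accuracy = np.empty(n_times)
    for t in range(n_times):
        accuracy[t] = cross_val_score(
            LinearDiscriminantAnalysis(), epochs[:, :, t], labels, cv=5
        ).mean()

    print("Peak decoding accuracy:", accuracy.max(), "at sample", accuracy.argmax())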

Relevance: 10.00%

Abstract:

Several epidemiological studies have reported an association between complications of pregnancy and delivery and schizophrenia, but none have had sufficient power to examine specific complications that, individually, are of low prevalence. We therefore performed an individual patient meta-analysis using the raw data from case-control studies that used the Lewis-Murray scale. Data were obtained from 12 studies on 700 schizophrenia subjects and 835 controls. There were significant associations between schizophrenia and premature rupture of membranes, gestational age shorter than 37 weeks, and use of resuscitation or an incubator. There were associations of borderline significance between schizophrenia and birthweight lower than 2,500 g and forceps delivery. There was no significant interaction between these complications and sex. We conclude that some abnormalities of pregnancy and delivery may be associated with the development of schizophrenia. The pathophysiology may involve hypoxia, and so future studies should focus on the accurate measurement of this exposure.
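An individual patient meta-analysis of a binary exposure across case-control studies ultimately reduces to pooling 2x2 tables. The sketch below computes a Mantel-Haenszel pooled odds ratio on hypothetical counts; the numbers and the choice of pooling method are illustrative and are not the data or the analysis reported in the study.

    import numpy as np

    # Each row is one hypothetical study's 2x2 table for a single complication:
    # [exposed cases, exposed controls, unexposed cases, unexposed controls]
    tables = np.array([
        [12,  8, 48, 70],
        [ 9,  6, 51, 66],
        [15,  9, 45, 81],
    ], dtype=float)

    a, b, c, d = tables.T              # unpack cells; a per-study odds ratio would be a*d / (b*c)
    n = tables.sum(axis=1)             # per-study totals
    or_mh = np.sum(a * d / n) / np.sum(b * c / n)   # Mantel-Haenszel pooled odds ratio
    print(f"Pooled odds ratio: {or_mh:.2f}")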

Relevance: 10.00%

Abstract:

Double-strand breaks (DSBs) in DNA are caused by ionizing radiation. These chromosomal breaks can kill the cell unless repaired efficiently, and inefficient or inappropriate repair can lead to mutation, gene translocation and cancer. Two proteins that participate in the repair of DSBs are Rad52 and Ku: in lower eukaryotes such as yeast, DSBs are repaired by Rad52-dependent homologous recombination, whereas vertebrates repair DSBs primarily by Ku-dependent non-homologous end-joining. The contribution of homologous recombination to vertebrate DSB repair, however, is important. Biochemical studies indicate that Ku binds to DNA ends and facilitates end-joining. Here we show that human Rad52, like Ku, binds directly to DSBs, protects them from exonuclease attack and facilitates end-to-end interactions. A model for repair is proposed in which either Ku or Rad52 binds the DSB. Ku directs DSBs into the non-homologous end-joining repair pathway, whereas Rad52 initiates repair by homologous recombination. Ku and Rad52, therefore, direct entry into alternative pathways for the repair of DNA breaks.

Relevance: 10.00%

Abstract:

Neuronal oscillations are an important aspect of EEG recordings. These oscillations are thought to be involved in several cognitive mechanisms. For instance, oscillatory activity is considered a key component of the top-down control of perception. However, measuring this activity and its influence requires precise extraction of frequency components, and this processing is not straightforward. In particular, difficulties in extracting oscillations arise from their time-varying characteristics. Moreover, when phase information is needed, it is of the utmost importance to extract narrow-band signals. This paper presents a novel method that uses adaptive filters for tracking and extracting these time-varying oscillations. The scheme is designed to maximize the oscillatory behavior at the output of the adaptive filter. It is thus capable of tracking an oscillation and describing its temporal evolution even during low-amplitude time segments. Moreover, the method can be extended to track several oscillations simultaneously and to use multiple signals. These two extensions are particularly relevant for EEG data processing, where oscillations in different frequency bands are active at the same time and signals are recorded with multiple sensors. The tracking scheme is first tested on synthetic signals to highlight its capabilities. It is then applied to data recorded during a visual shape discrimination experiment to assess its usefulness during EEG processing and in detecting functionally relevant changes. The method is an interesting additional processing step that provides information complementary to classical time-frequency analyses and improves the detection and analysis of cross-frequency couplings.
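For intuition, a classical way to pull a time-varying narrow-band component out of a noisy signal with an adaptive filter is the LMS adaptive line enhancer sketched below. This is a generic textbook scheme with assumed parameters (filter order, delay, step size), not the specific tracking method proposed in the paper, which is designed to maximize oscillatory behavior at the filter output.

    import numpy as np

    def adaptive_line_enhancer(x, order=32, delay=1, mu=0.05):
        """Enhance the narrow-band (oscillatory) part of x via normalized LMS prediction."""
        w = np.zeros(order)
        y = np.zeros_like(x)
        for n in range(order + delay, len(x)):
            u = x[n - delay - order + 1:n - delay + 1][::-1]   # delayed input vector
            y[n] = w @ u                                       # predicted (oscillatory) sample
            e = x[n] - y[n]                                    # broadband prediction error
            w += 2 * mu * e * u / (u @ u + 1e-12)              # normalized LMS weight update
        return y

    # Synthetic test: a slowly drifting ~10 Hz rhythm buried in noise (256 Hz sampling).
    t = np.arange(0, 4, 1 / 256)
    oscillation = np.sin(2 * np.pi * (9 + 0.5 * t) * t)
    noisy = oscillation + 0.8 * np.random.default_rng(2).normal(size=t.size)
    tracked = adaptive_line_enhancer(noisy)   # approximates the oscillatory component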

Relevance: 10.00%

Abstract:

Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception after sound offset as a function of individual auditory versus visual preference/dominance, using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to the different L-sound profiles (velocities). In addition, we found that this effect depended on an individual's preferred sensory modality (auditory vs. visual), as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike the early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.

Relevance: 10.00%

Abstract:

Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and which kinds of actions recruit the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning at approximately 300 ms post-stimulus onset, responses to context-related sounds significantly differed from those to context-free sounds in both the strength and the topography of the electric field. This latency is more than 200 ms after general categorical discrimination. Additionally, such topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.

Relevance: 10.00%

Abstract:

Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, and the words of a speaker are all examples of temporally structured sensory events. In humans, whether and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent of low-level visual processing. Most importantly, we demonstrate that V1 and V5/MT come into play simultaneously and seem to be functionally linked during interval encoding, whereas they operate serially (V1 followed by V5/MT) and seem to be independent while maintaining temporal information in working memory. These data help to refine our knowledge of the functional properties of human visual cortex, highlighting the contribution and the temporal dynamics of V1 and V5/MT in the processing of the temporal aspects of visual information.

Relevance: 10.00%

Abstract:

This study details a method to statistically determine, on a millisecond scale and for individual subjects, those brain areas whose activity differs between experimental conditions, using single-trial scalp-recorded EEG data. To do this, we non-invasively estimated local field potentials (LFPs) using the ELECTRA distributed inverse solution and applied non-parametric statistical tests at each brain voxel and for each time point. This yields a spatio-temporal activation pattern of differential brain responses. The method is illustrated here in the analysis of auditory-somatosensory (AS) multisensory interactions in four subjects. Differential multisensory responses were temporally and spatially consistent across individuals, with onset at approximately 50 ms and superposition within areas of the posterior superior temporal cortex that have traditionally been considered auditory in their function. The close agreement of these results with previous investigations of AS multisensory interactions suggests that the present approach constitutes a reliable method for studying multisensory processing with the temporal and spatial resolution required to elucidate several existing questions in this field. In particular, the present analyses permit a more direct comparison between human and animal studies of multisensory interactions and can be extended to examine correlation between electrophysiological phenomena and behavior.
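The statistical core of this approach, voxel-by-voxel and time-point-by-time-point non-parametric testing of single-trial estimates between conditions, can be sketched as follows. The arrays stand in for ELECTRA-based local field potential estimates (the inverse-solution step is not shown), and the sizes, test, and threshold are hypothetical choices for illustration.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(3)
    n_trials, n_voxels, n_times = 60, 50, 100
    cond_a = rng.normal(size=(n_trials, n_voxels, n_times))   # e.g. multisensory-pair trials
    cond_b = rng.normal(size=(n_trials, n_voxels, n_times))   # e.g. summed unisensory trials

    # Non-parametric test at each brain voxel and each time point.
    p = np.empty((n_voxels, n_times))
    for v in range(n_voxels):
        for t in range(n_times):
            p[v, t] = mannwhitneyu(cond_a[:, v, t], cond_b[:, v, t]).pvalue

    # The p < 0.05 mask gives a spatio-temporal pattern of differential responses;
    # requiring sustained runs of consecutive significant time points is one simple
    # guard against isolated false positives.
    differential = p < 0.05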

Relevance: 10.00%

Abstract:

This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes in field topography across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarity, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in the freely available academic software package CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
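Two of the reference-independent global measures mentioned above, field strength (Global Field Power) and field similarity (global dissimilarity), reduce to simple formulas over the electrode montage. The sketch below illustrates those formulas on hypothetical average-referenced ERP matrices; it is a standalone illustration, not CARTOOL's own implementation.

    import numpy as np

    def gfp(v):
        """Global Field Power: spatial standard deviation across electrodes per time point."""
        return np.std(v, axis=0)

    def dissimilarity(v1, v2):
        """Global dissimilarity between two maps after average-referencing and strength normalization."""
        n1 = (v1 - v1.mean(axis=0)) / gfp(v1)
        n2 = (v2 - v2.mean(axis=0)) / gfp(v2)
        return np.sqrt(np.mean((n1 - n2) ** 2, axis=0))

    rng = np.random.default_rng(4)
    erp_a = rng.normal(size=(128, 400))   # hypothetical 128 channels x 400 time points
    erp_b = rng.normal(size=(128, 400))

    strength_a, strength_b = gfp(erp_a), gfp(erp_b)   # field strength over time
    diss_ab = dissimilarity(erp_a, erp_b)             # 0 = identical topographies, larger = more different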