3 results for Temporal acoustic window
in Aston University Research Archive
Abstract:
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions, respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics, low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows, 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows, 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in time and frequency domains. © 2012 McNab, Hillebrand, Swithenby and Rippon.
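As an illustrative aside, the band- and window-specific power analysis described in this abstract can be sketched in a few lines of Python. The sampling rate, data array, and function names below are placeholder assumptions, not the authors' beamforming pipeline; the sketch only shows how mean Hilbert-envelope power in the named beta/gamma bands and the 350-550 and 500-700 ms windows could be computed from a single reconstructed source time course.

```python
# Minimal sketch (assumed data and parameters, not the authors' pipeline):
# band-limited power in pre-selected time windows of one source time course.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # sampling rate in Hz (assumed)
t = np.arange(-0.2, 1.0, 1.0 / fs)           # epoch time axis in seconds
source_ts = np.random.randn(t.size)          # placeholder for a beamformed source signal

def band_power(signal, fs, band, window, t):
    """Mean Hilbert-envelope power within a frequency band and time window."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)                 # zero-phase band-pass filter
    envelope_power = np.abs(hilbert(filtered)) ** 2   # instantaneous power
    mask = (t >= window[0]) & (t < window[1])
    return envelope_power[mask].mean()

# Bands and windows named in the abstract
for name, band in {"low beta": (14, 20), "high beta": (20, 30), "gamma": (30, 50)}.items():
    for window in [(0.350, 0.550), (0.500, 0.700)]:
        p = band_power(source_ts, fs, band, window, t)
        print(f"{name} {band} Hz, {window[0]:.3f}-{window[1]:.3f} s: {p:.4f}")
```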
Abstract:
Because of attentional limitations, the human visual system can process for awareness and response only a fraction of the input received. Lesion and functional imaging studies have identified frontal, temporal, and parietal areas as playing a major role in the attentional control of visual processing, but very little is known about how these areas interact to form a dynamic attentional network. We hypothesized that the network communicates by means of neural phase synchronization, and we used magnetoencephalography to study transient long-range interarea phase coupling in a well studied attentionally taxing dual-target task (attentional blink). Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioral performance. Thus, we show how attentional limitations arise from the way in which the subsystems of the attentional network interact.

The human brain faces an inestimable task of reducing a potentially overloading amount of input into a manageable flow of information that reflects both the current needs of the organism and the external demands placed on it. This task is accomplished via a ubiquitous construct known as “attention,” whose mechanism, although well characterized behaviorally, is far from understood at the neurophysiological level. Whereas attempts to identify particular neural structures involved in the operation of attention have met with considerable success (1-5) and have resulted in the identification of frontal, parietal, and temporal regions, far less is known about the interaction among these structures in a way that can account for the task-dependent successes and failures of attention. The goal of the present research was, thus, to unravel the means by which the subsystems making up the human attentional network communicate and to relate the temporal dynamics of their communication to observed attentional limitations in humans. A prime candidate for communication among distributed systems in the human brain is neural synchronization (for review, see ref. 6). Indeed, a number of studies provide converging evidence that long-range interarea communication is related to synchronized oscillatory activity (refs. 7-14; for review, see ref. 15). To determine whether neural synchronization plays a role in attentional control, we placed humans in an attentionally demanding task and used magnetoencephalography (MEG) to track interarea communication by means of neural synchronization. In particular, we presented 10 healthy subjects with two visual target letters embedded in streams of 13 distractor letters, appearing at a rate of seven per second. The targets were separated in time by a single distractor. This condition leads to the “attentional blink” (AB), a well studied dual-task phenomenon showing the reduced ability to report the second of two targets when an interval <500 ms separates them (16-18). Importantly, the AB does not prevent perceptual processing of missed target stimuli but only their conscious report (19), demonstrating the attentional nature of this effect and making it a good candidate for the purpose of our investigation.
Although numerous studies have investigated factors, e.g., stimulus and timing parameters, that manipulate the magnitude of a particular AB outcome, few have sought to characterize the neural state under which “standard” AB parameters produce an inability to report the second target on some trials but not others. We hypothesized that the different attentional states leading to different behavioral outcomes (second target reported correctly or not) are characterized by specific patterns of transient long-range synchronization between brain areas involved in target processing. Showing the hypothesized correspondence between states of neural synchronization and human behavior in an attentional task entails two demonstrations. First, it needs to be demonstrated that cortical areas that are suspected to be involved in visual-attention tasks, and the AB in particular, interact by means of neural synchronization. This demonstration is particularly important because previous brain-imaging studies (e.g., ref. 5) only showed that the respective areas are active within a rather large time window in the same task, not that they are concurrently active and actually form an interactive network. Second, it needs to be demonstrated that the pattern of neural synchronization is sensitive to the behavioral outcome; specifically, to the ability to correctly identify the second of two rapidly succeeding visual targets.
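As an illustrative aside, transient interarea phase synchronization of the kind described above is often quantified with a phase-locking value (PLV) computed across trials. The sketch below uses placeholder data, an assumed sampling rate, and a generic beta band to show one way such a measure could be computed; it is not the analysis used in the study.

```python
# Minimal sketch (assumed data and parameters): beta-band phase-locking value
# between two hypothetical source signals, computed across trials per sample.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                            # sampling rate in Hz (assumed)
n_trials, n_samples = 100, 600
rng = np.random.default_rng(0)
sig_a = rng.standard_normal((n_trials, n_samples))    # placeholder data, area A
sig_b = rng.standard_normal((n_trials, n_samples))    # placeholder data, area B

def beta_phase(x, fs, band=(14.0, 30.0)):
    """Instantaneous phase of the band-limited analytic signal (per trial)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))

phase_diff = beta_phase(sig_a, fs) - beta_phase(sig_b, fs)
plv = np.abs(np.exp(1j * phase_diff).mean(axis=0))    # PLV per time sample, across trials

print("peak beta-band PLV:", plv.max())
```

Higher PLV at a given latency indicates more consistent phase alignment between the two areas across trials; comparing PLV between trials with and without correct second-target reports is one way the link to behavioral outcome could, in principle, be examined.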
Abstract:
Objective: The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Methods: Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Results: Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes differed between the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. Conclusions: The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons to rapidly presented sounds. Significance: This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid transitions of sound.
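As an illustrative aside, least squares deconvolution of overlapping evoked responses can be framed as an ordinary linear regression: known event onsets define a design matrix of shifted impulses, and the unknown per-event-type response kernels are recovered with a least-squares solve. The event names, onsets, kernel length, and sampling rate below are hypothetical placeholders; this is a minimal sketch of the general idea, not the specific LS deconvolution implementation used in the study.

```python
# Minimal sketch (assumed events and parameters): recover overlapping evoked
# responses by least-squares regression on a design matrix of shifted impulses.
import numpy as np

fs = 1000                        # sampling rate in Hz (assumed)
kernel_len = 400                 # assumed response length per event (samples)
n_samples = 20000
event_onsets = {                 # hypothetical onset samples for two event types
    "train_onset": [1000, 6000, 11000],
    "F0_change": [2000, 7000, 12000],
}

# Design matrix: one shifted "stick" column per kernel lag, per event type.
X = np.zeros((n_samples, kernel_len * len(event_onsets)))
for k, (name, onsets) in enumerate(event_onsets.items()):
    for onset in onsets:
        for lag in range(kernel_len):
            if onset + lag < n_samples:
                X[onset + lag, k * kernel_len + lag] = 1.0

recording = np.random.randn(n_samples)            # placeholder continuous MEG channel
beta, *_ = np.linalg.lstsq(X, recording, rcond=None)

# Each kernel_len-long slice of `beta` is the deconvolved response for one event type.
responses = beta.reshape(len(event_onsets), kernel_len)
print({name: responses[k].shape for k, name in enumerate(event_onsets)})
```

Because the short-SOA responses overlap in time, the least-squares solve attributes the measured signal jointly to all concurrent kernels rather than averaging the overlap into each one, which is what allows the individual responses to be disentangled.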