999 results for Electrophysiological Processes


Relevance:

100.00%

Publisher:

Abstract:

Long accumulated experience with polyparametric cognitive modeling of different physiological processes (the electrocardiogram, electroencephalogram, electroreovasogram and others), and the diagnostic methods developed on this basis, provide grounds for formulating a new methodology of system analysis in biology. The gist of the methodology consists of parametrizing fractals of electrophysiological processes, describing the functional state of an object as a matrix with a unified set of parameters, and constructing a polyparametric cognitive geometric model with artificial-intelligence algorithms. The geometric model displays parameter relationships in a way that meets the requirements of the system approach. The objective character of the model elements and the high degree of formalization, which facilitates the use of mathematical methods, are advantages of these models. At the same time, the geometric images are easily interpreted in physiological and clinical terms. Polyparametric modeling is thus an object-oriented tool with advanced functional facilities and several distinctive features.
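The parametrization and matrix description can be pictured with a toy example. The sketch below is a minimal, hypothetical illustration rather than the authors' actual method: it splits a recording into fixed-length segments ("fractals" in the abstract's terminology), describes each segment with the same small parameter set, and projects the resulting parameter matrix onto two dimensions as a stand-in for the geometric model. The function names and parameter choices are assumptions.

```python
import numpy as np

def parameter_matrix(signal, fs, segment_s=1.0):
    """Split a 1-D electrophysiological recording into fixed-length segments
    and describe each one with the same small set of parameters.
    The parameter set here is illustrative, not the authors' unified set."""
    seg_len = int(segment_s * fs)
    n_seg = len(signal) // seg_len
    rows = []
    for i in range(n_seg):
        seg = signal[i * seg_len:(i + 1) * seg_len]
        rows.append([
            seg.mean(),               # baseline level
            seg.std(),                # variability
            seg.max() - seg.min(),    # peak-to-peak amplitude
            np.argmax(seg) / fs,      # latency of the maximum within the segment
        ])
    return np.asarray(rows)           # shape: (segments, parameters)

def geometric_image(matrix):
    """Collapse the parameter matrix into a 2-D 'geometric image':
    here simply the first two principal components of the parameter space."""
    centered = matrix - matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T        # one 2-D point per segment

# Example with a synthetic 10 s "ECG-like" trace sampled at 250 Hz.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
M = parameter_matrix(ecg_like, fs)
print(M.shape, geometric_image(M).shape)
```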

Relevance:

60.00%

Publisher:

Abstract:

Sleepwalking (somnambulism) is a common parasomnia characterized by incomplete awakenings out of slow-wave sleep, during which affected individuals display motor behaviours of variable complexity accompanied by confusion and impaired judgment. The current literature suggests that this disorder is associated with peculiarities of slow-wave activity and slow oscillations, two indices of the integrity of the homeostatic process and of sleep depth. However, because of methodological shortcomings in the existing studies, the role of these electroencephalographic markers in the pathophysiology of sleepwalking remains to be clarified. Our first article therefore investigated possible anomalies of slow-wave activity and slow oscillations in sleepwalkers by comparing their sleep across the whole night with that of control participants. Moreover, since sleepwalkers appear to react differently from normal sleepers (notably in terms of sleep fragmentation) to increased homeostatic pressure, we compared slow-wave activity and slow oscillations on a baseline night and following 38 hours of sleep deprivation. Our electroencephalographic recordings in 10 adult sleepwalkers and nine control participants show an increase in the spectral power of slow-wave activity and in the density of slow oscillations during the recovery night relative to the baseline night in both groups. However, contrary to several previous studies, we observed no difference between sleepwalkers and normal sleepers in slow-wave activity or slow oscillations on either night. Beyond certain methodological considerations that may have contributed to this unexpected result, we believe it warrants questioning the heterogeneity of sleepwalkers as a population. Our second article examined transient electroencephalographic factors that may be associated with the onset of sleepwalking episodes. We compared fluctuations of slow-wave activity and slow oscillations in the minutes preceding spontaneous sleepwalking episodes (i.e., episodes not associated with an identifiable stimulus) with those preceding comparable normal awakenings in 12 adult sleepwalkers. We show that, compared with normal awakenings, sleepwalking episodes are preceded by deeper sleep, as indicated by greater spectral power of slow-wave activity and a greater density of slow oscillations. This deepening of sleep, specific to sleepwalking episodes, appears to develop over a relatively long time span (>3 minutes) rather than abruptly during the seconds preceding the episode. These data raise questions about the mechanisms involved in the occurrence of spontaneous sleepwalking episodes. Overall, this thesis suggests that phenomena related to slow-wave activity and slow oscillations are linked to the triggering of sleepwalking episodes, but that further studies are needed to delineate the precise role these markers play in the pathophysiology of sleepwalking.
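One of the central quantities in this work is the spectral power of slow-wave activity in NREM sleep. The sketch below shows one common way of computing it, under assumed settings (a Welch periodogram, a 0.5-4.5 Hz band, 4 s windows, and a hypothetical function name); the thesis's exact scoring rules and band definition may differ.

```python
import numpy as np
from scipy.signal import welch

def swa_power(eeg, fs, band=(0.5, 4.5)):
    """Integrated spectral power of slow-wave activity for one NREM epoch.
    Band limits and window length are illustrative assumptions."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])      # power in the band

# Compare a baseline and a recovery-night epoch (synthetic data here).
fs = 256
rng = np.random.default_rng(0)
baseline = rng.standard_normal(30 * fs)
recovery = baseline + 2.0 * np.sin(2 * np.pi * 1.0 * np.arange(30 * fs) / fs)
print(swa_power(baseline, fs), swa_power(recovery, fs))
```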

Relevance:

60.00%

Publisher:

Abstract:

To better understand synaptic signaling at the mammalian rod bipolar cell terminal and to pave the way for applying genetic approaches to the study of visual information processing in the mammalian retina, synaptic vesicle dynamics and intraterminal calcium were monitored in terminals of acutely isolated mouse rod bipolar cells, and the number of ribbon-style active zones was quantified. We identified a releasable pool, corresponding to a maximum of 7 s. The presence of a smaller, rapidly releasing pool and a small, fast component of refilling was also suggested. Following calcium channel closure, membrane surface area was restored to baseline with a time constant that ranged from 2 to 21 s, depending on the magnitude of the preceding Ca2+ transient. In addition, a brief, calcium-dependent delay often preceded the onset of membrane recovery. Thus, several aspects of synaptic vesicle dynamics appear to be conserved between rod-dominant bipolar cells of fish and mammalian rod bipolar cells. A major difference is that the number of vesicles available for release is significantly smaller in the mouse rod bipolar cell, both in terms of the total number per neuron and on a per-active-zone basis.
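The recovery of membrane surface area described above (a delay followed by relaxation with a 2-21 s time constant) is the kind of behaviour that is typically quantified by fitting the capacitance trace with a delayed exponential. The sketch below fits such a curve to synthetic data; the functional form, parameter names and starting values are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def delayed_exponential(t, amplitude, delay, tau):
    """Capacitance recovery: flat until `delay`, then a single exponential
    decay back to baseline with time constant `tau`."""
    return amplitude * np.where(t < delay, 1.0, np.exp(-(t - delay) / tau))

# Synthetic capacitance jump (arbitrary units) relaxing with tau = 8 s
# after a 1 s calcium-dependent delay, plus measurement noise.
t = np.linspace(0, 40, 400)
data = delayed_exponential(t, 120.0, 1.0, 8.0) + 3.0 * np.random.randn(t.size)

popt, _ = curve_fit(delayed_exponential, t, data, p0=(100.0, 0.5, 5.0))
print("fitted amplitude, delay, tau:", popt)
```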

Relevance:

60.00%

Publisher:

Abstract:

We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch response (MMN/MMF, an early, pre-attentive difference detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones and vowels. In each condition, participants heard two-formant vowels or musical intervals whose frequencies were 11, 12, or 24 semitones apart. In music, separations of 12 and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), whereas in speech, formant separations of 12 and 11 semitones are perceived as highly similar (both variants of the vowel in 'cut'). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12-11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12-24 pairing in the language condition than in the music conditions. This suggests that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, like the behavioral one, differs significantly depending on whether the context is music or speech.
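The stimulus construction rests on equal-tempered interval arithmetic: a tone n semitones above a reference has frequency f_ref · 2^(n/12). The short computation below (with an arbitrary 440 Hz reference, not necessarily the one used in the study) shows why 12 and 24 semitones are octave-related while 11 and 12 semitones are nearly identical in frequency.

```python
# Frequency of a tone n semitones above a reference, in equal temperament:
#   f(n) = f_ref * 2 ** (n / 12)
# The 440 Hz reference is illustrative only.
f_ref = 440.0

for semitones in (11, 12, 24):
    f_upper = f_ref * 2 ** (semitones / 12)
    print(f"{semitones:2d} semitones: upper tone {f_upper:7.1f} Hz, "
          f"ratio {f_upper / f_ref:.3f}")

# 12 and 24 semitones give frequency ratios of exactly 2 and 4 (one and two
# octaves), which musical perception treats as highly similar; the upper tones
# of 11- and 12-semitone separations differ by only about 6 %, which vowel
# perception treats as highly similar.
```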

Relevance:

40.00%

Publisher:

Abstract:

When observers are presented with two visual targets appearing in the same position in close temporal proximity, a marked reduction in detection performance for the second target is often reported, the so-called attentional blink (AB) phenomenon. Several studies have found that P300 amplitudes decrease during the attentional blink period in a way that parallels second-target detection performance. However, whether the parallel courses of second-target performance and the corresponding P300 amplitudes result from the same underlying mechanisms has remained unclear. The aim of our study was therefore to investigate whether the mechanisms underlying the AB can be assessed by fixed-links modeling and whether this kind of assessment would reveal the same, or at least related, processes in the behavioral and electrophysiological data. On both levels of observation three highly similar processes could be identified: an increasing, a decreasing and a u-shaped trend. Corresponding processes from the behavioral and electrophysiological data were substantially correlated, with the two u-shaped trends showing the strongest association with each other. Our results provide evidence for the assumption that the same mechanisms underlie attentional blink task performance at the electrophysiological and behavioral levels, as assessed by fixed-links models.
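Fixed-links models decompose performance across task conditions into latent variables with fixed loadings, such as the increasing, decreasing and u-shaped trends named above. The sketch below is a deliberately simplified least-squares analogue with invented lag-wise accuracies, not the structural equation models actually fitted in the study; it only illustrates what fixed trend loadings look like.

```python
import numpy as np

# Fixed loading vectors over the lags of an attentional-blink task.
# Real fixed-links models estimate latent variances in an SEM framework;
# this least-squares version only illustrates the fixed linear
# (increasing/decreasing) and u-shaped (quadratic) trend components.
lags = np.arange(1, 8, dtype=float)              # lag 1 ... lag 7 (illustrative)
linear = lags - lags.mean()                      # + weight = increasing trend,
                                                 # - weight = decreasing trend
quadratic = linear ** 2 - (linear ** 2).mean()   # u-shaped trend
loadings = np.column_stack([np.ones_like(lags), linear, quadratic])

# Hypothetical second-target accuracy per lag for one participant.
accuracy = np.array([0.45, 0.40, 0.55, 0.70, 0.80, 0.85, 0.86])

weights, *_ = np.linalg.lstsq(loadings, accuracy, rcond=None)
print("constant, linear, u-shaped weights:", weights)
```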

Relevance:

30.00%

Publisher:

Abstract:

Isolating processes within the brain that are specific to human behavior is a key goal for social neuroscience. The current research was an attempt to test whether recent findings of enhanced negative ERPs in response to unexpected human gaze are unique to eye gaze stimuli by comparing the effects of gaze cues with the effects of an arrow cue. ERPs were recorded while participants (N = 30) observed a virtual actor or an arrow that gazed (or pointed) either toward (object congruent) or away from (object incongruent) a flashing checkerboard. An enhanced negative ERP (N300) in response to object-incongruent compared to object-congruent trials was recorded for both eye gaze and arrow stimuli. The findings are interpreted as reflecting a domain-general mechanism for detecting unexpected events.
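The congruency effect reported here amounts to a difference in mean ERP amplitude between incongruent and congruent trials within the N300 latency range. A minimal sketch with synthetic single-trial data and an assumed 250-350 ms measurement window follows; window, sampling rate and function names are illustrative assumptions.

```python
import numpy as np

def condition_erp(epochs):
    """Average single-trial EEG epochs (trials x samples) into an ERP."""
    return epochs.mean(axis=0)

def window_mean(erp, times, start, stop):
    """Mean ERP amplitude within a latency window (seconds)."""
    mask = (times >= start) & (times <= stop)
    return erp[mask].mean()

# Synthetic epochs for one electrode: 40 trials, 600 samples at 500 Hz.
fs = 500
times = np.arange(600) / fs - 0.2                  # -200 ms ... +1000 ms
rng = np.random.default_rng(1)
congruent = rng.standard_normal((40, times.size))
incongruent = congruent - 2.0 * np.exp(-((times - 0.30) / 0.05) ** 2)  # N300-like dip

# Congruency effect: incongruent-minus-congruent mean amplitude, 250-350 ms.
effect = (window_mean(condition_erp(incongruent), times, 0.25, 0.35)
          - window_mean(condition_erp(congruent), times, 0.25, 0.35))
print(f"N300 congruency effect: {effect:.2f} (arbitrary units)")
```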

Relevance:

30.00%

Publisher:

Abstract:

Empirical evidence suggests impaired facial emotion recognition in schizophrenia. However, the nature of this deficit is the subject of ongoing research. The current study tested the hypothesis that a generalized deficit at an early stage of face-specific processing (i.e. putatively subserved by the fusiform gyrus) accounts for impaired facial emotion recognition in schizophrenia as opposed to the Negative Emotion-specific Deficit Model, which suggests impaired facial information processing at subsequent stages. Event-related potentials (ERPs) were recorded from 11 schizophrenia patients and 15 matched controls while performing a gender discrimination and a facial emotion recognition task. Significant reduction of the face-specific vertex positive potential (VPP) at a peak latency of 165 ms was confirmed in schizophrenia subjects whereas their early visual processing, as indexed by P1, was found to be intact. Attenuated VPP was found to correlate with subsequent P3 amplitude reduction and to predict accuracy when performing a facial emotion discrimination task. A subset of ten schizophrenia patients and ten matched healthy control subjects also performed similar tasks in the magnetic resonance imaging scanner. Patients showed reduced blood oxygenation level-dependent (BOLD) activation in the fusiform, inferior frontal, middle temporal and middle occipital gyrus as well as in the amygdala. Correlation analyses revealed that VPP and the subsequent P3a ERP components predict fusiform gyrus BOLD activation. These results suggest that problems in facial affect recognition in schizophrenia may represent flow-on effects of a generalized deficit in early visual processing.

Relevance:

30.00%

Publisher:

Abstract:

In a musical context, the pitch of sounds is encoded according to domain-general principles not confined to music or even to audition overall but common to other perceptual and cognitive processes (such as multiple pattern encoding and feature integration), and to domain-specific and culture-specific properties related to a particular musical system only (such as the pitch steps of the Western tonal system). The studies included in this thesis shed light on the processing stages during which pitch encoding occurs on the basis of both domain-general and music-specific properties, and elucidate the putative brain mechanisms underlying pitch-related music perception. Study I showed, in subjects without formal musical education, that the pitch and timbre of multiple sounds are integrated as unified object representations in sensory memory before attentional intervention. Similarly, multiple pattern pitches are simultaneously maintained in non-musicians' sensory memory (Study II). These findings demonstrate the degree of sophistication of pitch processing at the sensory memory stage, requiring neither attention nor any special expertise of the subjects. Furthermore, music- and culture-specific properties, such as the pitch steps of the equal-tempered musical scale, are automatically discriminated in sensory memory even by subjects without formal musical education (Studies III and IV). The cognitive processing of pitch according to culture-specific musical-scale schemata hence occurs as early as at the sensory-memory stage of pitch analysis. Exposure and cortical plasticity seem to be involved in musical pitch encoding. For instance, after only one hour of laboratory training, the neural representations of pitch in the auditory cortex are altered (Study V). However, faulty brain mechanisms for attentive processing of fine-grained pitch steps lead to inborn deficits in music perception and recognition such as those encountered in congenital amusia (Study VI). These findings suggest that predispositions for exact pitch-step discrimination together with long-term exposure to music govern the acquisition of the automatized schematic knowledge of the music of a particular culture that even non-musicians possess.

Relevance:

30.00%

Publisher:

Abstract:

Intact function of working memory (WM) is essential for children and adults to cope with everyday life. Children with deficits in WM mechanisms have learning difficulties that are often accompanied by behavioral problems. The neural processes subserving WM, and the brain structures underlying this system, continue to develop from childhood through adolescence and into young adulthood. With functional magnetic resonance imaging (fMRI) it is possible to investigate the organization and development of WM. The present thesis aimed to investigate, using behavioral and neuroimaging methods, whether mnemonic processing of spatial and nonspatial visual information is segregated in the developing and mature human brain. A further aim of this research was to investigate the organization and development of audiospatial and visuospatial information processing in WM. The behavioral results showed that spatial and nonspatial visual WM processing is segregated in the adult brain. The fMRI results in children suggested that memory-load-related processing of spatial and nonspatial visual information engages common cortical networks, whereas selective attention to either type of stimuli recruits partially segregated areas in the frontal, parietal and occipital cortices. Deactivation mechanisms that are important for the performance of WM tasks in adults are already operational in healthy school-aged children. Electrophysiological evidence suggested segregated mnemonic processing of visual and auditory location information. The results on the development of audiospatial and visuospatial WM demonstrate that WM performance improves with age, suggesting functional maturation of the underlying cognitive processes and brain areas. The development of performance on spatial WM tasks follows a different time course in boys and girls, indicating a larger degree of immaturity in the male than in the female WM system. Furthermore, the differences in mastering auditory and visual WM tasks may indicate that visual WM reaches functional maturity earlier than the corresponding auditory system. Spatial WM deficits may underlie some learning difficulties and behavioral problems related to impulsivity, difficulties in concentration, and hyperactivity. Alternatively, anxiety or depressive symptoms may affect WM function and the ability to concentrate, and thus be the primary cause of poor academic achievement in children.

Relevance:

30.00%

Publisher:

Abstract:

A large variety of social signals, such as facial expression and body language, are conveyed in everyday interactions, and an accurate perception and interpretation of these social cues is necessary for reciprocal social interactions to take place successfully and efficiently. The present study was conducted to determine whether impairments in social functioning that are commonly observed following a closed head injury could be at least partially attributable to a disruption in the ability to appreciate social cues. More specifically, an attempt was made to determine whether face processing deficits following a closed head injury (CHI) coincide with changes in electrophysiological responsivity to the presentation of facial stimuli. A number of event-related potentials (ERPs) that have been linked specifically to various aspects of visual processing were examined. These included the N170, an index of structural encoding ability; the N400, an index of the ability to detect differences in serially presented stimuli; and the Late Positivity (LP), an index of sensitivity to affective content in visually presented stimuli. Electrophysiological responses were recorded while participants with and without a closed head injury were presented with pairs of faces delivered in rapid sequence and asked to compare them on the basis of whether they matched with respect to identity or emotion. Other behavioural measures of identity and emotion recognition were also employed, along with a small battery of standard neuropsychological tests used to determine general levels of cognitive impairment. Participants in the CHI group were impaired in a number of cognitive domains that are commonly affected following a brain injury. These impairments included reduced efficiency in various aspects of encoding verbal information into memory, a generally slower rate of information processing, decreased sensitivity to smell, greater difficulty in the regulation of emotion, and limited awareness of this impairment. Impairments in face and emotion processing were clearly evident in the CHI group. However, despite these impairments in face processing, there were no significant differences between groups in the electrophysiological components examined. The only exception was a trend indicating delayed N170 peak latencies in the CHI group (p = .09), which may reflect inefficient structural encoding processes. In addition, group differences were noted in the region of the N100, thought to reflect very early selective attention. It is possible, then, that facial expression and identity processing deficits following CHI are secondary to (or exacerbated by) an underlying disruption of very early attentional processes. Alternatively, the difficulty may arise at the later cognitive stages involved in the interpretation of the relevant visual information. However, the present data do not allow these alternatives to be distinguished. Nonetheless, it was clearly evident that individuals with CHI are more likely than controls to make face processing errors, particularly for the more difficult-to-discriminate negative emotions. Those working with individuals who have sustained a head injury should be alerted to this potential source of social monitoring difficulties, which are often observed as part of the sequelae following a CHI.

Relevance:

30.00%

Publisher:

Abstract:

The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. Early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset. Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses than for faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent from the ICs that constitute the N170 effect, suggesting that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic so closely tied to ecological validity. Third, unmixing the constituent processes of the EEG signals makes new analysis strategies available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
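The analysis rests on unmixing the multi-channel EEG into independent components and back-projecting individual components to the scalp. The sketch below uses FastICA on stand-in data to show these two operations; the study's actual decomposition algorithm, component count and component labels (icP1a, icP1b, icN1a) are not reproduced here, and EEG work often uses Infomax ICA instead.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Build stand-in "EEG": three non-Gaussian sources mixed into 32 channels.
rng = np.random.default_rng(2)
t = np.arange(5000) / 500.0                        # 10 s at 500 Hz
true_sources = np.stack([np.sign(np.sin(2 * np.pi * 7 * t)),
                         np.sin(2 * np.pi * 11 * t) ** 3,
                         rng.laplace(size=t.size)])
mixing_true = rng.standard_normal((32, 3))
eeg = mixing_true @ true_sources + 0.1 * rng.standard_normal((32, t.size))

# Unmix into independent components (samples x components).
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(eeg.T)
mixing = ica.mixing_                               # channels x components

# Scalp projection of a single component k on its own, the operation behind
# isolating constituents such as the putative N170 component in the abstract.
k = 0
back_projected = np.outer(mixing[:, k], sources[:, k])   # channels x samples
print(back_projected.shape)
```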

Relevance:

30.00%

Publisher:

Abstract:

As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2 and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces to an "average" face. Participants performed a face identification task. The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170 and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that, at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards, between 200 and 300 ms. Social cognitive factors can also influence face processing; this effect is already evident as early as 130-200 ms, at the structural encoding stage.
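The component measures discussed above (N170, P2 and N250 amplitudes and latencies) are typically obtained by locating the extreme value of the ERP inside a component-specific latency window. A minimal sketch with assumed windows and synthetic data follows; the window boundaries and function names are illustrative, not those used in the dissertation.

```python
import numpy as np

def peak_in_window(erp, times, window, polarity=-1):
    """Peak latency and amplitude of an ERP component inside a latency window.
    polarity=-1 finds the most negative deflection (e.g., N170),
    polarity=+1 the most positive one (e.g., P2). Windows are illustrative."""
    mask = (times >= window[0]) & (times <= window[1])
    segment = polarity * erp[mask]
    idx = np.argmax(segment)
    return times[mask][idx], erp[mask][idx]

# Synthetic ERP with an N170-like trough and a P2-like peak.
fs = 512
times = np.arange(-0.1, 0.5, 1 / fs)
erp = (-3.0 * np.exp(-((times - 0.17) / 0.02) ** 2)
       + 2.0 * np.exp(-((times - 0.25) / 0.03) ** 2))

print("N170 (latency s, amplitude):", peak_in_window(erp, times, (0.13, 0.20), polarity=-1))
print("P2   (latency s, amplitude):", peak_in_window(erp, times, (0.20, 0.30), polarity=+1))
```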

Relevance:

30.00%

Publisher:

Abstract:

Color perception has been a traditional test case of the idea that the language we speak affects our perception of the world [1]. It is now established that categorical perception of color is verbally mediated and varies with culture and language [2]. However, it is unknown whether the well-demonstrated language effects on color discrimination really reach down to the level of visual perception, or whether they only reflect post-perceptual cognitive processes. Using brain potentials in a color oddball detection task with Greek and English speakers, we demonstrate that language effects may exist at a level that is literally perceptual, suggesting that speakers of different languages have differently structured minds.

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans), and ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external-world stimuli. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA).

PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how these disparate areas are coordinated, so that the characteristics of the same object are grouped together (the binding problem) while the properties belonging to different, simultaneously present objects are kept segregated (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the object is recognized through the simultaneous activation of the cortical areas representing its different features; and 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); and ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can establish semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).

PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: mathematical models and neural networks can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. With a single set of parameters, the model is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
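Two of the building blocks named in this abstract can be made concrete with short sketches: a single Wilson-Cowan excitatory-inhibitory unit (the oscillator type used in Part 1) and the multisensory enhancement index commonly used to characterize SC responses (Part 2). The parameters below are the classic Wilson and Cowan limit-cycle values and invented spike counts, not the thesis's own settings.

```python
import numpy as np

def S(x, a, theta):
    """Logistic response function, shifted so that S(0) = 0 (Wilson & Cowan, 1972)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def wilson_cowan(steps=4000, dt=0.01, P=1.25, Q=0.0):
    """One excitatory-inhibitory Wilson-Cowan unit, Euler-integrated.
    Time is in units of the membrane time constant; the thesis's networks
    couple many such units and tune them to the gamma band."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    aE, thE, aI, thI = 1.3, 4.0, 2.0, 3.7
    E = np.zeros(steps); I = np.zeros(steps)
    for k in range(steps - 1):
        E[k + 1] = E[k] + dt * (-E[k] + (1 - E[k]) * S(c1 * E[k] - c2 * I[k] + P, aE, thE))
        I[k + 1] = I[k] + dt * (-I[k] + (1 - I[k]) * S(c3 * E[k] - c4 * I[k] + Q, aI, thI))
    return E, I

E, I = wilson_cowan()
print("excitatory activity range:", E.min(), E.max())

def enhancement(combined, visual, auditory):
    """Multisensory enhancement index: percent increase of the combined
    response over the best unisensory response."""
    best = max(visual, auditory)
    return 100.0 * (combined - best) / best

print("enhancement (%):", enhancement(30.0, 12.0, 8.0))   # illustrative spike counts
```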

Relevance:

30.00%

Publisher:

Abstract:

We usually perform actions in a dynamic environment, and changes in the location of a target for an upcoming action require both covert shifts of attention and updating of the motor plan. In this study we tested whether, similarly to oculomotor areas that provide signals for overt and covert attention shifts, covert attention shifts modulate activity in cortical area V6A, which provides a bridge between visual signals and arm-motor control. We performed single-cell recordings in monkeys trained to fixate straight ahead while shifting attention outward to a peripheral cue and inward again to the fixation point. We found that neurons in V6A are influenced by spatial attention, demonstrating that visual, motor, and attentional responses can occur in combination in single V6A neurons. This modulation in an area primarily involved in visuo-motor transformations for reaching suggests that reach-related regions could also contribute directly to the shifts of spatial attention necessary to plan and control goal-directed arm movements. Moreover, to test whether V6A is causally involved in these processes, we performed a human study using online repetitive transcranial magnetic stimulation over the putative human V6A (pV6A) during an attention task and a reaching task requiring covert shifts of attention and reaching movements toward cued targets in space. We demonstrate that pV6A is causally involved in the reorienting of attention for target detection and that this process interferes with the execution of reaching movements toward unattended targets. The current findings suggest the direct involvement of the action-related dorso-medial visual stream in attentional processes, and a more specific role of V6A in attention reorienting. We therefore propose that attention signals are used by V6A to rapidly update the current motor plan or the ongoing action when a behaviorally relevant object unexpectedly appears at an unattended location.