92 results for Reaction-time Task

at Université de Lausanne, Switzerland


Relevance: 100.00%

Publisher:

Abstract:

Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, passive stimulation, as well as speeded reaction time tasks would suggest that some multisensory effects are not directly influencing behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials in response to auditory, somatosensory, or simultaneous auditory-somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental condition). Responses to multisensory stimulus pairs were contrasted with each unisensory response as well as summed responses from the constituent unisensory conditions. Behavioral analyses indicated that neural response interactions were only implicated in the case of trials producing fast reaction times, as evidenced by facilitation in excess of probability summation. In agreement, supra-additive non-linear neural response interactions between multisensory and the sum of the constituent unisensory stimuli were evident over the 40-84 ms post-stimulus period only when reaction times were fast, whereas subsequent effects (86-128 ms) were observed independently of reaction time speed. Distributed source estimations further revealed that these earlier effects followed from supra-additive modulation of activity within posterior superior temporal cortices. These results indicate the behavioral relevance of early multisensory phenomena.
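The two analysis steps named above, the median split of reaction times and the test for facilitation in excess of probability summation (the race-model bound), can be sketched as follows. This is a simplified illustration on simulated reaction times; the variable names, distributions, and sample sizes are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated reaction times (ms) for one subject; purely illustrative.
rt_aud = rng.normal(280, 40, 300)     # auditory-only trials
rt_som = rng.normal(290, 40, 300)     # somatosensory-only trials
rt_multi = rng.normal(250, 40, 300)   # auditory-somatosensory trials

# Median split: trials faster than the condition median enter the
# "fast" ERP average, the remainder the "slow" average.
med = np.median(rt_multi)
fast, slow = rt_multi[rt_multi < med], rt_multi[rt_multi >= med]

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at each time point in `t`."""
    return np.mean(sample[:, None] <= t, axis=0)

# Probability summation (race-model) bound: under a race model,
# P(RT_multi <= t) should not exceed P(RT_aud <= t) + P(RT_som <= t).
t = np.linspace(150, 400, 100)
violation = ecdf(rt_multi, t) - (ecdf(rt_aud, t) + ecdf(rt_som, t))
race_model_violated = bool(np.any(violation > 0))
```

A positive `violation` at any latency indicates facilitation beyond what independent parallel processing of the two unisensory signals could produce.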

Relevance: 100.00%

Publisher:

Abstract:

We propose and validate a multivariate classification algorithm for characterizing changes in human intracranial electroencephalographic (iEEG) data after learning motor sequences. The algorithm is based on a Hidden Markov Model (HMM) that captures spatio-temporal properties of the iEEG at the level of single trials. Continuous iEEG was acquired during two sessions (one before and one after a night of sleep) in two patients with depth electrodes implanted in several brain areas. They performed a visuomotor sequence (serial reaction time task, SRTT) using the fingers of their non-dominant hand. Our results show that the decoding algorithm correctly classified single iEEG trials from the trained sequence as belonging to either the initial training phase (day 1, before sleep) or a later consolidated phase (day 2, after sleep), whereas it failed to do so for trials belonging to a control condition (pseudo-random sequence). Accurate single-trial classification was achieved by taking advantage of the distributed pattern of neural activity. Across all contacts, the hippocampus contributed most to classification accuracy in both patients, together with one fronto-striatal contact in one patient. Together, these human intracranial findings demonstrate that a multivariate decoding approach can detect learning-related changes at the level of single-trial iEEG. Because it allows an unbiased identification of the brain sites contributing to a behavioral effect (or experimental condition) at the single-subject level, this approach could be usefully applied to assess the neural correlates of other complex cognitive functions in patients implanted with multiple electrodes.
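The likelihood-based classification idea behind such an HMM decoder can be illustrated with a toy model: each phase (before vs. after sleep) is summarized by its own Gaussian HMM, and a trial is assigned to whichever model gives its observation sequence the higher likelihood. All parameters below are invented for illustration; the actual study fit spatio-temporal HMMs to multichannel iEEG.

```python
import numpy as np

def gauss(x, mu, var):
    """Univariate Gaussian density of x, evaluated for each hidden state."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def hmm_loglik(obs, pi, A, mu, var):
    """Log-likelihood of an observation sequence under a Gaussian HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * gauss(obs[0], mu, var)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * gauss(x, mu, var)
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

# Two toy 2-state models standing in for "day 1" and "day 2" dynamics.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
model_day1 = dict(pi=pi, A=A, mu=np.array([0.0, 2.0]), var=1.0)
model_day2 = dict(pi=pi, A=A, mu=np.array([5.0, 7.0]), var=1.0)

trial = np.array([0.1, 1.8, 0.3, 2.2])  # sequence resembling day-1 dynamics
lls = [hmm_loglik(trial, **m) for m in (model_day1, model_day2)]
label = int(np.argmax(lls))  # -> 0, i.e. classified as the day-1 phase
```

In the study, the model parameters were learned from training trials and the per-contact contribution to the likelihood was used to localize the informative brain sites.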

Relevance: 100.00%

Publisher:

Abstract:

Whether different brain networks are involved in generating unimanual responses to a simple visual stimulus presented in the ipsilateral versus contralateral hemifield remains a controversial issue. Visuo-motor routing was investigated with event-related functional magnetic resonance imaging (fMRI) using the Poffenberger reaction time task. A 2 hemifield x 2 response hand design generated the "crossed" and "uncrossed" conditions, describing the spatial relation between these factors. Both conditions, with responses executed by the left or right hand, showed a similar spatial pattern of activated areas, including striate and extrastriate areas bilaterally, SMA, and M1 contralateral to the responding hand. These results demonstrated that visual information is processed bilaterally in striate and extrastriate visual areas, even in the "uncrossed" condition. Additional analyses based on sorting data according to subjects' reaction times revealed differential crossed versus uncrossed activity only for the slowest trials, with response strength in infero-temporal cortices significantly correlating with crossed-uncrossed differences (CUD) in reaction times. Collectively, the data favor a parallel, distributed model of brain activation. The presence of interhemispheric interactions and its consequent bilateral activity is not determined by the crossed anatomic projections of the primary visual and motor pathways. Distinct visuo-motor networks need not be engaged to mediate behavioral responses for the crossed visual field/response hand condition. While anatomical connectivity heavily influences the spatial pattern of activated visuo-motor pathways, behavioral and functional parameters appear to also affect the strength and dynamics of responses within these pathways.

Relevance: 100.00%

Publisher:

Abstract:

Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and precisely which kinds of actions recruit the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning at approximately 300 ms post-stimulus, responses to such context-related sounds significantly differed from those to context-free sounds in both the strength and the topography of the electric field. This latency is >200 ms after general categorical discrimination. Additionally, the topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.

Relevance: 100.00%

Publisher:

Abstract:

Ullman (2004) suggested that Specific Language Impairment (SLI) results from a general procedural learning deficit. To test this hypothesis, we investigated children with SLI using procedural learning tasks in the verbal, motor, and cognitive domains. Results showed that, compared with a control group, the children with SLI (a) were unable to learn a phonotactic learning task, (b) learned a motor task, although less efficiently, and (c) succeeded in a cognitive learning task. In the motor learning task (a serial reaction time task), reaction times were longer and learning slower than in controls. The learning effect was not significant in children with an associated Developmental Coordination Disorder (DCD), and future studies should consider comorbid motor impairment to clarify whether impairments are related to the motor rather than the language disorder. Our results indicate that a phonotactic learning deficit, but not a cognitive procedural deficit, underlies SLI, thus challenging Ullman's general procedural deficit hypothesis, in line with a few other recent studies.

Relevance: 100.00%

Publisher:

Abstract:

Knockout mice lacking the alpha-1b adrenergic receptor were tested in behavioral experiments. Reaction to novelty was first assessed in a simple test comparing the time taken by the knockout mice and their littermate controls to enter a second compartment. The mice were then tested in an open field to which unknown objects were subsequently added. Spatial novelty was introduced by moving one of the familiar objects to another location in the open field. Spatial behavior and memory were further studied in a homing board test and in the water maze. The alpha-1b knockout mice showed enhanced reactivity to new situations: they were faster to enter the new environment, covered longer paths in the open field, and spent more time exploring the new objects. They reacted like controls to the modification inducing spatial novelty. In the homing board test, both the knockout and control mice seemed to use a combination of distant visual and proximal olfactory cues, showing place preference only if the two types of cues were redundant. In the water maze the alpha-1b knockout mice were unable to learn the task, which was confirmed in a probe trial without the platform. They were perfectly able, however, to escape in a visible-platform procedure. These results confirm previous findings showing that the noradrenergic pathway is important for the modulation of behaviors such as reaction to novelty and exploration, and suggest that this modulation is mediated, at least partly, through the alpha-1b adrenergic receptors. The lack of alpha-1b adrenergic receptors does not seem important for spatial orientation in cue-rich tasks but may interfere with orientation in situations providing only distant cues.

Relevance: 100.00%

Publisher:

Abstract:

This study investigated the spatial, spectral, temporal and functional properties of functional brain connections involved in the concurrent execution of unrelated visual perception and working memory tasks. Electroencephalography data were analysed using a novel data-driven approach assessing source coherence at the whole-brain level. Three connections in the beta-band (18-24 Hz) and one in the gamma-band (30-40 Hz) were modulated by dual-task performance. Beta-coherence increased within two dorsofrontal-occipital connections in dual-task conditions compared to the single-task condition, with the highest coherence seen during low working memory load trials. In contrast, beta-coherence in a prefrontal-occipital functional connection and gamma-coherence in an inferior frontal-occipitoparietal connection were not affected by the addition of the second task and only showed elevated coherence under high working memory load. Analysis of coherence as a function of time suggested that the dorsofrontal-occipital beta-connections were relevant to working memory maintenance, while the prefrontal-occipital beta-connection and the inferior frontal-occipitoparietal gamma-connection were involved in top-down control of concurrent visual processing. The fact that the load-related increase in coherence in the gamma-connection was correlated with faster reaction times on the perception task supports this interpretation. Together, these results demonstrate that dual-task demands trigger non-linear changes in functional interactions between frontal-executive and occipitoparietal-perceptual cortices.
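Magnitude-squared coherence of the kind analyzed here can be estimated by averaging cross- and auto-spectra over trials. The minimal numpy sketch below operates on raw two-channel trial data; the study itself computed coherence between estimated cortical sources, and the array shapes and sampling rate are assumptions for illustration.

```python
import numpy as np

def trial_coherence(x, y, fs):
    """Magnitude-squared coherence between two signals, with the
    cross- and auto-spectra averaged over trials (axis 0).
    x, y: arrays of shape (n_trials, n_samples); fs: sampling rate in Hz."""
    X = np.fft.rfft(x, axis=1)
    Y = np.fft.rfft(y, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)       # averaged cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)       # averaged auto-spectra
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(x.shape[1], d=1 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 256))            # 50 trials, 256 samples at 256 Hz
freqs, coh_same = trial_coherence(x, x, fs=256)                      # identical signals
_, coh_indep = trial_coherence(x, rng.standard_normal((50, 256)), fs=256)
# coh_same is ~1 at every frequency; coh_indep stays low on average.

beta_band = coh_same[(freqs >= 18) & (freqs <= 24)]  # e.g. the 18-24 Hz band
```

Band-limited values such as `beta_band` correspond to the beta- and gamma-band coherence measures contrasted across task conditions in the abstract.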

Relevance: 90.00%

Publisher:

Abstract:

Using event-related potentials (ERPs), we investigated the neural response associated with preparing to switch from one task to another. We used a cued task-switching paradigm in which the interval between the cue and the imperative stimulus was varied. The difference in response time (RT) between trials on which the task switched and trials on which the task repeated (the switch cost) decreased as the cue-target interval (CTI) was increased, demonstrating that subjects used the CTI to prepare for the forthcoming task. However, RTs on repeated-task trials in blocks during which the task could switch (mixed-task blocks) were never as short as RTs during single-task blocks (the mixing cost). This replicates previous research. The ERPs in response to the cue were compared across three conditions: single-task trials, switch trials, and repeat trials. ERP topographic differences were found between single-task trials and mixed-task (switch and repeat) trials at approximately 160 and approximately 310 msec after the cue, indicative of changes in the underlying neural generator configuration as a basis for the mixing cost. In contrast, there were no topographic differences evident between switch and repeat trials during the CTI. Rather, the response of statistically indistinguishable generator configurations was stronger at approximately 310 msec on switch than on repeat trials. By separating differences in ERP topography from differences in response strength, these results suggest that a reappraisal of previous research is appropriate.
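The two behavioral costs contrasted above reduce to simple differences of condition means. A minimal sketch, with invented reaction-time values standing in for real data:

```python
import numpy as np

# Hypothetical mean reaction times (ms) per block/trial type; values invented.
rt_single = np.array([420., 410., 430.])   # single-task blocks
rt_repeat = np.array([480., 470., 490.])   # repeat trials in mixed-task blocks
rt_switch = np.array([560., 540., 550.])   # switch trials in mixed-task blocks

# Switch cost: extra time to respond when the task just changed,
# relative to repeating it within the same mixed-task block.
switch_cost = rt_switch.mean() - rt_repeat.mean()

# Mixing cost: extra time on repeat trials merely because a switch
# *could* occur, relative to pure single-task blocks.
mixing_cost = rt_repeat.mean() - rt_single.mean()
```

With a long CTI the switch cost shrinks (preparation helps), while the mixing cost persists, which is the dissociation the ERP topographies were used to explain.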

Relevance: 90.00%

Publisher:

Abstract:

Abstract (English)

General background

Multisensory stimuli are easier to recognize, can improve learning and are processed faster than unisensory ones. As such, the ability of an organism to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of and interaction with the environment. A major question in the field is how the brain extracts and fuses relevant information to create a unified perceptual representation (but also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion derived from seminal single-cell animal studies in the superior colliculus, a subcortical structure shown to create a multisensory output differing from the sum of its unisensory inputs. At the cortical level, integration of multisensory information has traditionally been deferred to higher classical associative regions within the frontal, temporal and parietal lobes, after extensive processing within sensory-specific and segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now speak for multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated and within the initial stages of sensory processing.

The work presented in this thesis aims to study the neural basis and mechanisms of how the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, and speeded reaction time tasks.

Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions or varying depending on which body part was stimulated. Whether those divergences stem from task requirements and/or the need for spatial processing has not been firmly established.

Hypotheses and experimental results

In a first study, we hypothesized that early non-linear auditory-somatosensory neural response interactions are relevant to behavior. Performing a median split according to reaction time on a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (i.e. between 40-83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct link between early neural response interactions and behavior.

In a second study, we hypothesized that the relevance of spatial information for task performance has an impact on behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli. On the other hand, discrimination performance with probes, quantified using sensitivity (d'), is impaired following multisensory trials in general and significantly more so following misaligned multisensory trials.

In a third study, we hypothesized that behavioral improvements might vary depending on which body part is stimulated (the hand vs. the neck). Preliminary results suggest a possible dissociation between behavioral improvements and ERPs: RTs to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion

This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are impervious to top-down influences on spatial processing, suggesting their occurrence within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation of different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately permit us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.

Relevance: 90.00%

Publisher:

Abstract:

Detection and discrimination of visuospatial input involve at least the extraction, selection and encoding of relevant information, and decision-making processes that allow a response to be selected. These two operations are altered, respectively, by attentional mechanisms that change discrimination capacities, and by beliefs concerning the likelihood of uncertain events. Information processing is tuned by the attentional level, which acts like a filter on perception, while decision-making processes are weighted by the subjective probability of risk. In addition, it has been shown that anxiety can affect the detection of unexpected events through modification of the level of arousal. Consequently, the purpose of this study is to determine whether and how decision-making and brain dynamics are affected by anxiety. To investigate these questions, the performance of women with either a high (n = 12) or a low (n = 12) STAI-T score (State-Trait Anxiety Inventory; Spielberger, 1983) was examined in a visuospatial decision-making task in which subjects had to distinguish a target visual pattern from non-target patterns. The target pattern was a schematic image of furniture arranged to give the impression of a living room. Non-target patterns were created by either compressing or dilating the distances between objects. Target and non-target patterns were always presented in the same configuration. Preliminary behavioral results show no group difference in reaction time. In addition, visuospatial abilities were analyzed through signal detection theory, which quantifies perceptual decisions in the presence of uncertainty (Green and Swets, 1966). This theory treats detection of a stimulus as a decision-making process determined by the nature of the stimulus and by cognitive factors. Surprisingly, no group difference was observed in the d' index (the distance between the means of the signal and noise distributions) or the c index (the decision criterion, related to the likelihood ratio).
Comparison of event-related potentials (ERPs) reveals that brain dynamics nonetheless differ according to anxiety. Differences appear in component latencies, particularly a delay in anxious subjects over posterior electrode sites. However, these differences are compensated in later components by shorter latencies in anxious subjects compared with non-anxious ones. These opposite effects seem to indicate that the absence of a reaction-time difference relies on a compensatory attentional mechanism that tunes cortical activation in anxious subjects, who must nonetheless work harder to maintain performance.
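The d' and c indices compared between groups follow directly from hit and false-alarm rates via the inverse normal CDF. A minimal sketch using only the Python standard library (the example rates are invented, not the study's data):

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Sensitivity d' and decision criterion c from hit and
    false-alarm rates (standard equal-variance signal detection theory)."""
    z = NormalDist().inv_cdf            # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Example: 80% hits, 20% false alarms -> d' ≈ 1.68 with an unbiased criterion.
d, c = sdt_indices(0.80, 0.20)
```

In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g. by a half-trial correction), since `inv_cdf` is undefined at those extremes.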

Relevance: 90.00%

Publisher:

Abstract:

Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.

Relevance: 90.00%

Publisher:

Abstract:

In this procedure, subjects learn the spatial position of the one hole among many that allows them to escape from a large open field into their home cage. The arena is circular and can be rotated between trials so that no proximal landmark is permanently associated with the target hole. The task is thus similar to the Morris water maze procedure, since subjects must remember the position of the escape hole relative to extra-arena cues only. In addition, it allows studying the importance of olfactory cues such as scent marks in or around a hole. Since the motivation is to reach home and the motor requirement is low, this task provides a useful alternative to the Morris place navigation task for studying spatial orientation in weanling or senescent rats. Examples are given showing that various behavioural parameters provide a good estimate of how subjects learn this task.

Relevance: 90.00%

Publisher:

Abstract:

In the Morris water maze (MWM) task, proprioceptive information is likely to have poor accuracy due to movement inertia. Hence, in this condition, dynamic visual information providing information on linear and angular acceleration would play a critical role in spatial navigation. To investigate this assumption, we compared rats' spatial performance in the MWM and in the homing hole board (HB) task under 1.5 Hz stroboscopic illumination. In the MWM, rats trained in the stroboscopic condition needed more time than those trained in a continuous light condition to reach the hidden platform. They also showed little accuracy during the probe trial. In the HB task, in contrast, place learning remained unaffected by the stroboscopic light condition. The deficit in the MWM was thus complete, affecting both escape latency and discrimination of the reinforced area, and task specific. This dissociation confirms that dynamic visual information is crucial to spatial navigation in the MWM, whereas spatial navigation on solid ground is mediated by multisensory integration and is thus less dependent on visual information.

Relevance: 90.00%

Publisher:

Abstract:

Elderly individuals display a rapid age-related increase in the intraindividual variability (IIV) of their performance. This phenomenon could reflect subtle changes in frontal lobe integrity; however, structural studies in this field are still missing. To address this issue, we computed an IIV index for a simple reaction time (RT) task and performed magnetic resonance imaging (MRI), including voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS) analysis of diffusion tensor imaging (DTI), in 61 adults aged from 22 to 88 years. The age-related IIV increase was associated with decreased fractional anisotropy (FA) as well as increased radial (RD) and mean (MD) diffusivity in the main white matter (WM) fiber tracts. In contrast, axial diffusivity (AD) and grey matter (GM) density did not show any significant correlation with IIV. In multivariate models, only FA had an age-independent effect on IIV. These results reveal that WM but not GM changes partly mediate the age-related increase of IIV. They also show that the association between WM and IIV cannot be attributed solely to damage to frontal lobe circuits but concerns the majority of interhemispheric and intrahemispheric corticocortical connections.
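The abstract does not specify which IIV index was computed; a common choice in this literature is the intraindividual coefficient of variation (the SD of a person's RTs divided by their mean). A minimal sketch under that assumption, with invented RT values:

```python
import numpy as np

def iiv_icv(rts):
    """Intraindividual coefficient of variation: sample SD of an
    individual's RTs divided by their mean (one standard IIV index;
    the study's exact index is not specified in the abstract)."""
    rts = np.asarray(rts, dtype=float)
    return rts.std(ddof=1) / rts.mean()

# A more variable performer yields a higher index, independently of mean speed.
steady = iiv_icv([300, 310, 305, 295, 300])    # consistent RTs (ms)
variable = iiv_icv([250, 400, 280, 360, 300])  # erratic RTs (ms)
```

Dividing by the mean makes the index comparable across individuals whose overall response speed differs, which matters in lifespan samples like the 22-88 year cohort here.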

Relevance: 90.00%

Publisher:

Abstract:

OBJECTIVE: To examine the relationship between reward sensitivity and self-reported apathy in stroke patients and to investigate the neuroanatomical correlates of both reward sensitivity and apathy. METHODS: In this prospective study, 55 chronic stroke patients were administered a questionnaire to assess apathy and a laboratory task to examine reward sensitivity by measuring motivationally driven behavior ("reinforcement-related speeding"). Fifteen participants without brain damage served as controls for the laboratory task. Negative mood, working memory, and global cognitive functioning were also measured to determine whether reward insensitivity and apathy were secondary to cognitive impairments or negative mood. Voxel-based lesion-symptom mapping was used to explore the neuroanatomical substrates of reward sensitivity and apathy. RESULTS: Participants showed reinforcement-related speeding in the highly reinforced condition of the laboratory task. However, this effect was significant for the controls only. For patients, poorer reward sensitivity was associated with greater self-reported apathy (p < 0.05) beyond negative mood and after lesion size was controlled for. Neither apathy nor reward sensitivity was related to working memory or global cognitive functioning. Voxel-based lesion-symptom mapping showed that damage to the ventral putamen and globus pallidus, dorsal thalamus, and left insula and prefrontal cortex was associated with poorer reward sensitivity. The putamen and thalamus were also involved in self-reported apathy. CONCLUSIONS: Poor reward sensitivity in stroke patients with damage to the ventral basal ganglia, dorsal thalamus, insula, or prefrontal cortex constitutes a core feature of apathy. These results provide valuable insight into the neural mechanisms and brain substrate underlying apathy.