911 results for Higher-level visual processing


Relevance:

100.00%

Publisher:

Abstract:

Mild traumatic brain injury (mTBI) has complex effects on several brain functions, which can be difficult to assess and monitor. Visual problems and balance disorders are among the complaints frequently reported after mTBI. Moreover, these problems can continue to affect people who have sustained an mTBI long after the acute phase of the injury. However, conventional clinical assessments of vision and balance usually fail to objectify these symptoms, especially when they persist. Furthermore, to our knowledge, no longitudinal study has examined visual perceptual deficits as such, or balance disorders secondary to mTBI, in adults. The goal of this project was therefore to determine the nature and duration of the effects of such an injury on visual perception and postural stability by assessing mTBI and control adults over a one-year period. Exactly the same subjects took part in both experiments, which were run on the same days for each subject. The impact of mTBI on the visual perception of sinusoidal gratings defined by first- and second-order attributes was studied first. Fifteen adults diagnosed with mTBI were assessed 15 days, 3 months, and 12 months after their injury. Fifteen matched control adults were assessed at the same time points. Reaction times (RTs) for flicker detection and motion-direction discrimination were measured. The contrast levels of the first- and second-order stimuli were adjusted to give them comparable visibility, and the means, medians, standard deviations (SDs), and interquartile ranges (IQRs) of the RTs for correct responses were computed. Symptom levels were also assessed for comparison with the RT data. Overall, the RTs of mTBI participants were longer and more variable (larger SDs and IQRs) than those of controls. In addition, mTBI RTs were shorter for first-order than for second-order stimuli, and more variable for first-order than for second-order stimuli, in the motion-discrimination condition. These observations were repeated across the three sessions. Symptom levels in the mTBI group were higher than in controls and, despite improvement, this difference remained significant over the year following the injury. The second experiment was designed to assess the impact of mTBI on postural control. To do so, we measured postural sway amplitude along the anteroposterior axis and postural instability (using the root mean square (RMS) velocity of postural sway) while participants stood with feet together on a firm surface in five conditions: eyes closed, and in a three-dimensional virtual tunnel that was either static or oscillating sinusoidally in the anteroposterior direction at three different velocities. Balance measures derived from clinical tests, the Bruininks-Oseretsky Test of Motor Proficiency 2nd edition (BOT-2) and the Balance Error Scoring System (BESS), were also used.
Participants diagnosed with mTBI showed greater postural instability (greater RMS sway velocity) than control participants 2 weeks and 3 months after the injury, across all conditions. These mTBI-related balance deficits were no longer present one year after the injury. These results also suggest that the deficits in visual integration processes identified in the first experiment may have contributed to the balance deficits secondary to mTBI. Neither anteroposterior sway amplitude nor the measures derived from the clinical balance tests (BOT-2 and BESS) proved sensitive enough to quantify the postural deficit in mTBI subjects. Combining RT measures with the perception of specific stimulus properties proved to be both a measurement method that is particularly sensitive to the visuomotor anomalies secondary to mTBI and a precise tool for investigating the mechanisms underlying the anomalies that arise when the brain is exposed to mild trauma. Likewise, the postural instability measures proved sensitive enough to quantify the balance deficits secondary to mTBI. The development of screening tests based on these results and intended for the assessment of mTBI from its earliest stages therefore appears particularly worthwhile. It also seems essential to examine the relationships between such deficits and the performance of everyday activities, such as academic, professional, or sports activities, in order to determine the functional impact of these visuomotor and balance-control disorders.
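
As a purely illustrative sketch (not the analysis code used in the thesis), the snippet below shows one way to compute the two outcome measures described above: descriptive statistics of correct-response reaction times, and the RMS velocity of anteroposterior postural sway. The function names, sampling rate, and synthetic data are assumptions.

```python
import numpy as np

def rt_summary(rts_ms):
    """Descriptive statistics for correct-response reaction times (ms)."""
    rts = np.asarray(rts_ms, dtype=float)
    q75, q25 = np.percentile(rts, [75, 25])
    return {"mean": rts.mean(), "median": np.median(rts),
            "sd": rts.std(ddof=1), "iqr": q75 - q25}

def rms_sway_velocity(ap_displacement_cm, fs_hz=100.0):
    """Root-mean-square velocity of anteroposterior postural sway.

    ap_displacement_cm: 1-D array of anteroposterior positions (cm) sampled
    at fs_hz; velocity is approximated by finite differences.
    """
    x = np.asarray(ap_displacement_cm, dtype=float)
    velocity = np.diff(x) * fs_hz            # cm/s
    return np.sqrt(np.mean(velocity ** 2))   # RMS velocity, cm/s

# Illustrative calls on synthetic data:
rng = np.random.default_rng(0)
print(rt_summary(rng.normal(450.0, 60.0, size=200)))
print(rms_sway_velocity(np.cumsum(rng.normal(0.0, 0.02, size=3000)), fs_hz=100.0))
```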

Relevance:

100.00%

Publisher:

Abstract:

Animal color pattern phenotypes evolve rapidly. What influences their evolution? Because color patterns are used in communication, selection for signal efficacy, relative to the intended receiver's visual system, may explain and predict the direction of evolution. We investigated this in bowerbirds, whose color patterns consist of plumage, bower structure, and ornaments and whose visual displays are presented under predictable visual conditions. We used data on avian vision, environmental conditions, color pattern properties, and an estimate of the bowerbird phylogeny to test hypotheses about evolutionary effects of visual processing. Different components of the color pattern evolve differently. Plumage sexual dimorphism increased and then decreased, while overall (plumage plus bower) visual contrast increased. The use of bowers allows relative crypsis of the bird but increased efficacy of the signal as a whole. Ornaments do not elaborate existing plumage features but instead are innovations (new color schemes) that increase signal efficacy. Isolation between species could be facilitated by plumage but not ornaments, because we observed character displacement only in plumage. Bowerbird color pattern evolution is at least partially predictable from the function of the visual system and from knowledge of different functions of different components of the color patterns. This provides clues to how more constrained visual signaling systems may evolve.

Relevance:

100.00%

Publisher:

Abstract:

Various neuroimaging investigations have revealed that perception of emotional pictures is associated with greater visual cortex activity than perception of neutral pictures. It has further been proposed that threat-related information is rapidly processed, suggesting that the modulation of visual cortex activity should occur at an early stage. Additional studies have demonstrated that oscillatory activity in the gamma band range (40-100 Hz) is associated with threat processing. Magnetoencephalography (MEG) was used to investigate such activity during perception of task-irrelevant, threat-related versus neutral facial expressions. Our results demonstrated a bilateral reduction in gamma band activity for expressions of threat, specifically anger, compared with neutral faces in extrastriate visual cortex (BA 18) within 50-250 ms of stimulus onset. These results suggest that gamma activity in visual cortex may play a role in affective modulation of visual processing, in particular in the perception of threat cues.
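
As a rough, hedged sketch of the kind of analysis described above (not the authors' MEG pipeline), the snippet below estimates gamma-band (40-100 Hz) power of a single sensor within a 50-250 ms post-stimulus window using a band-pass filter and Hilbert envelope; the sampling rate, onset time, and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_power(signal, fs, band=(40.0, 100.0), window_s=(0.05, 0.25), t0_s=0.0):
    """Gamma-band power of one sensor in a post-stimulus window.

    signal: 1-D array, one epoch of one sensor, stimulus onset at t0_s (s).
    Band-pass filters to `band`, takes the Hilbert envelope, and averages
    the squared envelope within `window_s` seconds after onset.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, signal)))
    t = np.arange(signal.size) / fs - t0_s
    mask = (t >= window_s[0]) & (t < window_s[1])
    return np.mean(envelope[mask] ** 2)

# Illustrative call on synthetic data sampled at 1000 Hz, onset at 0.5 s:
fs = 1000.0
epoch = np.random.default_rng(1).standard_normal(int(1.5 * fs))
print(gamma_power(epoch, fs, t0_s=0.5))
```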

Relevance:

100.00%

Publisher:

Abstract:

Developmental learning disabilities such as dyslexia and dyscalculia have a high rate of co-occurrence in pediatric populations, suggesting that they share underlying cognitive and neurophysiological mechanisms. Dyslexia and other developmental disorders with a strong heritable component have been associated with reduced sensitivity to coherent motion stimuli, an index of visual temporal processing on a millisecond time-scale. Here we examined whether deficits in sensitivity to visual motion are evident in children who have poor mathematics skills relative to other children of the same age. We obtained psychophysical thresholds for visual coherent motion and a control task from two groups of children who differed in their performance on a test of mathematics achievement. Children with math skills in the lowest 10% in their cohort were less sensitive than age-matched controls to coherent motion, but they had statistically equivalent thresholds to controls on a coherent form control measure. Children with mathematics difficulties therefore tend to present a similar pattern of visual processing deficit to those that have been reported previously in other developmental disorders. We speculate that reduced sensitivity to temporally defined stimuli such as coherent motion represents a common processing deficit apparent across a range of commonly co-occurring developmental disorders.
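
A minimal sketch of one standard way to derive a motion-coherence threshold from psychophysical data is given below: a Weibull function is fitted to the proportion of correct direction judgements in a two-alternative task. This is illustrative only, not the specific staircase or fitting procedure used in the study, and the data values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(coherence, alpha, beta):
    """Weibull psychometric function for a 2AFC task (chance level = 0.5).
    alpha is the coherence giving roughly 82% correct; beta controls the slope."""
    return 1.0 - 0.5 * np.exp(-(coherence / alpha) ** beta)

def coherence_threshold(coherences, prop_correct):
    """Fit the Weibull function and return the threshold (alpha) and slope (beta)."""
    (alpha, beta), _ = curve_fit(weibull_2afc, coherences, prop_correct,
                                 p0=[0.2, 2.0],
                                 bounds=([1e-3, 0.1], [1.0, 10.0]))
    return alpha, beta

# Illustrative data: proportion of correct direction judgements per coherence level.
coherence_levels = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
proportion_correct = np.array([0.55, 0.62, 0.78, 0.95, 1.00])
print(coherence_threshold(coherence_levels, proportion_correct))
```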

Relevance:

100.00%

Publisher:

Abstract:

Attention defines our mental ability to select and respond to stimuli, internal or external, on the basis of behavioural goals in the presence of competing, behaviourally irrelevant, stimuli. The frontal and parietal cortices are generally agreed to be involved in attentional processing, in what is termed the 'fronto-parietal' network. The left parietal cortex has been seen as the site for temporal attentional processing, whereas the right parietal cortex has been seen as the site for spatial attentional processing. There is much debate about when the modulation of the primary visual cortex occurs: whether it is modulated in the feedforward sweep of processing or by feedback projections from extrastriate and higher cortical areas. MEG and psychophysical measurements were used to examine spatially selective covert attention, using dual-task and cue-based paradigms. It was found that the posterior parietal cortex (PPC), in particular the superior and inferior parietal lobules (SPL and IPL), was the main site of activation during these experiments, and that the left parietal lobe was activated more strongly than the right parietal lobe throughout. The levels of activation in both parietal and occipital areas were modulated in accordance with attentional demands. It is likely that spatially selective covert attention is dominated by the left parietal lobe, and that this takes the form of the proposed sensory-perceptual lateralization within the parietal lobes. Another form of lateralization is proposed, termed motor-processing lateralization, in which the side of dominance is determined by handedness and is reversed in left- relative to right-handers. In terms of the modulation of the primary visual cortex, it was found to be unlikely that V1 is modulated initially; rather, the modulation takes the form of feedback from higher extrastriate and parietal areas. This fits with the idea of preattentive visual processing, a commonly accepted idea that in itself argues against initial modulation of V1.

Relevance:

100.00%

Publisher:

Abstract:

GMA-grafted EPR samples were prepared in a Thermo-Haake internal mixer by free-radical melt grafting reactions in the absence (conventional system; EPR-g-GMA(CONV)) and presence of the reactive comonomer divinylbenzene, DVB (EPR-g-GMA(DVB)). The GMA homopolymer (poly-GMA), a major side-reaction product in the conventional system, was almost completely absent in the DVB-containing system, which also resulted in a much higher level of GMA grafting. A comprehensive microstructure analysis of the formed poly-GMA was performed based on one-dimensional H-1 and C-13 NMR spectroscopy, and the complete spectral assignments were supported by two-dimensional NMR techniques based on long-range two- and three-bond carbon-proton couplings from HMBC (Heteronuclear Multiple Bond Coherence) and one-bond carbon-proton couplings from HSQC (Heteronuclear Single Quantum Coherence), as well as the use of Distortionless Enhancement by Polarization Transfer (DEPT) NMR spectroscopy. The unambiguous analysis of the stereochemical configuration of poly-GMA was further used to help understand the microstructures of the GMA grafts obtained in the two different free-radical melt grafting reactions, the conventional and comonomer-containing systems. In the grafted GMA of the conventional system (EPR-g-GMA(CONV)), the methylene protons of the GMA were found to be sensitive to tetrad configurational sequences, and the results showed that 56% of the GMA sequence in the graft is in the atactic configuration and 42% in the syndiotactic configuration, whereas the poly-GMA was predominantly syndiotactic. The differences between the microstructures of the graft in the conventional (EPR-g-GMA(CONV)) and DVB-containing (EPR-g-GMA(DVB)) systems are also reported.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Both phonological (speech) and auditory (non-speech) stimuli have been shown to predict early reading skills. However, previous studies have failed to control for the level of processing required by tasks administered across the two types of stimuli. For example, phonological tasks typically tap explicit awareness (e.g., phoneme deletion), while auditory tasks usually measure implicit awareness (e.g., frequency discrimination). Therefore, the stronger predictive power of speech tasks may be due to their higher processing demands rather than the nature of the stimuli. Method: The present study uses novel tasks that control for level of processing (isolation, repetition and deletion) across speech (phonemes and nonwords) and non-speech (tones) stimuli. A total of 800 beginning readers at the onset of literacy tuition (mean age 4 years and 7 months) were assessed on these tasks as well as on word reading and letter knowledge in the first part of a three time-point longitudinal study. Results: Time 1 results reveal a significantly stronger association between letter-sound knowledge and all of the speech tasks than the non-speech tasks. Performance was better for phoneme than tone stimuli, and worse for deletion than for isolation and repetition across all stimuli. Conclusions: The results are consistent with phonological accounts of reading and suggest that the level of processing required by the task is less important than stimulus type in predicting the earliest stage of reading.
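
As a hedged illustration of the kind of association reported at Time 1 (not the study's actual analysis), the sketch below correlates scores on one task with letter-sound knowledge; the task names and data are invented, and a formal comparison of correlations sharing the same sample would require a dependent-correlations test such as Steiger's.

```python
import numpy as np
from scipy.stats import pearsonr

def association_with_letter_sound(task_scores, letter_sound_knowledge):
    """Pearson correlation between one task's scores and letter-sound
    knowledge, returned with its Fisher z-transform for rough comparison
    across tasks. (Because all tasks share one sample, a formal comparison
    would use a dependent-correlations test such as Steiger's.)"""
    r, p = pearsonr(task_scores, letter_sound_knowledge)
    return {"r": r, "p": p, "fisher_z": np.arctanh(r)}

# Illustrative use with synthetic scores for a speech (phoneme) task and a
# non-speech (tone) task:
rng = np.random.default_rng(5)
letters = rng.normal(size=100)
phoneme_task = letters * 0.6 + rng.normal(size=100)
tone_task = letters * 0.3 + rng.normal(size=100)
print(association_with_letter_sound(phoneme_task, letters))
print(association_with_letter_sound(tone_task, letters))
```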

Relevance:

100.00%

Publisher:

Abstract:

Ethylene-propylene rubber (EPR) functionalised with glycidyl methacrylate (GMA) (f-EPR) during melt processing in the presence of a co-monomer, such as trimethylolpropane triacrylate (Tris), was used to promote compatibilisation in blends of polyethylene terephthalate (PET) and f-EPR, and their characteristics were compared with those of PET/f-EPR reactive blends in which the f-EPR was functionalised with GMA via a conventional free-radical melt reaction (in the absence of a co-monomer). Binary blends of PET and f-EPR (with the two types of f-EPR prepared either in the presence or absence of the co-monomer) with various compositions (80/20, 60/40 and 50/50 w/w%) were prepared in an internal mixer. The blends were evaluated by their rheology (from changes in torque during melt processing and blending, reflecting melt viscosity, and from their melt flow rate), morphology (scanning electron microscopy, SEM), dynamic mechanical properties (DMA), Fourier transform infrared (FTIR) analysis, and a solubility (Molau) test. The reactive blends (PET/f-EPR) showed a marked increase in their melt viscosities in comparison with the corresponding physical (PET/EPR) blends (higher torque during melt blending), the extent of which depended on the amount of homopolymerised GMA (poly-GMA) present and the level of GMA grafting in the f-EPR. This increase was most probably accounted for by a reaction between the epoxy groups of GMA and the hydroxyl/carboxyl end groups of PET. Morphological examination by SEM showed a large improvement of phase dispersion, indicating reduced interfacial tension and compatibilisation, in both reactive blends, with the Tris-GMA-based blends showing an even finer morphology (these blends are characterised by the absence of poly-GMA and a higher level of grafted GMA in their f-EPR component compared with the conventional GMA-based blends). Examination of the DMA results for the reactive blends at different compositions showed that in both cases there was a smaller separation between the glass transition temperatures than in the corresponding physical blends, which pointed to some interaction or chemical reaction between f-EPR and PET. The DMA results also showed that the shifts in the Tgs of the Tris-GMA-based blends were slightly higher than for the conventional GMA-based blends. However, the overall tendency of the Tgs to approach each other was not significantly different in the two cases (e.g., in a 60/40 ratio the former blend shifted by up to 4.5 °C in each direction whereas in the latter blend the shifts were about 3 °C). These results would suggest that in these blends the SEM and DMA analyses are probing uncorrelatable morphological details. The evidence for the formation of an in situ graft copolymer between the f-EPR and PET during reactive blending was clearly shown by FTIR analysis of the separated phases of the Tris-GMA-based reactive blends, and the positive Molau test pointed to graft copolymerisation at the interface. A mechanism for the formation of the interfacial reaction during the reactive blending process is proposed.

Relevance:

100.00%

Publisher:

Abstract:

Because of attentional limitations, the human visual system can process for awareness and response only a fraction of the input received. Lesion and functional imaging studies have identified frontal, temporal, and parietal areas as playing a major role in the attentional control of visual processing, but very little is known about how these areas interact to form a dynamic attentional network. We hypothesized that the network communicates by means of neural phase synchronization, and we used magnetoencephalography to study transient long-range interarea phase coupling in a well studied attentionally taxing dual-target task (attentional blink). Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioral performance. Thus, we show how attentional limitations arise from the way in which the subsystems of the attentional network interact. The human brain faces an inestimable task of reducing a potentially overloading amount of input into a manageable flow of information that reflects both the current needs of the organism and the external demands placed on it. This task is accomplished via a ubiquitous construct known as “attention,” whose mechanism, although well characterized behaviorally, is far from understood at the neurophysiological level. Whereas attempts to identify particular neural structures involved in the operation of attention have met with considerable success (1-5) and have resulted in the identification of frontal, parietal, and temporal regions, far less is known about the interaction among these structures in a way that can account for the task-dependent successes and failures of attention. The goal of the present research was, thus, to unravel the means by which the subsystems making up the human attentional network communicate and to relate the temporal dynamics of their communication to observed attentional limitations in humans. A prime candidate for communication among distributed systems in the human brain is neural synchronization (for review, see ref. 6). Indeed, a number of studies provide converging evidence that long-range interarea communication is related to synchronized oscillatory activity (refs. 7-14; for review, see ref. 15). To determine whether neural synchronization plays a role in attentional control, we placed humans in an attentionally demanding task and used magnetoencephalography (MEG) to track interarea communication by means of neural synchronization. In particular, we presented 10 healthy subjects with two visual target letters embedded in streams of 13 distractor letters, appearing at a rate of seven per second. The targets were separated in time by a single distractor. This condition leads to the “attentional blink” (AB), a well studied dual-task phenomenon showing the reduced ability to report the second of two targets when an interval <500 ms separates them (16-18). Importantly, the AB does not prevent perceptual processing of missed target stimuli but only their conscious report (19), demonstrating the attentional nature of this effect and making it a good candidate for the purpose of our investigation. 
Although numerous studies have investigated factors, e.g., stimulus and timing parameters, that manipulate the magnitude of a particular AB outcome, few have sought to characterize the neural state under which “standard” AB parameters produce an inability to report the second target on some trials but not others. We hypothesized that the different attentional states leading to different behavioral outcomes (second target reported correctly or not) are characterized by specific patterns of transient long-range synchronization between brain areas involved in target processing. Showing the hypothesized correspondence between states of neural synchronization and human behavior in an attentional task entails two demonstrations. First, it needs to be demonstrated that cortical areas that are suspected to be involved in visual-attention tasks, and the AB in particular, interact by means of neural synchronization. This demonstration is particularly important because previous brain-imaging studies (e.g., ref. 5) only showed that the respective areas are active within a rather large time window in the same task, not that they are concurrently active and actually form an interactive network. Second, it needs to be demonstrated that the pattern of neural synchronization is sensitive to the behavioral outcome, specifically the ability to correctly identify the second of two rapidly succeeding visual targets.
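
As a minimal, hedged sketch of the core quantity behind such analyses (not the authors' MEG source-level pipeline), the snippet below computes a phase-locking value between two beta-band-filtered signals over time for a single pair of traces. In practice, phase locking is usually computed across trials at each time point; the band limits, sampling rate, and data here are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(13.0, 30.0)):
    """Phase-locking value between two signals in the beta band.

    Band-pass both signals, extract instantaneous phase via the Hilbert
    transform, and take the length of the mean phasor of the phase
    difference: 0 means no consistent phase relation, 1 means perfect locking.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Illustrative call on two synthetic traces sampled at 600 Hz that share a
# common 20 Hz component plus independent noise:
fs = 600.0
rng = np.random.default_rng(2)
t = np.arange(int(2 * fs)) / fs
common = np.sin(2 * np.pi * 20 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = common + 0.5 * rng.standard_normal(t.size)
print(phase_locking_value(x, y, fs))
```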

Relevance:

100.00%

Publisher:

Abstract:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered by brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine visually guided auditory learning, a problem with implications for the general question of how, in learning, the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, which is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
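
A small illustrative sketch (not the study's analysis code) of how the reported shift can be expressed: the change in mean auditory-only saccade endpoints after exposure, given both in degrees and as a fraction of the imposed 6-degree visual-auditory mismatch. The variable names and values are invented.

```python
import numpy as np

def calibration_shift(pre_endpoints_deg, post_endpoints_deg, offset_deg=6.0):
    """Shift of auditory-only saccade endpoints after exposure to visual
    feedback displaced by offset_deg, in degrees and as a fraction of the
    imposed visual-auditory mismatch."""
    shift = np.mean(post_endpoints_deg) - np.mean(pre_endpoints_deg)
    return shift, shift / offset_deg

# Illustrative endpoint values producing a shift of roughly 1.5 degrees,
# i.e. about 25% of the 6-degree offset, as in the range reported above:
pre = np.array([-0.4, 0.2, 0.1, -0.1])
post = pre + 1.5
print(calibration_shift(pre, post))
```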

My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. Almost all ascending auditory signals pass through the central nucleus of the inferior colliculus before reaching the forebrain, making it the major relay of the ascending auditory pathway. It is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and an attractive target for understanding stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
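
As a hedged sketch of the logic described above (not the published analysis), the snippet below computes the stimulation-induced bias at one IC site as the change in the probability of a 'higher' judgment on stimulated versus non-stimulated trials, and then relates site biases to best frequency relative to the reference; all names and values are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def stimulation_bias(resp_higher, stimulated):
    """Bias induced by microstimulation at one IC site.

    resp_higher: boolean array, True when the animal judged the probe
    'higher' than the reference. stimulated: boolean array marking the
    trials on which stimulation was paired with the probe.
    Returns the change in P('higher') attributable to stimulation.
    """
    resp_higher = np.asarray(resp_higher, dtype=bool)
    stimulated = np.asarray(stimulated, dtype=bool)
    return resp_higher[stimulated].mean() - resp_higher[~stimulated].mean()

# Illustrative single-site example:
rng = np.random.default_rng(6)
stim = rng.random(200) < 0.5
resp = rng.random(200) < np.where(stim, 0.65, 0.50)  # stimulation nudges judgments
print(stimulation_bias(resp, stim))

# Across sites, relate each site's bias to how its best frequency (BF)
# compares with the reference frequency used in the task (invented values):
biases = np.array([0.12, -0.05, 0.20, 0.01, -0.15])
bf_minus_ref_octaves = np.array([0.8, -0.2, 1.1, 0.0, -0.9])
print(spearmanr(bf_minus_ref_octaves, biases))
```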

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
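
Below is a minimal sketch, under my own assumptions rather than the thesis methods, of a common spike-timing measure for this kind of frequency-tagging analysis: the vector strength of spikes relative to each amplitude-modulation rate, which indicates which tagged sound source a neuron's spiking follows.

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike times relative to one AM rate.

    Each spike is mapped to its phase within the modulation cycle; the
    length of the mean resultant vector (0 to 1) indexes how strongly the
    neuron entrains to that modulation frequency.
    """
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times_s, dtype=float)
    return np.abs(np.mean(np.exp(1j * phases)))

# With two simultaneous AM sounds tagged at different rates, comparing the
# vector strength at each tag frequency indicates which source dominates the
# neuron's spiking (illustrative spike train and tag rates):
rng = np.random.default_rng(3)
spike_times = np.sort(rng.uniform(0.0, 1.0, size=200))
print(vector_strength(spike_times, 20.0), vector_strength(spike_times, 28.0))
```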

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevance:

100.00%

Publisher:

Abstract:

Emotion processing plays an essential role in interpersonal relationships. Deficits in recognizing emotions conveyed by facial and vocal expressions have been demonstrated following traumatic brain injury (TBI). However, most studies have not differentiated participants by TBI severity and have not assessed prerequisites essential to emotion processing, such as the ability to perceive facial and vocal features and, by extension, the ability to attend to them. No study has examined the processing of emotions conveyed by musical expressions, even though music is used as an intervention to address behavioural, cognitive, or affective needs in people with neurological impairments. It is therefore unknown whether the positive effects of musical intervention rely on preserved recognition of certain categories of emotions conveyed by musical expressions after TBI. The first study of this thesis assessed recognition of basic emotions (happiness, sadness, fear) conveyed by facial, vocal, and musical expressions in forty-one adults (10 moderate-severe TBI, 9 complicated mild TBI, 11 uncomplicated mild TBI, and 11 controls), using experimental tasks and perceptual control tasks. The results suggest a deficit in recognizing fear conveyed by facial expressions after moderate-severe TBI and complicated mild TBI, compared with people with uncomplicated mild TBI and without TBI. The deficit is not explained by an underlying perceptual disorder. The results further show preserved recognition of emotions conveyed by vocal and musical expressions after TBI, regardless of severity. Finally, despite a dissociation observed between performance on emotion-recognition tasks in the visual and auditory modalities, no correlation was found between vocal and musical expressions. The second study measured early (N1, N170) and later (N2) brain potentials in twenty-five adults (10 uncomplicated mild TBI, 1 complicated mild TBI, 3 moderate-severe TBI, and 11 controls) during the presentation of facial expressions conveying fear, neutrality, and happiness. The results suggest alterations in early attentional processing after TBI that weaken the subsequent processing of fear conveyed by facial expressions. In sum, the conclusions of this thesis refine our understanding of the processing of emotions conveyed by facial, vocal, and musical expressions after TBI as a function of severity. The results also clarify the origins of the deficits in processing emotions conveyed by facial expressions after TBI, which appear secondary to early attentional alterations. This thesis could contribute to the eventual development of emotion-focused interventions after TBI.

Relevance:

100.00%

Publisher:

Abstract:

Background: Gamma-band oscillations are prominently impaired in schizophrenia, but the nature of the deficit and its relationship to perceptual processes are unclear. Methods: 16 patients with chronic schizophrenia (ScZ) and 16 age-matched healthy controls completed a visual paradigm while magnetoencephalographic (MEG) data were recorded. Participants had to detect randomly occurring stimulus acceleration while viewing a concentric moving grating. MEG data were analyzed for spectral power (1-100 Hz) at sensor and source level to examine the brain regions involved in aberrant rhythmic activity, and for the contribution of differences in baseline activity to the generation of low- and high-frequency power. Results: Our data show reduced gamma-band power at the sensor level in schizophrenia patients during stimulus processing, while alpha-band activity and the baseline spectrum were intact. Differences in oscillatory activity correlated with reduced behavioral detection rates in the schizophrenia group and with higher scores on the “Cognitive Factor” of the Positive and Negative Syndrome Scale. Source reconstruction revealed that extrastriate (fusiform/lingual gyrus), but not striate (cuneus), visual cortices contributed to the reduced activity observed at the sensor level in ScZ patients. Importantly, differences in stimulus-related activity were not due to differences in baseline activity. Conclusions: Our findings highlight that MEG-measured high-frequency oscillations during visual processing can be robustly identified in ScZ. Our data further suggest impairments that involve dysfunctions in ventral-stream processing and a failure to increase gamma-band activity in a task context. Implications of these findings are discussed in the context of current theories of cortical-subcortical circuit dysfunctions and perceptual processing in ScZ.
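
As an illustrative, hedged sketch of the baseline-controlled comparison described above (not the study's MEG pipeline), the snippet below computes the relative change of band-limited power from a baseline segment to a stimulus segment using Welch spectra; the band limits, sampling rate, and data are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power_change(baseline, stimulus, fs, band=(40.0, 90.0)):
    """Relative change of band-limited power from baseline to stimulus.

    Computes Welch power spectra for the two segments of one sensor,
    averages power within `band`, and returns (stim - base) / base, so that
    differences in absolute baseline power do not masquerade as
    stimulus-related effects.
    """
    def band_mean(segment):
        f, pxx = welch(segment, fs=fs, nperseg=min(len(segment), int(fs)))
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[sel].mean()

    base, stim = band_mean(baseline), band_mean(stimulus)
    return (stim - base) / base

# Illustrative use on synthetic segments sampled at 1000 Hz:
rng = np.random.default_rng(4)
fs = 1000.0
print(band_power_change(rng.standard_normal(1000), rng.standard_normal(1000), fs))
```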

Relevance:

100.00%

Publisher:

Abstract:

This thesis is an investigation of structural brain abnormalities, as well as multisensory and unisensory processing deficits, in autistic traits and Autism Spectrum Disorder (ASD). To achieve this, structural and functional magnetic resonance imaging (fMRI) and psychophysical techniques were employed. ASD is a neurodevelopmental condition characterised by social communication and interaction deficits, as well as repetitive patterns of behaviour, interests and activities. These traits are thought to be present in the typical population. The Autism Spectrum Quotient questionnaire (AQ) was developed to assess the prevalence of autistic traits in the general population. Von dem Hagen et al. (2011) revealed a link between AQ and white matter (WM) and grey matter (GM) volume (using voxel-based morphometry). However, their findings revealed no difference in GM in areas associated with social cognition. Cortical thickness (CT) measurements are known to be a more direct measure of cortical morphology than GM volume. Therefore, Chapter 2 investigated the relationship between AQ scores and CT in the same sample of participants. This study showed that AQ scores correlated with CT in the left temporo-occipital junction, left posterior cingulate, right precentral gyrus and bilateral precentral sulcus, in a typical population. These areas were previously associated with structural and functional differences in ASD. Thus the findings suggest that, to some extent, autistic traits are reflected in brain structure in the general population. The ability to integrate auditory and visual information is crucial to everyday life, and results are mixed regarding how ASD influences audiovisual integration. To investigate this question, Chapter 3 examined the Temporal Integration Window (TIW), which indicates how precisely sight and sound need to be temporally aligned so that a unitary audiovisual event can be perceived. 26 adult males with ASD and 26 age- and IQ-matched typically developed males were presented with flash-beep (BF), point-light drummer, and face-voice (FV) displays with varying degrees of asynchrony and asked to make Synchrony Judgements (SJ) and Temporal Order Judgements (TOJ). Analysis of the data included fitting Gaussian functions as well as an Independent Channels Model (ICM) (Garcia-Perez & Alcala-Quintana, 2012). Gaussian curve fitting for SJs showed that the ASD group had a wider TIW, but for TOJ no group effect was found. The ICM supported these results, and its parameters indicated that the wider TIW for SJs in the ASD group was not due to sensory processing at the unisensory level, but rather to decreased temporal resolution at a decisional level of combining sensory information. Furthermore, when performing TOJ, the ICM revealed a smaller Point of Subjective Simultaneity (PSS; closer to physical synchrony) in the ASD group than in the TD group. Finding that audiovisual temporal processing is different in ASD encouraged us to investigate the neural correlates of multisensory as well as unisensory processing using functional magnetic resonance imaging (fMRI). Chapter 4 therefore investigated audiovisual, auditory and visual processing in ASD for simple BF displays and complex, social FV displays. During a block-design experiment, we measured the BOLD signal while 13 adults with ASD and 13 typically developed (TD) age-, sex- and IQ-matched adults were presented with audiovisual, auditory and visual information from BF and FV displays.
Our analyses revealed that processing of the audiovisual as well as the unisensory auditory and visual stimulus conditions, for both the BF and FV displays, was associated with reduced activation in ASD. The audiovisual, auditory and visual conditions of the FV stimuli revealed reduced activation in ASD in regions of the frontal cortex, while the BF stimuli revealed reduced activation in the lingual gyri. The inferior parietal gyrus revealed an interaction between the stimulus sensory condition of the BF stimuli and group. Conjunction analyses revealed smaller audiovisual-sensitive regions of the superior temporal cortex (STC) in ASD. Against our predictions, the STC did not reveal any activation differences, per se, between the two groups. However, a superior frontal area was shown to be sensitive to audiovisual face-voice stimuli in the TD group, but not in the ASD group. Overall, this study indicated differences in brain activity for audiovisual, auditory and visual processing of social and non-social stimuli in individuals with ASD compared to TD individuals. These results contrast with the previous behavioural findings, which suggested altered audiovisual integration yet intact auditory and visual processing in ASD. Our behavioural findings revealed audiovisual temporal processing deficits in ASD during SJ tasks; therefore, we investigated the neural correlates of SJ in ASD and TD controls. Similar to Chapter 4, we used fMRI in Chapter 5 to investigate audiovisual temporal processing in ASD in the same participants as recruited in Chapter 4. BOLD signals were measured while the ASD and TD participants were asked to make SJs on audiovisual displays at different levels of asynchrony: the participant's PSS, audio leading visual information (audio first), and visual leading audio information (visual first). Whereas no effect of group was found with BF displays, increased putamen activation was observed in ASD participants compared to TD participants when making SJs on FV displays. Investigating SJ on audiovisual displays in the bilateral superior temporal gyrus (STG), an area involved in audiovisual integration (see Chapter 4), we found no group differences and no interaction between group and level of audiovisual asynchrony. The investigation of different levels of asynchrony revealed a complex pattern of results, indicating a network of areas more involved in processing the PSS condition than audio first and visual first, as well as areas responding differently to audio first compared with visual first. These activation differences between audio first and visual first in different brain areas are consistent with the view that audio-leading and visual-leading stimuli are processed differently.
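
As a minimal sketch of the descriptive Gaussian fit mentioned above (not the Independent Channels Model of Garcia-Perez & Alcala-Quintana, 2012), the snippet below fits a Gaussian to synchrony-judgement data to estimate the PSS and a width parameter that indexes the temporal integration window; the SOA values and response proportions are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sj(soa_ms, amplitude, pss_ms, sigma_ms):
    """Gaussian model of synchrony-judgement data: proportion of
    'synchronous' responses as a function of audiovisual asynchrony (SOA).
    pss_ms is the point of subjective simultaneity; sigma_ms indexes the
    width of the temporal integration window."""
    return amplitude * np.exp(-((soa_ms - pss_ms) ** 2) / (2.0 * sigma_ms ** 2))

def fit_sj(soa_ms, prop_sync):
    """Fit the Gaussian and return its parameters as a dictionary."""
    params, _ = curve_fit(gaussian_sj, soa_ms, prop_sync,
                          p0=[0.9, 0.0, 150.0],
                          bounds=([0.0, -500.0, 10.0], [1.0, 500.0, 1000.0]))
    return dict(zip(["amplitude", "pss_ms", "sigma_ms"], params))

# Illustrative SJ data (negative SOA = audio leading):
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sync = np.array([0.05, 0.15, 0.55, 0.90, 0.95, 0.85, 0.45, 0.15, 0.05])
print(fit_sj(soa, p_sync))
```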

Relevance:

100.00%

Publisher:

Abstract:

The posterior parietal cortex (PPC) of primates represents a remarkable platform that has evolved over time to solve some of the computational challenges that we face in everyday life, such as sensorimotor integration, spatial attention, and motor planning. With the aim of further investigating the multifaceted functional characteristics of the medial PPC, we conducted three studies to explore the visuomotor, somatic, visual, and attention-related properties of two PPC areas: V6A, a visuomotor area that is part of the dorsomedial visual stream, and PE, an area strongly dominated by somatomotor input that resides mainly on the exposed surface of the superior parietal lobule. In the first study, we tested the impact of visual feedback on V6A grasp-related activity during arm movements towards objects of different shapes. Our results demonstrate that V6A is modulated by both grip type and visual information during grasping preparation and execution, with a predominance of cells influenced by grip type. In the second study, we explored the influence of depth and direction information on the reach-related activity of neurons in the so far largely neglected medial part of area PE. We observed a remarkable trend in the medial PPC, going from the joint coding of depth and direction signals caudally, in area V6A, to a largely segregated processing of the two signals rostrally, in area PE. In the third study, we used a combined fMRI-electrophysiology experiment to investigate the neuronal mechanisms underlying covert shifts of attention in area V6A. Our preliminary results reveal that half of the cells showed shift-selective activity when the monkey covertly shifted its attention towards the receptive field. Altogether, these findings highlight the role of the medial PPC in integrating information coming from different sources (visual, somatosensory and motor) and emphasize the involvement of action-related regions of the dorsomedial visual stream in higher-level cognitive functions.