930 results for Visual perception in infants
Abstract:
With the present study we aimed to analyze the relationship between infants' behavior and their visual evoked potential (VEP) response. Specifically, we sought to verify whether the VEP response differs between sleeping and awake infants and whether, in both groups, an association between VEP components and neurobehavioral outcome could be identified. To do so, thirty-two full-term, healthy infants of approximately 1 month of age were assessed with an unpatterned flash VEP paradigm presented at two different intensities, as well as with a neurobehavioral scale. However, only 18 infants completed both assessments, and these therefore constitute the sample included in both analyses. Infants displayed a mature neurobehavioral outcome, as expected for their age. We observed that the P2 and N3 components were present in both sleeping and awake infants. Differences between intensities were found for P2 amplitude, but only in awake infants. Regression analysis showed that N3 amplitude predicted adequate social-interactive and internal regulatory behavior in infants who were awake during stimulus presentation. Given that social orientation and regulatory behaviors are fundamental to social-like behavior in 1-month-old infants, this study provides an important approach for assessing physiological biomarkers (VEPs) and their relation to social behavior very early in postnatal development. Moreover, it underscores the importance of the infant's state when studying differences in visual threshold processing and their association with behavioral outcome.
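For readers unfamiliar with how such component measures are typically derived, the following is a minimal sketch (not the authors' pipeline) of extracting P2 and N3 peak amplitudes from averaged flash-VEP epochs and regressing a behavioral score on N3 amplitude; the time windows, sampling rate, channel, and data are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): extract P2 and N3 peak
# amplitudes from averaged flash-VEP epochs and regress a behavioral
# score on N3 amplitude. Windows, sampling rate, and data are placeholders.
import numpy as np
from scipy import stats

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch time axis (s)
rng = np.random.default_rng(0)
# averaged VEP per infant at one occipital channel: (n_infants, n_samples)
veps = rng.normal(0, 1, (18, t.size))
behavior = rng.normal(5, 1, 18)            # e.g., regulatory-behavior score

def peak_amplitude(vep, window, sign):
    """Largest positive (sign=+1) or negative (sign=-1) deflection in a window."""
    mask = (t >= window[0]) & (t <= window[1])
    segment = vep[mask] * sign
    return segment.max() * sign

# Assumed component windows for 1-month-olds (illustrative only).
p2 = np.array([peak_amplitude(v, (0.15, 0.30), +1) for v in veps])
n3 = np.array([peak_amplitude(v, (0.30, 0.45), -1) for v in veps])

# Simple linear regression: does N3 amplitude predict the behavioral score?
slope, intercept, r, p, se = stats.linregress(n3, behavior)
print(f"N3 -> behavior: slope={slope:.2f}, r={r:.2f}, p={p:.3f}")
```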
Abstract:
This thesis is concerned with the spatio-temporal brain mechanisms of visual food perception as investigated by electrical neuroimaging. Given the increasing prevalence of obesity and its associated challenges for public health care, there is a need to better understand the behavioral and brain processes underlying food perception and food-based decision-making. The first study (Study A) of this thesis examined the role of repeated exposure to visual food cues. In our everyday lives we constantly and repeatedly encounter food, and these exposures influence our food choices and preferences. In Study A, we therefore applied electrical neuroimaging analyses of visual evoked potentials to investigate the spatio-temporal brain dynamics linked to the repeated viewing of high- and low-energy food cues (published manuscript: "The role of energetic value in dynamic brain response adaptation during repeated food image viewing" (Lietti et al., 2012)). We found that repetitions differentially affect behavioral and brain mechanisms when high-energy foods, as opposed to low-energy foods and non-food control objects, are viewed. The representation of high-energy food remained invariant between initial and repeated exposures, indicating that the sight of energy-dense food induces less behavioral and neural adaptation than the sight of low-energy food and non-food control objects. We discuss this finding in the context of the higher salience (due to greater motivation and higher reward or hedonic valuation) of energy-dense food, which likely generates a more mnemonically stable representation. In turn, this more invariant representation of energy-dense food may (partially) explain why these foods are over-consumed despite detrimental health consequences. In Study B we investigated food responsiveness in patients who had undergone Roux-en-Y gastric bypass surgery to treat severe obesity. This type of gastric bypass surgery is known to alter not only food appreciation but also the secretion patterns of adipokines and gut peptides. Study B aimed at a comprehensive, interdisciplinary investigation of differences along the gut-brain axis in bypass-operated patients as opposed to weight-matched non-operated controls. On the one hand, the spatio-temporal brain dynamics of the visual perception of high- vs. low-energy foods under differing states of motivation towards food intake (i.e. pre- and post-prandial) were assessed and compared between groups. On the other hand, peripheral gut hormone measures were taken in the pre- and post-prandial nutrition states and compared between groups. To evaluate alterations in responsiveness along the gut-brain axis related to gastric bypass surgery, correlations between the two sets of measures were compared between the participant groups. The results revealed that Roux-en-Y gastric bypass surgery alters the spatio-temporal brain dynamics of the perception of high- and low-energy food cues, as well as the responsiveness along the gut-brain axis. The potential role of these response alterations is discussed in relation to previously observed changes in physiological factors and food intake behavior after Roux-en-Y gastric bypass surgery. In doing so, we highlight potential behavioral, neural and endocrine (i.e. gut hormone) targets for the future development of intervention strategies for deviant eating behavior and obesity.
Together, the studies showed that the visual representation of foods in the brain is plastic and that modulations in neural activity are already evident at early stages of visual processing. Different factors of influence, such as repeated exposure, Roux-en-Y gastric bypass surgery, motivation (nutrition state), and the energy density of the visually perceived food, were identified.
Abstract:
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
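The additive model at issue here treats a nonlinear audiovisual interaction as any deviation of the response to the AV pair from the sum of the unisensory responses. The sketch below illustrates that contrast on placeholder ERP arrays; the sampling rate and channel count are assumptions, and the 60-95 ms window is taken from the abstract, not from the authors' code.

```python
# Minimal sketch of the additive-model contrast discussed above:
# a nonlinear interaction is any deviation of the AV response from the
# sum of the unisensory A and V responses. Arrays are placeholders.
import numpy as np

fs = 512                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.3, 1 / fs)              # 0-300 ms post-stimulus
n_channels = 64
rng = np.random.default_rng(1)

erp_a  = rng.normal(0, 1, (n_channels, t.size))   # auditory-alone ERP
erp_v  = rng.normal(0, 1, (n_channels, t.size))   # visual-alone ERP
erp_av = rng.normal(0, 1, (n_channels, t.size))   # audiovisual-pair ERP

# Interaction term: AV - (A + V). Zero everywhere => purely additive
# (no interaction); negative values => subadditive, positive => superadditive.
interaction = erp_av - (erp_a + erp_v)

# Inspect the 60-95 ms window highlighted in the abstract.
win = (t >= 0.060) & (t <= 0.095)
print("mean interaction, 60-95 ms:", interaction[:, win].mean())
```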
Abstract:
Tone mapping is the problem of compressing the range of a high-dynamic-range image so that it can be displayed on a low-dynamic-range screen without losing or introducing novel details: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
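As a hedged illustration of what a global visual-adaptation stage can look like, the sketch below applies a Naka-Rushton-style cone-response compression with a luminance-dependent semi-saturation constant; it is a generic example of this family of operators, not the authors' exact method, and all parameter values are assumptions.

```python
# Illustrative global tone-mapping stage only: a Naka-Rushton-style
# cone-response compression with saturation, not the authors' exact operator.
import numpy as np

def global_tone_map(hdr, n=0.74, eps=1e-8):
    """Compress HDR luminance into [0, 1) with a Naka-Rushton-type curve.

    hdr : array of linear luminance values (arbitrary scale, > 0)
    n   : exponent controlling the steepness of the response curve
    The semi-saturation constant is set to the geometric mean luminance,
    a common choice for adapting to overall scene brightness.
    """
    log_mean = np.exp(np.mean(np.log(hdr + eps)))   # adaptation level
    return hdr**n / (hdr**n + log_mean**n + eps)

# Example: a synthetic scene spanning ~5 orders of magnitude of luminance.
hdr = np.logspace(-2, 3, 10)
ldr = global_tone_map(hdr)
print(np.round(ldr, 3))   # monotone, compressed into [0, 1), saturating at the top
```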
Cognitive disorganisation in schizotypy is associated with deterioration in visual backward masking.
Abstract:
To understand the causes of schizophrenia, a search for stable markers (endophenotypes) is ongoing. In previous years, we have shown that the shine-through visual backward masking paradigm meets the most important characteristics of an endophenotype. Here, we tested for differences in masking performance between healthy students with low and high schizotypy scores, as determined by the self-report O-LIFE questionnaire, which assesses schizotypy along three dimensions: positive schizotypy (unusual experiences), cognitive disorganisation, and negative schizotypy (introvertive anhedonia). Forty participants performed the shine-through backward masking task and a classical cognitive test, the Wisconsin Card Sorting Task (WCST). We found that visual backward masking was impaired for students scoring high as compared to low on the cognitive disorganisation dimension, whereas the positive and negative schizotypy dimensions showed no link to masking performance. We also found group differences between students scoring high and low on the cognitive disorganisation factor for the WCST. These findings indicate that the shine-through paradigm is sensitive to differences in schizotypy that are closely linked to the pathological expression of schizophrenia.
Abstract:
The current state of empirical investigation treats consciousness as an all-or-none phenomenon. However, a recent theoretical account opens up this perspective by proposing a partial level (between nil and full) of conscious perception. In the well-studied case of single-word reading, short-lived exposure can trigger incomplete word-form recognition, wherein letters fall short of forming a whole word in one's conscious perception, thereby hindering word-meaning access and report. Hence, the processing from incomplete to complete word-form recognition straightforwardly mirrors a transition from partial to full-blown consciousness. We therefore hypothesized that this putative functional bottleneck to consciousness (i.e. the perceptual boundary between partial and full conscious perception) would emerge at a key hub region for word-form recognition during reading, namely the left occipito-temporal junction. We applied a real-time staircase procedure and titrated subjective reports at the threshold between partial (letters) and full (whole word) conscious perception. This experimental approach allowed us to collect trials with identical physical stimulation yet reflecting distinct levels of perceptual experience. Oscillatory brain activity was monitored with magnetoencephalography and revealed that the transition from partial to full word-form perception was accompanied by alpha-band (7-11 Hz) power suppression in the posterior left occipito-temporal cortex. This modulation of rhythmic activity extended anteriorly towards the visual word form area (VWFA), a region whose selectivity for word-forms in perception is highly debated. The current findings provide electrophysiological evidence for a functional bottleneck to consciousness, thereby empirically instantiating the recently proposed partial perspective on consciousness. Moreover, the findings provide an entirely new outlook on the functioning of the VWFA as a late bottleneck to full-blown conscious word-form perception.
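The real-time staircase mentioned above titrates exposure so that reports hover at the partial/full perception boundary. The following is a minimal sketch of a generic 1-up/1-down staircase on exposure duration with a simulated observer; the step size, simulated threshold, and decision rule are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch of a generic 1-up/1-down staircase on exposure duration,
# illustrating how reports could be titrated around the partial/full
# word-perception threshold. All parameters are illustrative.
import math
import random

def run_staircase(n_trials=60, start_ms=80, step_ms=8, threshold_ms=55):
    duration = start_ms
    history = []
    for _ in range(n_trials):
        # Simulated observer: longer exposures are more likely to yield a
        # "whole word" report (logistic around an assumed threshold).
        p_full = 1.0 / (1.0 + math.exp(-(duration - threshold_ms) / 5.0))
        saw_whole_word = random.random() < p_full
        history.append((duration, saw_whole_word))
        # 1-up/1-down rule: shorten the exposure after "word" reports,
        # lengthen it after "letters only" reports; converges near the
        # 50% point of the observer's psychometric function.
        duration += -step_ms if saw_whole_word else step_ms
        duration = max(10, duration)
    return history

trials = run_staircase()
print("last 10 exposure durations (ms):", [d for d, _ in trials[-10:]])
```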
Abstract:
Rats, like other crepuscular animals, have excellent auditory capacities and discriminate well between different sounds [Heffner HE, Heffner RS, Hearing in two cricetid rodents: wood rats (Neotoma floridana) and grasshopper mouse (Onychomys leucogaster). J Comp Psychol 1985;99(3):275-88]. However, most of the experimental literature on spatial orientation emphasizes almost exclusively the use of visual landmarks [Cressant A, Muller RU, Poucet B. Failure of centrally placed objects to control the firing fields of hippocampal place cells. J Neurosci 1997;17(7):2531-42; and Goodridge JP, Taube JS. Preferential use of the landmark navigational system by head direction cells in rats. Behav Neurosci 1995;109(1):49-61]. To address the important issue of whether rats are able to perform a place navigation task relative to auditory beacons, we designed a place learning task in the water maze. We controlled cue availability by conducting the experiment in total darkness. Three auditory cues did not allow place navigation, whereas three visual cues in the same positions did support place navigation. One auditory beacon directly associated with the goal location did not support taxon navigation (a beacon strategy allowing the animal to find the goal just by swimming toward the cue). Replacing the auditory beacons with a single visual beacon did support taxon navigation. A multimodal configuration of two auditory cues and one visual cue allowed correct place navigation. Deleting either the two auditory cues or the single visual cue disrupted spatial performance. Thus rats can combine information from different sensory modalities to achieve a place navigation task. In particular, auditory cues support place navigation when associated with a visual one.
Abstract:
We report the case study of a French-Spanish bilingual dyslexic girl, MP, who exhibited a severe visual attention (VA) span deficit but preserved phonological skills. Behavioural investigation showed a severe reduction of reading speed for both single items (words and pseudo-words) and texts in the two languages. However, performance was more affected in French than in Spanish. MP was administered an intensive VA span intervention programme. Pre-post intervention comparison revealed a positive effect of the intervention on her VA span abilities. The intervention further transferred to reading: it primarily resulted in faster identification of regular and irregular words in French. The effect of the intervention was rather modest in Spanish, which showed only a tendency towards faster word reading. Text reading improved in the two languages, with a stronger effect in French, but pseudo-word reading did not improve in either French or Spanish. The overall results suggest that VA span intervention may primarily enhance the fast global reading procedure, with stronger effects in French than in Spanish. MP underwent two fMRI sessions to explore her brain activations before and after VA span training. Prior to the intervention, fMRI assessment showed that only the striate and extrastriate visual cortices were activated, and none of the regions typically involved in VA span. Post-training fMRI revealed increased activation of the superior and inferior parietal cortices. Comparison of pre- and post-training activations revealed a significant activation increase of the superior parietal lobes (BA 7) bilaterally. Thus, we show that a specific VA span intervention not only modulates reading performance but also results in increased brain activity within the superior parietal lobes, known to house VA span abilities. Furthermore, the positive effects of VA span intervention on reading suggest that the ability to process multiple visual elements simultaneously is one cause of successful reading acquisition.
Abstract:
Participants in an immersive virtual environment interact with the scene from an egocentric point of view, that is, from where their bodies appear to be located, rather than from outside as if looking through a window. People interact through normal body movements, such as head-turning, reaching, and bending, and, within the tracking limitations, move through the environment or effect changes within it in natural ways.
Abstract:
This work investigates novel alternative means of interaction in a virtual environment (VE). We analyze whether humans can remap established body functions to learn to interact with digital information in an environment that is cross-sensory by nature and uses vocal utterances to influence (abstract) virtual objects. We thus establish a correlation among learning, control of the interface, and the perceived sense of presence in the VE. The application enables intuitive interaction by mapping actions (the prosodic aspects of the human voice) to a certain response (i.e., visualization). A series of single-user and multiuser studies shows that users can gain control of the intuitive interface and learn to adapt to new and previously unseen tasks in VEs. Despite the abstract nature of the presented environment, presence scores were generally very high.
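As an illustration of the kind of prosody-to-visualization mapping described here (not the authors' system), the sketch below extracts loudness and a rough pitch estimate from a short audio frame and maps them onto the scale and hue of an abstract virtual object; the feature choices, ranges, and mappings are all assumptions.

```python
# Illustration only (not the authors' system): map two prosodic features of
# a voice frame, loudness (RMS) and a rough pitch estimate (autocorrelation
# peak), onto visual parameters of an abstract object.
import numpy as np

def prosody_to_visuals(frame, fs=16000):
    """Map one audio frame to (scale, hue) of a virtual object."""
    rms = np.sqrt(np.mean(frame**2))                 # loudness
    # Crude pitch estimate: lag of the autocorrelation peak in 80-400 Hz.
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lo, hi = int(fs / 400), int(fs / 80)
    lag = lo + int(np.argmax(ac[lo:hi]))
    pitch = fs / lag
    scale = 1.0 + 5.0 * rms                          # louder -> larger object
    hue = np.clip((pitch - 80) / (400 - 80), 0, 1)   # higher pitch -> warmer hue
    return scale, hue

# Example with a synthetic 200 Hz tone at moderate amplitude.
t = np.arange(0, 0.05, 1 / 16000)
frame = 0.3 * np.sin(2 * np.pi * 200 * t)
print(prosody_to_visuals(frame))
```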
Abstract:
Individuals with vestibular dysfunction may experience visual vertigo (VV), in which symptoms are provoked or exacerbated by excessive or disorientating visual stimuli (e.g. supermarkets). VV can significantly improve when customized vestibular rehabilitation exercises are combined with exposure to optokinetic stimuli. Virtual reality (VR), which immerses patients in realistic, visually challenging environments, has also been suggested as an adjunct to vestibular rehabilitation to improve VV symptoms. This pilot study compared the responses of sixteen patients with unilateral peripheral vestibular disorder randomly allocated to a rehabilitation regime incorporating exposure to a static (Group S) or dynamic (Group D) VR environment. Participants practiced vestibular exercises, twice weekly for four weeks, inside a static (Group S) or dynamic (Group D) virtual crowded square environment presented in an immersive projection theatre (IPT), and received a vestibular exercise program to practice on the days they did not attend the clinic. A third group (Group D1) completed both the static and dynamic VR training. Treatment response was assessed with the Dynamic Gait Index and questionnaires concerning symptom triggers and psychological state. At the final assessment, significant between-group differences in VV symptoms were noted for Groups D (p = 0.001) and D1 (p = 0.03) compared to Group S, with the former two showing significant improvements of 59.2% and 25.8%, respectively, compared to 1.6% for the latter. Depression scores improved only for Group S (p = 0.01), while a trend towards significance was noted for Group D regarding anxiety scores (p = 0.07). Conclusion: Exposure to dynamic VR environments should be considered as a useful adjunct to vestibular rehabilitation programs for patients with peripheral vestibular disorders and VV symptoms.
Abstract:
Observers are often required to adjust their actions to objects that change their speed. However, no evidence for a direct sense of acceleration has been found so far. Instead, observers seem to detect changes in velocity within a temporal window when confronted with motion in the frontal plane (2D motion). Furthermore, recent studies suggest that motion-in-depth is detected by tracking changes of position in depth. Therefore, in order to sense acceleration in depth, a kind of second-order computation would have to be carried out by the visual system. In two experiments, we show that observers misperceive the acceleration of head-on approaches, at least within the ranges we used (600-800 ms), resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular and binocular), the response pattern conformed to a constant-velocity strategy. However, when binocular information was available, the overestimation was greatly reduced.
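A small worked example, with made-up numbers, of why ignoring acceleration (a constant-velocity strategy) leads to overestimating the arrival time of an accelerating head-on approach:

```python
# Worked illustration (made-up numbers) of why a constant-velocity strategy
# overestimates arrival time when a head-on approach is actually accelerating.
import math

d0 = 20.0    # initial distance to the observer (m)
v0 = 10.0    # initial approach speed (m/s)
a  = 5.0     # constant acceleration toward the observer (m/s^2)

# Estimate assuming constant velocity: t = d0 / v0
t_cv = d0 / v0

# True arrival time with acceleration: solve d0 = v0*t + 0.5*a*t^2
t_true = (-v0 + math.sqrt(v0**2 + 2 * a * d0)) / a

print(f"constant-velocity estimate: {t_cv:.2f} s")    # 2.00 s
print(f"actual arrival time:        {t_true:.2f} s")  # ~1.46 s, so the estimate is too late
```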