985 results for Visual Cues
Abstract:
This study analyzed the spatial memory capacities of rats in darkness with visual and/or olfactory cues through ontogeny. Tests were conducted on the homing board, where rats had to find the correct escape hole. Four age groups (24 days, 48 days, 3-6 months, and 12 months) were trained under 3 conditions: (a) 3 identical light cues; (b) 5 different olfactory cues; and (c) both types of cues, followed by removal of the olfactory cues. Results indicate that immature rats take olfactory information into account first and are unable to orient with the help of discrete visual cues alone. Olfaction enables the use of visual information by 48-day-old rats. Visual information predominantly supports spatial cognition in adult and 12-month-old rats. These results point to cooperation between vision and olfaction for place navigation during ontogeny in rats.
Abstract:
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor that measures vergence and accommodation simultaneously and objectively in a wide range of participants of all ages while they fixate targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion, blur by using either a Gabor patch or a detailed picture target, and looming by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive, visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically when disparity was not available. Blur had a clinically significant effect only when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that the relative weighting of near cues is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
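The abstract does not report how relative cue weights were quantified; one plausible approach is to regress response gain on cue availability across the eight present/absent cue combinations. A minimal sketch of that idea in Python, where the condition coding, placeholder gains, and linear model are illustrative assumptions rather than the authors' analysis:

```python
# Hypothetical sketch of estimating relative near-cue weights from responses
# measured under cue-present/absent conditions. The placeholder gains below
# are invented for illustration; they are not the study's data.
import numpy as np

# Each row codes one condition: [disparity, blur, proximity] (1 = cue available).
conditions = np.array([[d, b, p] for d in (0, 1) for b in (0, 1) for p in (0, 1)])

# Placeholder mean response gains (response/stimulus demand) per condition.
gains = np.array([0.05, 0.10, 0.15, 0.20, 0.80, 0.85, 0.90, 1.00])

# Least-squares fit: gain ~ w0 + w_disparity*d + w_blur*b + w_proximity*p
X = np.column_stack([np.ones(len(conditions)), conditions])
weights, *_ = np.linalg.lstsq(X, gains, rcond=None)

w0, w_disp, w_blur, w_prox = weights
print(f"baseline={w0:.2f}, disparity={w_disp:.2f}, "
      f"blur={w_blur:.2f}, proximity={w_prox:.2f}")
```

With real per-condition response gains, the fitted coefficients would indicate how much each cue contributes to the response when it is available.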
Abstract:
Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm), Ego (each constant always appeared in a fixed direction from the start arm), or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. Fornix and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as the controls did; therefore, both lesion groups must have encoded the spatial cues and incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.
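As an aid to readers, the constant-negative trial structure described above can be sketched in a few lines; the scene labels, arm coding, and helper functions are hypothetical illustrations, not the authors' materials:

```python
# Illustrative sketch of one constant-negative trial with an added allocentric
# (Allo) cue: the constant scene always appears in the same maze arm, and
# choosing the variable scene is rewarded. Names here are assumptions.
import random

ARMS = ["left", "right"]                 # the two choice arms of the Y-maze

def allo_trial(constant, constant_arm, variables):
    """Constant always in its fixed arm (Allo cue); a variable in the other."""
    variable = random.choice(variables)
    other_arm = ARMS[1 - ARMS.index(constant_arm)]
    layout = {constant_arm: constant, other_arm: variable}
    return layout, variable              # reward follows a choice of the variable

def nc_trial(constant, variables):
    """No-Cue control: the constant's arm is random from trial to trial."""
    return allo_trial(constant, random.choice(ARMS), variables)

layout, rewarded = allo_trial("sceneA", "left", [f"var{i}" for i in range(20)])
print(layout, "-> rewarded choice:", rewarded)
```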
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball's bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants, and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
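The kinematic prediction at the heart of this design, that a surface's slant determines how a ball should rebound from it, can be made concrete with an idealized mirror-reflection bounce model. This is a sketch under an elastic-bounce assumption, not the authors' simulation:

```python
# Illustrative 2D bounce model (not the authors' implementation): an ideal
# elastic bounce reflects the ball's velocity about the surface normal, so
# the surface's slant fully determines the rebound direction.
import numpy as np

def bounce(velocity, slant_deg):
    """Reflect an incoming 2D velocity off a surface tilted by slant_deg."""
    theta = np.radians(slant_deg)
    # Unit normal of a surface rotated slant_deg away from horizontal.
    normal = np.array([-np.sin(theta), np.cos(theta)])
    # Mirror reflection: v' = v - 2 (v . n) n
    return velocity - 2.0 * np.dot(velocity, normal) * normal

incoming = np.array([1.0, -1.0])        # moving rightward and downward
print(bounce(incoming, 0.0))            # flat surface: [1. 1.]
print(bounce(incoming, 10.0))           # a slanted surface deflects the rebound
```

Under such a model, a rebound inconsistent with the visually signaled slant implies a different physical slant, which is the discrepancy the first group apparently resolved by recalibrating perceived slant, unless spin offers an alternative explanation, as in the second group.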
Abstract:
The goal of this study was to investigate the effects of manipulating the characteristics of a visual stimulus on postural control in dyslexic children. A total of 18 dyslexic and 18 non-dyslexic children stood upright inside a moving room, as still as possible, and looked at a target under different conditions of distance between the participant and the moving room's frontal wall (25-150 cm) and of vision (full and central). The first trial was performed without vision (baseline). Four trials were then performed in which the room remained stationary, and eight trials with the room moving, each lasting 60 s. Mean sway amplitude, coherence, relative phase, and angular deviation were calculated. The results revealed that dyslexic children swayed with larger magnitude in both the stationary and moving conditions. When the room remained stationary, all children showed larger body sway magnitude at the 150 cm distance. Dyslexic children showed larger body sway magnitude in the central compared to the full vision condition. In the moving condition, body sway magnitude was similar between dyslexic and non-dyslexic children, but the coupling between visual information and body sway was weaker in dyslexic children. Moreover, in the absence of peripheral visual cues, induced body sway in dyslexic children was temporally delayed relative to the visual stimulus. Taken together, these results indicate that poor postural control performance in dyslexic children is related to how sensory information is acquired from the environment and used to produce postural responses. In conditions in which sensory cues are less informative, dyslexic children take longer to process sensory stimuli in order to obtain precise information, which leads to performance deterioration.
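The coupling measures named above, coherence and relative phase between room motion and body sway, can be estimated from the two time series with standard spectral methods. A minimal sketch on synthetic signals; the sampling rate, driving frequency, and signal values are placeholders rather than the study's data:

```python
# Illustrative computation of the coupling measures named in the abstract:
# coherence and relative phase between room displacement and body sway.
# The signals and sampling rate below are synthetic placeholders.
import numpy as np
from scipy import signal

fs = 100.0                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)      # one 60-s trial, as in the study
room = np.sin(2 * np.pi * 0.2 * t)                      # 0.2-Hz room motion
sway = 0.5 * np.sin(2 * np.pi * 0.2 * t - 0.6) + 0.1 * np.random.randn(t.size)

# Coherence: strength of linear coupling per frequency (0 to 1).
f, coh = signal.coherence(room, sway, fs=fs, nperseg=1024)

# Relative phase at the driving frequency, from the cross-spectral density;
# with scipy's convention a negative angle means sway lags the room stimulus.
f_csd, Pxy = signal.csd(room, sway, fs=fs, nperseg=1024)
idx = np.argmin(np.abs(f_csd - 0.2))
print(f"coherence at 0.2 Hz: {coh[idx]:.2f}, "
      f"relative phase: {np.degrees(np.angle(Pxy[idx])):.1f} deg")
```

Weaker coherence and a more negative relative phase would correspond, respectively, to the weaker coupling and temporal delay reported for the dyslexic group.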
Abstract:
Identifying a human body stimulus involves mentally rotating an embodied spatial representation of one's body (motoric embodiment) and projecting it onto the stimulus (spatial embodiment). Interactions between these two processes may thus reveal cues about the underlying reference frames. The allocentric visual reference frame, and hence the perceived orientation of the body relative to gravity, was modulated using the York Tumbling Room, a fully furnished cubic room with strong directional cues that can be rotated around a participant's roll axis. Sixteen participants were seated upright (relative to gravity) in the Tumbling Room and made judgments about body and hand stimuli that were presented in the frontal plane at orientations of 0°, 90°, 180° (upside down), or 270° relative to them. Body stimuli have an intrinsic visual polarity relative to the environment, whereas hands do not. Simultaneously, the room was oriented at 0°, 90°, 180° (upside down), or 270° relative to gravity, resulting in sixteen combinations of orientations. Body stimuli were more accurately identified when the room and body stimuli were aligned. However, such congruency did not facilitate identifying hand stimuli. We conclude that static allocentric visual cues can affect embodiment and hence performance in an egocentric mental transformation task. Reaction times to identify either hands or bodies showed no dependence on room orientation.
Abstract:
Rats, like other crepuscular animals, have excellent auditory capacities and discriminate well between different sounds [Heffner HE, Heffner RS. Hearing in two cricetid rodents: wood rats (Neotoma floridana) and grasshopper mouse (Onychomys leucogaster). J Comp Psychol 1985;99(3):275-88]. However, the experimental literature on spatial orientation almost exclusively emphasizes the use of visual landmarks [Cressant A, Muller RU, Poucet B. Failure of centrally placed objects to control the firing fields of hippocampal place cells. J Neurosci 1997;17(7):2531-42; and Goodridge JP, Taube JS. Preferential use of the landmark navigational system by head direction cells in rats. Behav Neurosci 1995;109(1):49-61]. To address the important issue of whether rats are able to achieve a place navigation task relative to auditory beacons, we designed a place learning task in the water maze. We controlled cue availability by conducting the experiment in total darkness. Three auditory cues did not allow place navigation, whereas three visual cues in the same positions did. One auditory beacon directly associated with the goal location did not support taxon navigation (a beacon strategy allowing the animal to find the goal just by swimming toward the cue), whereas replacing the auditory beacon with a single visual beacon did. A multimodal configuration of two auditory cues and one visual cue allowed correct place navigation, and deleting either the two auditory cues or the single visual cue disrupted spatial performance. Thus rats can combine information from different sensory modalities to achieve a place navigation task. In particular, auditory cues support place navigation when associated with a visual one.
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. We therefore created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
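The segmentation challenge at issue, locating word boundaries in a stream with minimal cues, is commonly framed in terms of syllable transitional probabilities, which dip at word boundaries. A minimal sketch of that computation; the words and syllables are invented placeholders, not the study's stimuli:

```python
# Illustration of the statistical segmentation problem: in an artificial
# stream, the transitional probability P(next | current) is high within
# words and dips at word boundaries. The "words" below are invented.
from collections import Counter
import random

words = ["tupiro", "golabu", "bidaku"]   # hypothetical trisyllabic words

def syllabify(w):
    return [w[i:i + 2] for i in range(0, len(w), 2)]

stream = [s for w in random.choices(words, k=300) for s in syllabify(w)]

# Estimate P(b | a) = count(ab) / count(a) from syllable bigrams.
pairs = Counter(zip(stream, stream[1:]))
singles = Counter(stream[:-1])
tp = {(a, b): n / singles[a] for (a, b), n in pairs.items()}

# Within-word transitions approach 1.0; cross-boundary transitions are low,
# so local TP minima mark candidate word boundaries.
for (a, b), p in sorted(tp.items(), key=lambda kv: -kv[1])[:6]:
    print(f"P({b} | {a}) = {p:.2f}")
```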