Abstract:
The work developed in this thesis advances the field and contributes innovative solutions to the correspondence problem in underwater images. In these environments, what really complicates processing tasks is the lack of well-defined contours caused by blurred images, a problem that is fundamentally due to deficient illumination or to the lack of uniformity of artificial lighting systems. The objectives achieved in this thesis fall into two main directions. To improve the motion-estimation algorithm, a new method was proposed that introduces texture parameters to reject false correspondences between pairs of images. A series of tests on real underwater images was carried out to select the most suitable strategies. In order to obtain real-time results, a novel VLSI architecture is proposed for implementing some computationally expensive parts of the motion-estimation algorithm.
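The abstract only names the approach; purely as a rough illustration of the idea (not the thesis implementation), the sketch below shows a block-matching motion estimator that rejects correspondences coming from low-texture blocks, using local variance as an assumed texture measure and arbitrary thresholds.

```python
# Minimal sketch (not the thesis implementation): block-matching motion
# estimation that rejects correspondences from low-texture blocks.
# The texture measure (local variance) and all thresholds are assumptions
# chosen for illustration only.
import numpy as np

def block_motion(prev, curr, block=16, search=8, tex_thresh=25.0):
    """Estimate per-block motion vectors; return None for rejected blocks."""
    h, w = prev.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = prev[y:y + block, x:x + block].astype(np.float64)
            # Texture test: blurred, low-contrast blocks give unreliable matches.
            if ref.var() < tex_thresh:
                vectors[(y, x)] = None
                continue
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = curr[yy:yy + block, xx:xx + block].astype(np.float64)
                        sad = np.abs(ref - cand).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(y, x)] = best
    return vectors
```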
Abstract:
The purpose of this study was to evaluate discrimination of angular velocity in individuals with normal vestibular function using a newly developed adaptive psychophysical measure. Vestibular psychophysical testing may complement existing clinical measures in diagnosing and treating patients with imbalance.
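The abstract does not describe the adaptive procedure itself; solely to illustrate how an adaptive psychophysical measure of this kind can work, the sketch below runs a generic 2-down/1-up staircase on a simulated observer. The rule, step sizes, starting value, and simulated observer are assumptions, not the measure developed in the study.

```python
# Illustrative 2-down/1-up adaptive staircase for a velocity-discrimination
# threshold (deg/s). This is a generic textbook procedure, not the measure
# developed in the study; all parameters here are assumptions.
import random

def simulated_observer(delta_v, true_threshold=8.0):
    """Return True (correct response) with probability rising as delta_v grows."""
    p_correct = 0.5 + 0.5 * min(delta_v / (2 * true_threshold), 1.0)
    return random.random() < p_correct

def staircase(start=20.0, step=2.0, reversals_needed=8):
    delta_v, correct_streak, direction, reversals = start, 0, -1, []
    while len(reversals) < reversals_needed:
        if simulated_observer(delta_v):
            correct_streak += 1
            if correct_streak == 2:            # 2-down: make the task harder
                correct_streak = 0
                if direction == +1:
                    reversals.append(delta_v)
                direction = -1
                delta_v = max(delta_v - step, 0.1)
        else:                                   # 1-up: make the task easier
            correct_streak = 0
            if direction == -1:
                reversals.append(delta_v)
            direction = +1
            delta_v += step
    return sum(reversals) / len(reversals)      # threshold estimate

print(round(staircase(), 2))
```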
Abstract:
This paper examines the detailed embryologic development of the otolith organs, part of the vestibular system responsible for balance, in Japanese quail. The data from this study will provide baseline data for comparison with experiments conducted by NASA during space flight.
Abstract:
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the “correct” size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.
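As a toy illustration of the reweighting interpretation mentioned above (not the authors' model), the sketch below combines a texture-based size cue and a stereo/motion-parallax size cue in a weighted average and lets feedback nudge the texture weight; the learning rule, learning rate, and numbers are assumptions for illustration only.

```python
# Toy illustration of the "reweighting" interpretation: a size estimate is a
# weighted average of a texture-based cue and a stereo/motion-parallax cue,
# and feedback shifts the texture weight. The error signal and learning rate
# are assumptions, not the authors' model.

def estimate_size(texture_cue, stereo_cue, w_texture):
    """Weighted-average combination of two size cues (arbitrary units)."""
    return w_texture * texture_cue + (1.0 - w_texture) * stereo_cue

def update_weight(w_texture, texture_cue, stereo_cue, feedback_size, lr=0.05):
    """Move the texture weight so the combined estimate tracks the feedback."""
    error = feedback_size - estimate_size(texture_cue, stereo_cue, w_texture)
    # Gradient step on squared error with respect to w_texture.
    w_texture += lr * error * (texture_cue - stereo_cue)
    return min(max(w_texture, 0.0), 1.0)

# Example: texture-defined feedback repeatedly reinforces the texture cue.
w = 0.5
for _ in range(50):
    w = update_weight(w, texture_cue=10.0, stereo_cue=14.0, feedback_size=10.0)
print(round(w, 2))   # the weight drifts toward the texture cue
```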
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3,4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
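Below is a minimal sketch of the kind of reliability-weighted (Gaussian Bayesian) cue combination described above, assuming that stereo/motion-parallax noise grows with viewing distance so those cues dominate only at near range; the 1/distance² reliability fall-off and all noise levels are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of reliability-weighted (Gaussian Bayesian) cue combination
# in which disparity/motion-parallax cues carry more weight at near distances.
# The 1/distance^2 reliability fall-off and the noise levels are assumptions
# chosen for illustration, not parameters from the paper.
import numpy as np

def combine(size_texture, size_stereo, distance,
            sigma_texture=0.10, sigma_stereo_near=0.05, ref_distance=1.0):
    """Return the combined size estimate and the weight on the stereo cue."""
    # Assumed model: stereo/motion noise grows linearly with distance,
    # so its reliability (1/sigma^2) falls off as 1/distance^2.
    sigma_stereo = sigma_stereo_near * (distance / ref_distance)
    r_texture, r_stereo = 1.0 / sigma_texture**2, 1.0 / sigma_stereo**2
    w_stereo = r_stereo / (r_stereo + r_texture)
    estimate = w_stereo * size_stereo + (1.0 - w_stereo) * size_texture
    return estimate, w_stereo

for d in (0.5, 1.0, 2.0, 4.0):
    est, w = combine(size_texture=1.0, size_stereo=0.8, distance=d)
    print(f"distance {d:>3} m  stereo weight {w:.2f}  combined size {est:.2f}")
```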
Abstract:
The perceived displacement of motion-defined contours in peripheral vision was examined in four experiments. In Experiment 1, in line with Ramachandran and Anstis' finding [Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19, 611-616], the border between a field of drifting dots and a static dot pattern was apparently displaced in the same direction as the movement of the dots. When a uniform dark area was substituted for the static dots, a similar displacement was found, but this was smaller and statistically insignificant. In Experiment 2, the border between two fields of dots moving in opposite directions was displaced in the direction of motion of the dots in the more eccentric field, so that the location of a boundary defined by a diverging pattern was perceived as more eccentric, and that defined by a converging pattern as less eccentric. Two explanations for this effect (that the displacement reflects a greater weight given to the more eccentric motion, or that the region containing stronger centripetal motion components expands perceptually into that containing centrifugal motion) were tested in Experiment 3 by varying the velocity of the more eccentric region. The results favoured the explanation based on the expansion of an area in centripetal motion. Experiment 4 showed that the difference in perceived location was unlikely to be due to differences in the discriminability of contours in diverging and converging patterns, and confirmed that this effect is due to a difference between centripetal and centrifugal motion rather than to motion components in other directions. Our results provide new evidence for a bias towards centripetal motion in human vision and suggest that the direction of motion-induced displacement of edges is not always the direction of an adjacent moving pattern. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
We test Slobin's (2003) Thinking-for-Speaking hypothesis on data from different groups of Turkish-German bilinguals: those living in Germany and those who have returned to Turkey.