822 results for Depth Perception
Abstract:
The ability of the human eye to perceive depth was measured using a specially designed instrument. Visual acuity and both monocular and binocular stereoacuity were measured when viewing the instrument directly and via a videoconferencing link. Ten subjects with a mean age of 32.5 years (range 24-50) took part in the study. The group mean visual acuity using both eyes under normal test conditions was -0.04 logMAR (Snellen 6/5), compared with 0.18 logMAR (Snellen 6/10) over the video-link. The mean stereoacuity using both eyes was 37 (SD 18) under normal test conditions. When the videoconferencing link was used, the mean stereoacuity worsened to 1218 (SD 1203) using one eye and to 1651 (SD 1419) using both eyes. The ability to perceive depth remotely via a video-link was therefore significantly reduced compared with normal test conditions.
Abstract:
This thesis explores the debate and issues regarding the status of visual inferences in the optical writings of René Descartes, George Berkeley and James J. Gibson. It gathers arguments from across their works and synthesizes an account of visual depth-perception that accurately reflects the larger, metaphysical implications of their philosophical theories. Chapters 1 and 2 address the Cartesian and Berkeleyan theories of depth-perception, respectively. For Descartes and Berkeley the debate can be put in the following way: How is it possible that we experience objects as appearing outside of us, at various distances, if objects appear inside of us, in the representations of the individual's mind? Thus, the Descartes-Berkeley component of the debate takes place exclusively within a representationalist setting. Representational theories of depth-perception are rooted in the scientific discovery that objects project a merely two-dimensional patchwork of forms on the retina. I call this the "flat image" problem. This poses the problem of depth in terms of a difference between two- and three-dimensional orders (i.e., a gap to be bridged by one inferential procedure or another). Chapter 3 addresses Gibson's ecological response to the debate. Gibson argues that the perceiver cannot be flattened out into a passive, two-dimensional sensory surface. Perception is possible precisely because the body and the environment already have depth. Accordingly, the problem cannot be reduced to a gap between two- and three-dimensional givens, a gap crossed by projective geometry. The crucial difference is not one of dimensional degree. Chapter 3 explores this theme and attempts to excavate the empirical and philosophical suppositions that lead Descartes and Berkeley to their respective theories of indirect perception. Gibson argues that the notion of visual inference, which is necessary to substantiate representational theories of indirect perception, is highly problematic. To elucidate this point, the thesis steps into the representationalist tradition, in order to show that problems that arise within it demand a turn toward Gibson's information-based doctrine of ecological specificity (which is to say, the theory of direct perception). Chapter 3 concludes with a careful examination of Gibsonian affordances as the sole objects of direct perceptual experience. The final section provides an account of affordances that locates the moving, perceiving body at the heart of the experience of depth; an experience which emerges in the dynamical structures that cross the body and the world.
Abstract:
In the absence of cues for absolute depth measurement, such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges and junctions may provide a 3D model of the scene, but it will not convey the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex because of the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: we introduce a procedure for absolute depth estimation based on the recognition of the whole scene. The shape of the space of the scene and the structures present in the scene are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the value of computing the mean depth of the scene with applications to scene recognition and object detection.
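The role of known object size as a cue to absolute depth comes down to the pinhole projection relation; the sketch below only illustrates that baseline cue (the abstract's proposal instead infers scale from whole-scene structure), and the function name and example numbers are assumptions.

def depth_from_known_size(focal_length_px: float,
                          real_height_m: float,
                          image_height_px: float) -> float:
    """Absolute distance Z to an object of known physical height.

    Pinhole model: image_height_px = focal_length_px * real_height_m / Z,
    so Z = focal_length_px * real_height_m / image_height_px.
    """
    return focal_length_px * real_height_m / image_height_px

# Example: a 1.7 m tall person imaged 120 px tall by a camera with an 800 px
# focal length would be about 11.3 m away.
print(depth_from_known_size(800.0, 1.7, 120.0))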
Abstract:
Quality assessment is a key factor for stereoscopic 3D video content, as some observers experience visual discomfort when viewing 3D video, especially when positive and negative parallax are combined with fast motion. In this paper, we propose techniques to assess objective quality related to motion and depth maps, which facilitate depth perception analysis. Subjective tests were carried out in order to understand the source of the problem. Motion is an important feature affecting the 3D experience, but it is also often the cause of visual discomfort. The automatic algorithm developed quantifies the impact on viewer experience when common causes of discomfort occur, such as high-motion sequences, scene changes with abrupt parallax shifts, or the complete absence of stereoscopy, with the goal of preventing the viewer from having a poor stereoscopic experience.
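The abstract does not spell out the algorithm, so the following is only a hypothetical sketch of how per-frame motion and depth-map statistics might be combined into a single discomfort indicator; the weights, percentile choices and function name are assumptions, not the authors' metric.

import numpy as np

def discomfort_score(motion_mag, disparity, prev_disparity,
                     w_motion=0.5, w_range=0.3, w_change=0.2):
    """Toy per-frame discomfort indicator for stereoscopic video.

    motion_mag               : 2-D array of optical-flow magnitudes (px/frame).
    disparity, prev_disparity: 2-D arrays of signed disparities (positive =
                               behind the screen, negative = in front of it).
    Returns a scalar; larger values flag frames more likely to be uncomfortable.
    """
    motion = float(np.mean(motion_mag))                          # overall motion
    depth_range = float(np.percentile(disparity, 95)
                        - np.percentile(disparity, 5))           # parallax spread
    abrupt = float(np.mean(np.abs(disparity - prev_disparity)))  # sudden shifts
    return w_motion * motion + w_range * depth_range + w_change * abrupt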
Abstract:
Previous studies have suggested separate channels for detection of first-order luminance modulations (LM) and second-order modulations of the local amplitude (AM) of a texture. Mixtures of LM and AM with different phase relationships appear very different: in-phase compounds (LM + AM) look like 3-D corrugated surfaces, while out-of-phase compounds (LM - AM) appear flat and/or transparent. This difference may arise because the in-phase compounds are consistent with multiplicative shading, while the out-of-phase compounds are not. We investigated the role of these modulation components in surface depth perception. We used a textured background with thin bars formed by local changes in luminance and/or texture amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. Keeping the AM modulation depth fixed at a suprathreshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of 'raised' regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM, but, unexpectedly, performance for LM - AM was as good as for LM + AM. Thus, these results suggest that there is an interaction between first-order and second-order mechanisms during depth perception based on shading cues, but the phase dependence is not yet understood.
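For readers unfamiliar with these stimuli, the sketch below generates a noise texture carrying a first-order luminance modulation and a second-order amplitude modulation that are either in phase (LM + AM) or in antiphase (LM - AM). It uses a sinusoidal modulator rather than the thin-bar profiles used in the study, and all parameter values are illustrative assumptions.

import numpy as np

def lm_am_stimulus(size=256, lm=0.1, am=0.4, sign=+1,
                   noise_contrast=0.2, period=128, mean_lum=0.5, seed=0):
    """Noise texture with mixed first-order (LM) and second-order (AM) modulation.

    sign=+1 gives the in-phase compound (LM + AM); sign=-1 gives the
    out-of-phase compound (LM - AM). Returns an image in the range [0, 1].
    """
    rng = np.random.default_rng(seed)
    x = np.arange(size)
    s = np.sin(2 * np.pi * x / period)           # modulator waveform
    noise = noise_contrast * rng.choice([-1.0, 1.0], size=(size, size))
    luminance_mod = lm * s                        # first-order component (LM)
    amplitude_env = 1.0 + sign * am * s           # second-order envelope (AM)
    image = mean_lum * (1.0 + luminance_mod[None, :] + noise * amplitude_env[None, :])
    return np.clip(image, 0.0, 1.0)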
Abstract:
Previous studies have suggested separate channels for the detection of first-order luminance modulations (LM) and second-order modulations of the local amplitude (AM) of a texture (Schofield & Georgeson, 1999, Vision Research 39, 2697-2716; Georgeson & Schofield, 2002, Spatial Vision 16, 59). It has also been shown that LM and AM mixtures with different phase relationships are easily separated in identification tasks and (informally) appear very different, with the in-phase compound (LM + AM) producing the most realistic depth percept. We investigated the role of these LM and AM components in depth perception. Stimuli consisted of a noise-texture background with thin bars formed as local increments or decrements in luminance and/or noise amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. When the luminance and amplitude changes have the same sign and magnitude (LM + AM), the overall modulation is consistent with multiplicative shading, but this is not so when the two modulations have opposite sign (LM - AM). Keeping the AM modulation depth fixed at a suprathreshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of raised regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM, but, unexpectedly, performance for LM - AM was even better than for LM + AM. Further tests suggested that this improvement in performance is not due to an increase in the detectability of luminance in the compound stimuli. Thus, contrary to previous findings, these results suggest the possibility of interaction between first-order and second-order mechanisms in depth perception.
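The link between the in-phase compound and multiplicative shading can be made explicit with a short derivation (the notation here is mine, not the authors'): writing the shading profile as $1 + m\,s(x)$ and the texture as $1 + n(x)$ with zero-mean noise $n$,

\[
I(x) = I_0\,[1 + m\,s(x)]\,[1 + n(x)]
     = I_0\Big[\,1 + \underbrace{m\,s(x)}_{\text{LM}} + n(x)\,\underbrace{\big(1 + m\,s(x)\big)}_{\text{AM envelope}}\,\Big],
\]

so shading a uniformly reflective textured surface necessarily produces a luminance modulation and a texture-amplitude modulation of the same sign and magnitude (LM + AM), whereas the opposite-sign pair (LM - AM) cannot arise from shading alone.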
Abstract:
This is the abstract of an invited talk presented at the AVA Animal Vision Meeting / Camocon 2015, held in Liverpool, UK, on 23 August 2015.
Abstract:
We seek to determine the relationship between threshold and suprathreshold perception for position offset and stereoscopic depth perception under conditions that elevate their respective thresholds. Two threshold-elevating conditions were used: (1) increasing the interline gap and (2) dioptric blur. Although increasing the interline gap increases position (Vernier) offset and stereoscopic disparity thresholds substantially, the perception of suprathreshold position offset and stereoscopic depth remains unchanged. Perception of suprathreshold position offset also remains unchanged when the Vernier threshold is elevated by dioptric blur. We show that such normalization of suprathreshold position offset can be attributed to the topographical-map-based encoding of position. On the other hand, dioptric blur increases the stereoscopic disparity thresholds and reduces the perceived suprathreshold stereoscopic depth, which can be accounted for by a disparity-computation model in which the activities of absolute disparity encoders are multiplied by a Gaussian weighting function that is centered on the horopter. Overall, the statement "equal suprathreshold perception occurs in threshold-elevated and unelevated conditions when the stimuli are equally above their corresponding thresholds" describes the results better than the statement "suprathreshold stimuli are perceived as equal when they are equal multiples of their respective threshold values."
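The disparity-computation model referred to above can be illustrated with a minimal numerical sketch: encoder responses are multiplied by a Gaussian weighting function centred on the horopter (zero disparity) and combined by a weighted mean. The tuning widths, weighting widths and function name below are assumptions chosen only to show the qualitative effect.

import numpy as np

def perceived_disparity(stimulus_disparity, encoder_prefs,
                        tuning_sigma, horopter_sigma):
    """Weighted read-out of a population of absolute-disparity encoders.

    Each encoder has a Gaussian tuning curve around its preferred disparity;
    its response is multiplied by a Gaussian weight centred on the horopter
    (zero disparity), and the population is combined by a weighted mean.
    """
    responses = np.exp(-(encoder_prefs - stimulus_disparity) ** 2
                       / (2 * tuning_sigma ** 2))
    weights = np.exp(-encoder_prefs ** 2 / (2 * horopter_sigma ** 2))
    weighted = responses * weights
    return float(np.sum(weighted * encoder_prefs) / np.sum(weighted))

# A narrower horopter-centred weighting makes the same physical disparity read
# as smaller, qualitatively mimicking the reduced suprathreshold depth under blur.
prefs = np.linspace(-60, 60, 121)                # preferred disparities (arcmin)
print(perceived_disparity(20.0, prefs, tuning_sigma=10.0, horopter_sigma=40.0))
print(perceived_disparity(20.0, prefs, tuning_sigma=10.0, horopter_sigma=15.0))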
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective cameras and alleviate the problems due to occlusions. However, both types of camera suffer from a lack of depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern onto the scene, avoiding the problems and costs involved in stereo vision. This paper focuses on the idea of combining omnidirectional vision and structured light with the aim of providing 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. How this sensor can be used in robot navigation applications is also discussed.
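The depth recovery behind such a projector-camera sensor rests on ordinary triangulation; the planar sketch below shows only the principle (in the catadioptric case the ray directions would come from the mirror and projector calibration, which the paper addresses), and the function name and example values are assumptions.

import math

def triangulate_depth(baseline_m, camera_angle_rad, projector_angle_rad):
    """Depth of a structured-light spot by planar triangulation.

    The camera and the projector are separated by a known baseline, and each
    sees/emits the spot along a known direction (angles measured from the
    baseline). The spot's perpendicular distance from the baseline is found
    by intersecting the two rays (law of sines in the resulting triangle).
    """
    gamma = math.pi - camera_angle_rad - projector_angle_rad   # angle at the spot
    camera_to_spot = baseline_m * math.sin(projector_angle_rad) / math.sin(gamma)
    return camera_to_spot * math.sin(camera_angle_rad)         # height above baseline

# Example: 20 cm baseline, rays at 70 and 80 degrees from the baseline.
print(triangulate_depth(0.20, math.radians(70), math.radians(80)))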
Abstract:
Stereoscopic depth perception utilizes the disparity cues between the images that fall on the retinae of the two eyes. The purpose of this study was to determine what role aging and optical blur play in stereoscopic disparity sensitivity for real depth stimuli. Forty-six volunteers ranging in age from 15 to 60 years were tested. Crossed and uncrossed disparity thresholds were measured using white light under conditions of best optical correction. The uncrossed disparity thresholds were also measured with optical blur (from +1.0 D to +5.0 D added to the best correction). Stereothresholds were measured using the Frisby Stereo Test with a four-alternative forced-choice staircase procedure. The threshold disparities measured for young adults were frequently lower than 10 arcsec, considerably lower than the clinical estimates commonly obtained with Random Dot Stereogram (20 arcsec) or Titmus Fly (40 arcsec) tests. Contrary to previous reports, disparity thresholds increased between the ages of 31 and 45 years. This finding should be taken into account in the clinical evaluation of visual function in older patients. Optical blur degrades visual acuity and stereoacuity similarly under white-light conditions, indicating that both functions are affected proportionally by optical defocus.
Abstract:
Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross section, which were constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients. The degree of consistency among these cues was systematically varied. The extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. Less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed: shading and disparity seemed to be processed interactively, while other cue combinations could be well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues. The extension of the traditional cue combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
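The "traditional model" referred to above is the reliability-weighted linear combination of cues; the sketch below implements that baseline (it deliberately ignores cue consistency, which is exactly what the extended Bayesian model adds), and the function name and example numbers are assumptions.

import numpy as np

def combine_cues(estimates, sigmas):
    """Reliability-weighted linear cue combination.

    Each cue's weight is its reliability 1/sigma^2, normalised to sum to one;
    interactions and covariations between cues are ignored.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    combined_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return combined, combined_sigma, weights

# Example: depth (in cm) signalled by shading, texture and motion cues with
# different single-cue noise levels; the least reliable cue is downweighted.
print(combine_cues([10.0, 12.0, 11.0], sigmas=[2.0, 1.0, 4.0]))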
Abstract:
Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold when decorrelation is 0%. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise is sufficient to account for the perceptual learning effects. In two subjects, the learning effects were retained over a period of six months, which may have application for training stereo-deficient subjects.
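To make the definitions above concrete, the sketch below computes the inflection point (the decorrelation level at which the threshold reaches 1.4 times the 0%-decorrelation threshold) and shows that dividing the whole function by a constant leaves it unchanged. The numbers are made up for illustration and are not data from the study.

import numpy as np

def inflection_point(decorrelation, thresholds):
    """Decorrelation level at which the stereothreshold reaches 1.4 times the
    threshold at 0% decorrelation, found by linear interpolation (assumes the
    measured thresholds increase monotonically with decorrelation)."""
    decorrelation = np.asarray(decorrelation, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    target = 1.4 * thresholds[np.argmin(decorrelation)]
    return float(np.interp(target, thresholds, decorrelation))

decorr = np.array([0, 10, 20, 30, 40, 50])   # percent decorrelation
pre = np.array([20, 24, 30, 42, 60, 90])     # hypothetical pre-training thresholds (arcsec)
post = pre / 5.0                             # multiplicative post-training shift
print(inflection_point(decorr, pre), inflection_point(decorr, post))   # same value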