51 results for Binocular visual fields
Abstract:
The tear film, cornea and lens dictate the refractive power of the eye, and retinal image quality is principally defined by diffraction, whole-eye wavefront error, scatter, and chromatic aberration. Diffraction and wave aberration are fundamentally dependent on pupil diameter; scatter, however, can be induced by refractive surgery and becomes an increasingly important determinant of retinal image quality in the normal ageing eye. The component of visual quality most affected by the tear film, refractive surgery, and multifocal contact and intraocular lenses is the wave aberration of the eye. This body of work demonstrates the effects of each of these anomalies on the visual quality of the eye. When assessing normal or borderline self-diagnosed dry eye subjects using aberrometry, combining lubricating eye drops and spray offers no benefit over the individual products, although subjects perceive a difference in comfort for all interventions after one hour. Total higher-order aberrations increase after laser-assisted sub-epithelial keratectomy performed on myopes using a solid-state laser, but this causes no significant decrease in contrast sensitivity or increase in glare disability. Mean sensitivity and reliability indices for perimetry were comparable to pre-surgery results. Multifocal contact lenses and intraocular lenses are designed to maximise vision when the patient is binocular, so any evaluation of the eyes individually is confounded by reduced monocular visual acuity and visual quality. Different designs of aspheric multifocal contact lenses do not provide the same level of visual quality. Multifocal contact lenses adversely affect mean deviation values for perimetry, and this should be considered when screening individuals wearing multifocal contact or intraocular lenses. Photographic image quality obtained through a multifocal contact or intraocular lens appears to be unchanged. Future work should evaluate the effect of these anomalies in combination, with the aim of providing the best visual quality possible and supplying normative data for screening purposes.
Abstract:
PURPOSE: To assess the visual performance and subjective experience of eyes implanted with a new bi-aspheric, segmented, multifocal intraocular lens: the Mplus X (Topcon Europe Medical, Capelle aan den IJssel, Netherlands). METHODS: Seventeen patients (mean age: 64.0 ± 12.8 years) underwent binocular implantation (34 eyes) with the Mplus X. Three months after implantation, the following were assessed: manifest refraction; uncorrected and corrected distance visual acuity; uncorrected and distance-corrected near visual acuity; defocus curves under photopic conditions; contrast sensitivity; halometry as an objective measure of glare; and patient satisfaction with unaided near vision using the Near Acuity Visual Questionnaire. RESULTS: Mean residual manifest refraction was -0.13 ± 0.51 diopters (D). Twenty-five eyes (74%) were within a mean spherical equivalent of ±0.50 D. Mean uncorrected distance visual acuity was +0.10 ± 0.12 logMAR monocularly and 0.02 ± 0.09 logMAR binocularly. Thirty-two eyes (94%) could read 0.3 or better without any reading correction, and all patients could read 0.3 or better with a reading correction. Mean monocular uncorrected near visual acuity was 0.18 ± 0.16 logMAR, improving to 0.15 ± 0.15 logMAR with distance correction. Mean binocular uncorrected near visual acuity was 0.11 ± 0.11 logMAR, improving to 0.09 ± 0.12 logMAR with distance correction. Mean binocular contrast sensitivity was 1.75 ± 0.14 log units at 3 cycles per degree, 1.88 ± 0.20 log units at 6 cycles per degree, 1.66 ± 0.19 log units at 12 cycles per degree, and 1.11 ± 0.20 log units at 18 cycles per degree. Mean binocular and monocular halometry showed a glare profile of less than 1° of debilitating light scatter. Mean Near Acuity Visual Questionnaire Rasch score (0 = no difficulty, 100 = extreme difficulty) for satisfaction with near vision was 20.43 ± 14.64 log-odds units. CONCLUSIONS: The Mplus X provides good visual outcomes at distance and near with minimal dysphotopsia. Patients were very satisfied with their uncorrected near vision.
Abstract:
Rotation invariance is important for an iris recognition system, since changes in head orientation and binocular vergence may cause eye rotation. Conventional iris recognition methods cannot achieve true rotation invariance; they achieve only approximate invariance by rotating the feature vector before matching or by unwrapping the iris ring at different initial angles. These workarounds increase the complexity of the method, and when the rotation exceeds a certain range their error rates may increase substantially. To solve this problem, this paper proposes a new rotation-invariant approach to iris feature extraction based on non-separable wavelets. First, a bank of non-separable orthogonal wavelet filters is used to capture characteristics of the iris. Second, Markov random fields are used to derive rotation-invariant iris features. Finally, two-class kernel Fisher classifiers are adopted for classification. Experimental results on public iris databases show that the proposed approach has a low error rate and achieves true rotation invariance.
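The shift-and-match baseline this abstract criticises is easy to make concrete. The sketch below is illustrative only (the function name, code shape and shift range are assumptions, not from the paper): it matches binary iris codes by minimising the normalised Hamming distance over circular shifts along the angular axis, which tolerates rotation only within the searched range, at a cost of one full comparison per shift.

```python
import numpy as np

def min_hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                         max_shift: int = 8) -> float:
    """Match two binary iris codes (rows x angular bits) by trying
    circular shifts along the angular axis and keeping the best score.

    This brute-force search is why the conventional approach is only
    approximately rotation invariant: rotations beyond +/- max_shift
    bits are never tested, and every extra shift adds a full compare.
    """
    best = 1.0
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, shift, axis=1)   # rotate the iris ring
        dist = np.mean(code_a != shifted)          # normalised Hamming distance
        best = min(best, dist)
    return best

# Toy usage: a rotated copy of the same code matches well only while
# the true rotation stays inside the searched range.
rng = np.random.default_rng(0)
code = rng.integers(0, 2, size=(16, 256), dtype=np.uint8)
print(min_hamming_distance(code, np.roll(code, 5, axis=1)))   # 0.0 (in range)
print(min_hamming_distance(code, np.roll(code, 40, axis=1)))  # ~0.5 (out of range)
```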
Abstract:
When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lens, one must determine the optimal zoom levels for the cameras for a given task. This gives rise to an important trade-off between the overlap of the different cameras' fields of view, which provides redundancy, and image quality. In an object tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails; in contrast, narrow zooms allow a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning that uses only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when omnidirectional cameras, each with a perfect 360-degree view, are tasked with keeping track of a varying number of moving objects. We further show how decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator's preference for maximising the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable decentralised method for the self-organisation of redundancy in a changing environment, according to an operator's preferences.
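The abstract leaves the learning mechanics unspecified, so the following is a minimal sketch under stated assumptions, not the paper's algorithm: the zoom levels, objective weights, class name and the epsilon-greedy bandit update are all illustrative choices. Each camera scalarises three locally observable objectives (objects tracked, tracking confidence, overlap with neighbours) using operator-preference weights, and learns a value estimate per zoom level from local information only.

```python
import random

ZOOM_LEVELS = [1.0, 2.0, 4.0]  # hypothetical discrete zoom settings

class CameraAgent:
    """One decentralised agent per camera; no global state is shared."""

    def __init__(self, weights=(0.4, 0.3, 0.3), epsilon=0.1):
        self.w_tracked, self.w_conf, self.w_overlap = weights  # operator preference
        self.epsilon = epsilon
        self.value = {z: 0.0 for z in ZOOM_LEVELS}  # running reward estimates
        self.count = {z: 0 for z in ZOOM_LEVELS}

    def choose_zoom(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(ZOOM_LEVELS)
        return max(self.value, key=self.value.get)    # exploit best estimate

    def update(self, zoom, tracked, confidence, overlap):
        # Scalarise the three locally observable objectives into one reward.
        reward = (self.w_tracked * tracked
                  + self.w_conf * confidence
                  + self.w_overlap * overlap)
        self.count[zoom] += 1
        # Incremental mean update of the value estimate for this zoom level.
        self.value[zoom] += (reward - self.value[zoom]) / self.count[zoom]

# Toy usage: one decision-feedback cycle for a single camera.
agent = CameraAgent()
z = agent.choose_zoom()
agent.update(z, tracked=0.8, confidence=0.6, overlap=0.5)
```

Shifting weight onto the overlap term reproduces, in miniature, the operator preference for redundancy in anticipation of camera failure over per-object image quality.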
Abstract:
Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined? Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes, fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.
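The claim that blur B is read off from the spatial scale of the most active odd-symmetric operator can be checked numerically. The sketch below is an illustrative stand-in, not the authors' template model: the gamma-normalisation by sigma**0.5 (Lindeberg's setting for edge detection) is an assumption about how "most active" is scored across scale, and with it the winning scale equals the edge blur for an ideal Gaussian-blurred step edge.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A step edge blurred by a Gaussian of scale B (all units are samples).
B = 8.0
x = np.arange(512)
edge = gaussian_filter1d((x > 255).astype(float), B)

# Probe with odd-symmetric (derivative-of-Gaussian) operators across scale.
# Gamma-normalising the peak response by sigma**0.5 makes the most active
# operator sit at sigma = B for an ideal Gaussian-blurred edge.
scales = np.linspace(1, 24, 93)
activity = [s**0.5 * np.abs(gaussian_filter1d(edge, s, order=1)).max()
            for s in scales]

best = scales[int(np.argmax(activity))]
print(f"true blur B = {B}, estimated blur = {best:.2f}")  # ~8
```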
Abstract:
The visual system combines spatial signals from the two eyes to achieve single vision. But if binocular disparity is too large, this perceptual fusion gives way to diplopia. We studied and modelled the processes underlying fusion and the transition to diplopia. The likely basis for fusion is linear summation of inputs onto binocular cortical cells. Previous studies of perceived position, contrast matching and contrast discrimination imply the computation of a dynamically weighted sum, where the weights vary with relative contrast. For gratings, perceived contrast was almost constant across all disparities, and this can be modelled by allowing the ocular weights to increase with disparity (Zhou, Georgeson & Hess, 2014). However, when a single Gaussian-blurred edge was shown to each eye, perceived blur was invariant with disparity (Georgeson & Wallis, ECVP 2012) – not consistent with linear summation, which predicts that perceived blur increases with disparity. This blur constancy is consistent with a multiplicative form of combination (the contrast-weighted geometric mean), but that is hard to reconcile with the evidence favouring linear combination. We describe a two-stage spatial filtering model with linear binocular combination and suggest that nonlinear output transduction (e.g., 'half-squaring') at each stage may account for the blur constancy.
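The prediction attributed to linear summation follows from a short calculation. This is a back-of-envelope sketch under one stated assumption: that perceived blur corresponds to the spread (standard deviation) of the summed luminance-gradient profile, with each monocular edge gradient modelled as a Gaussian of scale B displaced by half the disparity d.

```latex
% Linear summation with equal ocular weights: each monocular edge gradient
% is a Gaussian of scale B, displaced by +/- d/2 for a disparity of d.
\[
  g_{\mathrm{bin}}(x) = \frac{1}{2}\,G\!\left(x + \frac{d}{2};\, B\right)
                      + \frac{1}{2}\,G\!\left(x - \frac{d}{2};\, B\right).
\]
% The variance of this two-component mixture is the within-component
% variance plus the variance of the component means, so the summed
% profile has effective blur
\[
  B_{\mathrm{eff}} = \sqrt{B^{2} + d^{2}/4},
\]
% which grows with disparity d: exactly the prediction contradicted by
% the observed blur constancy. A contrast-weighted geometric mean
% instead gives (for ocular weights w and 1 - w)
\[
  B_{\mathrm{gm}} = B_{L}^{\,w}\, B_{R}^{\,1-w},
\]
% which equals B whenever B_L = B_R = B, i.e. blur constancy at every
% disparity, as observed.
```

The mixture-variance identity is the only ingredient on the linear side; the final line shows why the multiplicative rule delivers constancy, since the geometric mean of two equal blurs is independent of disparity.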