911 results for Low vision
Abstract:
OBJECTIVE: To assess and improve the accuracy of lay screeners, compared with vision professionals, in detecting visual impairment in secondary schoolchildren in rural China. METHODS: After brief training, 32 teachers and a team of vision professionals independently measured vision in 1892 children in Xichang. The children also underwent vision measurement by health technicians in a concurrent government screening program. RESULTS: Of 32 teachers, 28 (87.5%) believed that teacher screening was worthwhile. Sensitivity (93.5%) and specificity (91.2%) of teachers in detecting uncorrected visual acuity of 20/40 or worse were better than for presenting visual acuity (sensitivity, 85.2%; specificity, 84.8%). Failure of teachers to identify children owning but not wearing glasses, and teacher bias toward better vision in children wearing glasses, explain the worse results for presenting vision. Wearing glasses was the student factor most strongly predictive of inaccurate teacher screening (P < .001). The sensitivity and specificity of the government screening program in detecting low presenting visual acuity were 86.7% and 28.7%, respectively. CONCLUSIONS: Teacher vision screening after brief training can achieve accurate results in this setting, and there is support among teachers for screening. Screening of uncorrected rather than presenting visual acuity is recommended in settings with a high prevalence of both corrected and uncorrected refractive error. The low specificity of the government program renders it ineffective.
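The reported sensitivity and specificity follow directly from a 2x2 screening confusion matrix. A minimal sketch, with illustrative counts chosen only to be consistent with the teachers' 93.5%/91.2% over 1892 children (these are not the study's raw data):

```python
def screening_metrics(tp, fn, fp, tn):
    """Sensitivity and specificity from a 2x2 screening confusion matrix."""
    sensitivity = tp / (tp + fn)   # fraction of truly impaired children flagged
    specificity = tn / (tn + fp)   # fraction of normally sighted children passed
    return sensitivity, specificity

# Illustrative counts only (chosen to match the reported rates and 1892 total):
sens, spec = screening_metrics(tp=187, fn=13, fp=149, tn=1543)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
```

The same arithmetic explains the government program's problem: with specificity of 28.7%, roughly seven in ten unimpaired children are falsely referred, which is what makes that screening ineffective in practice.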
Abstract:
Doctoral thesis, Electronics and Computer Engineering, Faculdade de Ciência e Tecnologia, Universidade do Algarve, 2007
Abstract:
Attention is usually modelled by sequential fixation of peaks in saliency maps. Such maps code local conspicuity: complexity, colour and texture. These features bear no relation to entire objects unless disparity and optical flow are also considered, which often segregate entire objects from their background. Recently we developed a model of local gist vision: which types of objects are roughly where in a scene. The model addresses man-made objects, which are dominated by a small shape repertoire: squares, rectangles, trapeziums, triangles, circles and ellipses. Exploiting only local colour contrast, the model can detect these shapes with a small hierarchy of cell layers devoted to low- and mid-level geometry. The model has been tested successfully on video sequences containing traffic signs and other scenes, and partial occlusions were not problematic.
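The "sequential fixation of peaks" baseline mentioned at the start can be sketched as winner-take-all selection with inhibition of return (the classic Itti-Koch-style scanpath, not the authors' gist model):

```python
def fixations(saliency, n_fix, inhibit_radius=1):
    """Sequentially fixate the peaks of a 2-D saliency map (list of lists),
    suppressing a neighbourhood around each winner (inhibition of return)."""
    sal = [row[:] for row in saliency]          # work on a copy
    h, w = len(sal), len(sal[0])
    order = []
    for _ in range(n_fix):
        # winner-take-all: locate the current global maximum
        y, x = max(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: sal[p[0]][p[1]])
        order.append((y, x))
        # inhibition of return: zero out the winner's neighbourhood
        for i in range(max(0, y - inhibit_radius), min(h, y + inhibit_radius + 1)):
            for j in range(max(0, x - inhibit_radius), min(w, x + inhibit_radius + 1)):
                sal[i][j] = 0.0
    return order

sal = [[0.1, 0.9, 0.1],
       [0.1, 0.2, 0.1],
       [0.8, 0.1, 0.1]]
scanpath = fixations(sal, 2)   # visits the two conspicuous locations in order
```

Note how nothing in this loop knows about objects; that gap between local conspicuity and object-level gist is exactly what the abstract's shape-detection hierarchy addresses.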
Abstract:
Ultrasonic, infrared, laser and other sensors are being applied in robotics. Although combinations of these have allowed robots to navigate, they are only suited for specific scenarios, depending on their limitations. Recent advances in computer vision are turning cameras into useful low-cost sensors that can operate in most types of environments. Cameras enable robots to detect obstacles, recognize objects, obtain visual odometry, detect and recognize people and gestures, among other possibilities. In this paper we present a completely biologically inspired vision system for robot navigation. It comprises stereo vision for obstacle detection, and object recognition for landmark-based navigation. We employ a novel keypoint descriptor which codes responses of cortical complex cells. We also present a biologically inspired saliency component, based on disparity and colour.
Abstract:
in RoboCup 2007: Robot Soccer World Cup XI
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization, landmark detection and tracking, and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is high enough for them to be used as feedback measures in a velocity-based low-level controller. The paper details the localization algorithm, presents some graphical results, and reports the accuracy of the system.
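The link between drift-free localization and a velocity-based controller can be illustrated by finite-differencing successive pose estimates at the reported 12.5 Hz update rate (a generic sketch with made-up numbers, not the paper's estimator):

```python
SAMPLE_RATE_HZ = 12.5  # localization update rate reported for the system

def velocity_from_poses(positions, rate_hz=SAMPLE_RATE_HZ):
    """Finite-difference velocity estimates from successive 3-D positions.
    This is usable as controller feedback only because the positions are
    drift-free; differencing a drifting estimate would bias the velocity."""
    dt = 1.0 / rate_hz
    return [tuple((b - a) / dt for a, b in zip(p0, p1))
            for p0, p1 in zip(positions, positions[1:])]

# Vehicle advancing 0.04 m per frame along x at 12.5 Hz -> 0.5 m/s
vels = velocity_from_poses([(0.0, 0.0, 0.0), (0.04, 0.0, 0.0), (0.08, 0.0, 0.0)])
```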
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of: (i) an image mosaicking module, which provides velocity estimates; and (ii) an extended Kalman filter based on the hydrodynamic equations of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it can deal with the visual occlusions that usually appear when the sea bottom does not offer enough visual features to solve the correspondence problem in a certain section of the trajectory.
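The occlusion-handling idea, coasting on the motion model whenever the mosaicking module yields no velocity measurement, can be sketched with a scalar Kalman filter (a constant-velocity process model stands in here for the paper's identified hydrodynamic model; all values are illustrative):

```python
def kalman_velocity(measurements, q=0.01, r=0.25, v0=0.0, p0=1.0):
    """1-D Kalman filter over surge velocity. A measurement of None models a
    visual occlusion: the filter then skips the update and only predicts,
    coasting on the process model until mosaicking recovers."""
    v, p, out = v0, p0, []
    for z in measurements:
        p += q                      # predict: variance grows by process noise
        if z is not None:           # update only when mosaicking yields a velocity
            k = p / (p + r)         # Kalman gain
            v += k * (z - v)
            p *= (1 - k)
        out.append(v)
    return out

# Third sample is occluded; the estimate is held by the model, then refined
est = kalman_velocity([0.5, 0.5, None, 0.5])
```

This is the essential mechanism: the filter's state remains defined through feature-poor stretches, at the cost of growing uncertainty until visual measurements resume.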
Abstract:
It is well known that image processing requires a huge amount of computation, mainly at the low-level stage, where algorithms operate on a large number of pixels. One way to estimate motion is to detect correspondences between two images. For normalised correlation criteria, previous experiments have shown that the result is not altered in the presence of non-uniform illumination. Hardware for motion estimation, however, has usually been limited to simple correlation criteria. The main goal of this paper is to propose a VLSI architecture for motion estimation using a matching criterion more complex than the Sum of Absolute Differences (SAD). Today's hardware devices provide many facilities for the integration of increasingly complex designs, as well as the possibility of easy communication with general-purpose processors.
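The trade-off the abstract describes, cheap SAD versus illumination-invariant normalised correlation, is easy to demonstrate on two flattened patches (a generic sketch; ZNCC is one common normalised correlation criterion, not necessarily the exact one implemented in the paper's architecture):

```python
def sad(a, b):
    """Sum of Absolute Differences: cheap in hardware, but sensitive to
    illumination changes between the two patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def zncc(a, b):
    """Zero-mean Normalised Cross-Correlation: invariant to gain/offset
    illumination changes, at a much higher arithmetic cost (means,
    products, a square root and a division)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

patch = [10, 20, 30, 40]
brighter = [2 * v + 5 for v in patch]   # same patch under a gain/offset change
# SAD grows large under the lighting change; ZNCC stays at its maximum of 1.0
```

The extra multipliers, square root and divider in ZNCC are precisely what makes a dedicated VLSI datapath attractive compared with the adder trees that suffice for SAD.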
Abstract:
Urban surveillance footage can be of poor quality, partly due to low-quality cameras and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks very simple change detection is adequate, but sometimes a more detailed change-detection mask is desirable, e.g. for accurately tracking identity when faced with multiple interacting individuals, and in pose-based behaviour recognition. We present a novel technique for enhancing a low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
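The general idea of cleaning a noisy change mask with an MRF can be sketched with iterated conditional modes under an Ising smoothness prior (a generic stand-in for illustration; the paper's image combing estimator is a different, more elaborate construction):

```python
def icm_smooth(mask, beta=1.0, iters=3):
    """Clean a binary change-detection mask with a simple MRF: each pixel's
    label is chosen to agree with its initial observation (unary term) and
    with its 4-neighbours (Ising pairwise term), via iterated conditional
    modes (ICM)."""
    h, w = len(mask), len(mask[0])
    obs = [row[:] for row in mask]   # fixed observations
    lab = [row[:] for row in mask]   # labels being refined
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = lab[y][x], float("inf")
                for l in (0, 1):
                    e = 0.0 if l == obs[y][x] else 1.0           # data term
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and lab[ny][nx] != l:
                            e += beta                             # smoothness term
                    if e < best_e:
                        best, best_e = l, e
                lab[y][x] = best
    return lab

noisy = [[1, 1, 1],
         [1, 0, 1],   # isolated hole inside a detected region
         [1, 1, 1]]
# The hole is filled: flipping it costs 1 (data) but saves 4*beta (smoothness)
```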
Abstract:
This article describes an application of computers to a consumer-based production engineering environment. Particular consideration is given to the utilisation of low-cost computer systems for the visual inspection of components on a production line in real time. The process of installation is discussed, from identifying the need for artificial vision and justifying the cost, through to choosing a particular system and designing the physical and program structure.
Abstract:
There is a range of studies in the low-carbon arena which use various ‘futures’-based techniques as ways of exploring uncertainties. These techniques range from ‘scenarios’ and ‘roadmaps’ through ‘transitions’ and ‘pathways’ to ‘vision’-based techniques. The overall aim of this paper is to compare and contrast these techniques and to develop a simple working typology, with the further objective of identifying the implications of this analysis for RETROFIT 2050. Using recent examples of city-based and energy-based studies throughout, the paper finds that the distinctions between these techniques have often been blurred in the low-carbon field. Visions, for example, have been used in both transition theory and futures/Foresight methods, and scenarios have been used in transition-based studies as well as futures/Foresight studies. Moreover, Foresight techniques that capture expert knowledge and map existing knowledge into a set of scenarios and roadmaps, which can in turn inform the development of transitions and pathways, can help overcome ‘disconnections’ between the social and technical lenses through which such future trajectories are mapped, and can also promote a strong ‘co-evolutionary’ content.
Abstract:
Background. Current models of concomitant, intermittent strabismus, heterophoria, and convergence and accommodation anomalies are either theoretically complex or incomplete. We propose an alternative, more practical way to conceptualize clinical patterns. Methods. In each of three hypothetical scenarios (normal; high AC/A and low CA/C ratios; low AC/A and high CA/C ratios) there can be a disparity-biased or blur-biased “style”, despite identical ratios. We calculated a disparity bias index (DBI) to reflect these biases. We suggest how clinical patterns fit these scenarios and provide early objective data from small illustrative clinical groups. Results. Normal adults and children showed disparity bias (adult DBI 0.43 (95% CI 0.36-0.50); child DBI 0.20 (95% CI 0.07-0.31); p = 0.001). Accommodative esotropes showed less disparity bias (DBI 0.03). In the high-AC/A, low-CA/C scenario, early presbyopes had a mean DBI of 0.17 (95% CI 0.06-0.28), compared with a DBI of -0.31 in convergence-excess esotropes. In the low-AC/A, high-CA/C scenario, near exotropes had a mean DBI of 0.27, while we predict that non-strabismic, non-amblyopic hyperopes with good vision without spectacles will show lower DBIs. Disparity bias ranged between 1.25 and -1.67. Conclusions. Establishing disparity or blur bias, together with knowing whether convergence to target demand exceeds accommodation or vice versa, explains clinical patterns more effectively than AC/A and CA/C ratios alone. Excessive bias, or inflexibility in near-cue use, increases the risk of clinical problems. We suggest clinicians look carefully at the details of accommodation and convergence changes induced by lenses, dissociation and prisms, and use these to plan treatment in relation to the model.
Abstract:
Sparse coding aims to find a more compact representation based on a set of dictionary atoms. A well-known technique addressing 2D sparsity is the low-rank representation (LRR). However, in many computer vision applications data often originate from a manifold equipped with some Riemannian geometry; in this case the existing LRR becomes inappropriate for modelling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to applications. In this paper, we generalize the LRR from Euclidean space to a specific Riemannian manifold: the manifold of symmetric positive definite (SPD) matrices. Experiments on several computer vision datasets showcase the model's noise robustness and its superior performance on classification and segmentation compared with state-of-the-art approaches.
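For reference, the Euclidean LRR model being generalised is the standard nuclear-norm formulation from the LRR literature (the manifold variant replaces the linear self-expression constraint with one that respects SPD geometry):

```latex
\min_{Z,E}\;\; \|Z\|_{*} + \lambda \, \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```

Here $X$ is the data matrix whose columns serve as dictionary atoms (self-expressiveness), the nuclear norm $\|Z\|_{*}$ promotes a low-rank coefficient matrix, and the $\ell_{2,1}$ norm on $E$ models sample-specific corruptions, which is the source of the noise robustness mentioned above.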
Abstract:
The aim of this paper is to present the current development status of a low-cost system for surface reconstruction with structured light. The acquisition system is composed of a single off-the-shelf digital camera and a pattern projector. A pattern codification strategy was developed to allow automatic pattern recognition, and a calibration methodology ensures the determination of the direction vector of each pattern. Experiments indicated that a depth accuracy of 0.5 mm can be achieved for typical applications.
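A depth figure like 0.5 mm is governed by the triangulation geometry. Modelling the camera/projector pair like a rectified stereo rig (a simplification; the parameter values below are hypothetical, not the paper's calibration):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified camera/projector pair: Z = f*b/d."""
    return f_px * baseline_m / disparity_px

def depth_resolution(f_px, baseline_m, disparity_px):
    """Depth change caused by a one-pixel disparity error: dZ ~ Z^2 / (f*b).
    Shows why sub-millimetre accuracy needs sufficient baseline and focal
    length at the working distance."""
    z = depth_from_disparity(f_px, baseline_m, disparity_px)
    return z * z / (f_px * baseline_m)

# Hypothetical numbers: 2000 px focal length, 0.2 m baseline, 800 px disparity
z = depth_from_disparity(2000, 0.2, 800)    # 0.5 m working distance
dz = depth_resolution(2000, 0.2, 800)       # ~0.6 mm per pixel of disparity error
```

Under these assumed numbers a one-pixel coding/localisation error costs about 0.6 mm of depth, i.e. the quoted 0.5 mm accuracy requires sub-pixel pattern localisation, which is what the codification and calibration steps provide.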
Abstract:
The integration of CMOS cameras with embedded processors and wireless communication devices has enabled the development of distributed wireless vision systems. Wireless Vision Sensor Networks (WVSNs), which consist of wirelessly connected embedded systems with vision and sensing capabilities, enable a wide variety of application areas that have not been possible to realize with wall-powered, wired vision systems or with scalar-data-based wireless sensor networks. In this paper, the design of a middleware for a wireless vision sensor node is presented for the realization of WVSNs. The implemented node is tested with a simple vision application in order to study and analyse its capabilities and to determine the challenges of distributed vision applications over a wireless network of low-power embedded devices. The results highlight the practical concerns in developing efficient image processing and communication solutions for WVSNs and emphasize the need for cross-layer solutions that unify these two so-far-independent research areas.