7 results for Visual and acoustic signaling
at Universidad de Alicante
Abstract:
Purpose To evaluate visual, optical, and quality of life (QoL) outcomes and their intercorrelations after bilateral implantation of posterior chamber phakic intraocular lenses. Methods Twenty eyes of 10 patients with moderate to high myopia that underwent PRL implantation (Phakic Refractive Lens, Carl Zeiss Meditec AG) were examined. Refraction, visual acuity, photopic and low mesopic contrast sensitivity (CS) with and without glare, ocular aberrations, as well as QoL outcomes (National Eye Institute Refractive Error Quality of Life Instrument-42, NEI RQL-42) were evaluated at 12 months postoperatively. Results Significant improvements in uncorrected (UDVA) and best-corrected distance (CDVA) visual acuities were found postoperatively (p < 0.01), with a significant reduction in spherical equivalent (p < 0.01). Low mesopic CS without glare was significantly better than measurements with glare for 1.5, 3, and 6 cycles/degree (p < 0.01). No significant correlations of higher-order root mean square (RMS) with CDVA (r = −0.26, p = 0.27) or CS (r ≤ 0.45, p ≥ 0.05) were found. Postoperative binocular photopic CS for 12 cycles/degree and 18 cycles/degree correlated significantly with several RQL-42 scales. The glare index correlated significantly with CS measures and scotopic pupil size (r = −0.551, p = 0.04), but not with higher-order RMS (r = −0.02, p = 0.94). Postoperative higher-order RMS, primary coma, and spherical aberration were significantly higher for a 5-mm pupil diameter (p < 0.01) compared with controls. Conclusions Correction of moderate to high myopia by means of PRL implantation had a positive impact on CS and QoL. The aberrometric increase induced by the surgery does not seem to limit CS and QoL. However, perception of glare is still a relevant disturbance in some cases, possibly related to the limitation of the optical zone of the PRL.
Abstract:
In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated with the semantic category of the scene, as well as with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for different problems, such as multimodal place classification and object recognition, 3D reconstruction, or point cloud data compression.
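The annotation scheme described above pairs a scene-level category with per-object presence flags. As a minimal sketch of how such an annotation could be represented in code (the field and object names here are illustrative assumptions, not the dataset's actual toolset API):

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical ViDRILO-style frame annotation: each frame carries a scene
# category plus presence/absence flags for a list of predefined objects.
@dataclass
class FrameAnnotation:
    sequence: int                       # which of the five sequences
    frame_id: int                       # frame index within the sequence
    scene_category: str                 # semantic room category, e.g. "office"
    objects_present: Dict[str, bool] = field(default_factory=dict)

    def present(self):
        """Return the sorted names of predefined objects marked present."""
        return sorted(name for name, flag in self.objects_present.items() if flag)

frame = FrameAnnotation(
    sequence=1, frame_id=42, scene_category="office",
    objects_present={"screen": True, "extinguisher": False, "chair": True},
)
print(frame.present())  # ['chair', 'screen']
```

This binary presence scheme is what allows the same frames to serve both place classification (the scene category) and object recognition (the per-object flags) benchmarks.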
Abstract:
Abstract of the poster presented at the 6th EOS Topical Meeting on Visual and Physiological Optics (EMVPO 2012), Dublin, 20-22 August 2012.
Abstract:
Poster presented at the 6th EOS Meeting on Visual and Physiological Optics (EMVPO 2012), Dublin, 20-22 August 2012.
Abstract:
PURPOSE: To evaluate and compare the visual, refractive, contrast sensitivity, and aberrometric outcomes with a diffractive bifocal and trifocal intraocular lens (IOL) of the same material and haptic design. METHODS: Sixty eyes of 30 patients undergoing bilateral cataract surgery were enrolled and randomly assigned to one of two groups: the bifocal group, including 30 eyes implanted with the bifocal diffractive IOL AT LISA 801 (Carl Zeiss Meditec, Jena, Germany), and the trifocal group, including eyes implanted with the trifocal diffractive IOL AT LISA tri 839 MP (Carl Zeiss Meditec). Analysis of visual and refractive outcomes, contrast sensitivity, ocular aberrations (OPD-Scan III; Nidek, Inc., Gamagori, Japan), and the defocus curve was performed during a 3-month follow-up period. RESULTS: No statistically significant differences between groups were found in 3-month postoperative uncorrected and corrected distance visual acuity (P > .21). However, uncorrected, corrected, and distance-corrected near and intermediate visual acuities were significantly better in the trifocal group (P < .01). No significant differences between groups were found in postoperative spherical equivalent (P = .22). In the binocular defocus curve, visual acuity was significantly better for defocus of -0.50 to -1.50 diopters in the trifocal group (P < .04) and -3.50 to -4.00 diopters in the bifocal group (P < .03). No statistically significant differences were found between groups in most of the postoperative corneal, internal, and ocular aberrations (P > .31), or in contrast sensitivity for most frequencies analyzed (P > .15). CONCLUSIONS: Trifocal diffractive IOLs provide significantly better intermediate vision than bifocal IOLs, with equivalent postoperative levels of visual and ocular optical quality.
Abstract:
Purpose: To evaluate the influence of the difference between preoperative corneal and refractive astigmatism [ocular residual astigmatism (ORA)] on outcomes obtained after laser in situ keratomileusis (LASIK) surgery for correction of myopic astigmatism using solid-state laser technology. Methods: One hundred one consecutive eyes of 55 patients with myopia or myopic astigmatism undergoing LASIK surgery using the Pulzar Z1 solid-state laser (CustomVis Laser Pty Ltd, currently CV Laser) were included. Visual and refractive changes at 6 months postoperatively, as well as changes in ORA, anterior corneal astigmatism, and posterior corneal astigmatism (PCA), were analyzed. Results: Postoperatively, uncorrected distance visual acuity improved significantly (P < 0.01). Likewise, refractive cylinder magnitude and spherical equivalent were reduced significantly (P < 0.01). In contrast, no significant changes were observed in ORA magnitude (P = 0.81) or anterior corneal astigmatism (P = 0.12). The mean overall efficacy and safety indices were 0.96 and 1.01, respectively. These indices were not correlated with preoperative ORA (r = −0.15, P = 0.15). Furthermore, a significant correlation was found between ORA and PCA postoperatively (r = 0.81, P < 0.01), but not preoperatively (r = 0.12, P = 0.25). Likewise, a significant correlation of ORA with manifest refraction was only found postoperatively (r = −0.38, P < 0.01). Conclusions: The magnitude of ORA does not seem to be a predictive factor of the efficacy and safety of myopic LASIK using a solid-state laser platform. The higher relevance of PCA after surgery in some cases may explain the presence of unexpected astigmatic residual refractive errors.
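The ORA defined above (the difference between refractive and corneal astigmatism) is conventionally computed by vector analysis rather than simple subtraction of cylinder magnitudes, since astigmatism has both magnitude and axis. A minimal sketch using Thibos power-vector components (this is standard astigmatism arithmetic, not code from the study itself):

```python
import math

def astig_to_power_vector(cyl, axis_deg):
    """Convert cylinder magnitude (D) and axis (degrees) to Thibos
    power-vector components (J0, J45)."""
    a = math.radians(2 * axis_deg)
    return (-cyl / 2) * math.cos(a), (-cyl / 2) * math.sin(a)

def ora_magnitude(refr_cyl, refr_axis, corneal_cyl, corneal_axis):
    """ORA magnitude (D) as the vector difference between refractive
    astigmatism (expressed at the corneal plane) and corneal astigmatism."""
    j0r, j45r = astig_to_power_vector(refr_cyl, refr_axis)
    j0c, j45c = astig_to_power_vector(corneal_cyl, corneal_axis)
    # Cylinder magnitude recovered from power-vector components
    return 2 * math.hypot(j0r - j0c, j45r - j45c)

# Identical refractive and corneal astigmatism cancel: ORA = 0
print(round(ora_magnitude(-1.0, 90, -1.0, 90), 2))   # 0.0
# Equal magnitudes on perpendicular axes add vectorially: ORA = 2.0 D
print(round(ora_magnitude(-1.0, 90, -1.0, 180), 2))  # 2.0
```

A nonzero ORA indicates astigmatism of non-corneal (e.g. lenticular) origin, which is why a large preoperative ORA has been proposed, though not confirmed here, as a predictor of residual astigmatic error after corneal laser surgery.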
Abstract:
Nowadays, the new generation of computers provides performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential prerequisite for moving through that environment. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars, and contact sensors have been typical in mobile robotic architectures. Color cameras, however, are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but much work remains to be done to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but robots nowadays have access to a variety of powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is obtained, the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This kind of self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes in a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, so many points belong to planar surfaces. Our method achieves good compression results in these man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
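The plane-detection idea behind the compression method can be illustrated with a common technique for finding dominant planes in a point cloud: RANSAC. The sketch below is a minimal pure-Python illustration under our own assumptions (not the thesis's implementation, which targets GPUs): once a dominant plane is found, its many member points can be replaced by four plane parameters plus a boundary, which is where the compression comes from.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points, or None if degenerate."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-9:          # points were (nearly) collinear
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Detect the dominant plane; return ((n, d), inlier_indices)."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[k] * p[k] for k in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

# Synthetic cloud: a 10x10 grid on the z = 0 plane plus two off-plane points
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, 1.0), (0.2, 0.8, -0.7)]
plane, inliers = ransac_plane(pts)
print(len(inliers))  # 100: the grid points, now representable by one plane
```

In a man-made indoor scene, walls, floors, and tabletops each yield such a dominant plane, so repeating the detection on the remaining points quickly accounts for most of the cloud.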
Finally, we have also demonstrated the value of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.