226 results for Monocular
Abstract:
Autonomous aerial refueling is a key enabling technology for both manned and unmanned aircraft where extended flight duration or range are required. The results presented within this paper offer one potential vision-based sensing solution, together with a unique test environment. A hierarchical visual tracking algorithm based on direct methods is proposed and developed for the purposes of tracking a drogue during the capture stage of autonomous aerial refueling, and of estimating its 3D position. Intended to be applied in real time to a video stream from a single monocular camera mounted on the receiver aircraft, the algorithm is shown to be highly robust, and capable of tracking large, rapid drogue motions within the frame of reference. The proposed strategy has been tested using a complex robotic testbed and with actual flight hardware consisting of a full size probe and drogue. Results show that the vision tracking algorithm can detect and track the drogue at real-time frame rates of more than thirty frames per second, obtaining a robust position estimation even with strong motions and multiple occlusions of the drogue.
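The abstract does not detail the tracker, but the "direct methods" it names share one core idea: align image regions by minimizing photometric error directly, rather than matching extracted features. A minimal sketch of that principle, as a translation-only template tracker (illustrative only; not the paper's hierarchical algorithm):

```python
import numpy as np

def track_translation(prev, curr, box, radius=8):
    """Toy direct-method tracker: slide the template taken from the
    previous frame over a search window in the current frame and keep
    the offset that minimizes the summed squared photometric error.
    (Illustrative sketch, not the paper's hierarchical algorithm.)"""
    x, y, w, h = box
    template = prev[y:y+h, x:x+w].astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            patch = curr[y+dy:y+dy+h, x+dx:x+dx+w].astype(float)
            if patch.shape != template.shape:
                continue  # window fell off the image
            err = np.sum((patch - template) ** 2)  # photometric error
            if err < best_err:
                best_err, best = err, (dx, dy)
    dx, dy = best
    return (x + dx, y + dy, w, h)
```

A hierarchical (coarse-to-fine) version would run this on a downsampled image pyramid first, which is what makes large, rapid motions trackable in real time.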
Abstract:
This work develops a novel Cross-Entropy (CE) optimization-based fuzzy controller for an Unmanned Aerial Monocular Vision-IMU System (UAMVIS) to solve the see-and-avoid problem using its accurate autonomous localization information. The function of this fuzzy controller is to regulate the heading of the system to avoid an obstacle, e.g. a wall. In the Matlab Simulink-based training stages, the Scaling Factor (SF) is first adjusted according to the specified task, and then the Membership Function (MF) is tuned based on the optimized Scaling Factor to further improve collision avoidance performance. After obtaining the optimal SF and MF, the rule base was reduced by 64% (from 125 rules to 45 rules), and a large number of real flight tests with a quadcopter were carried out. The experimental results show that this approach precisely navigates the system to avoid the obstacle. To the best of our knowledge, this is the first work to present an optimized fuzzy controller for a UAMVIS using the Cross-Entropy method for Scaling Factor and Membership Function optimization.
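The Cross-Entropy method referenced above iteratively samples candidate parameters from a distribution, scores them, and refits the distribution to the best ("elite") fraction. A minimal sketch for tuning a single scaling factor, with a hypothetical cost function standing in for the authors' Simulink evaluation:

```python
import numpy as np

def cross_entropy_optimize(cost, mu=1.0, sigma=2.0, n=50,
                           elite_frac=0.2, iters=30, seed=0):
    """Cross-Entropy method: sample candidates from a Gaussian, keep
    the elite fraction with lowest cost, refit the Gaussian to the
    elites, and repeat until the distribution concentrates."""
    rng = np.random.default_rng(seed)
    n_elite = max(1, int(n * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, n)
        elites = samples[np.argsort([cost(s) for s in samples])[:n_elite]]
        mu, sigma = elites.mean(), elites.std() + 1e-6  # avoid collapse to 0
    return mu

# Hypothetical cost: squared distance of the candidate scaling factor
# from an (assumed) ideal value; the real cost would come from simulation.
best_sf = cross_entropy_optimize(lambda sf: (sf - 3.7) ** 2)
```

The same loop extends to membership-function tuning by sampling a vector of parameters instead of a scalar.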
Abstract:
Mono-camera tracking systems have proved their capabilities for moving object trajectory analysis and scene monitoring, but their robustness and semantic possibilities are strongly limited by their local and monocular nature and are often insufficient for realistic surveillance applications. This thesis aims to extend the possibilities of moving object tracking systems to a higher level of robustness and scene understanding. The proposed extension comprises two separate directions.
The first one is local, since it aims at enriching the inferred positions of the moving objects within the area of the monitored scene directly covered by the cameras of the system; this task is achieved through the development of a multi-camera system for robust 3D tracking, able to provide 3D tracking information on multiple simultaneous moving objects from the observations reported by a set of calibrated cameras with semi-overlapping fields of view. The second extension is global, as it aims at providing the local observations performed within the field of view of one camera with a global context relating them to a much larger scene; to this end, an automatic camera positioning system relying only on observed object trajectories and a scene map is designed. The two lines of research in this thesis are addressed using Bayesian estimation as a general unifying framework. Its suitability for these two applications is justified by the flexibility and versatility of that stochastic framework, which allows multiple sources of information about the parameters of interest to be combined in a natural and elegant way, while rigorously treating the uncertainty associated with those sources through the inclusion of purpose-designed observation models. In addition, it opens up multiple possibilities for the creation of different numerical methods for achieving satisfactory and efficient practical solutions to each application addressed. The proposed multi-camera 3D tracking method is specifically designed to work on schematic descriptions of the observations performed by each camera of the system: this choice allows the use of unspecific off-the-shelf 2D detection and/or tracking subsystems running independently at each sensor, and makes the proposal suitable for real surveillance networks with moderate computational and transmission capabilities.
The robust combination of such noisy, incomplete and possibly unreliable schematic descriptors relies on a Bayesian association method, based on geometry and color, whose results allow the targets in the scene to be tracked with a particle filter. The main features of the proposal are, first, remarkable accuracy in terms of target 3D positioning, and second, a great ability to recover from tracking losses due to insufficient input data. The proposed system for vision-based camera self-positioning uses the observations of moving objects and a schematic map of the passable areas of the environment to infer the absolute sensor position. To this end, a new Bayesian framework combining trajectory observations and map-induced dynamic models for moving objects is designed, representing an approach to camera positioning never before addressed in the literature. This task is divided into two different sub-tasks, ambiguity analysis of the specific setting and approximate position estimation, on the one hand, and position refinement, on the other, since each requires the design of specific sampling algorithms to correctly exploit the discriminative features of the developed framework. The system, designed for camera positioning and demonstrated in urban traffic environments, can also be applied to different environments and to sensors of other modalities after certain required adaptations.
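At its core, the particle-filter tracking stage described above maintains a weighted set of position hypotheses, reweighting them against each fused observation. A minimal bootstrap particle filter for one target (reduced to 1D for brevity; a generic sketch, not the thesis implementation):

```python
import numpy as np

def particle_filter_step(particles, weights, observation, rng,
                         motion_noise=0.1, obs_noise=0.5):
    """One bootstrap-filter cycle: predict with a random-walk motion
    model, reweight by a Gaussian observation likelihood, resample."""
    # Predict: propagate each hypothesis through the motion model.
    particles = particles + rng.normal(0, motion_noise, particles.shape)
    # Update: weight each hypothesis by the likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample to concentrate particles on likely states.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
particles = rng.uniform(-10, 10, 500)     # initially uninformed
weights = np.full(500, 1 / 500)
for z in [2.0, 2.1, 2.2, 2.3]:            # noisy observations of a slow target
    particles, weights = particle_filter_step(particles, weights, z, rng)
```

In the multi-camera setting, the observation likelihood would come from the geometry-and-color association step rather than a plain Gaussian.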
Abstract:
An automatic machine learning strategy for computing the 3D structure of a scene from a single monocular image query using Local Binary Patterns is presented. The 3D structure is inferred through a training set composed of a repository of color and depth images, under the assumption that images with similar structure have similar depth maps. Local Binary Patterns are used to characterize the structure of the color images. The depth maps of those color images whose structure is similar to the query image are adaptively combined and filtered to estimate the final depth map. Using public databases, promising results have been obtained, outperforming other state-of-the-art algorithms at a computational cost similar to the most efficient 2D-to-3D algorithms.
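The Local Binary Pattern descriptor used above thresholds each pixel's 8 neighbors against the center pixel and packs the results into an 8-bit code; a histogram of these codes summarizes image structure and lets similar-structure images be retrieved. A minimal sketch of the descriptor (the paper's retrieval and depth-fusion pipeline is not reproduced):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbor LBP: threshold each neighbor against the
    center pixel, pack the 8 bits into a code, and return the
    normalized 256-bin histogram as a structure descriptor."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]  # center pixels (image border skipped)
    # 8 neighbor offsets, each producing a shifted view aligned with c.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint16)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint16) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Retrieval then reduces to comparing histograms (e.g. with an L1 or chi-square distance) between the query and the repository images.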
Abstract:
The main objective of this dissertation is to provide real-time, accurate and robust monocular or stereo vision-based solutions for an Unmanned Aerial Vehicle (UAV) to achieve autonomy in various types of UAV applications, especially in GPS-denied, dynamic, cluttered environments.
This dissertation mainly consists of three UAV research topics based on computer vision techniques: (I) visual tracking, which supplies effective solutions to visually locate static or moving objects of interest over time during UAV flight, using an online adaptive approach and a multiple-resolution strategy, thereby overcoming the problems generated by challenging situations such as significant appearance change, varying ambient illumination, cluttered tracking backgrounds, partial or full object occlusion, rapid pose variation and onboard mechanical vibration. The solutions have been utilized in autonomous landing, offshore floating platform inspection and midair aircraft tracking for sense-and-avoid; (II) visual odometry, which provides an efficient solution for the UAV to estimate its 6 Degree-of-Freedom (6D) pose using only the input of a stereo camera onboard the UAV. An efficient Semi-Global Block Matching (SGBM) method based on a coarse-to-fine strategy has been implemented for fast depth map estimation. In addition, the solution effectively takes advantage of both 2D and 3D information to estimate the 6D pose, thereby overcoming the limitation of the fixed small baseline of the stereo camera. A robust volumetric occupancy mapping approach based on the Octomap framework has been utilized to reconstruct large-scale cluttered indoor and outdoor environments in 3D with reduced memory use and fewer temporally or spatially correlated measurement errors; (III) visual control, which offers practical control solutions to navigate the UAV using a Fuzzy Logic Controller (FLC) driven by the visual estimation. The Cross-Entropy Optimization (CEO) framework has been used to optimize the scaling factor and the membership function of the FLC. All the vision-based solutions in this dissertation have been tested in real flight tests.
The real image datasets recorded during these tests, or available from the public community, have been utilized to evaluate the performance of these vision-based solutions against ground truth. Additionally, the presented vision solutions have been compared with state-of-the-art visual algorithms. Real tests and evaluation results show that the provided vision-based solutions achieve accurate and robust real-time performance, or outperform those state-of-the-art algorithms. Vision-based estimation has thus played a critically important role in controlling a typical UAV to achieve autonomy in UAV applications.
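The Octomap-style volumetric mapping mentioned above fuses repeated, noisy range measurements per voxel using log-odds updates with clamping, which bounds memory per voxel and damps correlated errors. A minimal per-voxel sketch of the standard occupancy-grid math (generic technique, not the Octomap API):

```python
import math

def logodds(p):
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

def update_voxel(l, hit, p_hit=0.7, p_miss=0.4, l_min=-2.0, l_max=3.5):
    """Fuse one sensor reading into a voxel's log-odds occupancy.
    Clamping keeps the map updatable if the environment changes.
    (p_hit/p_miss and the clamp bounds are assumed example values.)"""
    l += logodds(p_hit) if hit else logodds(p_miss)
    return max(l_min, min(l_max, l))

def occupancy(l):
    """Recover the occupancy probability from log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0  # unknown voxel (p = 0.5)
for _ in range(5):   # five consecutive "hit" returns on this voxel
    l = update_voxel(l, hit=True)
```

In log-odds form, each update is a single addition, so fusing many measurements per voxel stays cheap regardless of how many readings have accumulated.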
Abstract:
The prevalent view of binocular rivalry holds that it is a competition between the two eyes mediated by reciprocal inhibition among monocular neurons. This view is largely due to the nature of conventional rivalry-inducing stimuli, which are pairs of dissimilar images with coherent patterns within each eye’s image. Is it the eye of origin or the coherency of patterns that determines perceptual alternations between coherent percepts in binocular rivalry? We break the coherency of conventional stimuli and replace them by complementary patchworks of intermingled rivalrous images. Can the brain unscramble the pieces of the patchwork arriving from different eyes to obtain coherent percepts? We find that pattern coherency in itself can drive perceptual alternations, and the patchworks are reassembled into coherent forms by most observers. This result is in agreement with recent neurophysiological and psychophysical evidence demonstrating that there is more to binocular rivalry than mere eye competition.
Abstract:
Human area V1 offers an excellent opportunity to study, using functional MRI, a range of properties in a specific cortical visual area, whose borders are defined objectively and convergently by retinotopic criteria. The retinotopy in V1 (also known as primary visual cortex, striate cortex, or Brodmann’s area 17) was defined in each subject by using both stationary and phase-encoded polar coordinate stimuli. Data from V1 and neighboring retinotopic areas were displayed on flattened cortical maps. In additional tests we revealed the paired cortical representations of the monocular “blind spot.” We also activated area V1 preferentially (relative to other extrastriate areas) by presenting radial gratings alternating between 6% and 100% contrast. Finally, we showed evidence for orientation selectivity in V1 by measuring transient functional MRI increases produced at the change in response to gratings of differing orientations. By systematically varying the orientations presented, we were able to measure the bandwidth of the orientation “transients” (45°).
Abstract:
The majority of neurons in the primary visual cortex of primates can be activated by stimulation of either eye; moreover, the monocular receptive fields of such neurons are located in about the same region of visual space. These well-known facts imply that binocular convergence in visual cortex can explain our cyclopean view of the world. To test the adequacy of this assumption, we examined how human subjects integrate binocular events in time. Light flashes presented synchronously to both eyes were compared to flashes presented alternately (asynchronously) to one eye and then the other. Subjects perceived very-low-frequency (2 Hz) asynchronous trains as equivalent to synchronous trains flashed at twice the frequency (the prediction based on binocular convergence). However, at higher frequencies of presentation (4-32 Hz), subjects perceived asynchronous and synchronous trains to be increasingly similar. Indeed, at the flicker-fusion frequency (approximately 50 Hz), the apparent difference between the two conditions was only 2%. We suggest that the explanation of these anomalous findings is that we parse visual input into sequential episodes.
Abstract:
PURPOSE: To compare disk halo size in response to a glare source in eyes with an aspheric apodized diffractive multifocal intraocular lens (IOL) or an aspheric monofocal IOL. SETTING: Rementeria Ophthalmological Clinic, Madrid, Spain. DESIGN: Prospective randomized masked study. METHODS: The halo radius was measured using a vision monitor (MonCv3) with low-luminance optotypes in eyes that had undergone cataract surgery and bilateral implantation of an Acrysof Restor SN6AD1 multifocal IOL or an Acrysof IQ monofocal IOL 6 to 9 months previously. The visual angle subtended by the disk halo radius was calculated in minutes of arc (arcmin). Patient complaints of halo disturbances were recorded. Monocular uncorrected distance visual acuity (UDVA) and corrected distance visual acuity (CDVA) were measured using high-contrast (96%) and low-contrast (10%) logMAR letter charts. RESULTS: The study comprised 39 eyes of 39 subjects (aged 70 to 80 years); 21 eyes had a multifocal IOL and 18 eyes a monofocal IOL. The mean halo radius was 35 arcmin larger in the multifocal IOL group than in the monofocal IOL group (P<.05). Greater halo effects were reported in the multifocal IOL group (P<.05). The mean monocular high-contrast and low-contrast UDVA did not vary significantly between groups, whereas the mean monocular high-contrast and low-contrast CDVA were significantly worse in the multifocal IOL group (0.12 logMAR and 0.13 logMAR, respectively) than in the monofocal IOL group (P<.01). A significant positive correlation was detected by multiple linear regression between the halo radius and low-contrast UDVA in the multifocal IOL group (r = 0.72, P<.001). CONCLUSIONS: The diffractive multifocal IOL gave rise to a larger disk halo, which was correlated with worse low-contrast UDVA.
Abstract:
PURPOSE: To compare visual outcomes, rotational stability, and centration in a randomized controlled trial of patients undergoing cataract surgery who were bilaterally implanted with two trifocal intraocular lenses (IOLs) with a similar optical zone but different haptic shapes. METHODS: Twenty-one patients (42 eyes) with cataract and less than 1.50 D of corneal astigmatism underwent implantation of a FineVision/MicroF IOL in one eye and a POD FineVision IOL in the contralateral eye (both PhysIOL, Liège, Belgium) at IOA Madrid Innova Ocular, Madrid, Spain. IOL allocation was random. Outcome measures, all evaluated 3 months postoperatively, included monocular and binocular uncorrected distance (UDVA), corrected distance (CDVA), distance-corrected intermediate (DCIVA), and near (DCNVA) visual acuity (at 80, 40, and 25 cm) under photopic conditions, refraction, IOL centration, haptic rotation, dysphotopsia, objective quality of vision and aberration quantification, patient satisfaction, and spectacle independence. RESULTS: Three months postoperatively, mean monocular UDVA, CDVA, DCIVA, and DCNVA (40 cm) under photopic conditions were 0.04 ± 0.07, 0.01 ± 0.04, 0.15 ± 0.11, and 0.16 ± 0.08 logMAR for the eyes implanted with the POD FineVision IOL and 0.03 ± 0.05, 0.01 ± 0.02, 0.17 ± 0.12, and 0.14 ± 0.08 logMAR for those receiving the FineVision/MicroF IOL. Moreover, the POD FineVision IOL showed similar centration (P > .05) and better rotational stability (P < .05) than the FineVision/MicroF IOL. Regarding halos, there was a minimal but statistically significant difference, with better results for the FineVision/MicroF IOL. Full spectacle independence was reported by all patients. CONCLUSIONS: This study revealed similar visual outcomes for both trifocal IOLs under test (POD FineVision and FineVision/MicroF). However, the POD FineVision IOL showed better rotational stability, as afforded by its design.
Resumo:
Purpose: To analyze the diagnostic criteria used in the scientific literature published in the past 25 years for accommodative and nonstrabismic binocular dysfunctions, and to explore whether epidemiological analysis of diagnostic validity has been used to propose which clinical criteria should serve diagnostic purposes. Methods: We carried out a systematic review of papers on accommodative and nonstrabismic binocular disorders published from 1986 to 2012, searching the MEDLINE, CINAHL, PsycINFO and FRANCIS databases. We included original articles about the diagnosis of these anomalies in any population. We identified 839 articles, of which 12 studies were included. The quality of the included articles was assessed using the QUADAS-2 tool. Results: The review shows a wide range of clinical signs and cut-off points between authors. Only 3 studies (all regarding accommodative anomalies) assessed the diagnostic accuracy of clinical signs. Their results suggest using accommodative amplitude and monocular accommodative facility for diagnosing accommodative insufficiency, and a high positive relative accommodation for accommodative excess. The remaining 9 articles did not analyze diagnostic accuracy, basing diagnosis on whatever criteria the authors considered appropriate. We also found differences between studies in how patients' symptomatology was considered. Three of the 12 studies analyzed validated a symptom survey used for convergence insufficiency. Conclusions: The scientific literature reveals differences between authors in the diagnostic criteria for accommodative and nonstrabismic binocular dysfunctions. Diagnostic accuracy studies provide only limited evidence, restricted to accommodative conditions. For binocular anomalies there is only evidence for a validated questionnaire for convergence insufficiency, with no diagnostic accuracy data.
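Diagnostic accuracy analyses of the kind the three accommodative studies performed typically reduce to sensitivity and specificity computed from a 2×2 table of a clinical sign against a reference diagnosis. A minimal sketch with hypothetical counts (the abstract reports no raw data):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and diagnostic odds ratio from a
    2x2 table (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)   # proportion of diseased correctly flagged
    specificity = tn / (tn + fp)   # proportion of healthy correctly cleared
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    return sensitivity, specificity, dor

# Hypothetical counts for one clinical sign vs. a reference diagnosis
sens, spec, dor = diagnostic_accuracy(tp=30, fp=10, fn=5, tn=55)
print(sens, spec, dor)  # 0.857..., 0.846..., 33.0
```

Cut-off points for continuous signs (e.g. accommodative amplitude) trade sensitivity against specificity, which is why the choice of criterion matters so much across studies.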
Resumo:
Background: The aim was to evaluate the visual performance achieved with a new multifocal hybrid contact lens and to compare it with that obtained with two other currently available multifocal soft contact lenses. Methods: This pilot prospective comparative study comprised a total of 16 presbyopic eyes of eight patients ranging in age from 43 to 58 years. All patients were fitted with three different models of multifocal contact lens: Duette multifocal (SynergEyes), Air Optix AQUA multifocal (Alcon) and Biofinity multifocal (CooperVision). Fittings were performed randomly in each patient according to a random number sequence, with a wash-out period of seven days between fittings. At two weeks post-fitting, visual acuity, photopic contrast sensitivity and ocular aberrometry were evaluated. Results: No statistically significant differences were found in distance and near visual acuity achieved with the three types of multifocal contact lens (p ≥ 0.05). Likewise, no significant differences between lenses were found in the monocular and binocular defocus curves (p ≥ 0.10). Concerning contrast sensitivity, better monocular contrast sensitivities at 6, 12 and 18 cycles per degree were found with the Duette and Air Optix multifocal lenses compared with the Biofinity (p = 0.02). Binocularly, differences between lenses were not significant (p ≥ 0.27). Furthermore, trefoil aberration was significantly higher with the Biofinity (p < 0.01) and Air Optix (p = 0.01) multifocal lenses compared with the Duette. Conclusions: The Duette multifocal hybrid contact lens seems to provide visual quality outcomes in presbyopic patients with low corneal astigmatism similar to those of other soft multifocal contact lenses. This preliminary result should be confirmed in studies with larger samples.
Resumo:
Objective: To assess the usefulness of microperimetry (MP) as an additional objective method for characterizing the fixation pattern in nystagmus. Design: Prospective study. Participants: Fifteen eyes of 8 subjects (age, 12–80 years) with nystagmus from the Lluís Alcanyís Foundation (University of Valencia, Spain) were included. Methods: All patients had a comprehensive ophthalmologic examination including a microperimetric examination (MAIA, CenterVue, Padova, Italy). The following microperimetric parameters were evaluated: average threshold (AT), macular integrity index (MI), fixating points within circles of 1° (P1) and 2° (P2) radius, bivariate contour ellipse area (BCEA) considering 63% and 95% of fixating points, and the horizontal and vertical axes of that ellipse. Results: In monocular conditions, fixation was classified as stable in 6 eyes, relatively unstable in 6 eyes, and unstable in 3 eyes. Statistically significant differences were found between the horizontal and vertical components of movement (p = 0.001), as well as in their ranges (p < 0.001). Intereye comparison showed differences in some subjects, but these were statistically significant only for the fixation coordinates X and Y (p < 0.001); no significant intereye differences were found in the microperimetric parameters. Between monocular and binocular conditions, statistically significant differences in the X and Y coordinates were found in all eyes but one (p < 0.02); no significant differences were found in the MP parameters between monocular and binocular conditions. Strong correlations of corrected distance visual acuity (CDVA) with AT (r = 0.812, p = 0.014), MI (r = –0.812, p = 0.014), P1 (r = 0.729, p = 0.002), the horizontal diameter of the BCEA (r = –0.700, p = 0.004), and the X range (r = –0.722, p = 0.005) were found.
Conclusions: MP seems to be a useful technology for characterizing the fixation pattern in nystagmus, which in turn appears to be related to the level of visual acuity achieved by the patient.
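The bivariate contour ellipse area (BCEA) used above is conventionally computed from the standard deviations of the horizontal and vertical fixation coordinates and their correlation, scaled for the chosen coverage proportion (63% or 95%). A sketch of the standard formula, applied to hypothetical fixation coordinates (the abstract reports no raw fixation data):

```python
import math
import statistics

def bcea(x: list, y: list, proportion: float = 0.63) -> float:
    """Bivariate contour ellipse area of fixation points, in the squared
    units of the input coordinates (e.g. deg^2), for the ellipse that
    contains the given proportion of points under a bivariate normal model."""
    sx = statistics.stdev(x)
    sy = statistics.stdev(y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    # sample covariance and Pearson correlation of the two coordinates
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    rho = cov / (sx * sy)
    # chi-square scaling: an ellipse at Mahalanobis radius sqrt(2k)
    # covers proportion 1 - exp(-k) of a bivariate normal
    k = -math.log(1.0 - proportion)
    return 2.0 * k * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2)

# Hypothetical fixation coordinates in degrees
pts_x = [0.0, 1.0, 0.0, -1.0, 0.0]
pts_y = [0.0, 0.0, 1.0, 0.0, -1.0]
print(round(bcea(pts_x, pts_y, 0.63), 2))  # 3.12 deg^2
```

A larger BCEA corresponds to less stable fixation, which is consistent with the negative correlations with CDVA-related measures reported above.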