955 results for Visual methods


Relevance:

30.00%

Abstract:

Introduction: One cause of poor visual gain after successful, uncomplicated repair of a retinal detachment is photoreceptor damage, reflected in disruption of the ellipsoid zone layer and the external limiting membrane (ELM). In other pathologies, foveal hyperautofluorescence has been shown to correlate with ellipsoid zone and ELM integrity and with better visual recovery. Objectives: To evaluate the association between foveal hyperautofluorescence, integrity of the ellipsoid zone layer and visual recovery after successfully treated rhegmatogenous retinal detachment (RRD), and to assess the inter-rater agreement of these examinations. Methods: Cross-sectional study of foveal autofluorescence and macular spectral-domain optical coherence tomography in 65 patients with RRD, assessed by 3 independent raters. Inter-rater agreement was studied with Cohen's kappa, and the association between the variables with the chi-square test and Z tests for comparison of proportions. Results: Agreement was fair for autofluorescence and good to very good for macular optical coherence tomography. Subjects showing foveal hyperautofluorescence together with an intact ellipsoid zone layer were 20% more likely to reach a final visual acuity better than 20/50 than those without these characteristics. Conclusion: There is a clinically important association between foveal hyperautofluorescence, integrity of the ellipsoid zone layer and better final visual acuity; however, it was not statistically significant (p = 0.39).
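
As a hedged illustration of the agreement and association statistics the abstract names (Cohen's kappa for inter-rater agreement, chi-square for association), a minimal Python sketch using hypothetical gradings rather than the study's data:

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary gradings (1 = ellipsoid zone intact) from two of the raters
rater_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
rater_b = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])
kappa = cohen_kappa_score(rater_a, rater_b)   # inter-rater agreement

# Hypothetical 2x2 table: hyperautofluorescence with intact ellipsoid zone (rows)
# versus final acuity better than 20/50 (columns)
table = np.array([[18, 7],
                  [12, 11]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"kappa = {kappa:.2f}, chi-square p = {p_value:.3f}")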

Relevance:

30.00%

Abstract:

Introduction: Several characteristics can affect the visual prognosis after surgical repair of a retinal detachment. Some features that relate to visual recovery are not observable by the naked eye but can be seen with optical coherence tomography. Objective: To describe the clinical and tomographic characteristics, in the pre- and postoperative periods, of eyes that suffered rhegmatogenous retinal detachment with macular involvement, and their relationship with the quality of visual recovery after surgery considered anatomically successful. Materials and methods: Descriptive study comparing selected characteristics at three perioperative time points, one before and two after surgery (3 and 6 months), in 24 eyes with rhegmatogenous retinal detachment and macular involvement treated with retinopexy combined with pars plana vitrectomy. Results: Visual recovery of logMAR 0.397 (20/50) or better was achieved in 41.7% of eyes, and 16.7% reached a visual acuity of logMAR 0.301 (20/40). Five eyes did not gain more than five lines of vision. Absence of submacular fluid was observed in most eyes that recovered more than five lines, as was a preserved ellipsoid zone. Neuroepithelial regularity and postoperative oedema showed no clear relationship with visual recovery, nor did detachment height or the number of quadrants involved. Better visual recovery was more frequent in eyes with less than five weeks of retinal detachment. Conclusions: Resolution of the retinal detachment within five weeks, preservation of the ellipsoid zone and absence of submacular fluid in the postoperative period were observed more frequently in eyes with better visual recovery.

Relevance:

30.00%

Abstract:

Objective: To establish the correlation between lighting conditions, visual angle, contrast discrimination and visual acuity and the occurrence of visual symptoms in computer operators. Materials and methods: Cross-sectional, correlational study in a sample of 136 administrative workers at a call centre belonging to a health-care organization in Bogotá. A questionnaire was used to assess sociodemographic and occupational variables; the computer vision symptom scale (CVSS17) was applied; a medical evaluation was performed; and illumination and operator-to-screen distance were measured. With the data collected, a bivariate statistical analysis was carried out and the correlation between lighting conditions, visual angle, contrast discrimination and visual acuity and the occurrence of computer-related visual symptoms was established. The analysis used measures of central tendency and dispersion and the parametric Pearson or non-parametric Spearman correlation coefficient; normality was first assessed with the Shapiro-Wilk test. Statistical tests were evaluated at a 5% significance level (p < 0.05). Results: The mean age of the participants was 36.3 years (range 22 to 57), and 79.4% were women. Visual symptoms associated with computer screen use were found in 59.6%, the most frequent being epiphora (70.6%), photophobia (67.6%) and ocular burning (54.4%). A significant inverse correlation was reported between illumination levels and photophobia (p = 0.02; r = 0.262). No significant correlation was found between the reported symptoms and visual angle, visual acuity or contrast discrimination. Conclusion: The workplace lighting conditions of the study group are related to the occurrence of photophobia. An association was also found between visual symptoms and sociodemographic variables, specifically gender, screen photophobia, visual fatigue and photophobia.
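
A minimal sketch of the correlation workflow described above (Shapiro-Wilk normality check, then Pearson or Spearman), using synthetic stand-in data rather than the study's measurements:

import numpy as np
from scipy.stats import shapiro, pearsonr, spearmanr

rng = np.random.default_rng(0)
illuminance_lux = rng.normal(400, 120, size=136)   # hypothetical workstation illuminance
symptom_score = rng.normal(30, 10, size=136)       # hypothetical CVSS17-style score

alpha = 0.05
normal = shapiro(illuminance_lux).pvalue > alpha and shapiro(symptom_score).pvalue > alpha
if normal:
    r, p = pearsonr(illuminance_lux, symptom_score)     # parametric
else:
    r, p = spearmanr(illuminance_lux, symptom_score)    # non-parametric
print(f"{'Pearson' if normal else 'Spearman'}: r = {r:.3f}, p = {p:.3f}")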

Relevance:

30.00%

Abstract:

Purpose. Some children with visual stress and/or headaches have fewer symptoms when wearing colored lenses. Although subjective reports of improved perception exist, few objective correlates of these effects have been established. Methods. In a pilot study, 10 children who wore Intuitive Colorimeter lenses, and claimed benefit, and two asymptomatic children were tested. Steady-state potentials were measured in response to low contrast patterns modulating at a frequency of 12 Hz. Four viewing conditions were compared: 1) no lens; 2) Colorimeter lens; 3) lens of complementary color; and 4) spectrally neutral lens with similar photopic transmission. Results. The asymptomatic children showed little or no difference between the lens and no lens conditions. When all the symptomatic children were tested together, a similar result was found. However, when the symptomatic children were divided into two groups depending on their symptoms, an interaction emerged. Children with visual stress but no headaches showed the largest amplitude visual evoked potential response in the no lens condition, whereas those children whose symptoms included severe headaches or migraine showed the largest amplitude visual evoked potential response when wearing their prescribed lens. Conclusions. The results suggest that it is possible to measure objective correlates of the beneficial subjective perceptual effects of colored lenses, at least in some children who have a history of migraine or severe headaches.
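
A hedged sketch of one generic way to quantify a steady-state response at the 12 Hz stimulation frequency from a recorded epoch (a plain Fourier amplitude estimate, not necessarily the analysis pipeline used in the study; the epoch below is synthetic):

import numpy as np

fs = 500.0                      # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1 / fs)   # 4-second epoch
stim_freq = 12.0
epoch = 2.0 * np.sin(2 * np.pi * stim_freq * t) + np.random.randn(t.size)  # synthetic signal

spectrum = np.fft.rfft(epoch)
freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
amplitude = 2 * np.abs(spectrum) / epoch.size       # single-sided amplitude spectrum

idx = np.argmin(np.abs(freqs - stim_freq))           # bin closest to the stimulation frequency
print(f"response amplitude at {freqs[idx]:.1f} Hz = {amplitude[idx]:.2f} (arbitrary units)")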

Relevance:

30.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on their semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.

Relevance:

30.00%

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming to localise visual semantics to regions of interest in images with textual keywords. Both the primary image and the collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used to evaluate the proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
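
A hedged, toy-scale sketch of the kind of mapping the framework describes: region features are assigned to visual-vocabulary concepts by Euclidean distance, with scores biased by a co-occurrence (context) matrix of the visual keywords. The vocabulary, prototypes and matrix below are hypothetical, not the paper's:

import numpy as np

vocabulary = ["sky", "water", "grass"]            # hypothetical visual keywords
centroids = np.array([[0.9, 0.1],                 # hypothetical low-level prototypes
                      [0.5, 0.8],
                      [0.2, 0.3]])
cooccurrence = np.array([[1.0, 0.6, 0.3],         # hypothetical context matrix
                         [0.6, 1.0, 0.4],
                         [0.3, 0.4, 1.0]])

def label_regions(region_features):
    """Return one vocabulary keyword per region feature vector."""
    labels = []
    context = np.ones(len(vocabulary))            # running context prior
    for f in region_features:
        dists = np.linalg.norm(centroids - f, axis=1)
        scores = context / (dists + 1e-6)         # closer and more contextually likely wins
        k = int(np.argmax(scores))
        labels.append(vocabulary[k])
        context = context * cooccurrence[k]       # bias later regions toward co-occurring concepts
        context /= context.sum()
    return labels

print(label_regions(np.array([[0.85, 0.15], [0.45, 0.75]])))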

Relevance:

30.00%

Abstract:

Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
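
A minimal sketch of a signal detection analysis of the kind named above: sensitivity (d') and criterion computed from hit and false-alarm rates. The trial counts are hypothetical, not the study's data:

from scipy.stats import norm

hits, misses = 42, 18                         # hypothetical "motion present" trials
false_alarms, correct_rejections = 15, 45     # hypothetical "motion absent" trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)            # perceptual sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate)) # decision criterion c
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")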

Relevance:

30.00%

Abstract:

The objective of a Visual Telepresence System is to provide the operator with a high-fidelity image from a remote stereo camera pair linked to a pan/tilt device, such that the operator may reorient the camera position by head movement. Systems such as these, which utilise virtual-reality-style helmet-mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
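
A hedged illustration of why a fixed camera/display geometry suits only one viewing distance: the vergence angle needed to fixate a point falls off with distance. The interocular separation and the distances below are illustrative assumptions:

import math

interocular_m = 0.065          # assumed typical inter-pupillary distance (m)

def vergence_deg(distance_m):
    """Vergence angle (degrees) required to fixate a point at distance_m."""
    return math.degrees(2 * math.atan(interocular_m / (2 * distance_m)))

for d in (0.5, 1.0, 3.0):
    print(f"fixation at {d:.1f} m -> vergence = {vergence_deg(d):.2f} deg")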

Relevance:

30.00%

Abstract:

It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
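
A hedged, toy-scale sketch in the spirit of the independent-point model described above: each landmark contributes a Gaussian likelihood over candidate navigation endpoints, and the per-landmark maps are combined multiplicatively. The landmark predictions and uncertainties are hypothetical:

import numpy as np

# Grid of candidate 2D endpoints (metres)
xs, ys = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))

# Hypothetical predicted goal location implied by each landmark, with its own
# positional uncertainty (sigma, metres)
landmark_predictions = [((0.4, -0.2), 0.8), ((-0.1, 0.3), 1.2), ((0.2, 0.1), 0.6)]

log_likelihood = np.zeros_like(xs)
for (gx, gy), sigma in landmark_predictions:
    # Each landmark's prediction is treated independently (first model in the abstract)
    log_likelihood += -((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2)

likelihood = np.exp(log_likelihood - log_likelihood.max())
likelihood /= likelihood.sum()
peak = np.unravel_index(np.argmax(likelihood), likelihood.shape)
print(f"most likely endpoint = ({xs[peak]:.2f}, {ys[peak]:.2f}) m")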

Relevance:

30.00%

Abstract:

This paper details an investigation into sensory substitution by means of direct electrical stimulation of the tongue for the purpose of information input to the human brain. In particular, a device has been constructed and a series of trials performed in order to demonstrate the efficacy and performance of an electro-tactile array mounted on the tongue surface for the purpose of sensory augmentation. Tests have shown that, using a low-resolution array, a computer-to-human feedback loop can be successfully employed to complete tasks such as object tracking, surface shape identification and shape recognition, with no training or prior experience with the device. Comparisons of this technique with visual alternatives show that the tongue-based tactile array can match such methods in convenience and accuracy when performing simple tasks.
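
A hedged sketch of the basic image-to-tactile mapping such a device implies: block-average a camera frame onto a low-resolution grid and scale each cell to a stimulation level. The array size and scaling are assumptions, not the device's actual specification:

import numpy as np

def image_to_tactile(frame, rows=8, cols=8, max_level=255):
    """Block-average a grayscale frame onto a rows x cols electrode grid."""
    h, w = frame.shape
    h_trim, w_trim = h - h % rows, w - w % cols          # crop so blocks divide evenly
    blocks = frame[:h_trim, :w_trim].reshape(rows, h_trim // rows, cols, w_trim // cols)
    levels = blocks.mean(axis=(1, 3))                    # one value per electrode
    return np.round(levels / levels.max() * max_level).astype(int)

frame = np.random.randint(0, 256, size=(120, 160))       # stand-in camera frame
print(image_to_tactile(frame))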

Relevance:

30.00%

Abstract:

This work presents a method of information fusion involving data captured by both a standard CCD camera and a ToF camera, to be used in detecting the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while at the same time providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
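
A hedged sketch of the reprojection step described above: 3D points already expressed in a common/world frame are projected into the colour camera's image with a pinhole model. The intrinsics and extrinsics below are hypothetical calibration values:

import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # hypothetical colour-camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # hypothetical rotation, world -> colour camera
t = np.array([0.05, 0.0, 0.0])         # hypothetical translation (metres)

def project(points_world):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    cam = (R @ points_world.T).T + t           # world -> camera frame
    uvw = (K @ cam.T).T                        # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]            # normalise by depth

points = np.array([[0.2, -0.1, 1.5],
                   [0.0, 0.0, 2.0]])
print(project(points))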

Relevance:

30.00%

Abstract:

This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in detecting the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localisation of objects with respect to a world coordinate system, while at the same time providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.
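
Complementing the reprojection sketch above, a hedged illustration of the motion-detection and colour/3D association step: simple frame differencing flags foreground pixels, and each flagged pixel's colour and registered 3D point are stored together in one matrix. The threshold and the registered depth data are stand-ins, not the paper's method:

import numpy as np

def foreground_colour_3d(prev_gray, curr_gray, colour, points_3d, thresh=25):
    """Return an N x 6 matrix [R, G, B, X, Y, Z] for pixels that moved."""
    moving = np.abs(curr_gray.astype(int) - prev_gray.astype(int)) > thresh
    rgb = colour[moving]                 # N x 3 colour values
    xyz = points_3d[moving]              # N x 3 registered ToF points
    return np.hstack([rgb, xyz])

h, w = 48, 64                            # small stand-in frames
prev_gray = np.zeros((h, w), dtype=np.uint8)
curr_gray = prev_gray.copy(); curr_gray[10:20, 30:40] = 200   # simulated moving blob
colour = np.random.randint(0, 256, size=(h, w, 3))
points_3d = np.random.rand(h, w, 3)      # stand-in for reprojected ToF data
print(foreground_colour_3d(prev_gray, curr_gray, colour, points_3d).shape)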

Relevance:

30.00%

Abstract:

Near-ground maneuvers, such as hover, approach and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects, as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control of unmanned aerial vehicles (UAVs). The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed, employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on board an experimental quadrotor UAV and shown not only to approach the ground and land successfully, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
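
A hedged sketch of the standard constant tau-dot form of the strategy (the abstract does not give the exact law, so this is an assumed variant): with ground gap x and closing speed v = dx/dt, tau = x / v, and holding dtau/dt = k with 0 < k < 1 implies a commanded acceleration a = (1 - k) * v^2 / x, which drives gap and speed to zero together:

def tau_landing_accel(x, v, k=0.6):
    """Vertical acceleration command for a constant tau-dot descent.

    x : height above ground (m), x > 0
    v : vertical speed (m/s), negative when approaching the ground
    k : assumed tau-dot target, 0 < k < 1
    """
    return (1.0 - k) * v * v / x

# Example: 5 m above ground, descending at 1.2 m/s
print(f"commanded accel = {tau_landing_accel(5.0, -1.2):+.3f} m/s^2")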

Relevance:

30.00%

Abstract:

Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control for unmanned aerial vehicles. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented onboard an experimental quadrotor unmanned aerial vehicle and is shown to not only successfully land and approach ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
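
As a separate, hedged illustration (not the paper's proposed filter), one generic way to obtain height estimates at IMU rate from slower visual fixes is a simple predictor-corrector scheme; all rates, gains and noise levels below are assumptions:

import numpy as np

dt = 0.005                        # assumed 200 Hz IMU rate
vision_every = 10                 # assumed visual height fix every 10 IMU steps (20 Hz)
gain_h, gain_v = 0.2, 0.1         # assumed correction gains

true_h, true_v = 5.0, 0.0         # simulated true height (m) and vertical speed (m/s)
h_est, v_est = 4.5, 0.0           # deliberately offset initial estimate
for step in range(400):           # 2 s of simulated flight
    accel = -0.3                  # gentle commanded descent acceleration (m/s^2)
    true_v += accel * dt
    true_h += true_v * dt
    # Predict at IMU rate from (noisy) measured acceleration
    a_meas = accel + np.random.randn() * 0.05
    h_est += v_est * dt
    v_est += a_meas * dt
    # Correct whenever a (noisy) visual height fix arrives
    if step % vision_every == 0:
        h_meas = true_h + np.random.randn() * 0.05
        innovation = h_meas - h_est
        h_est += gain_h * innovation
        v_est += gain_v * innovation / (vision_every * dt)

print(f"true h = {true_h:.2f} m, estimated h = {h_est:.2f} m")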

Relevance:

30.00%

Abstract:

In the present study, we compared 2 methods for collecting ixodid ticks on the verges of animal trails in a primary Amazon forest area in northern Brazil. (i) Dragging: this method was based on passing a 1-m² white flannel over the vegetation and checking the flannel for caught ticks every 5-10 m. (ii) Visual search: this method consisted of looking for questing ticks on the tips of leaves of the vegetation bordering animal trails in the forest. A total of 103 adult ticks belonging to 4 Amblyomma species were collected by the visual search method on 5 collecting dates, while only 44 adult ticks belonging to 3 Amblyomma species were collected by dragging on 5 other collecting dates. These values were statistically different (Mann-Whitney test, P = 0.0472). On the other hand, dragging was more efficient for subadult ticks, since no larva or nymph was collected by visual search, whereas 18 nymphs and 7 larvae were collected by dragging. The visual search method proved to be suitable for collecting adult ticks in the Amazon forest; however, field studies should include a second method, such as dragging, in order to maximize the collection of subadult ticks. Indeed, these 2 methods can be performed by a single investigator at the same time, while he/she walks along an animal trail in the forest. (C) 2010 Elsevier GmbH. All rights reserved.
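
A minimal sketch of the comparison reported above (a Mann-Whitney U test on adult ticks per collecting date for the two methods). The per-date counts are hypothetical placeholders consistent only with the reported totals, not the study's raw data:

from scipy.stats import mannwhitneyu

visual_search_counts = [25, 18, 22, 20, 18]   # hypothetical adults per date (total 103)
dragging_counts = [10, 8, 9, 11, 6]           # hypothetical adults per date (total 44)

stat, p = mannwhitneyu(visual_search_counts, dragging_counts, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")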