5 results for 3D Video Telecommunication Multimedia
at Universidad de Alicante
Abstract:
Objective: To evaluate the efficacy of vision therapy in two cases of intermittent exotropia (IX(T)), complementing the clinical examination with 3-D video-oculography (VOG) as an objective register, and to evidence the potential applicability of this technology for such a purpose. Methods: We report the binocular alignment changes occurring after vision therapy in a 36-year-old woman with an IX(T) of 25 prism diopters (Δ) at far and 18 Δ at near, and a 10-year-old child with 8 Δ of IX(T) in primary position associated with 6 Δ of left-eye hypotropia. Both patients presented good corrected visual acuity in both eyes. Instability of the ocular deviation was evident on VOG analysis, which also revealed the presence of vertical and torsional components. Binocular vision therapy was prescribed and performed, including different types of vergence, accommodation, and diplopia awareness training. Results: After therapy, excellent fusional vergence ranges and a "to-the-nose" near point of convergence were obtained. The 3-D VOG examination (SensoMotoric Instruments, Teltow, Germany) confirmed the compensation of the deviation with a high level of stability of binocular alignment. Significant improvement was observed after therapy in the vertical and torsional components, which became more stable. Patients were very satisfied with the outcome obtained by vision therapy. Conclusion: 3-D VOG is a useful technique for providing an objective register of the compensation of the ocular deviation and the stability of the binocular alignment achieved after vision therapy in cases of IX(T), providing a detailed analysis of vertical and torsional improvements.
Abstract:
Objective: To evaluate the efficacy of treatment with vision therapy exercises in 3 cases of intermittent exotropia (XT(i)), complementing the clinical examination with 3-D video-oculography (VOG-3D) and to evidence the potential applicability of this technology for such a purpose. Methods: We report the changes occurring after vision therapy exercises in a 36-year-old woman with an XT(i) of -25 prism diopters (PD) at distance and 18 PD at near; a 10-year-old boy with 8 PD of XT(i) in primary position, associated with +6 PD of left hypotropia; and a 63-year-old man with an XT(i) of 6 PD in primary position associated with +7 PD of right hypertropia. All patients presented good corrected visual acuity in both eyes. Instability of the ocular deviation was evidenced by VOG-3D analysis, which also revealed the presence of vertical and torsional components. Vision therapy exercises were performed, including different types of vergence, accommodation, and diplopia awareness exercises. Results: After vision therapy, excellent ranges of fusional vergence and a "to-the-nose" near point of convergence were obtained. The VOG-3D examination (SensoMotoric Instruments, Teltow, Germany) confirmed the compensation of the deviation with stable ocular alignment. A significant improvement was observed after therapy in the vertical and torsional components, which became more stable. The patients were very satisfied with the results obtained. Conclusion: VOG-3D is a useful technique for providing an objective method of recording the compensation and stability of the ocular deviation after vision therapy exercises in cases of XT(i), offering a detailed analysis of the improvement in the vertical and torsional components.
Abstract:
Video recorded with the Nintendo 3DS camera, which can be viewed in 3D using the cross-eyed free-fusion technique: open the video with the VLC media player in "sin reparar" (unrepaired) mode on any conventional screen (computer, TV, etc.), taking as the left frame the video split as Direct3D output. For parallel (uncrossed) viewing (less advisable) on a conventional computer screen, swap the two video frames.
Abstract:
We present a purposeful initiative to open new ground for teaching Geometrical Optics. It is based on the creation of an innovative educational network involving academic staff from three Spanish universities linked together around Optics. Nowadays, students demand online resources such as innovative multimedia tools to complement the understanding of their studies. Geometrical Optics relies on the basics of light phenomena, such as reflection and refraction, and the use of simple optical elements such as mirrors, prisms, lenses, and fibers. The mathematical treatment is simple and the equations are not too complicated. But from our long experience in teaching undergraduate students, we realize that these students miss important concepts because they do not practice ray tracing as they should. Moreover, the Geometrical Optics laboratory is crucial, as it provides many short Optics experiments and thus stimulates students' interest in the study of this topic. Multimedia applications help teachers meet these student demands. In that sense, our educational network shares and develops online materials based on 1) video tutorials of laboratory experiences and of ray-tracing exercises, 2) different online platforms for student self-examination, and 3) computer-assisted Geometrical Optics exercises. This will result in interesting educational synergies and promote student autonomy in learning Optics.
Abstract:
Nowadays, the new generation of computers provides high performance that makes it possible to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential requirement for robots to move through their environments. Traditionally, mobile robots have used a combination of several sensors based on different technologies. Lasers, sonars, and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex problems, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of the scenes is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric features. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of a new paradigm coined as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method is able to obtain good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
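The GNG-based 3D representation mentioned in the abstract can be illustrated in a few lines. The following is a minimal, CPU-only sketch of the standard Growing Neural Gas update loop (after Fritzke) applied to an Nx3 point cloud; it is not the GPU implementation proposed in the thesis, every parameter name and value is illustrative only, and removal of isolated nodes is omitted for brevity.

```python
import numpy as np

def growing_neural_gas(points, max_nodes=100, n_iter=10000, eps_b=0.05,
                       eps_n=0.006, age_max=50, lam=100, alpha=0.5,
                       decay=0.995, seed=0):
    """Minimal GNG sketch: learns a graph whose nodes adapt to the topology
    of an (N, 3) input point cloud.  Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    # start with two nodes placed on random input points
    nodes = [points[rng.integers(len(points))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # (i, j) with i < j  ->  edge age

    def key(i, j):
        return (i, j) if i < j else (j, i)

    for step in range(1, n_iter + 1):
        x = points[rng.integers(len(points))]               # random input sample
        dists = [np.linalg.norm(x - w) for w in nodes]
        s1, s2 = (int(i) for i in np.argsort(dists)[:2])    # two nearest nodes
        error[s1] += dists[s1] ** 2
        nodes[s1] += eps_b * (x - nodes[s1])                # move the winner
        for (i, j) in list(edges):                          # move neighbours, age edges
            if s1 in (i, j):
                m = j if i == s1 else i
                nodes[m] += eps_n * (x - nodes[m])
                edges[(i, j)] += 1
        edges[key(s1, s2)] = 0                              # refresh / create winner-runner edge
        edges = {e: a for e, a in edges.items() if a <= age_max}  # drop old edges
        if step % lam == 0 and len(nodes) < max_nodes:      # insert node where error is largest
            q = int(np.argmax(error))
            neigh = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if neigh:
                f = max(neigh, key=lambda m: error[m])
                nodes.append(0.5 * (nodes[q] + nodes[f]))
                error.append(alpha * error[q])
                r = len(nodes) - 1
                edges.pop(key(q, f), None)
                edges[key(q, r)] = 0
                edges[key(f, r)] = 0
                error[q] *= alpha
                error[f] *= alpha
        error = [e * decay for e in error]                  # global error decay
    return np.array(nodes), list(edges)
```

Calling something like growing_neural_gas(cloud, max_nodes=2000) on a Kinect point cloud yields a compact graph that approximates the sensed surfaces; the nearest-node searches inside the loop dominate the run time, which is the kind of work a GPGPU implementation would parallelize.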
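The plane-based compression idea can be sketched in the same spirit: greedily extract dominant planes with RANSAC, store each plane as its unit normal, offset, and axis-aligned extent (a handful of floats instead of thousands of points), and keep only the residual points in raw form. This is a minimal CPU illustration assuming an unorganized Nx3 numpy point cloud; the thresholds and the greedy strategy are illustrative and not taken from the thesis.

```python
import numpy as np

def fit_plane_ransac(pts, n_iter=200, thresh=0.01, rng=None):
    """RANSAC sketch: return (normal, d, inlier_mask) for the plane n·p + d = 0
    with the most inliers among the sampled candidates."""
    rng = rng or np.random.default_rng(0)
    best_count, best_mask, plane = -1, None, None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n.dot(p0)
        mask = np.abs(pts @ n + d) < thresh   # point-to-plane distance test
        if mask.sum() > best_count:
            best_count, best_mask, plane = mask.sum(), mask, (n, d)
    return plane[0], plane[1], best_mask

def compress_with_planes(pts, min_inliers=500, max_planes=6):
    """Greedily extract dominant planes; each plane is stored as 4 floats plus
    its axis-aligned extent, and only the leftover points are kept raw."""
    planes, remaining = [], pts.copy()
    for _ in range(max_planes):
        if len(remaining) < min_inliers:
            break
        n, d, mask = fit_plane_ransac(remaining)
        if mask.sum() < min_inliers:
            break
        inliers = remaining[mask]
        planes.append({"normal": n, "d": d,
                       "extent": (inliers.min(axis=0), inliers.max(axis=0))})
        remaining = remaining[~mask]          # continue on the residual points
    return planes, remaining
```

In a man-made indoor scene most points lie on walls, floor, and ceiling, so a few plane records plus a small residual cloud can stand in for the full point set; the same plane records can also feed surface reconstruction or plane-based registration, as noted in the abstract.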