973 results for Projection Mapping, Augmented Reality, OpenFrameworks
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; these avoid feature extraction and matching completely. Cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error directly, because measurements are never transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds, i.e., where 3D points retain constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated in real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
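The photometric objective described above can be sketched as a sum of squared intensity residuals between a reference image and the current image under a candidate pose. A minimal pinhole version (no lens distortion, nearest-neighbour sampling; all names are illustrative, not the thesis' actual formulation):

```python
import numpy as np

def project(pts3d, R, t, K):
    # Rigid transform into the camera frame, then pinhole projection.
    # Lens distortion, which the thesis models, is omitted in this sketch.
    pc = pts3d @ R.T + t
    uv = pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def photometric_cost(I_ref, uv_ref, I_cur, pts3d, R, t, K):
    # Direct (image-based) cost: raw pixel intensities at the reference
    # locations are compared with intensities at the reprojected
    # locations -- no features are extracted or matched.
    uv = np.round(project(pts3d, R, t, K)).astype(int)  # nearest neighbour
    r = I_cur[uv[:, 1], uv[:, 0]] - I_ref[uv_ref[:, 1], uv_ref[:, 0]]
    return float(r @ r)
```

Pose estimation would minimize this cost over (R, t), e.g. with Gauss-Newton on a local linearization; under the Lambertian assumption the residuals vanish at the true pose.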
Abstract:
In recent years the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has increased significantly. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with attributes adequate for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. Also included in this work is a comprehensive survey of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by linking several one-camera tracking modules through a sensor fusion algorithm, in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to that of the stand-alone configuration.
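The pre-processing stage described above (thresholding a dark infrared frame and extracting bright retro-reflective blobs) can be illustrated with a toy connected-component pass. The paper's pipeline and the PTrack algorithm are considerably more elaborate; function names and the threshold are assumptions for illustration:

```python
import numpy as np

def marker_centroids(img, thresh=200):
    # Bright retro-reflective markers stand out against the dark
    # infrared background; threshold, then group pixels into blobs
    # with a 4-connected flood fill and return one centroid per blob.
    mask = img >= thresh
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, pix = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

The sub-pixel 2D centroids from each frame would then feed the pose recovery stage that estimates the marker set's 3D position and orientation.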
Abstract:
Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen, and has been reported to assist in the 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup: calibration, patient registration, view direction, and projection obstruction are unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, handheld, navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively, without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy yielded a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.
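Rendering a geometrically correct overlay from a tracked projector pose amounts to treating the projector as a virtual camera: its calibrated intrinsics K and the tracked extrinsics (R, t) compose into a projection matrix that maps model vertices to projector pixels. A minimal sketch with hypothetical values (the paper's calibration uses modified camera-calibration techniques, not shown here):

```python
import numpy as np

def projection_matrix(K, R, t):
    # Virtual camera at the tracked projector pose: P = K [R | t].
    return K @ np.hstack([R, t.reshape(3, 1)])

def to_projector_pixels(P, X):
    # Project a 3D model vertex (world coordinates) into projector
    # pixel space; dividing by the third coordinate applies perspective.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

As the navigation system updates (R, t), re-rendering with the new P keeps the projected anatomy registered to the organ surface.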
Abstract:
Virtual studio technology plays an important role in modern television productions. Blue-screen matting is a common technique for integrating real actors or moderators into computer-generated sceneries. Augmented reality offers the possibility of mixing real and virtual content in a more general context. This article proposes a new technological approach for combining real studio content with computer-generated information. Digital light projection allows controlled spatial, temporal, chrominance and luminance modulation of the illumination, opening new possibilities for TV studios.
Abstract:
3D imaging techniques were adopted early in the footwear industry. In particular, 3D imaging can be used to aid commerce and improve the quality and sales of shoes. Footwear customization is an added value aimed not only at improving product quality, but also consumer comfort. Moreover, customization implies a new business model that avoids the mass-production competition from new manufacturers based mainly in Asian countries. However, footwear customization demands a significant effort at different levels. In manufacturing, rapid and virtual prototyping is required; indeed, the prototype is intended to become the final product. The whole design procedure must be validated using exclusively virtual techniques to ensure the feasibility of this process, since physical prototypes should be avoided. With regard to commerce, it would be desirable for the consumer to choose any model of shoe from a large 3D database and be able to try it on in front of a magic mirror. This would probably reduce costs and increase sales, since shops would not need to store every shoe model, and trying on several models would be easier and faster for the consumer. In this paper, new advances in 3D techniques drawn from experience in cinema, TV and games are successfully applied to footwear. First, the characteristics of a high-quality stereoscopic vision system for footwear are presented. Second, a system for interaction with virtual footwear models based on 3D gloves is detailed. Finally, an augmented reality system (the magic mirror) is presented, implemented with low-cost computational elements, that allows a hypothetical customer to check in real time the suitability of a given virtual footwear model from an aesthetic point of view.
Abstract:
Over the last fifty years mobility practices have changed dramatically, improving the way travel takes place and the time it takes, but also matters such as road safety and prevention. Mortality caused by road accidents has reached untenable levels, yet research into road mortality has stayed limited to comparative statistical exercises that go no further than defining accident types. In terms of sharing information and mapping accidents, little progress has been made aside from the routine publication of figures, either in simplistic tables or on web pages. Despite considerable technological advances in geographical information technologies, research and development has remained rather static, with only a few good examples of dynamic mapping. The adoption of Global Positioning System (GPS) devices as standard equipment in the automobile industry has produced more dynamic mobility patterns, but also higher degrees of uncertainty in road traffic. This paper describes a road accident georeferencing project for the Lisbon District covering fatalities and serious injuries during 2007. In the initial phase, individual information summaries were compiled, giving information on the accidents and their major characteristics, collected by the security forces: the Public Safety Police Force (Polícia de Segurança Pública, PSP) and the National Guard (Guarda Nacional Republicana, GNR). The Google Earth platform was used to georeference the information in order to inform the public and the authorities of the accident locations, the nature of each location, and the causes and consequences of the accidents. The paper also offers insights into augmented reality technologies, considered crucial for advancing road safety and prevention studies. In the end, the exercise can be considered a success on several counts, both for the stakeholders who decide what to do and for public awareness of the problem of road mortality.
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
PURPOSE: At 7 Tesla (T), conventional static field (B0) projection mapping techniques, e.g., FASTMAP and FASTESTMAP, lead to elevated specific absorption rates (SAR), requiring longer total acquisition times (TA). In this work, the series of adiabatic pulses needed for slab selection in FASTMAP is replaced by a single two-dimensional radiofrequency (2D-RF) pulse to minimize TA while ensuring equal shimming performance. METHODS: Spiral gradients and 2D-RF pulses were designed to excite thin slabs in the small tip angle regime. The corresponding selection profile was characterized in phantoms and in vivo. After optimization of the shimming protocol, the spectral linewidths obtained after 2D localized shimming were compared with conventional techniques and published values (Emir et al., NMR Biomed 2012;25:152-160) in six different brain regions. RESULTS: Results on healthy volunteers show no significant difference (P > 0.5) between the spectroscopic linewidths obtained with the adiabatic sequence (TA = 4 min) and the new low-SAR, time-efficient FASTMAP sequence (TA = 42 s). The SAR can be reduced by three orders of magnitude and the TA shortened six-fold without impact on shimming performance or the quality of the resulting spectra. CONCLUSION: Multidimensional pulses can be used to minimize the RF energy and time spent on automated shimming using projection mapping at high field. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
Abstract:
OBJECTIVE: Our aim was to evaluate a fluorescence-based enhanced-reality system to assess intestinal viability in a laparoscopic mesenteric ischemia model. MATERIALS AND METHODS: A small bowel loop was exposed, and 3 to 4 mesenteric vessels were clipped in 6 pigs. Indocyanine green (ICG) was administered intravenously 15 minutes later. The bowel was illuminated with an incoherent light source laparoscope (D-Light P, Karl Storz). The ICG fluorescence signal was analyzed with ad hoc imaging software (VR-RENDER), which provides a digital perfusion cartography that was superimposed onto the intraoperative laparoscopic image [augmented reality (AR) synthesis]. Five regions of interest (ROIs) were marked under AR guidance (1, 2a-2b, and 3a-3b, corresponding to the ischemic, marginal, and vascularized zones, respectively). One hour later, capillary blood samples were obtained by puncturing the bowel serosa at the identified ROIs, and lactate was measured using the EDGE analyzer. A surgical biopsy of each intestinal ROI was sent for mitochondrial respiratory rate assessment and for metabolite quantification. RESULTS: Mean capillary lactate levels were 3.98 (SD = 1.91) versus 1.05 (SD = 0.46) versus 0.74 (SD = 0.34) mmol/L at ROI 1 versus 2a-2b (P = 0.0001) versus 3a-3b (P = 0.0001), respectively. Mean maximal mitochondrial respiratory rate was 104.4 (±21.58) pmolO2/second/mg at ROI 1 versus 191.1 ± 14.48 (2b, P = 0.03) versus 180.4 ± 16.71 (3a, P = 0.02) versus 199.2 ± 25.21 (3b, P = 0.02). Alanine, choline, ethanolamine, glucose, lactate, myo-inositol, phosphocholine, scyllo-inositol, and valine showed statistically significantly different concentrations between ischemic and nonischemic segments. CONCLUSIONS: Fluorescence-based AR may effectively detect the boundary between the ischemic and vascularized zones in this experimental model.
Abstract:
A wide variety of products currently employ Augmented Reality (AR) in diverse fields and applications. Among the most widespread are applications for mobile devices that combine the device's camera with location and orientation obtained from GPS, wireless network services, and solid-state compasses. Some of the applications most widely used are Layar, Wikitude, Junaio and Mixare. However, there is no standard for AR content services; on the contrary, each application uses its own mandatory formats, with its own characteristics and specifications. Meanwhile, the cartographic information to be published on the various AR platforms traditionally resides on map servers which, by complying with OGC standards, guarantee its querying and access. This project establishes the communication interfaces needed to expose traditional cartographic data sources through the different formats used by AR browsers. Thanks to the extensibility of Geoserver and the dynamic configuration features of the Spring framework on which it is built, services have been created that exploit the data sources managed by Geoserver and publish them through the various AR browsers. This modular extension of Geoserver will make it possible to leverage existing data repositories for dissemination through the multiple Augmented Reality platforms available.
Abstract:
Scene modeling is key to a wide range of applications, from map generation to augmented reality. This thesis presents a complete solution for creating textured 3D models. First, a sequential Structure from Motion method is presented, in which the 3D model of the environment is updated as new visual information is acquired. The proposal is more accurate and robust than the state of the art. An online method based on visual bag-of-words has also been developed for efficient loop detection. Being a fully sequential and automatic technique, it allows drift reduction, improving navigation and map building. To build maps of large areas, a 3D model simplification algorithm oriented to online applications is proposed. The efficiency of the proposals has been compared against other methods using several underwater and terrestrial datasets.
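The visual bag-of-words idea behind the loop detector can be sketched in a few lines: local descriptors are quantized against a vocabulary of visual words, and frames are compared by the cosine similarity of their normalized word histograms. This is a toy stand-in for the thesis' online, fully sequential detector; the tiny vocabulary and thresholds are assumptions:

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    # Assign each local descriptor to its nearest visual word and count
    # word occurrences (the "bag"); L2-normalize for cosine comparison.
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)
    h = np.bincount(words, minlength=len(vocab)).astype(float)
    return h / (np.linalg.norm(h) or 1.0)

def loop_score(h1, h2):
    # Cosine similarity of two normalized histograms; a loop-closure
    # candidate is flagged when the score exceeds a threshold.
    return float(h1 @ h2)
```

In a sequential system the histogram of each new keyframe is scored against past keyframes, and high-scoring matches trigger geometric verification before the loop is accepted and drift is corrected.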
Abstract:
Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by individuals in the time between those meetings, using desktop PCs and CAD applications. A truly collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. In order to overcome these limitations, we designed and realized the ARTHUR system, an Augmented Reality (AR) enhanced round table to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.
Abstract:
In this paper the software architecture of a framework which simplifies the development of applications in the area of Virtual and Augmented Reality is presented. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system by a device management system that is based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and the advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote expert scenario.
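The data-flow-graph idea, wiring small prefabricated components together instead of compiling a monolithic application, can be caricatured in a few lines. All names here are illustrative, not the framework's actual API:

```python
class Node:
    """A data-flow node: applies a function to each incoming value and
    pushes the result to every connected downstream node."""
    def __init__(self, fn=lambda v: v):
        self.fn, self.targets = fn, []

    def connect(self, target):
        self.targets.append(target)
        return target  # enables chaining: a.connect(b).connect(c)

    def fire(self, value):
        out = self.fn(value)
        for t in self.targets:
            t.fire(out)

class Sink(Node):
    """Terminal node that records what it receives, e.g. a renderer."""
    def __init__(self):
        super().__init__()
        self.received = []

    def fire(self, value):
        self.received.append(value)

# Wire a tiny pipeline: tracker source -> unit conversion -> renderer.
tracker = Node()                                    # device source component
mm_to_m = Node(lambda p: tuple(c / 1000 for c in p))  # prefabricated filter
renderer = Sink()
tracker.connect(mm_to_m).connect(renderer)
tracker.fire((100, 200, 300))                       # one tracker sample, in mm
```

Reconfiguring the application then means rewiring edges between prefabricated nodes rather than recompiling; the same graph abstraction covers trackers, filters, and renderers.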
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups, as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the several sensors; the tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using Time Delay Estimation, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
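In its simplest discrete form, Time Delay Estimation picks the lag that maximizes the cross-correlation of two mean-free, variance-normalized sensor streams. The sketch below assumes a constant integer-sample offset between already-resampled streams; a real system would additionally handle resampling and sub-sample interpolation:

```python
import numpy as np

def estimate_delay(x, ref):
    # Normalize both streams so amplitude differences between the
    # sensors do not bias the correlation peak.
    x = (x - x.mean()) / x.std()
    ref = (ref - ref.mean()) / ref.std()
    # Cross-correlate over all lags; the argmax gives the offset
    # (in samples) by which x lags behind ref.
    corr = np.correlate(x, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)
```

In practice one might correlate, for example, one position component (or the speed magnitude) reported by each tracker while the tracked object moves, and then apply the recovered offset to timestamps before fusing measurements.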