761 results for Virtual and Augmented Reality


Relevance: 100.00%

Abstract:

This thesis proposes a novel technology in the field of swarm robotics that allows a swarm of robots to sense a virtual environment through virtual sensors. Virtual sensing is a desirable and helpful technology in swarm robotics research, because it allows researchers to efficiently and quickly perform experiments that would otherwise be more expensive and time consuming, or even impossible. In particular, we envision two useful applications for virtual sensing technology. On the one hand, it is possible to prototype a new sensor and foresee its effects on a robot swarm before producing it. On the other hand, this technology makes it possible to study the behaviour of robots operating in environments that are not easily reproducible inside a lab, whether for safety reasons or because they are physically infeasible. The use of virtual sensing technology for sensor prototyping aims to foresee the behaviour of a swarm enhanced with new or more powerful sensors, without producing the hardware. Sensor prototyping can be used to tune a new sensor or to compare the performance of alternative sensor types. Such prototyping experiments can be performed with the presented tool, which allows researchers to rapidly develop and test software virtual sensors of different types and quality levels, emulating the behaviour of various real hardware sensors. By investigating which sensors are the better investment, a researcher can minimize sensor production costs while achieving a given swarm performance. Through augmented reality, it is possible to test the performance of the swarm in a desired virtual environment that cannot be set up in the lab for physical, logistic or economic reasons. The virtual environment is sensed by the robots through properly designed virtual sensors. Virtual sensing technology thus allows a researcher to quickly carry out real-robot experiments in challenging scenarios without all the otherwise required hardware and environment.
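A minimal sketch of the virtual sensing idea, with all names and the noise model chosen purely for illustration (the thesis' actual tool is not shown here): a software sensor reads a simulated environment instead of hardware, and its quality parameters can be varied to prototype alternative sensors.

```python
# Hypothetical virtual range sensor: the simulator knows the positions of
# virtual objects, and Gaussian noise emulates a given hardware quality.
import math
import random

class VirtualRangeSensor:
    def __init__(self, max_range, noise_std):
        self.max_range = max_range    # emulated hardware range limit (meters)
        self.noise_std = noise_std    # noise level models sensor quality

    def read(self, robot_xy, virtual_objects):
        """Return the noisy distance to the nearest virtual object in range."""
        rx, ry = robot_xy
        nearest = min(
            (math.hypot(ox - rx, oy - ry) for ox, oy in virtual_objects),
            default=float("inf"),
        )
        if nearest > self.max_range:
            return None  # nothing sensed, as a real sensor would report
        return max(0.0, nearest + random.gauss(0.0, self.noise_std))

# Prototyping example: compare a cheap noisy sensor against a better one
# on the same virtual environment before producing any hardware.
objects = [(1.0, 2.0), (3.5, 0.5)]
cheap = VirtualRangeSensor(max_range=2.0, noise_std=0.10)
good = VirtualRangeSensor(max_range=4.0, noise_std=0.01)
print(cheap.read((0.0, 0.0), objects), good.read((0.0, 0.0), objects))
```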

Relevance: 100.00%

Abstract:

CONCLUSION: Our self-developed planning and navigation system has proven its capacity for accurate surgery on the anterior and lateral skull base. With the incorporation of augmented reality, image-guided surgery will evolve into 'information-guided surgery'. OBJECTIVE: Microscopic or endoscopic skull base surgery is technically demanding, and its outcome has a great impact on a patient's quality of life. The project aimed to develop and evaluate enabling navigation tools for surgical simulation, planning, training, education, and performance. This clinically applied technological research was complemented by a series of patients (n=406) who underwent anterior and lateral skull base procedures between 1997 and 2006. MATERIALS AND METHODS: Optical tracking technology was used for positional sensing of instruments. A newly designed dynamic reference base, with specific registration techniques using a fine needle pointer or ultrasound, enables the surgeon to work with a target error of < 1 mm. An automatic registration assessment method, which provides the user with a color-coded fused representation of CT and MR images, indicates to the surgeon the location and extent of registration (in)accuracy. Integration of a small tracker camera mounted directly on the microscope permits ergonomically advantageous work in the operating room. Additionally, guidance information (augmented reality) from multimodal datasets (CT, MRI, angiography) can be overlaid directly onto the surgical microscope view. The virtual simulator, as a training tool in endonasal and otological skull base surgery, provides an understanding of the anatomy as well as preoperative practice using real patient data. RESULTS: Using our navigation system, no major complications occurred despite the fact that the series included difficult skull base procedures. Improved surgical outcomes were identified compared with our control group without navigation and with the literature. Surgical time was reduced, and more minimally invasive approaches became possible. According to the participants' questionnaires, the educational effect of the virtual simulator in our residency program received a high ranking.
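As background for the registration step behind the sub-millimetre target error, here is a minimal sketch (not the authors' implementation) of paired-point rigid registration: the rotation and translation that best align corresponding fiducials are recovered with an SVD (the standard Kabsch/Horn solution), and the RMS residual quantifies the registration error.

```python
# Paired-point rigid registration: find R, t minimizing ||R p_i + t - q_i||.
import numpy as np

def register_rigid(p, q):
    """p, q: (N, 3) arrays of corresponding fiducials. Returns R, t."""
    p_c, q_c = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_c).T @ (q - q_c)             # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
    t = q_c - R @ p_c
    return R, t

# Fiducial registration error: RMS residual after applying the transform.
p = np.random.rand(6, 3)
q = p + np.array([1.0, 2.0, 3.0])           # synthetic "patient space" points
R, t = register_rigid(p, q)
print(np.sqrt(np.mean(np.sum((p @ R.T + t - q) ** 2, axis=1))))  # ~0
```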

Relevance: 100.00%

Abstract:

Interactive TV technology has been addressed in many previous works, but research is sparse on the topic of interactive content broadcasting and how to support the production process. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV-producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, support for user-contributed content has the potential to revolutionize the hierarchical TV production process by introducing the viewer into the content delivery chain.

Relevance: 100.00%

Abstract:

Virtual studio technology plays an important role in modern television production. Blue-screen matting is a common technique for integrating real actors or moderators into computer-generated sceneries. Augmented reality offers the possibility to mix the real and the virtual in a more general context. This article proposes a new technological approach for combining real studio content with computer-generated information. Digital light projection allows a controlled spatial, temporal, chrominance and luminance modulation of illumination – opening new possibilities for TV studios.
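For contrast with the proposed approach, here is a minimal numpy sketch of conventional blue-screen matting, the baseline technique the article builds beyond; the blue-dominance threshold and the toy frames are illustrative assumptions.

```python
# Simple chroma keying: pixels whose blue channel dominates are treated as
# background and replaced by the computer-generated scenery.
import numpy as np

def blue_screen_composite(studio, virtual, dominance=1.3):
    """studio, virtual: (H, W, 3) float arrays in [0, 1]."""
    r, g, b = studio[..., 0], studio[..., 1], studio[..., 2]
    is_background = (b > dominance * r) & (b > dominance * g)
    mask = is_background[..., None]          # broadcast over color channels
    return np.where(mask, virtual, studio)

# Example: a blue frame with a grey "actor" square in the middle.
studio = np.zeros((4, 4, 3)); studio[..., 2] = 1.0
studio[1:3, 1:3] = 0.5                       # actor pixels survive the key
virtual = np.ones((4, 4, 3))                 # white computer-generated set
print(blue_screen_composite(studio, virtual)[0, 0], "vs", studio[1, 1])
```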

Relevance: 100.00%

Abstract:

For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core requirement for realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile and flexible, and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is efficiently performed on the graphics processing unit (GPU) using an environment model and the current ToF camera image. Automatic extraction and tracking of dynamic scene content is thereby performed, and this information is used for planning and alignment of virtual content. A further valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
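A minimal sketch of the depth-keying decision, assuming aligned live and model depth maps; the paper performs this per pixel on the GPU, so this CPU version with toy data only shows the logic.

```python
# A pixel is foreground if the live ToF depth is noticeably closer than the
# static environment model; real and virtual content then compete per pixel.
import numpy as np

def depth_key(live_depth, env_depth, live_rgb, virt_rgb, virt_depth, eps=0.05):
    """Composite real foreground and virtual content with correct occlusion."""
    foreground = live_depth < env_depth - eps       # dynamic real content
    # Mutual occlusion: whichever modality is closer to the camera wins.
    real_wins = foreground & (live_depth < virt_depth)
    return np.where(real_wins[..., None], live_rgb, virt_rgb)

# Toy 2x2 frame: one real pixel at 1 m in front of a virtual plane at 2 m.
env = np.full((2, 2), 3.0)
live = np.array([[1.0, 3.0], [3.0, 3.0]])
virt_d = np.full((2, 2), 2.0)
live_rgb, virt_rgb = np.ones((2, 2, 3)), np.zeros((2, 2, 3))
print(depth_key(live, env, live_rgb, virt_rgb, virt_d))  # real wins at (0, 0)
```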

Relevance: 100.00%

Abstract:

Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize the overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the individual sensors. However, the tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using Time Delay Estimation, which can be applied for on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we have examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
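A minimal sketch of the underlying Time Delay Estimation idea, assuming two sensors that observe the same one-dimensional motion at a common sampling rate; the paper's method generalizes this to generic tracking data.

```python
# Cross-correlation-based Time Delay Estimation: the lag that maximizes the
# correlation between two recordings of the same motion is the time offset.
import numpy as np

def estimate_delay(signal_a, signal_b, rate_hz):
    """Return the lag (seconds) by which signal_a trails signal_b."""
    a = signal_a - signal_a.mean()
    b = signal_b - signal_b.mean()
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)     # best shift in samples
    return lag / rate_hz

# Simulated motion seen by two unsynchronized trackers at 100 Hz,
# where sensor a lags sensor b by five samples.
t = np.linspace(0, 10, 1000)
motion = np.sin(2 * np.pi * 0.7 * t)
a, b = motion[:-5], motion[5:]
print(estimate_delay(a, b, rate_hz=100.0))   # ~ +0.05 s
```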

Relevance: 100.00%

Abstract:

Immersion and interaction have been identified as key factors influencing the quality of experience in stereoscopic video systems. An experimental prototype designed to explore the influence of these factors in 3D video applications is described here. The focus is on the real-time algorithm for inserting new 3D models into the original video streams. Using this algorithm, our prototype is aimed at exploring a new interaction paradigm, similar to the augmented reality approach, with 3D video applications.
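As a rough illustration of the geometry such an insertion algorithm must respect (the camera parameters below are assumptions, not the prototype's), a 3D point is projected once per eye with horizontally offset cameras, which yields the screen disparity that makes it appear at the right depth.

```python
# Pinhole stereo projection of an inserted 3D point into left/right views.
def project_stereo(point_xyz, focal_px, baseline_m):
    """Project a 3D point (camera coords, meters) into both image planes."""
    x, y, z = point_xyz
    half = baseline_m / 2.0
    left = (focal_px * (x + half) / z, focal_px * y / z)
    right = (focal_px * (x - half) / z, focal_px * y / z)
    return left, right

# A point 2 m away with a 65 mm baseline: disparity = f * b / z = 26 px.
left, right = project_stereo((0.2, 0.1, 2.0), focal_px=800.0, baseline_m=0.065)
print(left, right, "disparity:", left[0] - right[0])
```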

Relevance: 100.00%

Abstract:

Augmented reality has been part of numerous research projects for several years. Combining real-world information with digital information offers endless possibilities. The best-known applications are games, but the same technology also makes it possible to implement natural interfaces, that is, to let the user operate an electronic device through their own actions: body movements, facial expressions, and so on. This project presents the development of the system layer of Mokey, a natural interface that simulates a keyboard through the user's body movements. With it, any computer application that requires a keyboard can be operated through body movements, even if it was not designed for that purpose when it was created. Mokey's user layer is covered in the project by Carlos Lázaro Basanta. Mokey's main objective is to make the computer, a technology so present in everyday life, accessible to people with motor disabilities or reduced mobility. In a society as computerized as ours, genuine social inclusion requires giving this part of the population access to current technology rather than creating separate tools exclusively for them, which would create a situation of discrimination, however unintended. Mokey's design must therefore be simple and intuitive while remaining versatile enough that as many disabled users as possible can find a configuration that is optimal for them. After stating the motivations of the project, this document provides a detailed analysis of the state of the art, covering both the technology directly involved and similar projects, with particular attention to the Microsoft Kinect camera, the hardware that allows Mokey to detect motion. It then describes the developed natural interface in detail, focusing on the algorithms implemented for motion detection and keyboard simulation. Finally, it presents an exhaustive analysis of how Mokey works with other applications: a broad battery of tests determines its performance in the most common situations, and a second battery defines its compatibility with the different types of programs on the market. For greater precision in analyzing the data, Mokey is compared with a similar tool, FAAST, highlighting the advantages of an application designed specifically for disabled people over one that was not conceived with that aim.
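As an illustration only (Mokey's actual implementation is not shown in this abstract, so every name below is hypothetical), a natural-interface layer of this kind can be reduced to checking tracked joint positions against configured trigger zones and emitting the corresponding synthetic key events.

```python
# Hypothetical gesture-to-key mapping: joints reported by a skeleton tracker
# are compared against configured trigger zones to decide which keys to hold.
from dataclasses import dataclass

@dataclass
class Trigger:
    joint: str        # e.g. "right_hand", as named by the skeleton tracker
    axis: int         # 0 = x, 1 = y (normalized coordinates)
    threshold: float  # coordinate beyond which the trigger activates
    key: str          # key to simulate while the trigger is active

def active_keys(skeleton, triggers):
    """skeleton: dict joint -> (x, y) in [0, 1]. Returns keys to hold down."""
    keys = []
    for t in triggers:
        pos = skeleton.get(t.joint)
        if pos is not None and pos[t.axis] > t.threshold:
            keys.append(t.key)
    return keys

# Raising the right hand above 0.8 holds the "up" arrow, for instance.
config = [Trigger("right_hand", 1, 0.8, "up"),
          Trigger("left_hand", 0, 0.9, "right")]
print(active_keys({"right_hand": (0.5, 0.9), "left_hand": (0.3, 0.4)}, config))
```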

Relevance: 100.00%

Abstract:

Building Information Modelling (BIM) provides a shared source of information about a built asset, which creates a collaborative virtual environment for project teams. The literature suggests that, to collaborate efficiently, relationships within the project team must be based on sympathy, obligation, trust and rapport. Communication increases in importance when working collaboratively, but effective communication can only be achieved when the stakeholders are willing to act, react, listen and share information. Case study research and interviews with Architecture, Engineering and Construction (AEC) industry experts suggest that synchronous face-to-face communication is project teams' preferred method, allowing teams to socialise and build rapport, accelerating the creation of trust between the stakeholders. However, virtual unified communication platforms are a close second-preferred option for communication between teams. Effective methods for virtual communication in professional practice, such as collaborative virtual environments (CVE), that build trust and achieve spontaneous responses similar to face-to-face communication are necessary to face global challenges and can be achieved with the right people, processes and technology. This paper investigates current industry methods for virtual communication within BIM projects and explores the suitability of avatar interaction in a collaborative virtual environment as an alternative to face-to-face communication for enhancing collaboration between design teams on a project. The paper presents comparisons of the effectiveness of these communication methods within construction design teams, together with results of further experiments conducted to test recommendations for more efficient virtual communication methods that add value in the workplace.

Relevance: 100.00%

Abstract:

This study examines whether virtual reality (VR) is superior to paper-based instructions in increasing the speed at which individuals learn a new assembly task. Specifically, the work seeks to quantify any learning benefit of pre-learning the task and compares the performance of two groups that used virtual and hard-copy media to pre-learn it. A build experiment based on multiple builds of an aircraft panel showed that the group who pre-learned the assembly task in a VR environment completed their builds faster (average build time 29.5% lower). The VR group also made fewer references to instructional materials (average number of references 38% lower) and made fewer errors than the group using more traditional, hard-copy instructions. These outcomes were most pronounced during the first build, with the differences in build time and number of references showing limited statistical significance.

Relevance: 100.00%

Abstract:

In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking, allowing operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors are not accumulated into the model, they are tolerant of inaccurate initialization, and tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
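A minimal CPU sketch of one point-to-point ICP iteration as described above; the paper's implementation is GPGPU-based and uses all the depth data, so the names and toy data here are purely illustrative.

```python
# One ICP step: match captured points to the rendered model cloud by nearest
# neighbours, then recover the pose increment with an SVD fit.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """source, target: (N, 3) point clouds. Returns incremental R, t."""
    matches = cKDTree(target).query(source)[1]     # nearest model point
    q = target[matches]
    s_c, q_c = source.mean(axis=0), q.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - s_c).T @ (q - q_c))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, q_c - R @ s_c

# Iterating: apply R, t to the source cloud and repeat until the motion is tiny.
model = np.random.rand(500, 3)                     # cloud rendered from CAD
captured = model + np.array([0.01, 0.0, 0.0])      # slightly offset capture
R, t = icp_step(captured, model)
print(t)   # close to (-0.01, 0, 0), undoing the offset
```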

Relevance: 100.00%

Abstract:

This thesis describes the development and architecture of the software behind the Miradouro Virtual@ (Virtual Sightseeing™), focusing on the interface component. The Miradouro Virtual@ is a device whose purpose, like that of traditional scenic viewers, is to observe the landscape, but whose interaction is not limited to simple individual observation. It uses augmented reality to superimpose computer-generated images in real time onto real images captured by an image acquisition device (typically a video camera) and displays them on a touchscreen, making it possible to combine virtual and multimedia elements with the real landscape. The final composited image gives the user a new dimension of the surrounding space, letting them explore a layer of information not previously visible. Because the virtual and multimedia elements are sensitive to the orientation and position of the device, they adapt as the user changes its orientation. The Miradouro Virtual@ comprises several hardware and software components; the focus of this thesis is on the software, specifically the interface component. It presents the known limitations of the previous software version and shows how they were overcome in the new version.
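As an illustration of the orientation-sensitive overlay idea (the pinhole model and all parameters below are assumptions, not the actual Miradouro Virtual@ code), the screen position of a point of interest can be derived from the difference between its bearing and the device's current pan/tilt.

```python
# Map a landmark's bearing, relative to the viewer's orientation, to a pixel.
import math

def overlay_position(poi_pan_deg, poi_tilt_deg, view_pan_deg, view_tilt_deg,
                     focal_px, width, height):
    """Return the (x, y) pixel where the point of interest should be drawn."""
    dx = math.radians(poi_pan_deg - view_pan_deg)
    dy = math.radians(poi_tilt_deg - view_tilt_deg)
    x = width / 2 + focal_px * math.tan(dx)    # pan offset moves label sideways
    y = height / 2 - focal_px * math.tan(dy)   # tilt offset moves it vertically
    return x, y

# As the device pans right, the label of a fixed landmark slides left.
print(overlay_position(30.0, 5.0, 25.0, 0.0, 800.0, 1280, 720))
```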

Relevance: 100.00%

Abstract:

Nowadays, the latest generation of computers provides enough performance to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential capability for moving through those environments. Traditionally, mobile robots have used a combination of sensors based on different technologies. Lasers, sonars and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to diverse, powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video-game scene modeling, where well-known places are reconstructed and added to games, or advertising, where, once the 3D model of a room is obtained, the system can add furniture using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene-mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing neural model has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data, because our target environments are man-made and therefore contain many points that belong to planar surfaces. The proposed method achieves good compression results in such man-made scenarios, and the detected, compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we also demonstrate the benefits of GPU technologies by obtaining a high-performance implementation of Virtual Digitizing, a common CAD/CAM technique.
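A minimal RANSAC-style sketch of plane detection as a compression primitive, with illustrative thresholds and synthetic data: in man-made scenes a detected plane can summarize thousands of points with four parameters plus an inlier set.

```python
# RANSAC plane detection: repeatedly fit a plane to three random points and
# keep the candidate that explains the most points within a tolerance.
import numpy as np

def detect_plane(points, iters=200, tol=0.02, rng=np.random.default_rng(0)):
    """points: (N, 3). Returns (unit normal n, d) with n.p + d = 0, and inliers."""
    best_inliers, best_plane = np.zeros(len(points), bool), None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                       # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)   # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, -n @ p0)
    return best_plane, best_inliers

# A noisy floor plane plus scattered outliers: the plane absorbs most points.
floor = np.c_[np.random.rand(900, 2), 0.005 * np.random.randn(900)]
cloud = np.vstack([floor, np.random.rand(100, 3) + [0.0, 0.0, 0.5]])
(n, d), inl = detect_plane(cloud)
print(inl.sum(), n)   # ~900 inliers, normal close to (0, 0, +/-1)
```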