937 results for Head-Mounted Displays
Abstract:
Recent studies have shown that a fake body part can be incorporated into the human body representation through synchronous multisensory stimulation of the fake and the corresponding real body part, the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, with no significant difference between these conditions. Moreover, participants experienced ownership over a virtual arm up to three times the length of the real one, and less strongly at four times the length; the illusion did decline, however, with the length of the virtual arm. In the C2-C4 conditions, although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting that ownership and drift rely on different underlying mechanisms. Overall, these findings extend and enrich previous results showing that multisensory and sensorimotor information can reconstruct our perception of body shape, size and symmetry even when this is inconsistent with normal body proportions.
Abstract:
This thesis surveys the current state of small teleoperated devices, examines the need for them, and documents the development of one. Small teleoperated devices make it possible to perform tasks that are impossible or dangerous for humans. This work concentrates on small devices and cheap components, and discloses one way of developing a teleoperated vehicle, though not necessarily the optimal way. The development and current state of teleoperation were studied in a literature review drawing on both the literature and the Internet. The need for teleoperated devices was mapped through a survey in which 11 professionals from various fields were interviewed about how they could utilize a teleoperated device and with what kind of features. In addition, a prototype was built as a proof of concept for small teleoperated devices. The prototype is controlled by a single-board microcomputer that also streams video to the controlling device. The video can be viewed on a display or with a head-mounted display.
Abstract:
Over the past few years a number of research studies, mainly involving desktop-based or head-mounted Virtual Reality (VR) systems, have been undertaken to determine what VR can contribute to the education process. In our study we have used the findings from a number of these studies to help in formulating a new study into the perceived merits and limitations of using VR in general, and immersive CAVE-like systems in particular, as an education tool. We conducted our study with a group of final year undergraduate students who were registered on a module that described VR in terms of the scientific issues, application areas, and strengths and weaknesses of the technology.
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar, which had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge which objects an avatar is looking at when only head gaze is displayed and when both eye- and head-gaze data are displayed. The results show that eye gaze is of vital importance to subjects' ability to correctly identify what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment examined the effects of stereo versus mono viewing of the scene, again asking subjects to identify where the avatar was looking; it showed no difference between the two viewing modes in the subjects' ability to detect where the avatar was gazing. The article closes with a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
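The convergence cue from the second experiment can be illustrated with a little geometry. The sketch below is hypothetical (not the article's actual algorithm or code): it recovers an approximate 3D fixation point from two monocular gaze rays by taking the midpoint of the shortest segment between them. With eye direction alone only a ray is known; convergence of the two eyes pins down a depth.

```python
def fixation_from_convergence(o1, d1, o2, d2):
    """Approximate the 3D fixation point as the midpoint of the shortest
    segment between two gaze rays (eye origin + direction vector).
    Hypothetical geometry, not the article's implementation."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = tuple(a - b for a, b in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # parallel rays: no convergence cue
        return None
    s = (b * e - c * d) / denom       # parameter along the left ray
    t = (a * e - b * d) / denom       # parameter along the right ray
    p1 = tuple(o + s * k for o, k in zip(o1, d1))
    p2 = tuple(o + t * k for o, k in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Eyes 6 cm apart, both rays converging on a point 1 m straight ahead:
p = fixation_from_convergence((-0.03, 0, 0), (0.03, 0, 1.0),
                              ( 0.03, 0, 0), (-0.03, 0, 1.0))
```

With parallel eye directions (no convergence) the function returns `None`, which mirrors the experimental finding that eye direction alone leaves gaze depth ambiguous.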
Abstract:
Augmented Reality systems overlay computer-generated information onto a user's natural senses. Where this additional information is visual, it is overlaid on the user's natural visual field of view through a head-mounted (or "head-up") display device. Integrated Home Systems provides a network that links every electrical device in the home, giving the user both control of and data transparency across the network.
Abstract:
Virtual Reality (VR) can provide visual stimuli for EEG studies that can be altered in real time, producing effects that are difficult or impossible to reproduce on a non-virtual experimental platform. As part of this experiment, the Oculus Rift, a commercial-grade, low-cost head-mounted display (HMD), was assessed as a visual stimulus platform for experiments recording EEG. The device was then used to investigate the effect of congruent visual stimuli on Event Related Desynchronisation (ERD) due to motor imagery.
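ERD is conventionally quantified as the percentage change of band power in an activity window relative to a pre-stimulus reference window (the classic Pfurtscheller definition; negative values indicate desynchronisation). A minimal sketch of that computation, with an assumed sampling rate and window choices:

```python
def erd_percent(power, fs, ref_window, act_window):
    """Event-Related Desynchronisation as the percentage band-power change
    in an activity window relative to a reference window. Window bounds
    are in seconds; `power` is a sequence of band-power samples."""
    mean = lambda xs: sum(xs) / len(xs)
    r0, r1 = (int(t * fs) for t in ref_window)
    a0, a1 = (int(t * fs) for t in act_window)
    R = mean(power[r0:r1])                 # reference (baseline) power
    A = mean(power[a0:a1])                 # activity-period power
    return (A - R) / R * 100.0             # negative => desynchronisation

fs = 250                                   # Hz (assumed sampling rate)
power = [1.0] * (3 * fs) + [0.6] * (2 * fs)  # toy trace: power drops after 3 s
erd = erd_percent(power, fs, (0.0, 2.0), (3.0, 5.0))   # ≈ -40.0 (a 40% drop)
```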
Abstract:
Thanks to continuing technological evolution, it is now possible, through a Head Mounted Display (HMD), to experience a virtual reality that is detailed, interactive and immersive. Progress in this field has brought about a true revolution, opening the possibility of using this technology in many domains. The obstacle encountered is that progress of this magnitude has not been matched by an adequate update and refinement of the methods for interacting with 3D objects, the use of graphical interfaces, and environment design in general. The direct consequence of this gap is to weaken, or even cancel, the sense of presence afforded by the HMD, an indispensable requirement that allows the user to become sensorially immersed in the simulated context. The goal of this study is to understand what must be taken into account, and which rules must change, in order to maintain a strong sensation of presence for the user within virtual reality. To this end, a 3D virtual environment was created that supports an HMD, the Oculus Rift, and several input devices enabling control through natural movements, the Razer Hydra and the Leap Motion, so as to directly analyse the level of perceived presence while performing various interactions with the virtual environment and its graphical interfaces through these devices. This analysis identified many aspects of these types of interaction and of user-interface design which, although in common use in contemporary 3D environments, no longer work when experienced in virtual reality and weaken the user's perceived sense of presence.
For each of these aspects, an alternative solution was proposed and implemented (based on theoretical concepts such as Natural Mapping, Diegesis, Affordance and Flow) that remains functional in a virtual reality context and guarantees a strong sensation of presence for the user. The final result of this study is thus a set of new design methods for virtual reality environments. These methods enabled the creation of a 3D virtual environment designed to be experienced through an HMD, in which the user can use natural movements to interact with 3D objects and operate graphical interfaces.
Abstract:
The advancing development of concepts and systems for using digital information in industrial settings opens up a wide range of possibilities for optimizing information processing, and thus process effectiveness and efficiency. However, if the relevant data on products or processes are made available only in digital form, it becomes ever harder for humans to engage with this virtual world. On this basis, RFID technology is used as an example to show how digital information can be made usable for humans through systems integrated into the workflow. Through the development of a system for paperless production and logistics, example deployment scenarios are presented for supporting workers in assembly processes and for avoiding errors in order picking. To this end, a head-mounted display for visualizing the information is used together with a mobile RFID reader, with which the digital transponder data can be exploited without additional effort for the user.
Abstract:
While navigation systems for cars are in widespread use, indoor navigation systems based on smartphone apps have only recently become technically feasible. Tools are therefore needed to plan and evaluate particular designs of information provision. Since tests in real infrastructures are costly and environmental conditions cannot be held constant, one must resort to virtual infrastructures. This paper presents the development of an environment to support the design of indoor navigation systems, whose centerpiece is a hands-free navigation method using the Microsoft Kinect in the four-sided Definitely Affordable Virtual Environment (DAVE). Navigation controls using the user's gestures and postures as input are designed and implemented. The installation of expensive and bulky hardware such as treadmills is avoided, while still giving the user a good impression of the distance she has traveled in virtual space. An advantage over approaches using a head-mounted display is that the DAVE allows users to interact with their smartphone. Thus the effects of different indoor navigation systems can be evaluated already in the planning phase using the resulting system.
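As an illustration of the kind of gesture-based locomotion control described above, the toy sketch below maps a body lean angle (as might be estimated from Kinect skeleton data) to a walking speed. The dead zone, gain and speed limit are assumed values for illustration, not parameters of the DAVE system:

```python
def lean_to_velocity(lean_deg, dead_zone=5.0, gain=0.05, v_max=1.5):
    """Map a forward/backward lean angle (degrees) to walking speed (m/s).
    A dead zone suppresses postural jitter; speed is clamped to v_max.
    All parameters are assumed, for illustration only."""
    if abs(lean_deg) <= dead_zone:
        return 0.0                          # standing upright: no motion
    sign = 1.0 if lean_deg > 0 else -1.0
    speed = gain * (abs(lean_deg) - dead_zone)
    return sign * min(speed, v_max)         # clamp to maximum speed

v = lean_to_velocity(15.0)   # lean 15 degrees forward -> 0.5 m/s
```

The dead zone is the important design choice here: without it, skeleton-tracking noise around the upright posture would translate into constant drift in the virtual space.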
Abstract:
PURPOSE External beam radiation therapy is currently the most common treatment modality for intraocular tumors. Localization of the tumor and efficient compensation of tumor misalignment with respect to the radiation beam are crucial. In the state-of-the-art procedure, localization of the target volume is performed indirectly through the invasive surgical implantation of radiopaque clips, or is limited to positioning the head using stereoscopic radiographies. This work represents a proof of concept for direct and noninvasive tumor referencing based on anterior eye topography acquired using optical coherence tomography (OCT). METHODS A prototype of a head-mounted device has been developed for automatic monitoring of tumor position and orientation in the isocentric reference frame for LINAC-based treatment of intraocular tumors. Noninvasive tumor referencing is performed with six degrees of freedom, based on anterior eye topography acquired using OCT and registration of a statistical eye model. The proposed prototype was tested on enucleated pig eyes, and registration accuracy was measured by comparing the resulting transformation with tilt and torsion angles manually induced using a custom-made test bench. RESULTS Validation on 12 enucleated pig eyes revealed an overall average registration error of 0.26 ± 0.08° in 87 ± 0.7 ms for tilt and 0.52 ± 0.03° in 94 ± 1.4 ms for torsion. Furthermore, the dependency of the mean registration error on sampling density was quantitatively assessed. CONCLUSIONS The tumor referencing method presented here, combined with the previously introduced statistical eye model, has the potential to enable noninvasive treatment and may improve the quality, efficacy, and flexibility of external beam radiotherapy of intraocular tumors.
Abstract:
In this paper we present the design and implementation of a wearable application in Prolog. The application program is a "sound spatializer": given an audio signal and real-time data from a head-mounted compass, a signal is generated for stereo headphones so that the sound appears to come from a fixed position in space. We describe the high-level and low-level optimizations and transformations that were applied in order to fit this application on the wearable device. The final application operates comfortably in real time on a wearable computer, and its memory footprint remains constant over time, enabling it to run on continuous audio streams. Comparison with a version hand-written in C shows that the C version is no more than 20-40% faster: a small price to pay for a high-level description.
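As an illustration of the core signal-processing idea (written in Python rather than the paper's Prolog, and using an assumed constant-power panning law rather than the paper's actual spatialization), a compass azimuth can be mapped to a stereo level difference like this:

```python
import math

def spatialize(sample, azimuth_deg):
    """Toy stereo spatialization: constant-power panning driven by a
    compass azimuth (0 deg = source straight ahead, +90 deg = hard right).
    A real spatializer would also use interaural delays and HRTFs;
    this sketch models only the level cue."""
    az = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    theta = (az + math.pi / 2) / 2        # map [-90, 90] deg to [0, pi/2]
    left = sample * math.cos(theta)       # left gain falls as source moves right
    right = sample * math.sin(theta)
    return left, right

l, r = spatialize(1.0, 0.0)    # source straight ahead: equal channels
```

Constant-power panning keeps the summed channel power independent of azimuth, so the perceived loudness stays stable as the listener turns their head.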
Abstract:
The AUTOFLY-Aid project aims to develop and demonstrate novel automation support algorithms and tools for the flight crew for flight-critical collision avoidance using "dynamic 4D trajectory management". The automation support system is envisioned to remedy the primary shortcomings of TCAS, and to aid the pilot through add-on avionics/head-up displays and reality augmentation devices in dynamically evolving collision avoidance scenarios. The main novel theoretical concepts to be developed by the AUTOFLY-Aid project are a) the design and development of mathematical models of the full composite airspace picture from the flight deck's perspective, as seen/measured/informed by an aircraft flying in SESAR 2020, b) the design and development of a dynamic trajectory planning algorithm that can generate in real time (on the order of seconds) flyable (i.e. dynamically and performance-wise feasible) alternative trajectories across the evolving stochastic composite airspace picture (which includes new conflicts, blunder risks, and terrain and weather limitations), and c) the development and testing of the Collision Avoidance Automation Support System on a Boeing 737 NG FNPT II flight simulator with synthetic vision and reality augmentation, providing the flight crew with a quantified and visual understanding of collision risks, in terms of time and directions, and of countermeasures.
Abstract:
Advances in audiovisual capture technology, together with the shrinking size of sensors and cameras, today make it possible to capture scenes from multiple viewpoints simultaneously, generating different 3D video formats whose common element is the inclusion of multiview video. As for 3D video display technologies, several technological options currently exist, among which virtual reality headsets, also known as Head-Mounted Devices (HMD), are gaining great importance. These headsets have mainly been used for viewing panoramic (or 360) video. However, because they can track the user (head position and orientation), they also enable the development of systems for viewing multiview video, offering functionality similar to that of autostereoscopic monitors. In this Bachelor's thesis, a prototype system has been developed for viewing multi-camera 3D video on the Oculus Rift, an HMD device. The system takes as input a multiview video sequence (real or computer-generated) and, using the information provided by the Oculus Rift's sensors, adapts the viewpoint to the user's position. The developed system simulates the viewing experience of an autostereoscopic monitor and is parameterizable. It allows a number of parameters to be varied, such as the interocular distance or the camera density, and offers several operating modes. This will allow the system to be used with different Super MultiView (SMV) sequences, making it also useful for subjective quality-of-experience testing.
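The viewpoint adaptation this kind of system performs can be sketched as a mapping from lateral head position to the nearest camera of a linear multiview rig. This is a hypothetical illustration, not the thesis's implementation; `baseline` (spacing between adjacent cameras) and the view count are assumed parameters:

```python
def select_view(head_x, baseline, n_views):
    """Map lateral head position (metres, 0 = rig centre) to the index of
    the nearest camera in a linear multiview rig. `baseline` is the assumed
    spacing between adjacent cameras; views are centred on x = 0."""
    centre = (n_views - 1) / 2.0
    idx = round(centre + head_x / baseline)
    return max(0, min(n_views - 1, idx))   # clamp to the available views

# Head one camera-baseline to the right of centre in a 9-view rig:
view = select_view(0.065, 0.065, 9)        # -> view index 5
```

Clamping at the rig's ends mirrors what an autostereoscopic display does: beyond the outermost cameras the user simply keeps seeing the last available view.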
Abstract:
Federal Aviation Administration, Atlantic City International Airport, N.J.
Abstract:
The ability to measure the response of the oculomotor system, such as ocular accommodation, accurately and in real-world environments is essential. New instruments have been developed over the past 50 years to measure eye focus, including the extensively utilised and well-validated Canon R-1, but in general these have had limitations such as a closed field of view, poor temporal resolution, and extensive instrumentation bulk preventing naturalistic performance of environmental tasks. The use of photoretinoscopy, and more specifically the PowerRefractor, was examined in this regard due to its remote nature, its binocular measurement of accommodation, eye movement and pupil size, and its open field of view. The accuracy of the PowerRefractor in measuring refractive error was on average similar to, but more variable than, subjective refraction and previously validated instrumentation. The PowerRefractor was found to be tolerant of eye movements away from the visual axis, but could not function with small pupil sizes in brighter illumination. It underestimated the lead of accommodation and overestimated the slope of the accommodation stimulus-response curve. The PowerRefractor and the SRW-5000 were used to measure oculomotor responses in a variety of real-world environments: spectacles compared to single-vision contact lenses; the use of multifocal contact lenses by pre-presbyopes (relevant to studies on myopia retardation); and 'accommodating' intraocular lenses. Due to the accuracy concerns with the PowerRefractor, a purpose-built photoretinoscope was designed to measure the oculomotor response to a monocular head-mounted display. In conclusion, this thesis has shown the ability of photoretinoscopy to quantify changes in the oculomotor system.
However, there are some major limitations to the PowerRefractor, such as the need for individual calibration for accurate measures of accommodation and vergence, and the relatively large pupil size necessary for measurement.