925 results for Virtual reality -- Computer programs
Abstract:
Individuals with vestibular dysfunction may experience visual vertigo (VV), in which symptoms are provoked or exacerbated by excessive or disorientating visual stimuli (e.g. supermarkets). VV can significantly improve when customized vestibular rehabilitation exercises are combined with exposure to optokinetic stimuli. Virtual reality (VR), which immerses patients in realistic, visually challenging environments, has also been suggested as an adjunct to vestibular rehabilitation to improve VV symptoms. This pilot study compared the responses of sixteen patients with unilateral peripheral vestibular disorder randomly allocated to a rehabilitation regime incorporating exposure to a static (Group S) or dynamic (Group D) VR environment. Participants practiced vestibular exercises, twice weekly for four weeks, inside a static (Group S) or dynamic (Group D) virtual crowded square environment, presented in an immersive projection theatre (IPT), and received a vestibular exercise program to practice on days not attending clinic. A third group (Group D1) completed both the static and dynamic VR training. Treatment response was assessed with the Dynamic Gait Index and questionnaires concerning symptom triggers and psychological state. At final assessment, significant between-group differences in VV symptoms were noted for Groups D (p = 0.001) and D1 (p = 0.03) compared to Group S, with the former two showing significant improvements of 59.2% and 25.8% respectively, compared to 1.6% for the latter. Depression scores improved only for Group S (p = 0.01), while a trend towards significance was noted for Group D regarding anxiety scores (p = 0.07). Conclusion: Exposure to dynamic VR environments should be considered as a useful adjunct to vestibular rehabilitation programs for patients with peripheral vestibular disorders and VV symptoms.
Abstract:
Body change illusions have been of great interest in recent years for the understanding of how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership with respect to an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) first-person perspective position, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area.
Abstract:
Altering the normal association between touch and its visual correlate can result in the illusory perception of a fake limb as part of our own body. Thus, when touch is seen to be applied to a rubber hand while felt synchronously on the corresponding hidden real hand, an illusion of ownership of the rubber hand usually occurs. The illusion has also been demonstrated using visuomotor correlation between the movements of the hidden real hand and the seen fake hand. This type of paradigm has been used with respect to the whole body, generating out-of-the-body and body substitution illusions. However, such studies have only ever manipulated a single factor and, although they used a form of virtual reality, have not exploited the power of immersive virtual reality (IVR) to produce radical transformations in body ownership.
Abstract:
Cognitive neuroscientists have discovered various experimental setups that suggest that our body representation is surprisingly flexible: the brain can easily be tricked into the illusion that a rubber hand is your hand or that a manikin body is your body. These multisensory illusions work well in immersive virtual reality (IVR). What is even more surprising is that such embodiment induces perceptual, attitudinal and behavioural changes that are concomitant with the displayed body type. Here we outline some recent findings in this field, and suggest that this offers a powerful tool for neuroscience and psychology, and a new path for IVR.
Abstract:
Cue exposure treatment (CET) consists of controlled and repeated exposure to drug-related stimuli in order to reduce cue-reactivity. Virtual reality (VR) has proved to be a promising tool for exposure. However, identifying the variables that can modulate the efficacy of this technique is essential for selecting the most appropriate exposure modality. The aim of this study was to determine the relation between several individual variables and self-reported craving in smokers exposed to VR environments. Forty-six smokers were exposed to seven complex virtual environments that reproduce typical situations in which people smoke. Self-reported craving was selected as the criterion variable and three types of variables were selected as the predictor variables: related to nicotine dependence, related to anxiety and impulsivity, and related to the sense of presence in the virtual environments. Sense of presence was the only predictor of self-reported craving in all the experimental virtual environments. Nicotine dependence variables added predictive power to the model only in the virtual breakfast at home. No relation was found between anxiety or impulsivity and self-reported craving. Virtual reality technology can be very helpful for improving CET for substance use disorders. However, the use of virtual environments would make sense only insofar as the sense of presence was high. Otherwise, the effectiveness of exposure might be affected. © 2012 by the Massachusetts Institute of Technology.
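The predictor/criterion analysis described in this abstract is, at its core, a multiple regression of self-reported craving on the candidate predictors. A minimal sketch of that kind of analysis follows; the variable names and synthetic data are illustrative assumptions, not the study's actual dataset or statistical procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 46                                   # participants, matching the study's sample size
presence   = rng.normal(size=n)          # sense-of-presence score (synthetic)
dependence = rng.normal(size=n)          # nicotine-dependence score (synthetic)
anxiety    = rng.normal(size=n)          # anxiety/impulsivity score (synthetic)

# Simulate the study's headline pattern: only presence drives craving.
craving = 0.8 * presence + rng.normal(scale=0.3, size=n)

# Ordinary least squares: craving ~ presence + dependence + anxiety
X = np.column_stack([np.ones(n), presence, dependence, anxiety])
beta, *_ = np.linalg.lstsq(X, craving, rcond=None)
print(dict(zip(["intercept", "presence", "dependence", "anxiety"],
               np.round(beta, 2))))
```

With data generated this way, the fitted coefficient on `presence` is close to 0.8 while the other coefficients stay near zero, mirroring the finding that sense of presence was the only consistent predictor.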
Abstract:
The benefits and applications of virtual reality (VR) in the construction industry have been investigated for almost a decade. However, the practical implementation of VR in the construction industry has yet to reach maturity owing to technical constraints. The need for effective information management presents challenges: both transfer of building data to, and organisation of building information within, the virtual environment require consideration. This paper reviews the applications and benefits of VR in the built environment field and reports on a collaboration between Loughborough University and South Bank University to overcome constraints on the use of the overall VR model for whole lifecycle visualisation. The work at each research centre is concerned with an aspect of information management within VR applications for the built environment, and both data transfer and internal data organisation have been investigated. In this paper, similarities and differences between computer-aided design (CAD) and VR packages are first discussed. Three different approaches to the creation of VR models during the design stage are identified and described, with a view to providing shared understanding across the interdisciplinary groups involved. The suitable organisation of building information within the virtual environment is then further investigated. This work focused on the visualisation of the degradation of a building through its lifespan, with a view to providing a visual aid for developing an effective and economic project maintenance programme. Finally, consideration is given to the potential of emerging standards to facilitate an integrated use of VR. The convergence towards similar data structures in VR and other construction packages may enable visualisation to be better utilised in the overall lifecycle model.
Abstract:
Virtual reality has the potential to improve visualisation of building design and construction, but its implementation in the industry has yet to reach maturity. Present-day translation of building data to virtual reality is often unidirectional and unsatisfactory. Three different approaches to the creation of models are identified and described in this paper. Consideration is given to the potential of both advances in computer-aided design and the emerging standards for data exchange to facilitate an integrated use of virtual reality. Commonalities and differences between computer-aided design and virtual reality packages are reviewed, and trials of current systems are described. The trials have been conducted to explore the technical issues related to the integrated use of CAD and virtual environments within the house building sector of the construction industry and to investigate the practical use of the new technology.
Abstract:
User interaction within a virtual environment may take various forms: a teleconferencing application will require users to speak to each other (Geak, 1993); in computer-supported co-operative working, an engineer may wish to pass an object to another user for examination; in a battlefield simulation (McDonough, 1992), users might exchange fire. In all cases it is necessary for the actions of one user to be presented to the others sufficiently quickly to allow realistic interaction. In this paper we take a fresh look at the approach of virtual reality operating systems by tackling the underlying issues of creating real-time multi-user environments.
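One classic technique for presenting a remote user's actions quickly despite network delay is dead reckoning: each client extrapolates a remote avatar's last reported state instead of stalling until the next update arrives. The sketch below is an illustrative assumption about how such a system might work, not a description of the mechanism used in the paper's system.

```python
from dataclasses import dataclass

@dataclass
class RemoteAvatar:
    """Last state update received over the network for one remote user."""
    pos: float = 0.0       # 1-D position, for simplicity
    vel: float = 0.0       # velocity reported with the update
    t_update: float = 0.0  # simulation time at which the update was sent

    def extrapolate(self, t_now: float) -> float:
        """Dead reckoning: predict where the remote user is now,
        so each rendered frame stays responsive between packets."""
        return self.pos + self.vel * (t_now - self.t_update)

# A remote user reported position 10.0 moving at 2.0 units/s at t = 1.0;
# half a second later, with no new packet, we render the predicted position.
avatar = RemoteAvatar(pos=10.0, vel=2.0, t_update=1.0)
print(avatar.extrapolate(1.5))  # → 11.0
```

When the next packet does arrive, the avatar's state is simply overwritten, and any visible discrepancy can be smoothed over a few frames rather than corrected with a jarring jump.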
Abstract:
Virtual Reality is a relatively new technology in the relatively young field of computer science. The design of Virtual Reality interfaces, and the implications of such design, have only recently come under discussion. I hope to determine how a user can work most efficiently and accurately in a Virtual World and, by studying this, to help in the standardization of Virtual Reality design.
Abstract:
This thesis reports on research done for the integration of eye tracking technology into virtual reality environments, with the goal of using it in the rehabilitation of patients who have suffered a stroke. For the last few years, eye tracking has been a focus of medical research, used as an assistive tool to help people with disabilities interact with new technologies and as an assessment tool to track eye gaze during computer interactions. However, tracking more complex gaze behaviors and relating them to motor deficits in people with disabilities is an area that has not been fully explored, and it therefore became the focal point of this research. During the research, two exploratory studies were performed in which eye tracking technology was integrated in the context of a newly created virtual reality task to assess the impact of stroke. Using an eye tracking device and a custom virtual task, the system developed is able to monitor changes in eye gaze patterns over time in patients with stroke, as well as allowing their eye gaze to function as an input for the task. Based on neuroscientific hypotheses of upper limb motor control, the studies aimed at verifying the differences in gaze patterns during the observation and execution of the virtual goal-oriented task in stroke patients (N=10), and also at assessing normal gaze behavior in healthy participants (N=20). Results were consistent with and supported the hypotheses formulated, showing that eye gaze could be used as a valid assessment tool in these patients. However, the findings of this first exploratory approach are insufficient to fully understand the effect of stroke on eye gaze behavior. Therefore, a novel model-driven paradigm is proposed to further understand the relation between the neuronal mechanisms underlying goal-oriented actions and eye gaze behavior.
Abstract:
The medieval town of Leopoli-Cencelle (founded by Pope Leo IV in 854 AD, not far from Civitavecchia) has been the subject of study and of periodic excavation campaigns since 1994. The stratigraphies, investigated with traditional methods, have brought to light the numerous transformations the town underwent over the course of its existence. Houses, towers, workshops and occupation layers have, since the beginning of the excavation, been interpreted on the basis of traditional two-dimensional documentation, tied to paper records and drawings. The present work aims to re-interpret the excavation data with the aid of digital technologies. The project used a laser scanner, Computer Vision techniques and 3D modelling. The three methods were combined so as to visualise the excavated dwellings in three dimensions, with the possibility of superimposing simple 3D models that allow different hypotheses about the form and use of the spaces to be formulated. Modelling space and time while offering various possible choices makes it possible to combine real three-dimensional data, acquired with a laser scanner, with simple philological 3D models, and offers the opportunity to evaluate different possible interpretations of a building's characteristics on the basis of its spaces, materials and construction techniques. The aim of the project is to go beyond Virtual Reality, with the possibility of analysing the remains and re-interpreting the function of a building, both during excavation and after it has concluded. From the research point of view, the possibility of visualising hypotheses in the field fosters a deeper understanding of the archaeological context. A second objective is communication to an audience of "non-archaeologists". The intention is to offer ordinary visitors the possibility of understanding and experiencing the interpretative process, giving them something more than a single definitive hypothesis.
Abstract:
Three-dimensional (3D) ultrasound volume acquisition, analysis and display of fetal structures have enhanced their visualization and greatly improved the general understanding of their anatomy and pathology. The dynamic display of volume data generally depends on proprietary software, usually supplied with the ultrasound system, and on the operator's ability to maneuver the dataset digitally. We have used relatively simple tools and an established storage, display and manipulation format to generate non-linear virtual reality object movies of prenatal images (including moving sequences and 3D-rendered views) that can be navigated easily and interactively on any current computer. This approach permits a viewing or learning experience that is superior to watching a linear movie passively.
Abstract:
Tracking a user’s visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user’s visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers. Hence, they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used during the disambiguation of partly occluded objects.
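A common geometric way to derive depth from binocular gaze data is to treat each eye's gaze as a 3D ray and estimate the fixation point as the midpoint of the shortest segment connecting the two (generally skew) rays. The sketch below shows that standard closest-point construction; it is an illustrative assumption, not necessarily either of the two algorithms compared in the article.

```python
import numpy as np

def gaze_depth_point(o_left, d_left, o_right, d_right):
    """Estimate the 3D fixation point from two gaze rays.
    o_*: eye positions; d_*: gaze directions (need not be unit length)."""
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = o_left - o_right
    b = d1 @ d2                       # cosine of the angle between the rays
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b               # zero when the rays are parallel
    if abs(denom) < 1e-9:             # parallel gaze: no finite vergence point
        s, t = 0.0, e
    else:                             # closest-approach parameters on each ray
        s = (b * e - d) / denom
        t = (e - b * d) / denom
    p1 = o_left + s * d1              # closest point on the left gaze ray
    p2 = o_right + t * d2             # closest point on the right gaze ray
    return (p1 + p2) / 2.0            # estimated fixation point

# Example: eyes 6 cm apart, both converging on a point 1 m straight ahead.
left_eye, right_eye = np.array([-0.03, 0, 0]), np.array([0.03, 0, 0])
fix = gaze_depth_point(left_eye, np.array([0.03, 0, 1.0]),
                       right_eye, np.array([-0.03, 0, 1.0]))
print(fix)  # ≈ [0, 0, 1]
```

The distance between `p1` and `p2` is also a useful quality measure: noisy or diverging gaze samples produce rays that pass far from each other, and such fixation estimates can be discarded.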