64 results for virtual legal reality
Abstract:
The proceedings of the conference
Abstract:
As the fidelity of virtual environments (VE) continues to increase, the possibility of using them as training platforms is becoming increasingly realistic for a variety of application domains, including military and emergency personnel training. In the past, there was much debate on whether the acquisition and subsequent transfer of spatial knowledge from VEs to the real world is possible, or whether the differences in medium during training would essentially be an obstacle to truly learning geometric space. In this paper, the authors present various cognitive and environmental factors that not only contribute to this process, but also interact with each other to a certain degree, leading to a variable exposure time requirement for the process of spatial knowledge acquisition (SKA) to occur. The cognitive factors that the authors discuss include a variety of individual user differences such as knowledge and experience; cognitive gender differences; aptitude and spatial orientation skill; and, finally, cognitive styles. Environmental factors discussed include size, spatial layout complexity, and landmark distribution. It may seem obvious that, since every individual's brain is unique, not only through experience but also through genetic predisposition, a one-size-fits-all approach to training would be illogical. Furthermore, considering that various cognitive differences may further emerge when a certain stimulus is present (e.g. a complex environmental space), it makes even more sense to understand how these factors can impact spatial memory, and to try to adapt the training session by providing visual/auditory cues as well as by changing the exposure time requirements for each individual. The impact of this research domain is important to VE training in general; however, within service and military domains, guaranteeing appropriate spatial training is critical in order to ensure that disorientation does not occur in a life-or-death scenario.
Abstract:
Technological innovations have had a profound influence on how we study sensory perception in humans and other animals. One example was the introduction of affordable computers, which radically changed the nature of visual experiments. It is clear that vision research is now at the cusp of a similar shift, this time driven by the use of commercially available, low-cost, high-fidelity virtual reality (VR). In this review we focus on: (a) the research questions VR allows experimenters to address and why these research questions are important, (b) the things that need to be considered when using VR to study human perception, (c) the drawbacks of current VR systems, and (d) the future directions vision research may take, now that VR has become a viable research tool.
Abstract:
This paper investigates the use of really simple syndication (RSS) to dynamically change virtual environments. The case study presented here uses meteorological data downloaded from the Internet in the form of an RSS feed; this data is used to simulate current weather patterns in a virtual environment. The downloaded data is aggregated and interpreted in conjunction with a configuration file, which is used to associate relevant weather information with the rendering engine. The engine is able to animate a wide range of basic weather patterns. Virtual reality is a way of immersing a user in a different environment, and the amount of immersion the user experiences is important. Collaborative virtual reality will benefit from this work by gaining a simple way to incorporate up-to-date RSS feed data into any environment scenario. Instead of simulating weather conditions in training scenarios, actual weather conditions can be incorporated, improving the scenario and immersion.
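To make the mechanism concrete, the following is a minimal sketch, in Python, of the kind of pipeline this abstract describes: an RSS feed is downloaded, keywords are extracted, and a configuration-style mapping selects renderer settings. The feed URL, tag parsing, preset values, and the engine's set_weather method are illustrative assumptions, not the interface used in the paper.

```python
# Sketch only: fetch an RSS weather feed and map it to renderer presets.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/weather/rss"   # hypothetical feed location

# Configuration-file analogue: keywords in the feed mapped to renderer settings.
WEATHER_PRESETS = {
    "rain":  {"particles": "rain", "cloud_cover": 0.8},
    "snow":  {"particles": "snow", "cloud_cover": 0.9},
    "clear": {"particles": None,   "cloud_cover": 0.1},
}

def fetch_conditions(url=FEED_URL):
    """Download the feed and return the lower-cased titles of its items."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    return [item.findtext("title", default="").lower() for item in root.iter("item")]

def apply_to_engine(conditions, engine):
    """Hand the first matching preset to a (hypothetical) rendering engine object."""
    for text in conditions:
        for keyword, preset in WEATHER_PRESETS.items():
            if keyword in text:
                engine.set_weather(**preset)   # assumed engine API, not from the paper
                return preset
    return None
```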
Abstract:
Efficient navigation is essential for user acceptance of virtual environments (VEs), but it is also an inherently difficult task to perform. Research in the area provides users with a great variety of navigation assistance in VEs; however, this assistance is still known to be inadequate, complex, and subject to many limitations. In this paper we discuss the task of navigation in virtual environments and survey the wayfinding assistance currently available for VEs. The paper introduces a taxonomy of navigation and categorizes the aids on the basis of the functions they perform. The paper also provides views on current work in the area of non-speech auditory aids. We conclude by identifying important areas that require further investigation and research.
Abstract:
This paper addresses the crucial problem of wayfinding assistance in virtual environments (VEs). A number of navigation aids such as maps, agents, trails, and acoustic landmarks are available to support the user during navigation in VEs; however, it is evident that most of these aids are visually dominated. This work-in-progress describes a sound-based approach that intends to assist the task of 'route decision' during navigation in a VE using music. Furthermore, with the use of musical sounds it aims to reduce the cognitive load associated with other visually as well as physically dominated tasks. To achieve these goals, the approach exploits the benefits provided by music to ease and enhance the task of wayfinding, whilst making the user's experience in the VE smooth and enjoyable.
Abstract:
As Virtual Reality pushes the boundaries of the human computer interface, new ways of interaction are emerging. One such technology is the integration of haptic interfaces (force-feedback devices) into virtual environments. This modality offers an improved sense of immersion over that achieved when relying only on audio and visual modalities. The paper introduces some of the technical obstacles, such as latency and network traffic, that need to be overcome to maintain a high degree of immersion during haptic tasks. The paper describes the advantages of integrating haptic feedback into systems and presents some of the technical issues inherent in a networked haptic virtual environment. A generic control interface has been developed to seamlessly mesh with existing networked VR development libraries.
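As an illustration of why latency matters here, the sketch below separates high-rate local force rendering from a best-effort network update, which is a common way to keep a networked haptic loop stable. The device API, peer address, and spring model are hypothetical and are not taken from the paper.

```python
# Sketch only: render a local spring force at ~1 kHz while sending state over UDP.
import socket
import struct
import time

PEER = ("192.0.2.1", 9999)    # hypothetical remote peer
STIFFNESS = 300.0             # N/m, illustrative spring constant

def haptic_loop(device, duration_s=1.0, rate_hz=1000):
    """Compute forces locally so rendering never stalls on network round trips."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        x, y, z = device.position()                          # assumed device API
        device.apply_force(-STIFFNESS * x, -STIFFNESS * y, -STIFFNESS * z)
        try:
            sock.sendto(struct.pack("!3d", x, y, z), PEER)   # best-effort state update
        except OSError:
            pass                                             # drop rather than block
        time.sleep(1.0 / rate_hz)
```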
Abstract:
Virtual Reality (VR) has been used in a variety of forms to assist in the treatment of a wide range of psychological illnesses. VR can also fulfil the need that psychologists have for safe environments in which to conduct experiments. Currently, the main barrier to using this technology is the complexity of developing applications. This paper presents two different co-operative psychological applications which have been developed using a single framework. These applications require different levels of co-operation between the users and clients, ranging from full psychologist involvement to minimal intervention. This paper also discusses our approach to developing these different environments and our experiences to date in utilising them.
Abstract:
It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than in physical world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere controlled using the wand joystick to the position where they thought a previously displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference of real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens' power law indicated a high degree of accuracy during the estimation of object locations. This accuracy was higher than in other studies which were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy when estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
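For readers unfamiliar with the fitting step, the following minimal sketch shows how Stevens' power law, perceived = k * actual^n, can be fitted by ordinary least squares in log-log space. The numbers are invented for illustration and are not data from this study.

```python
# Sketch only: fit Stevens' power law  d_perceived = k * d_actual**n.
import numpy as np

actual    = np.array([0.6, 1.0, 1.5, 2.0, 2.5, 3.0])    # stimulus depth in metres
perceived = np.array([0.55, 0.9, 1.3, 1.7, 2.1, 2.5])   # hypothetical responses

# log(perceived) = log(k) + n * log(actual)  ->  a straight-line fit
n, log_k = np.polyfit(np.log(actual), np.log(perceived), 1)
print(f"exponent n = {n:.2f}, scale k = {np.exp(log_k):.2f}")   # n < 1 means compression
```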
Abstract:
Retinal blurring resulting from the human eye's depth of focus has been shown to assist visual perception. Infinite focal depth within stereoscopically displayed virtual environments may cause undesirable effects; for instance, objects positioned at a distance in front of or behind the observer's fixation point will be perceived in sharp focus with large disparities, thereby causing diplopia. Although published research on the incorporation of synthetically generated depth of field (DoF) suggests that this might act as an enhancement to perceived image quality, no quantitative evidence of perceptual performance gains exists. This may be due to the difficulty of dynamically generating synthetic DoF in which focal distance is actively linked to fixation distance. In this paper, such a system is described. A desktop stereographic display is used to project a virtual scene in which synthetically generated DoF is actively controlled from vergence-derived distance. A performance evaluation experiment on this system, in which subjects carried out observations in a spatially complex virtual environment, was undertaken. The virtual environment consisted of components interconnected by pipes on a distractive background. The subject was tasked with making an observation based on the connectivity of the components. The effects of focal depth variation in static and actively controlled focal distance conditions were investigated. The results and analysis are presented, which show that performance gains may be achieved by the addition of synthetic DoF. The merits of the application of synthetic DoF are discussed.
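The geometry behind "vergence-derived distance" can be summarised in a short sketch: the vergence angle between the two gaze directions is triangulated against the interpupillary distance to obtain the fixation distance, which then drives the focal plane. The eye-tracking and renderer calls below are assumptions, not the system described in the paper.

```python
# Sketch only: drive a synthetic depth-of-field focal plane from binocular vergence.
import math

IPD = 0.064   # interpupillary distance in metres (typical adult value)

def fixation_distance(vergence_rad):
    """Distance to the fixation point for symmetric vergence of the two eyes."""
    return (IPD / 2.0) / math.tan(vergence_rad / 2.0)

def update_depth_of_field(renderer, left_gaze, right_gaze):
    """Set the renderer's focal distance to wherever the user is currently fixating."""
    # Vergence angle = angle between the two unit gaze vectors.
    cos_a = max(-1.0, min(1.0, sum(l * r for l, r in zip(left_gaze, right_gaze))))
    vergence = math.acos(cos_a)
    if vergence > 1e-4:                                  # ignore near-parallel (far) gaze
        renderer.set_focal_distance(fixation_distance(vergence))   # assumed renderer API
```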
Abstract:
The problems encountered by individuals with disabilities when accessing large public buildings are described, and a solution based on the generation of virtual models of the built environment is proposed. These models are superimposed on a control network infrastructure, currently utilised in intelligent building applications such as lighting, heating, and access control. The use of control network architectures facilitates the creation of distributed models that closely mirror both the physical and control properties of the environment. The model of the environment is kept local to the installation, which allows the virtual representation of a large building to be decomposed into an interconnecting series of smaller models. This paper describes two methods of interacting with the virtual model: firstly, a two-dimensional aural representation that can be used as the basis of a portable navigational device; secondly, an augmented reality system called DAMOCLES that overlays additional information on a user's normal field of view. The provision of virtual environments offers new possibilities in the man-machine interface, so that intuitive access to network-based services and control functions can be given to a user.
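One plausible reading of the two-dimensional aural representation is a mapping from the position of a point of interest in the building model to stereo panning and distance attenuation, as in the minimal sketch below. The constants and the gain model are illustrative assumptions rather than the device's actual design.

```python
# Sketch only: map a point of interest in a 2-D building model to a stereo audio cue.
import math

def aural_cue(user_pos, user_heading_rad, target_pos, max_range=30.0):
    """Return (left_gain, right_gain) for a sound marking the target location."""
    dx, dy = target_pos[0] - user_pos[0], target_pos[1] - user_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - user_heading_rad   # angle relative to facing direction
    pan = math.sin(bearing)                           # -1 = fully left, +1 = fully right
    gain = max(0.0, 1.0 - distance / max_range)       # simple linear fall-off with distance
    return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0
```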