4 results for time, team, task and context

in Digital Peer Publishing


Relevance:

100.00%

Publisher:

Abstract:

The full-body control of virtual characters is a promising technique for application fields such as Virtual Prototyping. However, it is important to assess to what extent the user's full-body behavior is modified when immersed in a virtual environment. In the present study we measured reach durations for two types of task (controlling a simple rigid shape vs. a virtual character) and two types of viewpoint (1st person vs. 3rd person). The paper first describes the architecture of the motion capture approach adopted for the online full-body reach experiment. We then present reach measurements performed in a non-virtual environment. They show that the target height parameter leads to a reach duration variation of ±25% around the average duration for the highest and lowest targets. This characteristic is strongly accentuated in the virtual world, as analyzed in the discussion section. In particular, the discrepancy observed for the first-person viewpoint modality suggests adopting a third-person viewpoint when controlling the posture of a virtual character in a virtual environment.
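
To make the ±25% figure concrete, the short sketch below normalises per-target reach durations against the overall average duration. It is not the authors' analysis pipeline; the condition labels and timing values are invented purely for illustration.

    # Illustrative sketch only: hypothetical reach-duration samples (seconds)
    # grouped by target height, normalised against the overall mean to show
    # the kind of +/-25% deviation reported for the highest and lowest targets.
    durations_by_target = {
        "highest": [1.30, 1.28, 1.35],   # hypothetical values
        "middle":  [1.02, 0.98, 1.00],
        "lowest":  [0.76, 0.74, 0.78],
    }

    all_samples = [d for samples in durations_by_target.values() for d in samples]
    mean_duration = sum(all_samples) / len(all_samples)

    for target, samples in durations_by_target.items():
        target_mean = sum(samples) / len(samples)
        deviation = 100.0 * (target_mean - mean_duration) / mean_duration
        print(f"{target:>7}: {target_mean:.2f} s ({deviation:+.1f}% vs. mean)")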

Relevance:

100.00%

Publisher:

Abstract:

We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactic purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) We suggest measures to shape the context of CSCL applications and to support their initial and continued use. (2) We show how log files can be used to analyze how, when and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use are available.
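
As a hedged illustration of the kind of "how, when and by whom" aggregation described, the sketch below parses a hypothetical tab-separated log format (timestamp, user, action) and counts activity per user and per hour. CommSy's actual log layout is not specified here, so the format and field names are assumptions.

    # Illustrative sketch only: aggregating usage from a hypothetical log
    # format "timestamp<TAB>user<TAB>action"; not CommSy's real log layout.
    from collections import Counter
    from datetime import datetime

    sample_log = """\
    2003-05-12 09:15:02\talice\tupload_material
    2003-05-12 09:17:40\tbob\tread_announcement
    2003-05-12 21:03:11\talice\tedit_wiki_page"""

    actions_per_user = Counter()
    actions_per_hour = Counter()

    for line in sample_log.splitlines():
        timestamp, user, action = line.strip().split("\t")
        when = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
        actions_per_user[user] += 1
        actions_per_hour[when.hour] += 1

    print("by user:", dict(actions_per_user))
    print("by hour:", dict(actions_per_hour))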

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a specific system architecture, based on mobile devices, for navigation in urban environments. The aim of this work is to assess how virtual and augmented reality interface paradigms can provide enhanced location-based services using real-time techniques in the context of these two different technologies. The virtual reality interface is based on a faithful graphical representation of the localities of interest, coupled with sensory information on the location and orientation of the user, while the augmented reality interface uses computer vision techniques to capture patterns from the real environment and to overlay additional way-finding information, aligned with the real imagery, in real time. The knowledge obtained from the evaluation of the virtual reality navigation experience has been used to inform the design of the augmented reality interface. Initial results of user testing of the experimental augmented reality navigation system are presented.
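
One building block common to such way-finding interfaces is mapping the user's sensed position and heading to the screen position of an overlay that points towards a locality of interest. The sketch below illustrates only that idea, assuming a flat 2D ground plane, a simple horizontal field-of-view model and hypothetical coordinates; it is not the system described in the paper.

    # Illustrative sketch only: bearing to a point of interest and the
    # horizontal screen offset of a way-finding overlay (hypothetical values).
    import math

    def bearing_deg(user_xy, target_xy):
        """Bearing from the user to the target, in degrees, 0 = +y axis ('north')."""
        dx = target_xy[0] - user_xy[0]
        dy = target_xy[1] - user_xy[1]
        return math.degrees(math.atan2(dx, dy)) % 360.0

    def overlay_offset(user_heading_deg, target_bearing_deg, fov_deg=60.0, screen_width=800):
        """Horizontal pixel position of the overlay, or None if the target is off-screen."""
        delta = (target_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
        if abs(delta) > fov_deg / 2:
            return None  # target lies outside the camera's field of view
        return int(screen_width / 2 + (delta / (fov_deg / 2)) * (screen_width / 2))

    user_pos, target_pos = (0.0, 0.0), (30.0, 40.0)
    b = bearing_deg(user_pos, target_pos)
    print(f"bearing: {b:.1f} deg, overlay x: {overlay_offset(20.0, b)}")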

Relevance:

100.00%

Publisher:

Abstract:

Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize these delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and formally validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of the time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
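
The property at stake is easiest to see with a simplified example. The sketch below is plain Python, not MIRELA syntax, and performs no model checking: it merely sums assumed worst-case delays of generic sensing, processing and rendering components and compares the result against a hypothetical end-to-end latency budget, i.e. the kind of time constraint the timed-automata models are checked against in UPPAAL.

    # Illustrative sketch only: back-of-the-envelope end-to-end latency check
    # for a sensing -> processing -> rendering pipeline (hypothetical values, ms).
    worst_case_delay_ms = {
        "sensor_acquisition": 15,
        "tracking_processing": 20,
        "scene_composition": 10,
        "rendering": 16,
    }

    LATENCY_BUDGET_MS = 100  # hypothetical real-time requirement

    end_to_end = sum(worst_case_delay_ms.values())
    print(f"worst-case end-to-end latency: {end_to_end} ms")
    print("within budget" if end_to_end <= LATENCY_BUDGET_MS else "budget exceeded")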