30 results for collaborative assessment environments
in CentAUR: Central Archive University of Reading - UK
Abstract:
Participants' eye gaze is generally not captured or represented in immersive collaborative virtual environment (ICVE) systems. We present EyeCVE, which uses mobile eye-trackers to drive the gaze of each participant's virtual avatar, thus supporting remote mutual eye contact and awareness of others' gaze in a perceptually unfragmented shared virtual workspace. We detail trials in which participants took part in three-way conferences between remote CAVE™ systems linked via EyeCVE. Eye-tracking data were recorded and used to evaluate interaction, confirming the system's support for the use of gaze as a communicational and management resource in multiparty conversational scenarios. We point toward subsequent investigation of eye tracking in ICVEs for enhanced remote social interaction and analysis.
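As an illustration of the core mechanism described above, here is a minimal Python sketch (hypothetical names throughout; the abstract does not give EyeCVE's implementation) of how a head-frame gaze sample from a mobile eye-tracker could drive an avatar's gaze in the shared workspace:

# Hypothetical sketch: driving a networked avatar's gaze from a mobile
# eye-tracker sample, as EyeCVE is described as doing. Names are illustrative.
import numpy as np

class Avatar:
    """Stub standing in for a networked avatar in the shared workspace."""
    def set_head_pose(self, rotation, position):
        self.head_rotation, self.head_position = rotation, position
    def set_eye_direction(self, direction_world):
        # Eyes are animated relative to the head, so convert back to head frame.
        self.eye_direction = self.head_rotation.T @ direction_world

def update_avatar_gaze(avatar, head_rotation, head_position, gaze_dir_head):
    """Map an eye-tracker gaze vector (head frame) into world space."""
    avatar.set_head_pose(head_rotation, head_position)
    gaze_world = head_rotation @ (gaze_dir_head / np.linalg.norm(gaze_dir_head))
    avatar.set_eye_direction(gaze_world)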
Abstract:
Large scientific applications are usually developed, tested and used by groups of geographically dispersed scientists. The problems associated with remote development and data sharing can be tackled using collaborative working environments, and various tools and software exist to create them. Some currently available software frameworks use these tools to enable remote job submission and file transfer on top of existing grid infrastructures. However, for many large scientific applications, further effort is needed to prepare a framework that offers application-centric facilities. The Unified Air Pollution Model (UNI-DEM), developed by the Danish Environmental Research Institute, is an example of a large scientific application under continuous development and experimentation by different institutes across Europe. This paper designs a collaborative distributed computing environment for UNI-DEM in particular, but the proposed framework may fit many other large scientific applications as well.
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge which objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye-gaze and head-movement information displayed. The results show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
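The abstract mentions software for turning raw eye-tracker output into calibrated gaze. One standard approach, offered here only as a hedged sketch (the paper's actual method is not given), is a least-squares polynomial fit from pupil coordinates to gaze angles, using fixations on known calibration targets:

# Assumed, textbook-style calibration: fit a quadratic map from raw pupil
# (x, y) to gaze (yaw, pitch) from samples taken while fixating known targets.
import numpy as np

def _design(pupil_xy):
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(pupil_xy, target_angles):
    """pupil_xy: (N, 2) tracker samples; target_angles: (N, 2) known angles."""
    coeffs, *_ = np.linalg.lstsq(_design(pupil_xy), target_angles, rcond=None)
    return coeffs  # (6, 2): one coefficient column per gaze angle

def apply_calibration(coeffs, pupil_xy):
    return _design(pupil_xy) @ coeffs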
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar, which had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge which objects an avatar is looking at with only head gaze displayed, and with both eye- and head-gaze data displayed. The results show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on subjects' ability to identify where the avatar was looking. The final experiment examined the effects of stereo versus mono viewing of the scene, with subjects again asked to identify where the avatar was looking; it showed no difference in subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment, and some preliminary results from the use of such a system.
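The second experiment's convergence cue has a simple geometric reading: with a binocular tracker, the fixation point can be estimated as the point nearest both eyes' gaze rays. A hedged Python sketch of that standard computation (not the authors' code):

# Closest-point-of-two-rays estimate of the 3D fixation point from binocular
# gaze; near-parallel rays carry no usable convergence information.
import numpy as np

def fixation_from_convergence(p_left, d_left, p_right, d_right):
    """p_*: eye positions (3,); d_*: unit gaze directions (3,)."""
    w = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w, d_right @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None                      # rays (almost) parallel: no fixation
    s = (b * e - c * d) / denom          # parameter along the left-eye ray
    t = (a * e - b * d) / denom          # parameter along the right-eye ray
    return 0.5 * ((p_left + s * d_left) + (p_right + t * d_right))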
Abstract:
Virtual learning environments (VLEs) appear to be particularly effective for active learning in computer-supported collaborative work (CSCW). Most research on computer-supported collaborative design has focused on either synchronous or asynchronous modes of communication; near-synchronous working has received relatively little attention. Yet it could be argued that near-synchronous communication encourages creative, rhetorical and critical exchanges of ideas, building on each other's contributions. Furthermore, although many researchers have studied collaborative design protocol, argumentation and constructive interaction, little is known about the interaction between drawing and dialogue in near-synchronous collaborative design. The paper reports the first stage of an investigation into the requirements for the design and development of interactive systems to support the learning of collaborative design activities. The aim of the study is to understand collaborative design processes while sketching in a shared whiteboard with audio conferencing. Empirical data on design processes were obtained from observation of seven sessions in which groups of design students solved an interior space-planning problem for a lounge-diner in a virtual learning environment, Lyceum, in-house software developed by the Open University to support its students in collaborative learning.
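To make the near-synchronous mode concrete, here is a purely illustrative Python sketch (Lyceum's protocol is not described in the abstract) of a shared whiteboard that batches stroke points and broadcasts them at short intervals, so partners see a sketch build up almost live rather than stroke-by-stroke or on an explicit send:

# Illustrative near-synchronous whiteboard: buffer locally, flush periodically.
import threading
import time

class NearSyncWhiteboard:
    def __init__(self, broadcast, interval_s=0.3):
        self.broadcast = broadcast            # callable that sends to peers
        self.interval_s = interval_s
        self.pending, self.lock = [], threading.Lock()

    def add_point(self, x, y):
        with self.lock:
            self.pending.append((x, y))

    def flush_loop(self):
        """Run in a background thread: ship buffered points every interval."""
        while True:
            time.sleep(self.interval_s)
            with self.lock:
                batch, self.pending = self.pending, []
            if batch:
                self.broadcast(batch)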
Abstract:
Synchronous collaborative systems allow geographically distributed users to form a virtual work environment, enabling cooperation between peers and enriching human interaction. The technology facilitating this interaction has been studied for several years, and various solutions are available at present. In this paper, we discuss our experiences with one such widely adopted technology, namely the Access Grid [1]. We describe our experiences with using this technology, identify key problem areas and propose our solution to tackle these issues appropriately. Moreover, we propose the integration of the Access Grid with an Application Sharing tool developed by the authors. Our approach allows these integrated tools to utilise the enhanced features provided by our underlying dynamic transport layer.
Abstract:
A desktop tool for replay and analysis of gaze-enhanced multiparty virtual collaborative sessions is described. We linked three CAVE™-like environments, creating a multiparty collaborative virtual space in which avatars are animated with 3D gaze as well as head and hand motions in real time. Log files are recorded for subsequent playback and analysis using the proposed software tool. During replay the user can rotate the viewpoint and navigate in the simulated 3D scene. The playback mechanism relies on multiple distributed log files captured at every site; this structure enables an observer to experience the latencies of movement and information transfer at every site, which is important for conversation analysis. Playback uses an event-replay algorithm, modified to allow fast traversal of the scene by selective rendering of nodes and to simulate fast random access. The tool's analysis module can show each participant's 3D gaze points and the areas where gaze has been concentrated.
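A hedged Python sketch of the kind of event-replay core described above (structure and names are assumptions): per-site logs are merged by timestamp, and seeking applies state-changing events up to the target time without drawing, which simulates fast random access:

# Merge per-site, time-ordered logs and seek by silent re-application.
import heapq

class Scene:
    """Stub for the simulated 3D scene (avatar poses, 3D gaze points, ...)."""
    def apply(self, event): pass
    def render(self): pass

def merge_logs(logs):
    """logs: iterables of {'t': timestamp, ...} events, each time-ordered."""
    return list(heapq.merge(*logs, key=lambda ev: ev["t"]))

def seek(events, scene, target_time):
    """Apply every event up to target_time without rendering, then draw once."""
    i = 0
    while i < len(events) and events[i]["t"] <= target_time:
        scene.apply(events[i])
        i += 1
    scene.render()
    return i   # index from which timed playback resumes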
Abstract:
Climate controls upland habitats, soils and their associated ecosystem services; therefore, understanding possible changes in upland climatic conditions can provide a rapid assessment of climatic vulnerability over the next century. We used three different climatic indices, optimised to fit the upland area classified by the EU as a Severely Disadvantaged Area (SDA) for 1961–1990. Upland areas within the SDA covered all altitudinal ranges, whereas the maximum altitude of lowland areas outside the SDA was ca. 300 m. In general, the climatic index based on the ratio between annual accumulated temperature (a measure of growing season length) and annual precipitation predicted 96% of the SDA mapped area, slightly better than the indices based on annual or seasonal water deficit. Overall, all climatic indices showed that upland environments were exposed to some degree of change by 2071–2100 under UKCIP02 climate projections for high and low emissions scenarios. The projected area declined by 13 to 51% across the three indices for the low emissions scenario and by 24 to 84% for the high emissions scenario. The mean altitude of the upland area increased by +11 to +86 m for the low scenario and +21 to +178 m for the high scenario. Low-altitude areas in eastern and southern Great Britain were most vulnerable to change. These projected climatic changes are likely to affect upland habitat composition, long-term soil carbon storage and wider ecosystem service provision, although it is not yet possible to determine the rate at which this might occur.
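The headline index is the ratio of annual accumulated temperature to annual precipitation. A minimal Python sketch of that calculation (the accumulation threshold below is an assumption; the paper optimised its indices against the mapped SDA):

# Ratio index: accumulated temperature (growing-season proxy) over precipitation.
def climatic_index(daily_mean_temp_c, annual_precip_mm, base_temp_c=0.0):
    # base_temp_c is an assumed accumulation threshold, not the paper's value.
    accumulated = sum(max(t - base_temp_c, 0.0) for t in daily_mean_temp_c)
    return accumulated / annual_precip_mm   # higher = warmer and drier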
Abstract:
We present a novel way of interacting with an immersive virtual environment that involves inexpensive motion capture using the Wii Remote®. A software framework is also presented to visualize and share this information across two remote CAVE™-like environments. The resulting application can be used to assist rehabilitation by sending motion information across remote sites. The application's software and hardware components are scalable enough to be used on a desktop computer when home-based rehabilitation is preferred.
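A hedged sketch (every specific below is assumed, not taken from the paper) of the simplest way such motion information could be shared across sites: serialise each Wii Remote sample and send it over UDP to the remote visualisation:

# Stream motion samples to a remote site; read_sample() is a stand-in for
# whatever library reads the Wii Remote's accelerometer.
import json
import socket
import time

def stream_motion(read_sample, remote_host, remote_port=9000, rate_hz=50):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sample = read_sample()            # e.g. {'ax': .., 'ay': .., 'az': ..}
        sample["t"] = time.time()
        sock.sendto(json.dumps(sample).encode(), (remote_host, remote_port))
        time.sleep(1.0 / rate_hz)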
Abstract:
Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With vast numbers of data streams now available for subscription on our smart mobile phones, using these data for decision making with data stream mining techniques has become achievable owing to the increasing power of these handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide for collaborative mining among mobile devices within the same range that are running data mining techniques targeting the same application. This paper proposes a new architecture that we have prototyped for realizing significant applications in this area. We propose using mobile software agents in this application for several reasons; most importantly, the autonomous, intelligent behaviour of agent technology has been the driving force for using it in this application. Other efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
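As a purely illustrative Python sketch of the collaborative-mining idea (PDM's actual agent framework is not reproduced here), a mobile agent could hop between in-range devices, collect each device's locally trained classifier's vote on an instance, and return the majority decision:

# Hypothetical mobile-agent vote collection across nearby devices.
from collections import Counter

class MobileAgent:
    def __init__(self, instance):
        self.instance = instance      # the record to classify
        self.votes = []

    def visit(self, device):
        """Run on each reachable device; device.local_model is assumed."""
        self.votes.append(device.local_model.predict(self.instance))

    def decide(self):
        return Counter(self.votes).most_common(1)[0][0]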
Abstract:
Semi-open street roofs protect pedestrians from intense sunshine and rain, but their effects on the natural ventilation of urban canopy layers (UCL) are less well understood. This paper investigates two idealized urban models consisting of 4 (2×2) or 16 (4×4) buildings under a neutral atmospheric condition with parallel (0°) or non-parallel (15°, 30°, 45°) approaching wind. The aspect ratio (building height H / street width W) is 1 and the building width is B = 3H. Computational fluid dynamics (CFD) simulations were first validated against experimental data, confirming that the standard k-ε model predicted airflow velocity better than the RNG k-ε model, the realizable k-ε model and the Reynolds stress model. Three ventilation indices were analyzed numerically for ventilation assessment: flow rates across street roofs and openings, to show the mechanisms of air exchange; age of air, to show how long external air takes to reach a location after entering the UCL; and purging flow rate, to quantify the net UCL ventilation capacity induced by mean flows and turbulence. Five semi-open roof types are studied: walls hung above street roofs (coverage ratio λa = 100%) at z = 1.5H, 1.2H and 1.1H ('Hung1.5H', 'Hung1.2H', 'Hung1.1H' types); walls partly covering street roofs (λa = 80%) at z = H ('Partly-covered' type); and walls fully covering street roofs (λa = 100%) at z = H ('Fully-covered' type). These types generally obtain worse UCL ventilation than the open street roof type owing to the decreased roof ventilation; the 'Hung1.1H', 'Hung1.2H' and 'Hung1.5H' types are better designs than the 'Fully-covered' and 'Partly-covered' types. A greater urban size contains a larger UCL volume and requires longer to ventilate. The methodologies and ventilation indices are confirmed to be effective for quantifying UCL ventilation.
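For reference, the two volume-based indices named above are commonly defined in the UCL ventilation literature via a tracer released uniformly at rate S per unit volume (a hedged restatement; the paper's exact forms may differ). In LaTeX:

\[
\tau_p = \frac{c_p}{S}, \qquad
\mathrm{PFR} = \frac{S\, V_{\mathrm{UCL}}}{\langle c \rangle}
             = \frac{V_{\mathrm{UCL}}}{\langle \tau \rangle},
\]

where \(c_p\) is the local tracer concentration, \(\langle c \rangle\) its UCL average, and \(V_{\mathrm{UCL}}\) the canopy volume; a smaller mean age \(\langle \tau \rangle\) thus means a larger purging flow rate and better net ventilation.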
Abstract:
Awareness of emerging situations in the dynamic operational environment of a robotic assistive device is an essential capability of such a cognitive system, based on its effective and efficient assessment of the prevailing situation. This allows the system to interact with the environment in a sensible, (semi-)autonomous and proactive manner without the need for frequent interventions from a supervisor. In this paper, we report a novel generic Situation Assessment Architecture for robotic systems directly assisting humans, as developed in the CORBYS project, and its application in the proof-of-concept demonstrators developed and validated within the project: a robotic human follower and a mobile gait rehabilitation robotic system. We present an overview of the structure and functionality of the Situation Assessment Architecture, with results and observations collected from initial validation on the two CORBYS demonstrators.