838 results for 3D multi-user virtual environments
Abstract:
The population of English Language Learners (ELLs) globally has been increasing substantially every year. In the United States alone, adult ELLs are the fastest growing portion of learners in adult education programs (Yang, 2005). There is a significant need to improve the teaching of English to ELLs in the United States and other predominantly English-speaking countries. However, for many ELLs, speaking, especially to Native English Speakers (NESs), causes considerable language anxiety, which in turn plays a vital role in hindering their language development and academic progress (Pichette, 2009; Woodrow, 2006). Task-based Language Teaching (TBLT), such as simulation activities, has long been viewed as an effective approach for second-language development. The current advances in technology and the rapid emergence of Multi-User Virtual Environments (MUVEs) have provided an opportunity for educators to consider conducting simulations online for ELLs to practice speaking English to NESs. Yet to date, empirical research on the effects of MUVEs on ELLs' language development and speaking is limited (Garcia-Ruiz, Edwards, & Aquino-Santos, 2007). This study used a true experimental treatment-control group repeated measures design to compare the perceived speaking anxiety levels (as measured by an anxiety scale administered for each simulation activity) of 11 ELLs (5 in the control group, 6 in the experimental group) when speaking to NESs during 10 simulation activities. Simulations in the control group were conducted face-to-face, while those in the experimental group were conducted in the MUVE of Second Life. The results of the repeated measures ANOVA, after the Huynh-Feldt epsilon correction, demonstrated for both groups a significant decrease in anxiety levels over time from the first simulation to the tenth and final simulation. When comparing the two groups, the results revealed a statistically significant difference, with the experimental group demonstrating a greater anxiety reduction. These results suggest that language instructors should consider including face-to-face and MUVE simulations with ELLs paired with NESs as part of their language instruction. Future studies should investigate the use of other multi-user virtual environments and/or measure other dimensions of the ELL/NES interactions.
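To make the reported analysis concrete, the sketch below sets up a mixed (split-plot) repeated-measures ANOVA on simulated anxiety scores, with group (face-to-face vs. MUVE) as the between-subject factor and simulation number as the within-subject factor. The simulated data, the column names, and the use of pingouin's mixed_anova are illustrative assumptions rather than the study's own code, and the sphericity correction reported by the library is not necessarily the Huynh-Feldt epsilon used in the study.

# Illustrative sketch only: simulated data and hypothetical column names.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for subject in range(11):
    group = "MUVE" if subject < 6 else "face-to-face"     # 6 experimental, 5 control
    baseline = rng.normal(70, 5)
    slope = 3.0 if group == "MUVE" else 1.5               # anxiety assumed to drop faster in MUVE
    for sim in range(1, 11):
        anxiety = baseline - slope * sim + rng.normal(0, 3)
        rows.append({"subject": subject, "group": group,
                     "simulation": sim, "anxiety": anxiety})
df = pd.DataFrame(rows)

# Mixed ANOVA: within-subject factor = simulation (1..10), between-subject factor = group.
aov = pg.mixed_anova(data=df, dv="anxiety", within="simulation",
                     subject="subject", between="group")
print(aov.round(3))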
Abstract:
3D virtual reality, including the current generation of multi-user virtual worlds, has had a long history of use in education and training, and it experienced a surge of renewed interest with the advent of Second Life in 2003. What followed shortly after were several years marked by considerable hype around the use of virtual worlds for teaching, learning and research in higher education. For the moment, uptake of the technology seems to have plateaued, with academics either maintaining the status quo and continuing to use virtual worlds as they have previously done or choosing to opt out altogether. This paper presents a brief review of the use of virtual worlds in the Australian and New Zealand higher education sector in the past and reports on its use in the sector at the present time, based on input from members of the Australian and New Zealand Virtual Worlds Working Group. It then adopts a forward-looking perspective amid the current climate of uncertainty, musing on future directions and offering suggestions for potential new applications in light of recent technological developments and innovations in the area.
Using Agents for Mining Maintenance Data while interacting in 3D Object-oriented Virtual Environments
Abstract:
This report demonstrates the development of: (a) an object-oriented representation to provide a 3D interactive environment using data provided by Woods Bagot; (b) the basis of agent technology for mining building maintenance data; and (c) 3D interaction in virtual environments using the object-oriented representation. The application of data mining to an industry maintenance database was demonstrated in the previous report.
Abstract:
In this work, we present GATE, a middleware-based approach for interperceptive applications. Through the services offered by GATE, we extend the concept of Interperception to integrate several devices, including set-top boxes and mobile devices (cell phones), among others. This extension ensures that virtual environments can run on these devices, so that users who access the computer version of an environment may interact with those who access the same environment from other devices. This extension is just one part of the services provided by GATE, which emerges as a new proposal for creating multi-user virtual environments.
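As a rough illustration of the idea (GATE's actual services and APIs are not given in the abstract, so every name below is hypothetical), the sketch shows device-specific adapters translating input from a desktop client and a mobile client into one neutral update format, which a broker then fans out to every connected client regardless of device type.

# Illustrative sketch with hypothetical names; not GATE's real API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AvatarUpdate:
    user: str
    x: float
    y: float
    z: float

class SharedWorldBroker:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[AvatarUpdate], None]] = []

    def subscribe(self, deliver: Callable[[AvatarUpdate], None]) -> None:
        self._subscribers.append(deliver)

    def publish(self, update: AvatarUpdate) -> None:
        # Every connected client receives the same device-neutral update.
        for deliver in self._subscribers:
            deliver(update)

# Hypothetical adapters: a desktop client sends full 3D coordinates, while a
# phone client only sends 2D taps that the adapter lifts into the shared format.
def desktop_input(broker, user, x, y, z):
    broker.publish(AvatarUpdate(user, x, y, z))

def mobile_tap(broker, user, screen_x, screen_y):
    broker.publish(AvatarUpdate(user, float(screen_x), float(screen_y), 0.0))

broker = SharedWorldBroker()
broker.subscribe(lambda u: print(f"render on PC:    {u}"))
broker.subscribe(lambda u: print(f"render on phone: {u}"))
desktop_input(broker, "alice", 1.0, 2.0, 0.5)
mobile_tap(broker, "bob", 120, 80)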
Abstract:
Current software tools for documenting and developing models of buildings focus on supporting a single user who is a specialist in the specific software used within their own discipline. Extensions to these tools for use by teams maintain the single-discipline view and focus on version and file management. There is a perceived need in industry for tools that specifically support collaboration among individuals from multiple disciplines, with both a graphical representation of the design and a persistent data model. This project involves the development of a prototype of such a software tool. We have identified multi-user 3D virtual worlds as an appropriate software base for the development of a collaborative design tool. These worlds are inherently multi-user and therefore directly support collaboration by providing a sense of awareness of others in the virtual world and of their location within it, as well as various channels for direct and indirect communication. Such software platforms also provide a 3D building and modelling environment that can be adapted to the needs of the building and construction industry. DesignWorld is a prototype system for collaborative design developed by augmenting the Second Life (SL) commercial software platform with a collection of web-based tools for communication and design. Agents manage communication between the 3D virtual world and the web-based tools. In addition, agents maintain a persistent external model of designs in the 3D world, which can be augmented with data such as relationships, disciplines and versions not usually associated with 3D virtual worlds but required in design scenarios.
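A minimal sketch of the persistence idea, using hypothetical names rather than DesignWorld's actual schema: an agent pushes each observed in-world change into an external model that records the discipline, version history and relationships that the 3D world itself does not keep.

# Hypothetical sketch; not DesignWorld's real data model.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DesignObject:
    object_id: str
    discipline: str                               # e.g. "architecture", "structure"
    version: int = 1
    geometry: dict = field(default_factory=dict)
    related_to: List[str] = field(default_factory=list)

class PersistentDesignModel:
    def __init__(self) -> None:
        self._objects: Dict[str, DesignObject] = {}

    def sync_from_world(self, object_id: str, discipline: str, geometry: dict) -> DesignObject:
        """Called by the agent whenever it observes a change in the 3D world."""
        obj = self._objects.get(object_id)
        if obj is None:
            obj = DesignObject(object_id, discipline, geometry=geometry)
            self._objects[object_id] = obj
        else:
            obj.version += 1                      # keep a version history the world lacks
            obj.geometry = geometry
        return obj

    def relate(self, a: str, b: str) -> None:
        self._objects[a].related_to.append(b)

model = PersistentDesignModel()
model.sync_from_world("wall-01", "architecture", {"pos": (0, 0, 0), "len": 4.0})
model.sync_from_world("wall-01", "architecture", {"pos": (0, 0, 0), "len": 5.0})
print(model.sync_from_world("beam-07", "structure", {"span": 5.0}))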
Abstract:
Process modeling is a complex organizational task that requires many iterations and communication between the business analysts and the domain specialists involved. The challenge is exacerbated when modeling has to be performed in a cross-organizational, distributed environment. Some systems have been developed to support collaborative process modeling, all of which use traditional 2D interfaces. We present an environment for collaborative process modeling using 3D virtual environment technology. We make use of avatar instantiations of user ego centres to allow for the spatial embodiment of the user with reference to the process model. We describe an innovative prototype collaborative process modeling approach, implemented as a modeling environment in Second Life. This approach leverages virtual environments to provide user context for editing and collaborative exercises. We present a positive preliminary report on a case study in which a test group modelled a business process using the system in Second Life.
Abstract:
Modelling business processes for analysis or redesign usually requires the collaboration of many stakeholders. These stakeholders may be spread across locations or even companies, making co-located collaboration costly and difficult to organize. Modern process modelling technologies support remote collaboration but lack support for visual cues used in co-located collaboration. Previously we presented a prototype 3D virtual world process modelling tool that supports a number of visual cues to facilitate remote collaborative process model creation and validation. However, the added complexity of having to navigate a virtual environment and using an avatar for communication made the tool difficult to use for novice users. We now present an evolved version of the technology that addresses these issues by providing natural user interfaces for non-verbal communication, navigation and model manipulation.
Abstract:
A demo video showing the BPMVM prototype using several natural user interfaces, such as multi-touch input, full-body tracking and virtual reality.
Abstract:
Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast with those described by the HLA definition documents.
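For readers unfamiliar with causal ordering, the sketch below (a generic vector-clock delivery scheme, not the Reading algorithm or the HLA services themselves) shows why its cost grows with the number of participants: every update carries one counter per peer, and delivery is held back until all causally earlier updates have arrived.

# Generic causal-delivery sketch; names and structure are illustrative only.
from typing import List, Tuple

class CausalPeer:
    def __init__(self, peer_id: int, n_peers: int) -> None:
        self.id = peer_id
        self.clock = [0] * n_peers                 # one counter per participant
        self.pending: List[Tuple[List[int], int, str]] = []

    def send(self, payload: str) -> Tuple[List[int], int, str]:
        self.clock[self.id] += 1
        return (list(self.clock), self.id, payload)

    def _deliverable(self, stamp: List[int], sender: int) -> bool:
        # Deliver only if this is the next update from the sender and we have
        # already seen everything the sender had seen (the causal check).
        return stamp[sender] == self.clock[sender] + 1 and all(
            stamp[k] <= self.clock[k] for k in range(len(stamp)) if k != sender)

    def receive(self, msg: Tuple[List[int], int, str]) -> None:
        self.pending.append(msg)
        progress = True
        while progress:
            progress = False
            for stamp, sender, payload in list(self.pending):
                if self._deliverable(stamp, sender):
                    self.clock[sender] = stamp[sender]
                    print(f"peer {self.id} delivers '{payload}'")
                    self.pending.remove((stamp, sender, payload))
                    progress = True

a, b, c = (CausalPeer(i, 3) for i in range(3))
m1 = a.send("move avatar")
b.receive(m1)
m2 = b.send("reply to move")   # causally depends on m1
c.receive(m2)                  # held back: m1 not yet seen at c
c.receive(m1)                  # now both deliver, in causal order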
Abstract:
Second Life (SL) is an ideal platform for language learning. It is called a Multi-User Virtual Environment, in which users can have a variety of learning experiences in life-like environments. Numerous attempts have been made to use SL as a platform for language teaching, and the possibility of SL as a means to promote conversational interactions has been reported. However, the research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment, and not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on the latest theories of second language acquisition, especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents work as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL and the implementation method used to construct the AITS, exploiting the controllability of SL. It also presents the results of the evaluation experiment conducted on the constructed AITS.
Abstract:
Identification and tracking of objects in specific environments such as harbours or security areas is a matter of great importance nowadays. For this purpose, numerous systems based on different technologies have been developed, resulting in a great amount of gathered data displayed through a variety of interfaces. This amount of information has to be evaluated by human operators in order to make the correct decisions, sometimes under highly critical situations demanding both speed and accuracy. To address this problem, we describe IDT-3D, a platform for identification and tracking of vessels in a harbour environment, able to represent fused information in real time using a Virtual Reality application. The effectiveness of using IDT-3D as an integrated surveillance system is currently under evaluation. Preliminary results point to a significant decrease in the reaction and decision-making times of operators facing a critical situation. Although the current application focus of IDT-3D is quite specific, the results of this research could be extended to the identification and tracking of targets in other controlled environments of interest, such as coastlines, borders or even urban areas.
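As an illustrative sketch only (IDT-3D's actual fusion pipeline is not detailed in the abstract, and all names below are hypothetical), the snippet merges per-vessel reports from several sensors into a single track by confidence-weighted averaging of the reported positions.

# Hypothetical fusion sketch; not the IDT-3D implementation.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Report:
    vessel_id: str
    sensor: str          # e.g. "AIS", "radar", "camera"
    x: float
    y: float
    confidence: float    # 0..1

def fuse(reports):
    """Confidence-weighted average position per vessel."""
    grouped = defaultdict(list)
    for r in reports:
        grouped[r.vessel_id].append(r)
    tracks = {}
    for vessel_id, rs in grouped.items():
        w = sum(r.confidence for r in rs)
        tracks[vessel_id] = (sum(r.x * r.confidence for r in rs) / w,
                             sum(r.y * r.confidence for r in rs) / w)
    return tracks

print(fuse([Report("V1", "AIS", 100.0, 200.0, 0.9),
            Report("V1", "radar", 104.0, 198.0, 0.6),
            Report("V2", "camera", 50.0, 75.0, 0.8)]))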
Abstract:
This document presents the implementation of a Student Behavior Predictor Viewer (SBPV) for a student predictive model. The student predictive model is part of an intelligent tutoring system and is built from logs of students' behaviors in the "Virtual Laboratory of Agroforestry Biotechnology" implemented in a previous work. The SBPV is a tool for visualizing a 2D graphical representation of the extended automaton associated with any of the clusters of the student predictive model. Apart from visualizing the extended automaton, the SBPV supports navigation across the automaton by means of desktop devices. More precisely, the SBPV allows the user to move through the automaton, to zoom in/out of the graphic, or to locate a given state. In addition, the SBPV also allows the user to modify the default layout of the automaton on the screen by changing the position of the states with the mouse. To develop the SBPV, a web application was designed and implemented relying on HTML5, JavaScript and C#.
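The actual SBPV is an HTML5/JavaScript/C# web application; the sketch below uses hypothetical Python types only to illustrate the viewer-side state the abstract describes: per-state layout positions, pan/zoom, dragging a state to a new position, and centring the view on a located state.

# Hypothetical viewer-state sketch; not the SBPV code.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class AutomatonView:
    positions: Dict[str, Tuple[float, float]]     # state -> layout position
    transitions: List[Tuple[str, str, str]]       # (source, action, destination)
    pan: Tuple[float, float] = (0.0, 0.0)
    zoom: float = 1.0

    def move_state(self, state: str, x: float, y: float) -> None:
        """Drag a state to a new position, overriding the default layout."""
        self.positions[state] = (x, y)

    def locate(self, state: str) -> None:
        """Centre the view on a given state (pan so it sits at the origin)."""
        x, y = self.positions[state]
        self.pan = (-x * self.zoom, -y * self.zoom)

    def to_screen(self, state: str) -> Tuple[float, float]:
        x, y = self.positions[state]
        return (x * self.zoom + self.pan[0], y * self.zoom + self.pan[1])

view = AutomatonView(positions={"q0": (0, 0), "q1": (120, 40)},
                     transitions=[("q0", "open_protocol", "q1")])
view.zoom = 2.0
view.locate("q1")
print(view.to_screen("q1"))   # (0.0, 0.0) -- q1 is now centred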
Abstract:
The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig and animate their avatars, and usually have to tune them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of the present dissertation is to bring 3D human scanning closer to the average user. For this, two different techniques are presented, one passive and one active. The first one is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. Our system uses a state-of-the-art shape-from-silhouette technique to retrieve the shape of the subject. However, to deal with the lack of detail that is common in the facial region for this kind of technique, which does not handle concavities correctly, our system proposes an approach to improve the quality of this region. This face enhancement technique uses a generic facial model which is transformed according to the specific facial features of the subject. Moreover, this system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas calculated by combining the color information from several photos. We suppress the color seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by color blending techniques. The second technique features a system to retrieve a fully animatable 3D model of a human using a commercial depth sensor. Unlike other approaches in the current state of the art, our system does not require the user to remain completely still throughout the scanning process, nor does it require the depth sensor to be moved around the subject to cover the subject's entire surface. Instead, the depth sensor remains static and the skeleton tracking information is used to compensate for the user's movements during the scanning stage.
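As a minimal illustration of the shape-from-silhouette idea mentioned for the first technique (a generic visual-hull voxel carving, not the dissertation's actual pipeline), the function below keeps a voxel only if its projection lands inside the silhouette mask of every camera; all names and array shapes are assumptions.

# Generic voxel-carving sketch; illustrative only.
import numpy as np

def carve(voxels, cameras, silhouettes):
    """voxels: (N, 3) candidate points; cameras: list of 3x4 projection matrices;
    silhouettes: list of binary HxW masks. Returns the voxels kept in the visual hull."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])       # (N, 4) homogeneous points
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                                       # (N, 3) image-plane points
        u = (proj[:, 0] / proj[:, 2]).round().astype(int)
        v = (proj[:, 1] / proj[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0
        keep &= hit                                              # carve away voxels outside any silhouette
    return voxels[keep]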