963 results for Virtual Museum 3D


Relevance: 30.00%

Abstract:

Recently, stable markerless 6 DOF video-based hand-tracking devices have become available. These devices simultaneously track the positions and orientations of both of the user's hands in different postures at no less than 25 frames per second. Such hand tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, a solution has to be found for coupling and decoupling a virtual object's movements to and from the user's hand (i.e. grabbing and releasing). In this paper, we introduce a novel technique for efficiently grabbing and releasing objects with two hands and intuitively manipulating them in the virtual space. This technique is integrated into a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how this technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
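The abstract does not spell out the grab/release technique itself; purely as an illustration of the general idea of coupling an object's transform to a tracked hand, here is a minimal Python sketch. The HandPose fields, the pinch-distance threshold, and the midpoint "hand position" are assumptions for the sketch, not the authors' method.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class HandPose:
        thumb_tip: np.ndarray    # 3D position of the thumb tip
        index_tip: np.ndarray    # 3D position of the index finger tip
        rotation: np.ndarray     # 3x3 orientation matrix of the hand

    PINCH_THRESHOLD = 0.03  # metres; assumed value, not taken from the paper

    class GrabController:
        """Couples a virtual object's transform to a tracked hand while pinching."""

        def __init__(self):
            self.grabbed = False
            self.offset = None       # object position in the hand's frame at grab time
            self.rot_offset = None   # object orientation relative to the hand at grab time

        def update(self, hand: HandPose, obj_position, obj_rotation):
            pinching = np.linalg.norm(hand.thumb_tip - hand.index_tip) < PINCH_THRESHOLD
            hand_pos = (hand.thumb_tip + hand.index_tip) / 2.0

            if pinching and not self.grabbed:
                # Grab: remember the object's pose relative to the hand.
                self.grabbed = True
                self.offset = hand.rotation.T @ (obj_position - hand_pos)
                self.rot_offset = hand.rotation.T @ obj_rotation
            elif not pinching and self.grabbed:
                # Release: decouple the object from the hand.
                self.grabbed = False

            if self.grabbed:
                # While grabbed, the object rigidly follows the hand.
                obj_position = hand_pos + hand.rotation @ self.offset
                obj_rotation = hand.rotation @ self.rot_offset
            return obj_position, obj_rotation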

Relevance: 30.00%

Abstract:

This paper introduces a database of freely available stereo-3D content designed to facilitate research in stereo post-production. It describes the structure and content of the database and provides some details about how the material was gathered. The database includes examples of many of the scenarios characteristic of broadcast footage. Material was gathered at different locations, including a studio with controlled lighting and both indoor and outdoor on-location sites with more restricted lighting control. The database also includes video sequences with accompanying 3D audio data recorded in an Ambisonics format. An intended consequence of gathering the material is that the database contains examples of degradations that would commonly be present in real-world scenarios. This paper describes one such artefact, caused by uneven exposure between the stereo views, which results in saturation in the over-exposed view. An algorithm for the restoration of this artefact is proposed in order to highlight the usefulness of the database.
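The proposed restoration algorithm is not detailed in the abstract; as a rough illustration of the problem it addresses, the sketch below detects clipped pixels in the over-exposed view and fills them from the other view after a global gain match. A real method would need disparity compensation between the views; the saturation threshold and gain estimate here are assumptions.

    import numpy as np

    def restore_saturated(over, under, sat_level=250):
        """Crude fill-in of saturated pixels in the over-exposed view of a stereo pair.

        over, under: arrays of identical shape (H, W) or (H, W, 3).
        sat_level:   intensity above which a pixel is treated as clipped (assumed value).
        NOTE: this ignores the disparity between the two views; a real restoration
        would warp `under` onto `over` first.
        """
        over = over.astype(np.float64)
        under = under.astype(np.float64)

        saturated = np.any(over >= sat_level, axis=-1) if over.ndim == 3 else over >= sat_level

        # Estimate a global gain between the exposures from the non-saturated pixels.
        valid = ~saturated
        gain = over[valid].mean() / max(under[valid].mean(), 1e-6)

        restored = over.copy()
        restored[saturated] = np.clip(under[saturated] * gain, 0, 255)
        return restored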

Relevance: 30.00%

Abstract:

Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is strongly neglected. This paper presents a user-test study to demonstrate the impact of automatically generated graphics-based NVC expression on dialog quality: first, we wanted to compare impassive and emotional facial expression simulation in terms of their impact on chatting; second, we wanted to see whether people like chatting within a 3D graphical environment. Our model only proposes facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as non-verbal cues, i.e. those related to the emotional model. Motion-capture animations related to hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions that are induced from text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from a text and depicts it on a virtual character [GAS11]. Second, as our goal was not to address automatic generation of text, we compared the impact of non-verbal cues in conversation with a chatbot and with a human operator using a Wizard of Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions have a significant impact not only on the quality of experience but also on dialog understanding.
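The cited emotion engine [GAS11] is only referenced, not described; the small sketch below merely illustrates the general pipeline of turning a detected emotion into subtle blendshape weights for a virtual human. The emotion labels, weight values, and intensity scaling are invented for illustration and are not taken from the paper.

    # Illustrative only: maps a detected emotion to subtle facial blendshape weights.
    # Labels and weights are assumptions, not those of the emotion engine [GAS11].
    SUBTLE_EXPRESSIONS = {
        "joy":      {"mouth_smile": 0.3, "cheek_raise": 0.2},
        "sadness":  {"brow_inner_up": 0.25, "mouth_frown": 0.2},
        "surprise": {"brow_up": 0.35, "jaw_open": 0.1},
        "neutral":  {},
    }

    def expression_weights(emotion: str, intensity: float) -> dict:
        """Scale the base blendshape weights of an emotion by its detected intensity (0..1)."""
        base = SUBTLE_EXPRESSIONS.get(emotion, {})
        return {shape: w * max(0.0, min(intensity, 1.0)) for shape, w in base.items()}

    # Example: a mildly positive chat message might yield
    # expression_weights("joy", 0.4) -> {"mouth_smile": 0.12, "cheek_raise": 0.08}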

Relevance: 30.00%

Abstract:

Recent developments in the area of interactive entertainment have suggested combining stereoscopic visualization with multi-touch displays, which has the potential to open up new vistas for natural interaction with interactive three-dimensional (3D) applications. However, the question arises of how the user interfaces for system control in such 3D setups should be designed in order to provide an effective user experience. In this article we introduce 3D GUI widgets for interaction with stereoscopic touch displays. The design of the widgets was inspired by skeuomorphism and affordances, in such a way that the user should be able to operate the virtual objects in the same way as their real-world equivalents. We evaluated the developed widgets and compared them with their 2D counterparts in the scope of an example application in order to analyze the usability of, and user behavior with, the widgets. The results reveal differences in user behavior with and without stereoscopic display during touch interaction, and show that the developed 2D as well as 3D GUI widgets can be used effectively in different applications.

Relevance: 30.00%

Abstract:

Immersive virtual environments (IVEs) have the potential to afford natural interaction in the three-dimensional (3D) space around a user. However, interaction performance in 3D mid-air is often reduced and depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, and fitness. In particular, in contrast to traditional desktop-based setups, users often cannot rest their arms in a comfortable pose during the interaction. In this article we analyze the impact of comfort on 3D selection tasks in an immersive desktop setup. First, in a pre-study we identified how comfortable or uncomfortable specific interaction positions and poses are for users who are standing upright. Then, we investigated differences in 3D selection task performance when users interact with their hands in a comfortable or uncomfortable body pose while sitting on a chair at a table, with the VE displayed on a head-mounted display (HMD). We conducted a Fitts' Law experiment to evaluate selection performance in the different poses. The results suggest that users achieve a significantly higher performance in a comfortable pose, when they rest their elbow on the table.
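Fitts' Law itself is standard: a selection's index of difficulty is ID = log2(D/W + 1) for target distance D and width W, and movement time is modelled as MT = a + b·ID. A minimal sketch of fitting that model to measured selection times follows; the data values are placeholders, not results from the study.

    import numpy as np

    def index_of_difficulty(distance, width):
        """Shannon formulation of Fitts' index of difficulty, in bits."""
        return np.log2(distance / width + 1.0)

    # Placeholder measurements: target distance D and width W (same units), movement time in s.
    D  = np.array([0.10, 0.20, 0.40, 0.40, 0.80])
    W  = np.array([0.04, 0.04, 0.04, 0.02, 0.02])
    MT = np.array([0.55, 0.68, 0.81, 0.95, 1.20])

    ID = index_of_difficulty(D, W)
    b, a = np.polyfit(ID, MT, 1)           # linear fit of MT = a + b * ID
    throughput = np.mean(ID / MT)          # bits per second (one common definition)

    print(f"a = {a:.3f} s, b = {b:.3f} s/bit, throughput = {throughput:.2f} bit/s")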

Relevance: 30.00%

Abstract:

Three-dimensional (3D) immersive virtual worlds have been touted as being capable of facilitating highly interactive, engaging, multimodal learning experiences. Much of the evidence gathered to support these claims has been anecdotal but the potential that these environments hold to solve traditional problems in online and technology-mediated education—primarily learner isolation and student disengagement—has resulted in considerable investments in virtual world platforms like Second Life, OpenSimulator, and Open Wonderland by both professors and institutions. To justify this ongoing and sustained investment, institutions and proponents of simulated learning environments must assemble a robust body of evidence that illustrates the most effective use of this powerful learning tool. In this authoritative collection, a team of international experts outline the emerging trends and developments in the use of 3D virtual worlds for teaching and learning. They explore aspects of learner interaction with virtual worlds, such as user wayfinding in Second Life, communication modes and perceived presence, and accessibility issues for elderly or disabled learners. They also examine advanced technologies that hold potential for the enhancement of learner immersion and discuss best practices in the design and implementation of virtual world-based learning interventions and tasks. By evaluating and documenting different methods, approaches, and strategies, the contributors to Learning in Virtual Worlds offer important information and insight to both scholars and practitioners in the field. AU Press is an open access publisher and the book is available for free in PDF format as well as for purchase on our website: http://bit.ly/1W4yTRA

Relevance: 30.00%

Abstract:

PURPOSE Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. MATERIALS AND METHODS Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. RESULTS Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. CONCLUSION Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.

Relevance: 30.00%

Abstract:

OBJECTIVES The aim of this Short Communication was to present a workflow for the superimposition of an intraoral scan (IOS), cone-beam computed tomography (CBCT), and an extraoral face scan (EOS) to create a 3D virtual dental patient. MATERIAL AND METHODS As a proof of principle, a full-arch IOS, a preoperative CBCT, and a mimic EOS were taken and superimposed into a single 3D data pool. The connecting link between the different files was the detection of existing teeth as constant landmarks in all three data sets. RESULTS This novel application technique successfully demonstrated the feasibility of building a craniofacial virtual model by image fusion of IOS, CBCT, and EOS under 3D static conditions. CONCLUSIONS The presented application is the first approach to realize the fusion of intraoral and facial surfaces combined with skeletal anatomy imaging. This novel 3D superimposition technique allows the simulation of treatment planning, the exploration of the patients' expectations, and its use as an effective communication tool. The next step will be the development of a real-time 4D virtual patient in motion.
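The superimposition step, which uses teeth visible in all three data sets as common landmarks, is essentially a rigid registration problem. The abstract does not say which algorithm was applied; a standard landmark-based solution is the Kabsch/Procrustes method sketched below, with placeholder landmark coordinates and no claim that this is the paper's actual procedure.

    import numpy as np

    def rigid_align(src, dst):
        """Least-squares rigid transform (R, t) mapping landmark set `src` onto `dst`.

        src, dst: (N, 3) arrays of corresponding landmark coordinates, e.g. tooth
        cusp tips identified in two of the data sets (IOS, CBCT, or EOS).
        Standard Kabsch algorithm; not necessarily the method used in the paper.
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        # Guard against a reflection in the solution.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Usage: aligned = (R @ ios_landmarks.T).T + t  brings IOS landmarks into the CBCT frame.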

Relevance: 30.00%

Abstract:

The ability to view and interact with 3D models has existed for a long time. However, vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras, and fast processors, are especially attractive for advancing the state of the art in virtual reality systems. Also, their ubiquity and fast Internet connections open a path to distributed and collaborative development. However, that path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is the ability to render these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualizing, navigating, and reviewing large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers, and oil industry experts were involved in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about them. The lessons learned are valuable and are presented in this document. Subsequently, a quantitative study of the different navigation modes was carried out to analyze which mode is best suited to a given situation.
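The CEDAR navigation modes are not specified in the abstract beyond the use of the device's inertial sensors and camera; as a generic illustration, the sketch below integrates gyroscope rates into a camera orientation for a simple "look-around" mode. The sensor interface, axis convention, and sampling period are assumptions, not the CEDAR implementation.

    import numpy as np

    class OrientationNavigator:
        """Integrates gyroscope rates into camera yaw/pitch for a look-around mode.

        `gyro_rates` is assumed to be the device's angular velocity (rad/s) around
        its x (pitch) and y (yaw) axes; a generic sketch, not the CEDAR code.
        """

        def __init__(self):
            self.yaw = 0.0
            self.pitch = 0.0

        def update(self, gyro_rates, dt):
            pitch_rate, yaw_rate = gyro_rates
            self.yaw += yaw_rate * dt
            # Clamp pitch so the camera cannot flip over.
            self.pitch = np.clip(self.pitch + pitch_rate * dt, -np.pi / 2, np.pi / 2)

        def view_direction(self):
            """Unit forward vector of the camera in world coordinates."""
            cp = np.cos(self.pitch)
            return np.array([cp * np.sin(self.yaw), np.sin(self.pitch), -cp * np.cos(self.yaw)])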

Relevance: 30.00%

Abstract:

The confluence of three-dimensional (3D) virtual worlds with social networks imposes on software agents, in addition to conversational functions, the same behaviours as those common to human-driven avatars. In this paper, we explore the possibilities of the use of metabots (metaverse robots) with motion capabilities in complex virtual 3D worlds and we put forward a learning model based on the techniques used in evolutionary computation for optimizing the fuzzy controllers which will subsequently be used by metabots for moving around a virtual environment.
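The abstract names the combination of fuzzy controllers and evolutionary computation without giving details; the sketch below shows only the general shape of such a loop, in which a genome encodes the parameters of a tiny fuzzy heading controller and a genetic algorithm evolves them against a navigation fitness function. The genome layout, membership functions, fitness function, and GA settings are all assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def fuzzy_turn(heading_error, genome):
        """Tiny Sugeno-style fuzzy controller: heading error (rad) -> turn rate (rad/s).

        genome = [c_neg, c_pos, sigma, out_neg, out_zero, out_pos]: two membership
        centres, a shared width, and three rule outputs (layout assumed for the sketch).
        """
        c_neg, c_pos, sigma, out_neg, out_zero, out_pos = genome
        mu = np.array([
            np.exp(-((heading_error - c_neg) / sigma) ** 2),   # "error is negative"
            np.exp(-(heading_error / sigma) ** 2),             # "error is about zero"
            np.exp(-((heading_error - c_pos) / sigma) ** 2),   # "error is positive"
        ])
        outs = np.array([out_neg, out_zero, out_pos])
        return float(mu @ outs / (mu.sum() + 1e-9))

    def fitness(genome, steps=200, dt=0.1):
        """Reward reaching a fixed waypoint in a simple kinematic simulation."""
        pos, heading, target = np.zeros(2), 0.0, np.array([5.0, 5.0])
        for _ in range(steps):
            desired = np.arctan2(*(target - pos)[::-1])
            error = (desired - heading + np.pi) % (2 * np.pi) - np.pi
            heading += fuzzy_turn(error, genome) * dt
            pos += np.array([np.cos(heading), np.sin(heading)]) * dt
        return -np.linalg.norm(target - pos)   # higher (less negative) is better

    # Minimal generational GA over the controller parameters.
    low  = np.array([-1.5, 0.1, 0.1, -2.0, -2.0, -2.0])
    high = np.array([-0.1, 1.5, 1.0,  2.0,  2.0,  2.0])
    pop = rng.uniform(low, high, size=(30, 6))

    for generation in range(40):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-10:]]            # keep the 10 fittest
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(10, size=2)]
            child = np.where(rng.random(6) < 0.5, a, b)    # uniform crossover
            child += rng.normal(0, 0.05, size=6)           # Gaussian mutation
            children.append(np.clip(child, low, high))
        pop = np.array(children)

    best = pop[np.argmax([fitness(g) for g in pop])]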

Relevance: 30.00%

Abstract:

3D video technologies have been on the rise in recent years, with abundant research advances coupled with widespread adoption by the film industry and a growing importance in consumer electronics. Related to this is the concept of multiview video, which encompasses 3D video and can be defined as a video stream composed of two or more views. Multiview video enables advanced video features such as stereoscopic video, free-viewpoint video, improved eye contact through virtual views, or shared virtual environments. The purpose of this thesis is to overcome a considerable obstacle to the use of multiview video in communication systems: the lack of support for this technology in existing signaling protocols, which makes it impossible to set up a session with multiview video through standard mechanisms. Our main objective is therefore the extension of the Session Initiation Protocol (SIP) to support the negotiation of multimedia sessions with multiview video streams. Our work can be summarized in three main contributions. First, we defined a signaling extension to set up SIP sessions with 3D video. This extension modifies the Session Description Protocol (SDP) to introduce a new media-level attribute and a new decoding dependency type, which help describe the 3D video formats that can be used in a session, as well as the relationship between the video streams that make up a 3D video stream. The second contribution consists of an extension to SIP to handle the signaling of videoconferences with multiview video streams. Two new SIP event packages are defined to describe, on the one hand, the capabilities and topology of conference terminals and, on the other, the spatial configuration and stream mapping of a conference. A mechanism to integrate the exchange of this information into the setup process of a SIP conference is also described. As the third and final contribution, we introduce the concept of the virtual space of a conference, a coordinate system that includes all relevant conference objects (such as capture devices, displays, and users). We explain how the virtual space relates to conference features such as eye contact, video scale, and spatial fidelity, and we provide rules for determining the features of a conference from the analysis of its virtual space, and for generating virtual spaces during conference setup.
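The exact attribute names and syntax defined by the thesis are not given in the abstract; purely to illustrate what a media-level 3D-format attribute and a decoding dependency between views might look like in an SDP offer, the snippet below assembles a hypothetical fragment. The attribute name "3dFormat" and the dependency tag "mvd" are invented placeholders, not the thesis's actual extension.

    def build_3d_video_offer(base_port=49170):
        """Assemble a hypothetical SDP fragment for a two-view (left/right) 3D video stream.

        The media-level attribute "a=3dFormat" and the dependency tag "mvd" are
        placeholders invented for illustration only.
        """
        lines = [
            f"m=video {base_port} RTP/AVP 96",
            "a=rtpmap:96 H264/90000",
            "a=3dFormat:frame-packed top-bottom",      # hypothetical media-level attribute
            "",
            f"m=video {base_port + 2} RTP/AVP 97 98",
            "a=rtpmap:97 H264/90000",
            "a=rtpmap:98 H264/90000",
            "a=depend:98 mvd 97",                      # hypothetical dependency: view 98
                                                       # depends on base view 97
        ]
        return "\r\n".join(lines)

    print(build_3d_video_offer())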

Relevance: 30.00%

Abstract:

Identification and tracking of objects in specific environments such as harbors or security areas is a matter of great importance nowadays. For this purpose, numerous systems based on different technologies have been developed, resulting in a great amount of gathered data displayed through a variety of interfaces. Such an amount of information has to be evaluated by human operators in order to make the correct decisions, sometimes under highly critical situations demanding both speed and accuracy. In order to address this problem we describe IDT-3D, a platform for the identification and tracking of vessels in a harbour environment, able to represent fused information in real time through a Virtual Reality application. The effectiveness of using IDT-3D as an integrated surveillance system is currently under evaluation. Preliminary results point to a significant decrease in the reaction and decision-making times of operators facing a critical situation. Although the current application focus of IDT-3D is quite specific, the results of this research could be extended to the identification and tracking of targets in other controlled environments of interest, such as coastlines, borders, or even urban areas.
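How IDT-3D fuses its sensor data is not described in the abstract; as a generic illustration of combining position reports from different sensors into vessel tracks, the following sketch does nearest-neighbour association with a simple blended update. The gating distance and blend factor are assumptions, and real systems would use proper filtering (e.g. a Kalman filter) rather than this averaging.

    import numpy as np

    GATE = 200.0    # metres; maximum distance for associating a report with a track (assumed)
    ALPHA = 0.5     # blend factor between the track estimate and the new report (assumed)

    def fuse_reports(tracks, reports):
        """Nearest-neighbour association of position reports (e.g. radar, AIS) with tracks.

        tracks:  dict mapping integer track id -> np.array([x, y]) current position estimate
        reports: list of np.array([x, y]) new positions from any sensor
        Returns the updated tracks; unmatched reports start new tracks.
        """
        next_id = max(tracks, default=-1) + 1
        for report in reports:
            if tracks:
                ids = list(tracks)
                dists = [np.linalg.norm(tracks[i] - report) for i in ids]
                best = ids[int(np.argmin(dists))]
                if min(dists) < GATE:
                    # Blend the existing estimate with the new report.
                    tracks[best] = (1 - ALPHA) * tracks[best] + ALPHA * report
                    continue
            tracks[next_id] = report      # no track close enough: start a new one
            next_id += 1
        return tracks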

Relevance: 30.00%

Abstract:

The goal of the project is to produce a three-dimensional model, using 3D laser scanner equipment, of the fragments of lava balloons that emerged during the eruptions that took place on the Canary Island of El Hierro in October 2011. The aim of this model is to obtain a virtual sample of geological remains that, by their nature, are extremely fragile, so that they and their characteristics can be studied as completely as possible without handling, even once the physical samples have deteriorated.

Relevance: 30.00%

Abstract:

This document describes the implementation of the "Bacterial Transformation" phase in the Virtual Laboratory of Agroforestry Biotechnology ("Laboratorio Virtual de Biotecnología Agroforestal") of the Universidad Politécnica de Madrid. This phase is a continuation of previous work in which the virtual laboratory was implemented and an architecture was designed for the development of subsequent phases. Bacterial Transformation is the third phase of the process of genetically modifying a poplar to give it resistance against certain fungi, and its objective is the introduction of a genetically modified plasmid into the bacterium Agrobacterium tumefaciens. For the development of this phase, the actions to be added to the automatic tutor were determined; in addition, the scripts of certain existing 3D objects were programmed, and new instruments and machinery required to carry out the practical exercise were modeled in 3D. Furthermore, after the testing stage of the first two phases, it was concluded that some usability problems of the Firestorm viewer had to be resolved. Because this viewer is an open-source project, it was possible to correct the identified problems. The resolution of these problems is explained as part of this work.