926 results for 3D virtual human


Relevance:

100.00%

Publisher:

Abstract:

Three-dimensional virtual worlds have been growing fast in number of users and are used for the most diverse purposes. In collaboration they yield good results thanks to features such as immersion, rich interaction capabilities, avatar embodiment, and physical space. Avatar embodiment and physical space in particular support nonverbal communication, but its impact on collaboration is not well understood. In this work we present a protocol for case study research, together with its creation process, intended as a tool to collect data on how nonverbal communication influences collaboration in three-dimensional virtual worlds. We define the propositions and units of analysis, and a pilot case to validate them. Two cases are then analysed under the protocol. Chains of evidence were found supporting most of the propositions.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the design and development of a frame-based approach to a speech-to-sign-language machine translation system in the domain of railways and banking. The work aims to apply artificial intelligence for the benefit of deaf people. It concentrates on the sign language used by the deaf community of the Indian subcontinent, Indian Sign Language (ISL). The input to the system is the clerk's speech, and the output is a 3D virtual human character performing the signs for the uttered phrases. The system builds the 3D animation from pre-recorded motion capture data. The work proposes to build a Malayalam-to-ISL translation system.
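The frame-based translation the abstract describes can be pictured as matching a recognized phrase against domain frames and emitting the gloss sequence (sign-clip identifiers) that drives the avatar. The following is a minimal illustrative sketch; the frame patterns, gloss names, and the phrase_to_signs function are hypothetical, not the authors' implementation:

```python
import re

# Each frame pairs a domain sentence pattern with the ISL gloss sequence
# (sign-clip identifiers) used to drive the 3D avatar. Patterns are made up.
FRAMES = [
    (re.compile(r"train (?:to|for) (?P<city>\w+) leaves at (?P<time>[\w:]+)"),
     ["TRAIN", "{city}", "GO", "TIME", "{time}"]),
    (re.compile(r"your account balance is (?P<amount>\w+)"),
     ["YOUR", "ACCOUNT", "MONEY", "{amount}"]),
]

def phrase_to_signs(utterance: str) -> list[str]:
    """Map a recognized phrase onto a sequence of sign-clip IDs."""
    text = utterance.lower().strip()
    for pattern, gloss in FRAMES:
        match = pattern.search(text)
        if match:
            slots = match.groupdict()
            return [g.format(**slots).upper() for g in gloss]
    return []  # no frame matched; a real system might fall back to fingerspelling

if __name__ == "__main__":
    print(phrase_to_signs("The train to Kochi leaves at 10:30"))
    # ['TRAIN', 'KOCHI', 'GO', 'TIME', '10:30']
```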

Relevance:

100.00%

Publisher:

Abstract:

Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VH) is strongly neglected. This paper presents a user-test study to demonstrate the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we compare impassive and emotional facial expression simulation in terms of their impact on chatting; second, we examine whether people like chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as nonverbal cues, i.e. related to the emotional model. Motion-capture animations of hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from a text and depicts it on a virtual character [GAS11]. Second, as our goal was not automatic text generation, we compared the impact of nonverbal cues in conversation with a chatbot and with a human operator, using a Wizard-of-Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions significantly impact not only the quality of experience but also dialog understanding.
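As a rough picture of how an emotion engine of the kind cited ([GAS11]) can map chat text to subtle facial cues, here is a minimal lexicon-based sketch; the lexicon, emotion categories, and blendshape names are illustrative assumptions, not the actual engine:

```python
# Toy emotion lexicon: word -> (emotion, intensity). Entirely illustrative.
EMOTION_LEXICON = {
    "happy": ("joy", 0.8), "great": ("joy", 0.6),
    "sad": ("sadness", 0.8), "sorry": ("sadness", 0.4),
    "angry": ("anger", 0.9), "hate": ("anger", 0.7),
}

# Subtle facial cues: each emotion drives a small set of blendshape weights.
EXPRESSION_RIG = {
    "joy": {"mouth_smile": 0.5, "cheek_raise": 0.3},
    "sadness": {"brow_inner_up": 0.4, "mouth_frown": 0.3},
    "anger": {"brow_lower": 0.5, "eye_narrow": 0.3},
}

def text_to_expression(message: str) -> dict[str, float]:
    """Accumulate blendshape weights from emotional words in a chat line."""
    weights: dict[str, float] = {}
    for word in message.lower().split():
        emotion, intensity = EMOTION_LEXICON.get(word.strip(".,!?"), (None, 0.0))
        if emotion:
            for shape, base in EXPRESSION_RIG[emotion].items():
                weights[shape] = min(1.0, weights.get(shape, 0.0) + base * intensity)
    return weights

print(text_to_expression("I am so happy to see you!"))
# {'mouth_smile': 0.4, 'cheek_raise': 0.24}
```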

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: Digital developments have created the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on current knowledge, to report on technical progress in the field of 3D virtual patient science, and to identify further research needs for accomplishing clinical translation. MATERIALS AND METHODS: Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposing at least two different 3D data sets and on the medical field of interest. RESULTS: Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. CONCLUSION: Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique for creating a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on real-time replication of the human head, including dynamic movements, with all data captured in a single step.
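At its core, the superimposition reviewed here is a rigid registration of two 3D data sets. As a hedged sketch of the underlying computation, the Kabsch algorithm aligns corresponding landmarks digitized in both data sets; the landmark values below are made up for illustration:

```python
import numpy as np

def kabsch_align(source: np.ndarray, target: np.ndarray):
    """Find rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Example: three corresponding landmark pairs (rows are x, y, z coordinates);
# the target is the source rotated 90 degrees about z and shifted by (1, 1, 0).
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
tgt = np.array([[1.0, 1, 0], [1, 2, 0], [0, 1, 0]])
R, t = kabsch_align(src, tgt)
print(np.allclose((R @ src.T).T + t, tgt))  # True
```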

Relevance:

100.00%

Publisher:

Abstract:

The confluence of three-dimensional (3D) virtual worlds with social networks demands that software agents exhibit, in addition to conversational functions, the same behaviours as human-driven avatars. In this paper, we explore the possibilities of using metabots (metaverse robots) with motion capabilities in complex 3D virtual worlds, and we put forward a learning model based on evolutionary computation techniques for optimizing the fuzzy controllers that metabots subsequently use to move around a virtual environment.
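The learning model the abstract outlines can be pictured as a genetic algorithm searching over the parameter vector of a fuzzy controller. The sketch below is an assumption-laden miniature: the encoding, the toy fitness function, and the GA settings stand in for actually running the metabot in the virtual world:

```python
import random

N_PARAMS = 12          # e.g., membership-function centers/widths of the controller
POP, GENERATIONS = 30, 50

def fitness(params: list[float]) -> float:
    """Placeholder: in the paper's setting this would run the metabot in the
    virtual world and score, say, distance travelled and obstacles avoided."""
    return -sum((p - 0.5) ** 2 for p in params)   # toy objective

def evolve() -> list[float]:
    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]                  # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_PARAMS)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_PARAMS)         # point mutation, clamped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_controller_params = evolve()
```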

Relevance:

100.00%

Publisher:

Abstract:

This document presents the improvements and extensions made to the Student Behavior Predictor Viewer (SBPV), a tool for visualizing the predictive model of student behavior implemented in a previous work. The student behavior predictive model is part of an intelligent tutoring system and is built from the activity logs of students in a 3D virtual laboratory, such as the Virtual Laboratory of Agroforestry Biotechnology implemented in a previous work, whose student activity logs have been used to validate this final degree project. The SBPV visualizes a 2D graphical representation of the extended graph associated with any of the clusters of the student predictive model. Besides visualizing the extended graph, the SBPV supports navigation across the graph through the web browser: it allows the user to move through the graph, zoom in and out, or locate a given state. The user can also modify the default layout of the graph on screen by repositioning states with the mouse. As part of this project, bugs in the previous version have been fixed and several enhancements to performance and usability have been introduced. New features have been implemented, such as visualizing the behavior model of an individual student and choosing the clustering method used to create the student predictive model; the user interface was also redesigned, changing the graphic structures used to display the model elements and improving the rendering of the graph as the user interacts with it. All these improvements are explained in detail in this document.
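As a rough sketch of the clustering step behind such a predictive model, the snippet below groups student activity logs; the feature set, the toy data, and the use of scikit-learn are assumptions for illustration, not the SBPV implementation:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# One row per student: e.g., [minutes in lab, actions performed, errors made]
activity = np.array([
    [35.0, 120, 4],
    [12.0,  40, 9],
    [40.0, 150, 2],
    [10.0,  35, 11],
])

# The tool lets the user choose the clustering method; model both options.
METHODS = {
    "kmeans": KMeans(n_clusters=2, n_init=10),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
}

labels = METHODS["kmeans"].fit_predict(activity)
print(labels)  # cluster id per student; each cluster gets its own extended graph
```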

Relevance:

100.00%

Publisher:

Abstract:

The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig, and animate their avatars, usually tuning them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of this dissertation is to bring 3D human scanning closer to the average user. To this end, two different techniques are presented, one passive and one active. The first is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. The system uses a state-of-the-art shape-from-silhouette technique to retrieve the shape of the subject. However, since this kind of technique does not handle concavities correctly and typically lacks detail in the facial region, the system proposes an approach to improve the quality of this region: a generic facial model is transformed according to the specific facial features of the subject. Moreover, the system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas, calculated by combining the color information from several photos; it suppresses the color seams, due to image misalignments and irregular lighting conditions, that multi-texturing approaches typically suffer from, while minimizing the blurring introduced by color-blending techniques. The second technique is a system to retrieve a fully animatable 3D model of a human using a commercial depth sensor. Unlike other approaches in the current state of the art, the system does not require the user to stand completely still throughout the scanning process, nor does the depth sensor have to be moved around the subject to cover its whole surface. Instead, the depth sensor remains static, and skeleton-tracking information is used to compensate for the user's movements during the scanning stage.
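The passive technique rests on shape from silhouette; a compact sketch of the core visual-hull carving step follows, with a toy camera and silhouette (the projection setup is an illustrative assumption):

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """
    silhouettes: list of HxW boolean masks, one per camera
    projections: list of 3x4 projection matrices
    grid: Nx3 array of voxel centers (world coordinates)
    Returns a boolean mask over the voxels that lie inside every silhouette.
    """
    homo = np.hstack([grid, np.ones((len(grid), 1))])   # Nx4 homogeneous coords
    inside = np.ones(len(grid), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homo @ P.T                                # project to image plane
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]
        inside &= hit                                   # carve: must be in all views
    return inside

# Toy usage: one orthographic-like camera and a 3x3-pixel silhouette.
P = np.array([[1.0, 0, 0, 2], [0, 1.0, 0, 2], [0, 0, 0, 1.0]])
mask = np.zeros((5, 5), dtype=bool); mask[1:4, 1:4] = True
voxels = np.array([[0.0, 0, 0], [-2.0, -2, 0]])
print(visual_hull([mask], [P], voxels))  # [ True False]
```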

Relevance:

100.00%

Publisher:

Abstract:

With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch 3D navigation and how users can explore 3D virtual worlds using a multi-touch, non-stereo desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe in which each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The experimental design also included an equation that determined the subjects' level of video game console expertise, which was used to divide users into two groups: casual users and experienced users. The study found that experienced gamers performed significantly faster with the GamePad than casual users. Looking at the groups separately, casual gamers performed significantly better using the multi-touch display than the GamePad. Additional results are found in this dissertation.
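How an ambiguity-resolution step like Yield might look can only be guessed from the abstract; the sketch below is a hypothetical stand-in (not the dissertation's algorithm) that filters classifier candidates by consistency with the active touch count and breaks ties by confidence:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # gesture reported by the classifier (e.g., by GestureWorks)
    confidence: float  # classifier score in [0, 1]
    touch_points: int  # number of touches the gesture expects

def resolve(candidates: list[Candidate], active_touches: int) -> str | None:
    """Prefer candidates consistent with the current number of touches,
    breaking ties by classifier confidence."""
    viable = [c for c in candidates if c.touch_points == active_touches]
    pool = viable or candidates
    return max(pool, key=lambda c: c.confidence).name if pool else None

gestures = [Candidate("pan", 0.70, 1), Candidate("zoom", 0.65, 2),
            Candidate("rotate", 0.80, 2)]
print(resolve(gestures, active_touches=2))  # 'rotate'
```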


Relevance:

100.00%

Publisher:

Abstract:

International audience

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a collaborative virtual learning environment, which includes technologies such as 3D virtual representations, learning and content management systems, remote experiments, and collaborative learning spaces, among others. It is intended to facilitate the construction, management, and sharing of knowledge among teachers and students from a global perspective. The environment proposes the use of 3D social representations for accessing learning materials in a dynamic and interactive form, regarded as closer to the physical reality experienced by teachers and students in a learning context. A first implementation of the proposed extended immersive learning environment, in the area of solid mechanics, is also described, including access to theoretical contents and a remote experiment to determine the elastic modulus of a given object.
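The remote experiment mentioned determines an elastic (Young's) modulus; the underlying computation is simply E = stress/strain, as the sketch below illustrates with made-up measurement values (not from the paper):

```python
# Tensile-test style computation of Young's modulus, E = (F/A) / (dL/L).
force = 500.0          # applied load, N
area = 25e-6           # cross-section, m^2 (5 mm x 5 mm)
length = 0.200         # original specimen length, m
elongation = 40e-6     # measured extension, m

stress = force / area            # Pa
strain = elongation / length     # dimensionless
E = stress / strain
print(f"E = {E / 1e9:.1f} GPa")  # E = 100.0 GPa
```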

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on the creation of an interface for 3D virtual environments, computer-aided design applications, and computer games. Standard computer interfaces are bound to 2D surfaces, e.g., computer mice, keyboards, touch pads, and touch screens. The Smart Object is intended to provide the user with a 3D interface by using sensors that register movement (an inertial measurement unit), touch (a touch screen), and voice (a microphone). The design and development process, as well as the tests and results, are presented in this paper. The Smart Object was developed over one semester by a team of four third-year engineering students from diverse scientific backgrounds and nationalities.
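One common way an inertial measurement unit like the Smart Object's is turned into a 3D orientation input is by fusing gyroscope and accelerometer readings with a complementary filter; the sketch below is a generic illustration under that assumption, not the Smart Object's firmware:

```python
import math

ALPHA, DT = 0.98, 0.01   # filter weight, sample period (100 Hz assumed)

def complementary_filter(pitch, gyro_rate, ax, ay, az):
    """One filter step: integrate the gyro rate, then correct drift using
    the pitch implied by the accelerometer's gravity direction."""
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return ALPHA * (pitch + gyro_rate * DT) + (1 - ALPHA) * accel_pitch

pitch = 0.0
# Synthetic samples: constant 0.10 rad/s gyro rate, device held level.
for gyro, acc in [(0.10, (0.0, 0.0, 9.81))] * 100:
    pitch = complementary_filter(pitch, gyro, *acc)
print(f"pitch ~ {math.degrees(pitch):.2f} deg")
```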

Relevance:

90.00%

Publisher:

Abstract:

One motive for behaving as the agent of another's aggression appears to be anchored in as yet unelucidated mechanisms of obedience to authority. In a recent partial replication of Milgram's obedience paradigm within an immersive virtual environment, participants administered pain to a female virtual human and observed her suffering. Whether the participants' response to the latter was more akin to other-oriented empathic concern for her well-being or to a self-oriented aversive state of personal distress in response to her distress is unclear. Using the stimuli from that study, this event-related fMRI-based study analysed brain activity during observation of the victim in pain versus not in pain. This contrast revealed activation in pre-defined brain areas known to be involved in affective processing, but not in those commonly associated with affect sharing (e.g., ACC and insula). We then examined whether different dimensions of dispositional empathy predict activity within the same pre-defined brain regions: while personal distress and fantasy (i.e., the tendency to transpose oneself into fictional situations and characters) predicted brain activity, empathic concern and perspective taking predicted no change in neuronal response associated with pain observation. These exploratory findings suggest that there is a distinct pattern of brain activity associated with observing the pain-related behaviour of the victim within the context of this social dilemma, that this observation evoked a self-oriented aversive state of personal distress, and that the objective "reality" of pain is of secondary importance for this response. These findings provide a starting point for experimentally more rigorous investigation of obedience.

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: To analyze the intraobserver reproducibility of real three-dimensional ultrasonography (3DUS) versus virtual 3DUS in the multiplanar and volumetric modalities. MATERIALS AND METHODS: A total of 132 volume blocks from 44 evaluations of 26 conceptuses were prospectively examined. Eighteen conceptuses had a sonographic gestational age of eight weeks to eight weeks and six days, and 26 were at ten weeks to ten weeks and six days. 3DUS was performed, analyzing: crown-rump length, gestational sac, amniotic sac, nuchal translucency, omphalomesenteric duct, yolk sac, upper limbs, lower limbs, head-thorax distinction, facial profile, coronal view of the face, ear implantation, spinal profile, coronal view of the spine, and closed abdominal wall. Three volume blocks were acquired per conceptus for the subsequent virtual US. Statistical analysis used Student's t-test, the McNemar test, and the kappa statistic. RESULTS: In group I, the real multiplanar 3D versus virtual multiplanar 3D ultrasonographic evaluation showed a significant difference for all continuous variables, whereas none of the categorical variables differed significantly. In group II, the real volumetric 3D versus virtual volumetric 3D evaluation showed a significant difference only for the ear implantation variable. The analyses of the categorical variables showed agreement for most of the variables in both groups. CONCLUSION: 3DUS is intraobserver-reproducible with the virtual modality, both multiplanar and volumetric.
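The agreement analysis above relies on the kappa statistic; for readers unfamiliar with it, here is a minimal sketch of Cohen's kappa on toy ratings (illustrative data, not the study's):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two sets of categorical ratings."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)  # agreement by chance
    return (observed - expected) / (1 - expected)

real    = ["normal", "normal", "abnormal", "normal", "abnormal"]
virtual = ["normal", "normal", "abnormal", "abnormal", "abnormal"]
print(round(cohens_kappa(real, virtual), 2))  # 0.62
```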

Relevance:

90.00%

Publisher:

Abstract:

Abstract based on that of the publication.