972 results for Virtual Human
Abstract:
There is an increasing need for easy and affordable technologies to automatically generate virtual 3D models from their real counterparts. In particular, 3D human reconstruction has driven the creation of many clever techniques, most of them based on the visual hull (VH) concept. Such techniques do not require expensive hardware; however, they tend to yield 3D humanoids with realistic bodies but mediocre faces, since VH cannot handle concavities. Structured light projectors, on the other hand, capture very accurate depth data and can therefore reconstruct realistic faces, but they are too expensive to deploy in multiples. We have developed a technique to merge a VH-based 3D mesh of a reconstructed humanoid with the depth data of its face, captured by a single structured light projector. By combining the advantages of both systems in a simple setting, we are able to reconstruct realistic 3D human models with believable faces.
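To make the merging idea concrete, here is a minimal sketch of one way to blend an aligned structured-light face scan into a visual-hull mesh. It assumes the scan has already been rigidly registered to the mesh coordinate frame; the function name, influence radius, and toy data are illustrative assumptions, not the authors' actual algorithm.

```python
# A minimal sketch (not the authors' implementation) of merging a
# visual-hull head mesh with an aligned structured-light face scan.
import numpy as np
from scipy.spatial import cKDTree

def blend_face_into_hull(vh_vertices, face_points, influence_radius=0.03):
    """Pull visual-hull vertices toward the nearest face-scan point,
    with a weight that fades to zero at `influence_radius` so the
    seam between scan and hull stays smooth."""
    tree = cKDTree(face_points)
    dist, idx = tree.query(vh_vertices)                   # nearest scan point per vertex
    w = np.clip(1.0 - dist / influence_radius, 0.0, 1.0)  # 1 near the scan, 0 far away
    return vh_vertices + w[:, None] * (face_points[idx] - vh_vertices)

# Toy usage: a coarse 'head' vertex cloud and a denser 'face' patch.
vh = np.random.rand(500, 3)
face = vh[:100] + 0.01 * np.random.randn(100, 3)  # stand-in for depth data
merged = blend_face_into_hull(vh, face)
```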
Abstract:
One motive for behaving as the agent of another's aggression appears to be anchored in as yet unelucidated mechanisms of obedience to authority. In a recent partial replication of Milgram's obedience paradigm within an immersive virtual environment, participants administered pain to a female virtual human and observed her suffering. Whether the participants' response to the latter was more akin to other-oriented empathic concern for her well-being or to a self-oriented aversive state of personal distress in response to her distress is unclear. Using the stimuli from that study, this event-related fMRI-based study analysed brain activity during observation of the victim in pain versus not in pain. This contrast revealed activation in pre-defined brain areas known to be involved in affective processing but not in those commonly associated with affect sharing (e.g., ACC and insula). We then examined whether different dimensions of dispositional empathy predict activity within the same pre-defined brain regions: while personal distress and fantasy (i.e., the tendency to transpose oneself into fictional situations and characters) predicted brain activity, empathic concern and perspective taking predicted no change in neuronal response associated with pain observation. These exploratory findings suggest that there is a distinct pattern of brain activity associated with observing the pain-related behaviour of the victim within the context of this social dilemma, that this observation evoked a self-oriented aversive state of personal distress, and that the objective "reality" of pain is of secondary importance for this response. These findings provide a starting point for experimentally more rigorous investigation of obedience.
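As a rough illustration of the kind of voxel-wise "pain versus not in pain" contrast reported above, the following is a minimal GLM sketch with toy data; the regressor layout, shapes, and names are assumptions for illustration, not the study's actual analysis pipeline.

```python
# A minimal sketch of an event-related fMRI contrast; illustrative only.
import numpy as np

def contrast_t(Y, X, c):
    """Voxel-wise GLM: Y (time x voxels), X (time x regressors),
    c (contrast vector, e.g. [1, -1, 0] for pain > no-pain)."""
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = np.sum((Y - X @ beta) ** 2, axis=0) / dof    # residual variance
    var_c = c @ np.linalg.pinv(X.T @ X) @ c               # contrast variance factor
    return (c @ beta) / np.sqrt(sigma2 * var_c)

# Toy usage: two event regressors (pain, no-pain) plus an intercept.
T = 120
X = np.column_stack([np.random.rand(T), np.random.rand(T), np.ones(T)])
Y = np.random.randn(T, 1000)                  # stand-in for voxel time courses
t_map = contrast_t(Y, X, np.array([1.0, -1.0, 0.0]))
```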
Abstract:
The article presents the design process of intelligent virtual human patients used for the enhancement of clinical skills. The description covers the development from conceptualization and character creation to the technical components and the application in clinical research and training. The aim is to create believable social interactions with virtual agents that help the clinician develop skills in symptom and ability assessment, diagnosis, interview techniques, and interpersonal communication. The virtual patient fulfills the requirements of a standardized patient, producing consistent, reliable and valid interactions in portraying symptoms and behaviour related to a specific clinical condition.
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is largely neglected. This paper presents a user-test study demonstrating the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we compared impassive and emotional facial expression simulation for their impact on chatting; second, we examined whether people like chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as non-verbal cues, i.e., those related to the emotional model. Motion-capture animations of hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from text and depicts it on a virtual character [GAS11]. Second, since our goal was not the automatic generation of text, we compared the impact of non-verbal cues in conversation with a chatbot and with a human operator using a wizard-of-Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions have a significant impact not only on the quality of experience but also on dialog understanding.
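As an illustration of the text-to-expression step, the sketch below maps a chat message to a subtle facial cue with a toy emotion lexicon; the lexicon, names, and scaling factor are assumptions for illustration, not the [GAS11] engine.

```python
# A minimal sketch of deriving a subtle facial-expression cue from chat text.
from dataclasses import dataclass

LEXICON = {"great": ("joy", 0.8), "sorry": ("sadness", 0.6),
           "wow": ("surprise", 0.7), "hate": ("anger", 0.9)}

@dataclass
class FacialCue:
    emotion: str
    intensity: float          # scaled down so expressions stay subtle

def text_to_cue(message: str, subtlety: float = 0.4) -> FacialCue:
    """Pick the strongest lexicon hit and attenuate its intensity."""
    hits = [LEXICON[w] for w in message.lower().split() if w in LEXICON]
    if not hits:
        return FacialCue("neutral", 0.0)
    emotion, strength = max(hits, key=lambda h: h[1])
    return FacialCue(emotion, strength * subtlety)

print(text_to_cue("Wow, that is great news"))  # FacialCue(emotion='joy', ...)
```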
Abstract:
There is a high prevalence of leprosy in the Amazon region of Brazil. We have developed a distance education course on leprosy for training staff of the Family Health Teams (FHTs). The course was made available through a web portal. Tele-educational resources were mediated by professors and coordinators, and included theoretical content available through the web, discussion lists, Internet chat, activity diaries, 3-D video animations (Virtual Human on Leprosy), classes in video streaming, and case simulation. Sixty-five FHT staff members were enrolled. All of them completed the course, and 47 participants received a certificate at the end. Forty-eight course-evaluation questionnaires were answered at the end of the course, and 47 participants (98%) rated the course as excellent. The results demonstrate the feasibility of an interactive tele-education model as an educational resource for staff in isolated regions. Improvements in diagnostic skills should increase diagnostic suspicion of leprosy and may contribute to early detection.
Abstract:
Introduction: Neuroimaging of the self has focused on high-level mechanisms such as language, memory, or imagery of the self. Recent evidence suggests that low-level mechanisms of multisensory and sensorimotor integration may play a fundamental role in encoding self-location and the first-person perspective (Blanke and Metzinger, 2009). Neurological patients with out-of-body experiences (OBE) suffer from abnormal self-location and first-person perspective due to damage in the temporo-parietal junction (Blanke et al., 2004). Although self-location and the first-person perspective can be studied experimentally (Lenggenhager et al., 2009), the neural underpinnings of self-location have yet to be investigated. To investigate the brain network involved in self-location and the first-person perspective, we used visuo-tactile multisensory conflict, magnetic resonance (MR)-compatible robotics, and fMRI in study 1, and lesion analysis in a sample of 9 patients with OBE due to focal brain damage in study 2.
Methods: Twenty-two participants saw a video showing either a person's back or an empty room being stroked (visual stimuli) while the MR-compatible robotic device stroked their own back (tactile stimulation). Direction and speed of the seen stroking could either correspond (synchronous) or not (asynchronous) to those of the felt stroking. Each run comprised the four conditions of a 2x2 factorial design with Object (Body, No-Body) and Synchrony (Synchronous, Asynchronous) as main factors. Self-location was estimated using the mental ball dropping task (MBD; Lenggenhager et al., 2009). After the fMRI session, participants completed a 6-item questionnaire adapted from the original questionnaire created by Botvinick and Cohen (1998) and based on questions and data obtained by Lenggenhager et al. (2007, 2009). They were also asked to complete a questionnaire to disclose the perspective they adopted during the illusion. Response times (RTs) for the MBD and fMRI data were analyzed with a 3-way mixed-model ANOVA with the between-subjects factor Perspective (up, down) and the two within-subjects factors Object (body, no-body) and Stroking (synchronous, asynchronous). Quantitative lesion analysis was performed using MRIcron (Rorden et al., 2007). We compared the distribution of brain lesions confirmed by multimodality imaging (Knowlton, 2004) in patients with OBE with that of patients showing complex visual hallucinations involving people or faces, but without any disturbance of self-location and first-person perspective. Nine patients with OBE were investigated; the control group comprised 8 patients. Structural imaging data were available for normalization and co-registration in all patients. Normalization of each patient's lesion into the common MNI (Montreal Neurological Institute) reference space permitted simple, voxel-wise, algebraic comparisons.
Results: Although in the scanner all participants were lying on their backs facing upwards, analysis of perspective showed that half of the participants had the impression of looking down at the virtual human body below them, despite the cues about their actual body position (Down-group). The other participants had the impression of looking up at the virtual body above them (Up-group). Analysis of Q3 ("How strong was the feeling that the body you saw was you?") indicated stronger self-identification with the virtual body during synchronous stroking. RTs in the MBD task confirmed these subjective data (significant 3-way interaction between Perspective, Object and Stroking).
fMRI results showed eight cortical regions where the BOLD signal was significantly different during at least one of the conditions resulting from the combination of Object and Stroking, relative to baseline: right and left temporo-parietal junction, right EBA, left middle occipito-temporal gyrus, left postcentral gyrus, right medial parietal lobe, and bilateral medial occipital lobe (Fig. 1). The activation patterns in the right and left temporo-parietal junction and right EBA reflected changes in self-location and perspective, as revealed by statistical analysis performed on the percentage of BOLD change with respect to baseline. Statistical lesion-overlap comparison (using nonparametric voxel-based lesion-symptom mapping) with respect to the control group revealed the right temporo-parietal junction, centered at the angular gyrus (Talairach coordinates x = 54, y = -52, z = 26; p < 0.05, FDR corrected).
Conclusions: The present questionnaire and behavioural results show that, despite the noisy and constraining MR environment, our participants had predictable changes in self-location, self-identification, and first-person perspective when robotic tactile stroking was applied synchronously with the stroking seen on the video. fMRI data in healthy participants and lesion data in patients with abnormal self-location and first-person perspective jointly revealed that the temporo-parietal cortex, especially in the right hemisphere, encodes these conscious experiences. We argue that temporo-parietal activity reflects the experience of the conscious "I" as embodied and localized within bodily space.
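The voxel-wise lesion comparison described above lends itself to a compact illustration. Below is a minimal sketch, with assumed array layouts, of how a lesion-frequency difference map between two groups of normalized binary lesion masks could be computed; it is illustrative only and is not the MRIcron/VLSM pipeline used in the study.

```python
# A minimal sketch of a voxel-wise lesion-overlap comparison between two
# patient groups whose binary lesion masks share a common (MNI) space.
import numpy as np

def overlap_difference(obe_masks, control_masks):
    """Each argument: array (patients, x, y, z) of 0/1 lesion masks.
    Returns the per-voxel difference in lesion frequency, so positive
    values mark voxels lesioned more often in the OBE group."""
    return obe_masks.mean(axis=0) - control_masks.mean(axis=0)

# Toy usage with 9 OBE and 8 control 'brains' on a small grid.
rng = np.random.default_rng(0)
obe = (rng.random((9, 20, 20, 20)) > 0.9).astype(float)
ctl = (rng.random((8, 20, 20, 20)) > 0.9).astype(float)
diff = overlap_difference(obe, ctl)
peak = np.unravel_index(np.argmax(diff), diff.shape)  # candidate peak voxel
```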
Abstract:
This paper presents the design and development of a frame-based approach to a speech-to-sign-language machine translation system in the domains of railways and banking. This work aims to utilize the capabilities of Artificial Intelligence for the benefit of deaf people. Our work concentrates on the sign language used by the deaf community of the Indian subcontinent, called Indian Sign Language (ISL). Input to the system is the clerk's speech, and the output is a 3D virtual human character playing the signs for the uttered phrases. The system builds up the 3D animation from pre-recorded motion-capture data. Our work proposes to build a Malayalam to ISL translation system.
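To illustrate what a frame-based translation step might look like, here is a minimal sketch with an invented two-frame domain table; the frame names and sign-clip identifiers are hypothetical, not the paper's actual frames.

```python
# A minimal sketch of a frame-based speech-to-ISL pipeline: recognized text
# is matched against domain frames, and each frame maps to a sequence of
# pre-recorded sign (mocap) clips for the 3D character to play.
FRAMES = {
    # domain frame: (trigger keywords, ordered ISL sign-clip identifiers)
    "ticket_price": ({"ticket", "fare", "price"}, ["SIGN_TICKET", "SIGN_COST"]),
    "account_open": ({"account", "open"}, ["SIGN_ACCOUNT", "SIGN_OPEN"]),
}

def translate_to_signs(utterance: str) -> list[str]:
    """Pick the frame whose keywords best overlap the utterance and
    return its clip sequence; empty if nothing matches."""
    words = set(utterance.lower().split())
    keywords, clips = max(FRAMES.values(), key=lambda f: len(f[0] & words))
    return clips if keywords & words else []

print(translate_to_signs("How much is the ticket fare"))
# ['SIGN_TICKET', 'SIGN_COST']
```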
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
In this paper we propose a simple model of the coupling behavior of the human spine for an inverse kinematics framework. Our spine model exhibits anatomically correct motions of the vertebrae of virtual mannequins by coupling standard swing and revolute joint models. The adjustment of the joints is made with several simple (in)equality constraints, resulting in a reduction of the dimensionality of the solution space for the inverse kinematics solver. By reducing the solution space to feasible spine shapes, we prevent the inverse kinematics algorithm from producing infeasible postures for the spine. We show how to apply these simple constraints to the human spine through a strict decoupling of the swing and torsion motions of the vertebrae, and we demonstrate the validity of our approach in various experiments.
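As a concrete illustration of how joint coupling shrinks the IK search space, the sketch below distributes one scalar bend parameter over five vertebrae under a per-joint limit; the coefficients and limits are invented for illustration and are not the paper's model.

```python
# A minimal sketch of coupled spine joints: a single 'spine bend' parameter
# is distributed across vertebrae with fixed coefficients and clamped to
# joint limits, so the IK solver searches one dimension instead of one per
# vertebra.
import numpy as np

COUPLING = np.array([0.10, 0.15, 0.20, 0.25, 0.30])  # per-vertebra share
SWING_LIMIT = np.radians(12.0)                       # assumed limit per joint

def spine_angles(bend: float) -> np.ndarray:
    """Map one scalar bend parameter to per-vertebra swing angles,
    respecting the inequality constraint |angle_i| <= SWING_LIMIT."""
    return np.clip(bend * COUPLING, -SWING_LIMIT, SWING_LIMIT)

# Toy usage: the IK solver only optimizes `bend`; torsion would be handled
# by an analogous, strictly decoupled parameter.
print(np.degrees(spine_angles(np.radians(40.0))))
```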
Abstract:
The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig and animate their avatars, and usually have to tune them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of the present dissertation is to bring 3D human scanning closer to the average user. For this, two different techniques are presented, one passive and one active. The first is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. Our system uses a state-of-the-art shape-from-silhouette technique to retrieve the shape of the subject. However, since this kind of technique does not handle concavities correctly and commonly lacks detail in the facial region, our system proposes an approach to improve the quality of this region. This face enhancement technique uses a generic facial model which is transformed according to the specific facial features of the subject. Moreover, the system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas calculated by combining the color information from several photos. We suppress the color seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by color blending techniques. The second technique features a system to retrieve a fully animatable 3D model of a human using a commercial depth sensor. Unlike other approaches in the current state of the art, our system neither requires the user to remain completely still throughout the scanning process, nor moves the depth sensor around the subject to cover all of its surface. Instead, the depth sensor remains static, and the skeleton tracking information is used to compensate for the user's movements during the scanning stage.
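As an illustration of the motion-compensation idea in the active technique, the sketch below re-poses depth points into a reference frame using tracked bone transforms; the data layout and nearest-bone attachment are assumptions for illustration, not the dissertation's method.

```python
# A minimal sketch of skeleton-based motion compensation: each depth point
# is attached to its nearest tracked bone and mapped back into the pose of
# a reference frame before the scans are fused.
import numpy as np

def compensate(points, bone_pos_now, bone_rot_now, bone_pos_ref, bone_rot_ref):
    """points: (N,3); bone_*: per-bone positions (B,3) and rotation
    matrices (B,3,3) for the current frame and the reference frame."""
    # Attach each point to its nearest bone in the current frame.
    d = np.linalg.norm(points[:, None, :] - bone_pos_now[None], axis=2)
    b = np.argmin(d, axis=1)
    # Express the point in the bone's local frame, then re-pose it with
    # that bone's reference-frame transform.
    local = np.einsum('nij,nj->ni',
                      bone_rot_now[b].transpose(0, 2, 1),
                      points - bone_pos_now[b])
    return np.einsum('nij,nj->ni', bone_rot_ref[b], local) + bone_pos_ref[b]

# Toy usage: one bone, identity rotations, pure translation between frames.
pts = np.random.rand(10, 3)
I = np.eye(3)[None]
out = compensate(pts, np.zeros((1, 3)), I, np.ones((1, 3)), I)
```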
Abstract:
Extended and networked enterprises distribute product design, production planning, and manufacturing regionally if not globally. Employees are therefore confronted with collaborative work across remote sites. Cost-effective collaboration depends heavily on the organization maintaining a common understanding of this kind of work and on suitable support from information and communication technology. The usual face-to-face work is going to be replaced, at least partly if not totally, by computer-mediated collaboration. Creating and maintaining virtual teams is a challenge for work conditions as well as for technology. New developments in cost-effective connections provide not only visual and auditory but also haptic perception. Research results for improving remote collaboration are presented. Individual, social, and cultural aspects are considered as new requirements on the employees of networked and extended enterprises.
Abstract:
University of Magdeburg, Faculty of Computer Science, Dissertation, 2009