590 results for Animation (Cinematography)
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
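The abstract does not describe FACSGen's programming interface, but the idea of animating action units at arbitrary intensities, alone or in combination, can be pictured with a minimal sketch. All class, method, and AU-scale choices below are hypothetical assumptions, not the actual FACSGen API.

```python
# Hypothetical sketch: representing FACS action units (AUs) with intensities and
# combining them into one expression, in the spirit of what the abstract describes.
# Names and the 0.0-1.0 intensity scale are assumptions, not the FACSGen API.
from dataclasses import dataclass, field


@dataclass
class Expression:
    """A facial expression as AU -> intensity pairs (0.0 = absent, 1.0 = maximal)."""
    aus: dict[int, float] = field(default_factory=dict)

    def set_au(self, au: int, intensity: float) -> "Expression":
        # Clamp intensity so combinations stay within the valid range.
        self.aus[au] = max(0.0, min(1.0, intensity))
        return self

    def combine(self, other: "Expression") -> "Expression":
        """Merge two expressions, keeping the stronger activation per AU."""
        merged = Expression(dict(self.aus))
        for au, value in other.aus.items():
            merged.aus[au] = max(merged.aus.get(au, 0.0), value)
        return merged


# Example: a smile-like combination (AU6 cheek raiser + AU12 lip corner puller)
# blended with raised brows (AU1 + AU2).
smile = Expression().set_au(6, 0.7).set_au(12, 0.9)
raised_brows = Expression().set_au(1, 0.8).set_au(2, 0.8)
blended = smile.combine(raised_brows)
print(blended.aus)  # {6: 0.7, 12: 0.9, 1: 0.8, 2: 0.8}
```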
Abstract:
Sign language animations can lead to better accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because sign language differs from the spoken/written language in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations. Their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support of facial expressions in the generation of sign language animation could make this technology more acceptable for deaf people. In this survey, we discuss the challenges in facial expression synthesis and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and some background on animating facial expressions, we then discuss the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey goes on to introduce the work from the five projects under consideration. Their contributions are compared in terms of support for specific sign languages, categories of facial expressions investigated, focus range in the animation generation, use of annotated corpora, input data or hypotheses for their approach, and other factors. Strengths and drawbacks of the individual projects are identified from these perspectives. The survey concludes with our current research focus in this area and future prospects.
Abstract:
The digital archive of Sociologisk Forskning
Abstract:
As far back as I can remember, I have always been interested in studio art. Whether it be painting, drawing, printmaking, or photography, it has consistently been a part of my life. Upon enrolling at Colby, I became interested in computers and decided to major in Computer Science for my undergraduate college career. Not forgetting past interests, I continued my studio art education, taking several classes within the Art department. In due time, I combined these interests and began studying Computer Graphics and Design. With limited resources in this field at Colby, the majority of my computer graphics education and experience has been gained on my own time apart from regular classroom work. As time progressed, so did my interests. Starting with simple image manipulation of digitally scanned photographs, I moved on to Web Page design, eventually leading to Desktop Publishing. Ultimately, I wanted to take a step further and expand my overall computer graphics knowledge by learning 3D modeling and animation. With even fewer resources in 3D animation at Colby, I anticipated having trouble finding the information and tools I would need to gain the necessary skills for this new field. The Senior Scholars program gave me the opportunity to find and acquire the necessary tools to pursue my interest. This program also allowed me to devote the proper amount of time required for learning these new tools.
Abstract:
Valuing a company nowadays is commonplace and, in most cases, relatively easy to do when the company fits the available methodologies. This work sets out to value a company with unusual characteristics that hinder the use of the more traditional methods in use today. Dreamworks Animation SKG, a US studio specializing in computer-graphics audiovisual productions for the cinema, is characterized by holding mostly intangible assets, both as raw material and as final product: stories that are turned into computer-animated films. This scenario creates some peculiarities, such as the difficulty of forecasting future financial data, since the company's revenues are strongly tied to the success and/or failure of the productions it releases, content that is highly subjective, as is everything related to art. The work proposes a solution that allows a degree of reliability in forecasting the revenues generated by the release of a production. There is a strong relationship between how well a release is received in its opening days and the high return it generates, whereas no such relationship exists between a production's cost and its box-office success.
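The abstract's central finding, that opening-days reception tracks a production's return while production cost does not, amounts to comparing two correlations. The sketch below is only an illustration of that comparison; the function and variable names are assumptions, and the per-film figures would come from the study's own data, which is not reproduced here.

```python
# Minimal sketch of the kind of comparison the abstract describes: checking whether
# opening-days reception or production cost better tracks a film's return.
# Variable names are assumptions; data loading is left as a placeholder.
from statistics import correlation  # available in Python 3.10+


def compare_predictors(opening_reception, production_cost, total_return):
    """Return Pearson correlations of each candidate predictor with the return."""
    return {
        "opening_reception_vs_return": correlation(opening_reception, total_return),
        "production_cost_vs_return": correlation(production_cost, total_return),
    }


# Usage (with real per-film figures, not shown here):
# results = compare_predictors(opening_reception, production_cost, total_return)
# The study's finding corresponds to the first correlation being strong
# and the second being weak.
```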
Abstract:
The examination of traffic accidents is daily routine in forensic medicine. An important question in the analysis of the victims of traffic accidents, for example in collisions between motor vehicles and pedestrians or cyclists, is the impact situation. Apart from forensic medical examinations (external examination and autopsy), three-dimensional technologies and methods are gaining importance in forensic investigations. Besides post-mortem multi-slice computed tomography (MSCT) and magnetic resonance imaging (MRI) for the documentation and analysis of internal findings, highly precise 3D surface scanning is employed for the documentation of the external body findings and of injury-inflicting instruments. The correlation of the body's injuries to the injury-inflicting object and the accident mechanism is of great importance. The applied methods include documentation of the external and internal body, the involved vehicles, and the inflicting tools, as well as the analysis of the acquired data. The body surface and the accident vehicles with their damages were digitized by 3D surface scanning. For the internal findings of the body, post-mortem MSCT and MRI were used. The analysis included the processing of the obtained data into 3D models, determination of the driving direction of the vehicle, correlation of injuries to the vehicle damages, geometric determination of the impact situation, and evaluation of further findings of the accident. In the following article, the benefits of 3D documentation and of computer-assisted, drawn-to-scale 3D comparison of the relevant injuries with the damage to the vehicle in the analysis of the course of accidents, especially with regard to the impact situation, are demonstrated on two examined cases.
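The drawn-to-scale 3D comparison of injuries with vehicle damage described above can be pictured as a nearest-point check between two digitized point sets once the body model is placed in a candidate impact position. The following is only an illustrative sketch; the function name, data layout, and tolerance are assumptions and do not reflect the software actually used in the examined cases.

```python
# Hypothetical sketch of a drawn-to-scale 3D comparison: after placing the scanned
# body model in a candidate impact position relative to the vehicle model, report how
# closely each documented injury lies to a documented vehicle damage location.
# All names, thresholds, and data structures are illustrative assumptions.
import numpy as np


def match_injuries_to_damage(injury_points, damage_points, tolerance_mm=50.0):
    """For each injury point (N x 3, in mm), find the nearest damage point (M x 3)
    and flag whether the distance falls within the given tolerance."""
    injuries = np.asarray(injury_points, dtype=float)
    damages = np.asarray(damage_points, dtype=float)
    # Pairwise distances between injury locations and damage locations (N x M).
    dists = np.linalg.norm(injuries[:, None, :] - damages[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    return [
        {
            "injury": i,
            "nearest_damage": int(j),
            "distance_mm": float(dists[i, j]),
            "consistent": bool(dists[i, j] <= tolerance_mm),
        }
        for i, j in enumerate(nearest)
    ]
```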
Abstract:
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: First, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the involved objects. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g., for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes, and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques only. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
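The abstract states that XSAMPL3D encodes actions in a compact, human-readable XML format but does not give the schema. The sketch below generates a hypothetical XSAMPL3D-style description with Python's standard library; all element and attribute names are invented for illustration and are not the actual XSAMPL3D vocabulary.

```python
# Hypothetical sketch of emitting an XSAMPL3D-style action description.
# The element and attribute names are invented; the abstract only states that
# actions are encoded in compact, human-readable XML, not the actual schema.
import xml.etree.ElementTree as ET


def build_action_description() -> str:
    root = ET.Element("actionSequence", attrib={"demonstrator": "human01"})
    # One high-level action: grasping an object and placing it on a target surface.
    action = ET.SubElement(root, "action", attrib={"type": "pickAndPlace", "object": "cup"})
    ET.SubElement(action, "grasp", attrib={"hand": "right", "handShape": "cylindrical"})
    ET.SubElement(action, "placeAt", attrib={"target": "table", "x": "0.4", "y": "0.0", "z": "0.8"})
    return ET.tostring(root, encoding="unicode")


print(build_action_description())
```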