993 results for Animation techniques
Abstract:
Management of groundwater systems requires realistic conceptual hydrogeological models as a framework for numerical simulation modelling, but also for system understanding and for communicating this to stakeholders and the broader community. To help overcome these challenges we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally using other software packages can readily be imported into GVS models, as can outputs of simulations (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity displaying dynamic processes. Time and space variations can be presented using a range of contouring and colour mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software can execute on a standard Windows or Linux-based PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute, by download or via USB/DVD/CD. Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. This latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes and visualisation of simulated piezometric surfaces. Both alluvial system GVS models were developed during drought conditions to support government strategies to implement groundwater management. The Surat Basin model was industry-sponsored research for coal seam gas groundwater management and for community information and consultation. The “virtual” groundwater systems in these 3D GVS models can be interactively interrogated by standard functions, plus production of 2D cross-sections, data selection from the 3D scene, a back-end database and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community/stakeholder understanding and knowledge of groundwater systems and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support to management and decision making, and serves as a tool for interpretation of groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera ‘fly-throughs’ and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems.
To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted online sites are included in the references.
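As a purely illustrative sketch of the kind of time-stepped colour mapping described above (not GVS source code), the following Python snippet interpolates water levels from scattered bores onto a grid for each time step and writes one colour-mapped frame per step; the bore positions, level series and file names are synthetic examples.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Synthetic example: 30 bores with water levels recorded over 12 monthly time steps.
rng = np.random.default_rng(0)
bores = rng.uniform(0, 10, size=(30, 2))                              # x, y positions (km)
levels = 50 + np.cumsum(rng.normal(0, 0.5, size=(12, 30)), axis=0)    # water levels (m)

# Regular grid to interpolate onto.
gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))

for t in range(levels.shape[0]):
    surface = griddata(bores, levels[t], (gx, gy), method='cubic')
    plt.figure(figsize=(5, 4))
    plt.contourf(gx, gy, surface, levels=20, cmap='viridis')
    plt.colorbar(label='Standing water level (m)')
    plt.title(f'Month {t + 1}')
    plt.savefig(f'water_level_{t:02d}.png')   # frames can later be assembled into an animation
    plt.close()
```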
Abstract:
It is now widely acknowledged that student mental well-being is a critical factor in the tertiary student learning experience and is important to student learning success. The issue of student mental well-being also has implications for effective student transition out of university and into the world of work. It is therefore vital that intentional strategies are adopted by universities both within the formal curriculum, and outside it, to promote student well-being and to work proactively and preventatively to avoid a decline in student psychological well-being. This paper describes how the Queensland University of Technology Law School is using animation to teach students about the importance of protecting their mental well-being for their learning success. Mayer and Moreno (2002) define an animation as an external representation with three main characteristics: (1) it is a pictorial representation, (2) it depicts apparent movement, and (3) it consists of objects that are artificially created through drawing or some other modelling technique. Research into the effectiveness of animation as a tool for tertiary student learning engagement is a relatively new and growing field of enquiry. Nash argues, for example, that animations provide a “rich, immersive environment [that] encourages action and interactivity, which overcome an often dehumanizing learning management system approach” (Nash, 2009, 25). Nicholas (2008) states that contemporary millennial students in universities today have been immersed in animated multimedia since birth and in fact need multimedia to learn and communicate effectively. However, it has also been established, for example through the work of Lowe (2003, 2004, 2008), that animations can place additional perceptual, attentional, and cognitive demands on students that they are not always equipped to cope with. There are many different genres of animation. The dominant style of animation used in the university learning environment is expository animation. This approach is a useful tool for visualising dynamic processes and is used to support student understanding of subjects and themes that might otherwise be perceived as theoretically difficult and disengaging. It is also a form of animation that can be constructed to avoid any potential negative impact on cognitive load that the animated genre might have. However, the nature of expository animation has limitations for engaging students, and can present as clinical and static. For this reason, the project applied Kombartzky, Ploetzner, Schlag, and Metz’s (2010) cognitive strategy for effective student learning from expository animation, and developed a hybrid form of animation that takes advantage of the best elements of expository animation techniques along with more engaging short narrative techniques. First, the paper examines the existing literature on the use of animation in tertiary educational contexts. Second, the paper describes how animation was used at QUT Law School to teach students about the issue of mental well-being and its importance to their learning success. Finally, the paper analyses the potential of the use of animation, and of the cognitive strategy and animation approach trialled in the project, as a teaching tool for the promotion of student learning about the importance of mental well-being.
Abstract:
3D motion capture is a medium that plots motion, typically human motion, converting it into a form that can be represented digitally. It is a fast-evolving field, and recent inertial technology may provide new artistic possibilities for its use in live performance. Although not often used in this context, motion capture has a combination of attributes that can provide unique forms of collaboration with the performance arts. The inertial motion capture suit used for this study has orientation sensors placed at strategic points on the body to map body motion. Its portability, real-time performance, ease of use, and its immunity from the line-of-sight problems inherent in optical systems suggest it would work well as a live performance technology. Many animation techniques can be used in real time. This research examines a broad cross-section of these techniques using four practice-led cases to assess the suitability of inertial motion capture to live performance. Although each case explores different visual possibilities, all make use of the performativity of the medium, using either an improvisational format or interactivity among stage, audience and screen that would be difficult to emulate any other way. A real-time environment cannot reproduce the depth and sophistication of the animation people have come to expect through media, which typically takes many hours to render. In time, the combination of what can be produced in real time and the tools available in a 3D environment will no doubt create its own tree of aesthetic directions in live performance. The case studies also look at the potential for interactivity that this technology offers.
Abstract:
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
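The thesis itself is not reproduced here, but the core idea of importance-based progressive refinement can be sketched roughly as follows: each refinement pass spends most of its ray budget on regions that the visual importance model scores highly. The region list, importance scores and `trace` callback below are hypothetical placeholders, not the thesis implementation.

```python
import numpy as np

def allocate_samples(importance, budget):
    """Distribute a per-pass ray budget across image regions in proportion
    to their normalised visual importance scores."""
    weights = importance / importance.sum()
    counts = np.floor(weights * budget).astype(int)
    counts[np.argmax(weights)] += budget - counts.sum()   # hand rounding leftovers to the top region
    return counts

def progressive_render(regions, importance, passes=8, budget_per_pass=10_000, trace=None):
    """Progressively refine the image: each pass spends its budget mostly on regions
    flagged as salient. `trace(region, n)` stands in for casting n rays into a region
    and accumulating the result into the image."""
    for _ in range(passes):
        for region, n in zip(regions, allocate_samples(importance, budget_per_pass)):
            if n > 0 and trace is not None:
                trace(region, n)

# Hypothetical example: four regions, the second judged far more salient than the rest.
print(allocate_samples(np.array([0.1, 0.6, 0.2, 0.1]), budget=1000))
```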
Abstract:
The exhibition consists of a series of 9 large-scale cotton rag prints, printed from digital files, and a sound and picture animation on DVD composed of drawings, sound, analogue and digital photographs, and Super 8 footage. The exhibition represents the artist’s experience of Singapore during her residency. Source imagery was gathered from photographs taken at the Bukit Brown abandoned Chinese Cemetery in Singapore, and Australian native gardens in Parkville Melbourne. Historical sources include re-photographed Singapore 19th and early 20th century postcard images. The works use analogue, hand-drawn and digital imaging, still and animated, to explore the digital interface’s ability to combine mixed media. This practice stems from the digital imaging practice of layering, using various media editing software. The work is innovative in that it stretches the idea of the layer composition in a single image by setting each layer into motion using animation techniques. This creates a multitude of permutations and combinations as the two layers move in different rhythmic patterns. The work also represents an innovative collaboration between the photographic practitioner and a sound composer, Duncan King-Smith, who designed sound for the animation based on concepts of trance, repetition and abstraction. As part of the Art ConneXions program, the work travelled to numerous international venues including: Space 217 Singapore, RMIT Gallery Melbourne, National Museum Jakarta, Vietnam Fine Arts Museum Hanoi, and ifa (Institut fur Auslandsbeziehungen) Gallery in both Stuttgart and Berlin.
Abstract:
This article examines Len Lye’s film-making in the 1930s within a broader visual arts context, seeking to clarify the nature and extent of his involvement in British documentary film culture at this time. In particular, it demonstrates how Lye's method of fusing 'live action', found footage, and animation techniques created the possibility of a radical documentary practice that could reconcile promotional advertising and commercial art with avant-garde abstraction and kinaesthetic experimentation. More specifically, the article focusses on Lye's N. or N.W. (1937, 35mm, b&w, 10 mins), arguing that his work from this period should be regarded as central - and not marginal - to any serious reassessment of Britain's “Documentary Movement” of the inter-war era, and to its relations to any history of the cinema and visual culture.
Abstract:
With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual speech animation system, called Mary 101. Two types of experiments were performed: a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests"), and b) gauging visual speech recognition by comparing lip-reading performance on real and synthetic image-sequences of the same utterances ("Intelligibility tests"). Subjects who were presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image-sequences, recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels suggested in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head. However, additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. In addition, these two tasks can be considered as explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between the synthetic and the real image-sequences. The implicit perceptual discrimination task (b) consists of a comparison between visual recognition of speech in real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.
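As an illustration of how the "Turing test" responses described above can be compared against chance level, here is a minimal sketch (not the authors' analysis code) that computes an exact one-sided binomial p-value for a subject's real-versus-synthetic classification accuracy; the counts used are invented.

```python
from math import comb

def binomial_p_value(correct, trials, p_chance=0.5):
    """Exact one-sided test: probability of at least `correct` successes out of
    `trials` if the subject were guessing at the chance rate `p_chance`."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical subject: 34 correct classifications out of 60 presentations.
p = binomial_p_value(34, 60)
print(f"p = {p:.3f}")   # well above 0.05, i.e. not distinguishable from chance guessing
```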
Abstract:
This research presents a study that analyses images of the future in science fiction films, specifically those made using animation techniques, exploring in particular the representation of audiovisual communication media in dialogue with the societies portrayed in the films chosen for analysis. The discussion seeks to answer the question that motivated this research: how are we imagining the future today? Following Morin (1997), it also seeks to understand aspects of contemporary society through cinema and, at the same time, to understand cinema with the aid of social analysis.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This thesis begins with the origin of human interest in movement, briefly tracing the history of animation from its earliest manifestations to the appearance of the GIF format, in order to address the topic of repetition in movement and to establish a parallel between this format and primitive devices for exhibiting animation. The project also raises questions about the uses of the GIF as meme and as artwork, and the possibilities that repetition offers in both cases. Through artistic production and empirical research, several techniques were judged suitable for producing an animation that gives the public a sensation of continuity, with the intent of contributing to the development of the GIF as a language. The effect of movement repetition on spectators was also researched, including whether these responses differed between people of different ages and distinct professions. The research concluded that animation techniques with contrasting visual characteristics were able to give the sensation of continuity, and that the responses to the animations were independent of the social groups to which the spectators belong.
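To make the link to the GIF format concrete, here is a minimal sketch, assuming the Pillow library is available, of assembling procedurally drawn frames into an endlessly looping GIF, the mechanical basis of the repetition discussed above; the circling-dot frames stand in for actual artwork.

```python
import math
from PIL import Image, ImageDraw

# Stand-in frames: a dot circling the canvas, drawn procedurally instead of real artwork.
frames = []
for i in range(24):
    im = Image.new("RGB", (200, 200), "white")
    draw = ImageDraw.Draw(im)
    angle = 2 * math.pi * i / 24          # 24 evenly spaced steps so the loop closes seamlessly
    x, y = 100 + 70 * math.cos(angle), 100 + 70 * math.sin(angle)
    draw.ellipse((x - 10, y - 10, x + 10, y + 10), fill="black")
    frames.append(im)

# loop=0 repeats forever; because frame 24 would coincide with frame 0, playback
# gives the sensation of continuity discussed in the abstract.
frames[0].save("loop.gif", save_all=True, append_images=frames[1:], duration=42, loop=0)
```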
Abstract:
This work focuses on the construction of the physical part of a virtual character. The development presents the 3D modelling, kinematics and animation techniques used to create virtual characters. An implementation is also included, divided into: modelling the virtual character, creating an inverse kinematics system, and creating animations using the kinematics system. First, an accurate 3D model is created from the original design; second, an inverse kinematics system is developed that accurately resolves the positions of the articulated parts that make up the virtual character; and third, animations are created using the kinematics system to achieve fluid and refined animations in real time. As a result, an animated 3D component has been obtained that is reusable, extensible and exportable to other virtual environments.
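As an illustrative sketch of this kind of inverse kinematics solve (not the implementation described in the work), the following function computes joint angles for a planar two-bone limb using the law of cosines; the bone lengths and target position are arbitrary example values.

```python
import math

def two_bone_ik(target_x, target_y, l1, l2):
    """Planar two-joint IK: return (shoulder, elbow) angles in radians that place
    the end effector at the target, clamping unreachable targets to the arm's range."""
    dist = math.hypot(target_x, target_y)
    dist = min(max(dist, abs(l1 - l2) + 1e-6), l1 + l2 - 1e-6)
    # Elbow bend from the law of cosines.
    cos_elbow = (dist ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target minus the offset caused by the bent elbow.
    shoulder = math.atan2(target_y, target_x) - math.atan2(l2 * math.sin(elbow),
                                                           l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: a limb with 0.30 m and 0.25 m bones reaching for a nearby point.
print(two_bone_ik(0.35, 0.20, 0.30, 0.25))
```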
Abstract:
Some clarifications are needed at the beginning of this report to explain certain choices we made in our approach to writing it, as well as the context in which our work was carried out. First, we made two decisions regarding the feminisation of the text. Firstly, we opted to use generic terms in defining the conceptual framework. However, we used the feminine to refer to the teachers and the subjects involved in the research, in order to represent reality faithfully. We subsequently decided to use the feminine in all other parts of the report. In addition, all first names of the subjects and children involved in the research were replaced with pseudonyms to preserve their anonymity. We also chose to present the excerpts drawn from our research data as they are, so that they remain as faithful as possible. However, we allowed ourselves to correct spelling errors out of respect for the French language. Moreover, to avoid weighing down the reading of the fifth chapter, we cite the references of the excerpts that support our statements without reproducing the text. Likewise, we occasionally made the same choice in the sixth chapter, when the excerpts seemed too long or when a faithful reproduction of the verbalisations would have made reading tedious. The writing of the report is the result of sustained teamwork. However, for reasons of efficiency, we divided up the data analysis. As a result, some parts of the report, notably in chapters five and six, were written individually. Different writing styles may therefore be observed in these two chapters, because we chose to respect them. Nevertheless, each researcher submitted her texts to the team, which critiqued and improved them. Finally, it should be mentioned that this document is the product of a twofold undertaking: that of a work team funded under the Programme d’aide à la recherche sur l’enseignement et l’apprentissage of the Ministère de l’éducation du Québec, and that of students in the Master’s programme in education sciences offered by the Université de Sherbrooke. We were thus able to benefit from more than one form of support: financial, institutional and methodological. In our view, the recipe is a good one.
Abstract:
Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While there are increasing numbers of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, because of their complexity, are not readily accessible to most people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.
Abstract:
As a Lecturer of Animation History and 3D Computer Animator, I received a copy of Moving Innovation: A History of Computer Animation by Tom Sito with an element of anticipation, in the hope that this text would clarify the complex evolution of Computer Graphics (CG). Tom Sito did not disappoint, as this text weaves together the multiple development streams and convergent technologies and techniques throughout history that would ultimately result in modern CG. Universities now have students who have never known a world without computer animation, and many students are younger than the first 3D CG animated feature film, Toy Story (1995); this text is ideal for teaching computer animation history and, as I would argue, it also provides a model for engaging young students in the study of animation history in general. This is because Sito places the development of computer animation within the context of its pre-digital ancestry, and throughout the text he continues to link the discussion to the broader history of animation, its pioneers, technologies and techniques...
Abstract:
As an animator and practice-based researcher with a background in games development, I am interested in technological change in the video game medium, with a focus on the tools and technologies that drive game character animation and interactive story. In particular, I am concerned with the issue of ‘user agency’, or the ability of the end user to affect story development—a key quality of the gaming experience and essential to the aesthetics of gaming, which is defined in large measure by its interactive elements. In this paper I consider the unique qualities of the video game as an artistic medium and the impact that these qualities have on the production of animated virtual character performances. I discuss the somewhat oppositional nature of animated character performances found in games from recent years, which range from inactive to active—in other words, low to high agency. Where procedural techniques (based on coded rules of movement) are used to model dynamic character performances, the user has the ability to interactively affect characters in real-time within the larger sphere of the game. This game play creates a high degree of user agency. However, it lacks the aesthetic nuances of the more crafted sections of games: the short cut-scenes, or narrative interludes where entire acted performances are mapped onto game characters (often via performance capture) and constructed into relatively cinematic representations. While visually spectacular, cut-scenes involve minimal interactivity, so user agency is low. Contemporary games typically float between these two distinct methods of animation, from a focus on user agency and dynamically responsive animation to a focus on animated character performance in sections where the user is a passive participant. We tend to think of the majority of action in games as taking place via playable figures: an avatar or central character that represents a player. However, there is another realm of characters that also partake in actions ranging from significant to incidental: non-playable characters, or NPCs, which populate action sequences where game play takes place as well as cut-scenes that unfold without much or any interaction on the part of the player. NPCs are the equivalent of supporting roles, bit characters, or extras in the world of cinema. Minor NPCs may simply be background characters or enemies to defeat, but many NPCs are crucial to the overall game story. It is my argument that, thus far, no game has successfully utilized the full potential of these characters to contribute toward the development of interactive, high-performance action. In particular, a type of NPC that I have identified as ‘pivotal’—those constituting the supporting cast of a video game—are essential to the telling of a game story, particularly in genres that focus on story and characters: adventure games, action games, and role-playing games. A game story can be defined as the entirety of the narrative, told through non-interactive cut-scenes as well as interactive sections of play, and the development of more complex stories in games clearly impacts the animation of NPCs. I argue that NPCs in games must be capable of acting with emotion throughout a game—in the cut-scenes, which are tightly controlled, but also in sections of game play, where player agency can potentially alter the story in real-time.
When the animated performance of NPCs and user agency are not continuous throughout the game, the implication is that game stories may be primarily told through short movies within games, making it more difficult to define video game animation as a distinct artistic medium.