845 results for Turning voice emotion into graphical movement
Abstract:
The 5th World Summit on Media for Children and Youth, held in Karlstad, Sweden in June 2010, provided a unique media literacy experience for approximately thirty young people from diverse backgrounds through participation in the Global Youth Media Council. This article focuses on the Summit’s aim to give young people a ‘voice’ through intercultural dialogue about media reform. The accounts of four young Australians are discussed in order to consider how successful the Summit was in achieving this goal. The article concludes by making recommendations for future international media literacy conferences involving young people. It also advocates for the expansion of the Global Youth Media Council concept as a grassroots movement to involve more young people in discussions about media reform.
Abstract:
An ability to recognise and resolve ethical dilemmas was identified by the Australian Law Reform Commission as one of the ten fundamental lawyering skills. While the ‘Priestley 11’ list of areas of law required to qualify for legal practice includes ethics and professional responsibility, the commitment to ethics learning in Australian law schools has been far from uniform. The obligation imposed by the Priestley 11 is frequently discharged by a traditional teaching and learning approach involving lectures and/or tutorials and focusing on the content of the formal rules of professional responsibility. However, the effectiveness of such an approach is open to question. Instead, a practical rather than a theoretical approach to the teaching of legal ethics is required. Effective final-year student learning of ethics may be achieved by an approach which engages students, enabling them to appreciate the relevance of what they are learning to the real world and facilitating their transition from study to their working lives. Entry into Valhalla comprises a suite of modules featuring ‘machinima’ (computer-generated imagery) created using the Second Life virtual environment to contextualise otherwise abstract concepts. It provides an engaging learning environment which enables students to obtain an appreciation of ethical responsibility in a real-world context and facilitates understanding and problem-solving ability.
Abstract:
Gesture in performance is widely acknowledged in the literature as an important element in making a performance expressive and meaningful. The body has been shown to play an important role in the production and perception of vocal performance in particular. This paper examines the role of gesture in creative works that seek to extend vocal performance via technology. A creative work for vocal performer, laptop computer and a Human Computer Interface called the eMic (Extended Microphone Stand Interface controller) is presented as a case study to explore the relationships between movement, voice production, and musical expression. The eMic is an interface for live vocal performance that allows the singer’s gestures and interactions with a sensor-based microphone stand to be captured and mapped to musical parameters. The creative work discussed in this paper presents a new compositional approach for the eMic by working with movement as a starting point for the composition and thus using choreographed gesture as the basis for musical structures. By foregrounding the body and movement in the creative process, the aim is to create a more visually engaging performance in which the performer is able to use the body more effectively to express their musical objectives.
Abstract:
Travel time is an important network performance measure that quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow due to traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilises cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations. The complexity with respect to exit-movement-specific travel time is also discussed. Recently, we developed a methodology utilising the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probe vehicles (Probe-only). The performance of the proposed methodology is insensitive to different route flows, with an average accuracy of more than 94% given one probe per estimation interval, which is more than a 5% improvement in accuracy over the Probe-only method.
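The classical cumulative-plot procedure referenced in this abstract can be sketched as follows. This is a minimal, generic illustration (not the authors' fused detector/signal/probe method): assuming conservation of flow and first-in-first-out behaviour, the travel time of the n-th vehicle is the horizontal distance between the upstream and downstream cumulative count curves at count n. The function name and example timestamps are illustrative.

```python
def travel_times_from_cumulative_plots(t_up, t_down):
    """Estimate per-vehicle travel times from upstream/downstream
    detector crossing times.

    Under conservation of flow (no mid-link sinks/sources) and FIFO,
    the n-th cumulative count is reached upstream at t_up[n] and
    downstream at t_down[n]; the horizontal gap between the two
    cumulative plots is that vehicle's travel time.
    """
    t_up, t_down = sorted(t_up), sorted(t_down)
    n = min(len(t_up), len(t_down))  # ignore vehicles not yet observed downstream
    return [d - u for u, d in zip(t_up[:n], t_down[:n])]

# Example: three vehicles cross upstream at t=0, 10, 20 s and
# downstream at t=30, 42, 55 s
tt = travel_times_from_cumulative_plots([0, 10, 20], [30, 42, 55])  # [30, 32, 35]
```

The abstract's point about non-conservation of flow is visible here: if a vehicle exits mid-link, the downstream count lags the upstream count and the naive count-matching above pairs the wrong vehicles, which is precisely what the proposed exit-movement-specific cumulative plots address.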
Abstract:
Visual activity detection based on lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted on visual voice activity detection (VVAD) has neglected variabilities in the visual domain such as viewpoint variation. In this paper we investigate the effectiveness of visual information from the speaker’s frontal and profile views (i.e. left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful in the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker’s face may not always be frontal.
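The class-conditional likelihood-ratio decision at the heart of a GMM-based VVAD framework can be illustrated with a simplified sketch. To stay self-contained, this uses a single diagonal-covariance Gaussian per class (a one-component stand-in for a full GMM) and made-up lip-region feature vectors rather than features extracted from the CUAVE data; all names and values are illustrative.

```python
import math

def fit_diag_gaussian(data):
    """Fit a diagonal-covariance Gaussian (a one-component stand-in
    for a per-class GMM) to a list of feature vectors."""
    n, d = len(data), len(data[0])
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    # Small floor on the variance avoids division by zero
    var = [sum((x[j] - mean[j]) ** 2 for x in data) / n + 1e-6 for j in range(d)]
    return mean, var

def log_likelihood(x, model):
    """Log-density of feature vector x under a diagonal Gaussian."""
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

# Stand-in lip features (e.g. mouth-opening width/height); a real system
# would extract these from frontal or profile face video.
speech_frames = [[3.1, 2.9], [2.8, 3.2], [3.3, 3.0], [2.9, 2.7]]
silence_frames = [[0.1, -0.2], [-0.1, 0.0], [0.2, 0.1], [0.0, -0.1]]
model_speech = fit_diag_gaussian(speech_frames)
model_silence = fit_diag_gaussian(silence_frames)

def is_speaking(frame):
    # Likelihood-ratio decision between the speech and non-speech models
    return log_likelihood(frame, model_speech) > log_likelihood(frame, model_silence)
```

In the full framework described by the abstract, each class model would be a multi-component GMM trained separately on frontal and profile view features, and the same likelihood comparison would decide speech versus non-speech per frame.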