554 results for Audio-visual product
Abstract:
This doctoral thesis comprises three distinct yet related projects which investigate interdisciplinary practice across music collaboration, mime performance, and corporate communication. Both the processes and the underpinning research of these projects explore, expose and exploit areas where disparate and apparently conflicting fields of professional practice successfully and effectively intersect, interact, and inform each other - rather than conflict - thereby enhancing each, both individually and collectively. Informed by three decades of professional practice across music, stage performance, television, corporate communication, design, and tertiary education, the three projects have produced innovative, creative, and commercially viable outcomes, manifest in a variety of media including music, written text, digital audio-visual media, and the internet. In exploring new practice and creating new knowledge, these project outcomes clearly demonstrate the value and effectiveness of reconciling disparate fields of practice through the application of interdisciplinary creativity and innovation to professional practice.
Abstract:
Instrumental music performance is a well-established case of real-time interaction with technology and, when extended to ensembles, of interaction with others. However, these interactions are fleeting and the opportunities to reflect on action are limited, even though audio and video recording has recently provided important opportunities in this regard. In this paper we report on research to further extend these reflective opportunities through the capture and visualization of gestural data collected during collaborative virtual performances, specifically using the digital media instrument Jam2jam AV and the purpose-built visualization software Jam2jam AV Visualize. We discuss how such visualization may assist performance development and understanding. The discussion engages with issues of representation, authenticity of virtual experiences, intersubjectivity and wordless collaboration, and creativity support. Two usage scenarios are described showing that collaborative intent is evident in the data visualizations more clearly than in audio-visual recordings alone, indicating that the visualization of performance gestures can be an efficient way of identifying deliberate and co-operative performance behaviours.
Abstract:
Mainstream representations of trans people typically run the gamut from victim to mentally ill and are almost always articulated by non-trans voices. The era of user-generated digital content and participatory culture has heralded unprecedented opportunities for trans people who wish to speak their own stories in public spaces. Digital Storytelling, as an easily accessible autobiographical audio-visual form, offers scope to play with multi-dimensional and ambiguous representations of identity that contest mainstream assumptions of what it is to be ‘male’ or ‘female’. Also, unlike mainstream media forms, online and viral distribution of Digital Stories offers the potential to reach a wide range of audiences, which is appealing to activist-oriented storytellers who wish to confront social prejudices. However, with these newfound possibilities come concerns regarding visibility and privacy, especially for storytellers who are all too aware of the risks of being ‘out’ as trans. This paper explores these issues from the perspective of three trans storytellers, with reference to the Digital Stories they have created and shared online and on DVD. These exemplars are contextualised with popular and scholarly perspectives on trans representation, in particular embodied and performed identity. It is contended that trans Digital Stories, while appearing in some ways to be quite conventional, actually challenge common notions of gender identity in ways that are both radical and transformative.
Abstract:
To detect and annotate the key events of live sports videos, we need to tackle the semantic gaps of audio-visual information. Previous work has successfully extracted semantics from time-stamped web match reports, which are synchronized with the video contents. However, web and social media articles with no time-stamps have not been fully leveraged, despite being increasingly used to complement the coverage of major sporting tournaments. This paper aims to address this limitation using a novel multimodal summarization framework based on sentiment analysis and players' popularity. It uses audio-visual contents, web articles, blogs, and commentators' speech to automatically annotate and visualize the key events and key players in a sports tournament coverage. The experimental results demonstrate that the automatically generated video summaries are aligned with the events identified from the official website match reports.
Abstract:
As the popularity of video as an information medium rises, the amount of video content that we produce and archive keeps growing. This creates a demand for shorter representations of videos in order to assist the task of video retrieval. The traditional solution is to let humans watch these videos and write textual summaries based on what they saw. This summarisation process, however, is time-consuming. Moreover, a lot of useful audio-visual information contained in the original video can be lost. Video summarisation aims to turn a full-length video into a more concise version that preserves as much information as possible. The problem of video summarisation is to minimise the trade-off between how concise and how representative a summary is. There are also usability concerns that need to be addressed in a video summarisation scheme. To solve these problems, this research aims to create an automatic video summarisation framework that combines and improves on existing video summarisation techniques, with a focus on practicality and user satisfaction. We also investigate the need for different summarisation strategies for different kinds of videos, for example news, sports, or TV series. Finally, we develop a video summarisation system based on the framework, which is validated by subjective and objective evaluation. The evaluation results show that the proposed framework is effective for creating video skims, producing a high user satisfaction rate while having reasonably low computing requirements. We also demonstrate that the techniques presented in this research can be used for visualising video summaries in the form of web pages showing various useful information, both from the video itself and from external sources.
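The conciseness/representativeness trade-off described in this abstract can be illustrated with a minimal greedy sketch. This is an illustration only, not the thesis framework: the `greedy_skim` helper, the per-segment scores and the duration budget are all hypothetical.

```python
# Hypothetical sketch of a skim-selection trade-off: choose segments that
# maximise a representativeness score while keeping the summary concise.
# Segments are (score, duration) pairs; scores and durations are illustrative.

def greedy_skim(segments, budget):
    """Greedily pick segments by score density (score per second) until the
    duration budget for the video skim is exhausted."""
    chosen, used = [], 0.0
    for score, duration in sorted(segments,
                                  key=lambda s: s[0] / s[1], reverse=True):
        if used + duration <= budget:
            chosen.append((score, duration))
            used += duration
    return chosen
```

A real summariser would derive the scores from audio-visual features rather than fix them by hand, but the budgeted selection step has this general shape.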
Abstract:
This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. It has been previously shown in our own work, and in the work of others, that features extracted from a speaker's moving lips hold speaker dependencies which are complementary to speech features. We demonstrate that the fusion of lip and speech information allows for a highly robust speaker verification system which outperforms either sub-system alone. We present a new technique for determining the weighting to be applied to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective for reducing the false acceptance and false rejection error rates in the presence of background noise.
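A minimal sketch of the score-level fusion idea described in this abstract, assuming per-modality match scores on a common scale. The weight `alpha`, the threshold and the function names are illustrative stand-ins, not the authors' weighting technique, which the paper derives to optimise the fused system under noise.

```python
# Illustrative score-level fusion for audio-visual speaker verification.
# alpha weights the speech stream; (1 - alpha) weights the lip stream.
# In practice alpha would be tuned to the noise conditions (hypothetical here).

def fuse_scores(speech_score: float, lip_score: float,
                alpha: float = 0.7) -> float:
    """Combine per-modality match scores as a convex combination."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * speech_score + (1.0 - alpha) * lip_score

def verify(speech_score: float, lip_score: float,
           alpha: float, threshold: float) -> bool:
    """Accept the claimed identity when the fused score clears the threshold."""
    return fuse_scores(speech_score, lip_score, alpha) >= threshold
```

Lowering `alpha` shifts reliance toward the lip stream, which is the behaviour one would want when the acoustic channel is noisy.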
Abstract:
Investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to be highly robust in the presence of noise. The fusion structure for the audio and visual information is based around the use of multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Recent work with multi-modal MSHMMs has been performed successfully for the task of speech recognition. The use of temporal lip information for speaker identification has been investigated previously (T.J. Wark et al., 1998); however, this was restricted to output fusion via single-stream HMMs. We present an extension to this previous work, and show that an MSHMM is a valid structure for multi-modal speaker identification.
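The stream combination underlying a multi-stream HMM can be sketched as a weighted sum of per-stream log-likelihoods. This is a hedged illustration of the general MSHMM combination rule, not this paper's models: the stream weights `lam_a`/`lam_v`, the speaker scores and the `identify` helper are all hypothetical.

```python
# Illustrative MSHMM-style stream combination: the joint observation
# log-likelihood is a weighted sum of the audio and visual stream
# log-likelihoods, with stream exponents lam_a and lam_v (hypothetical values).

def combined_log_likelihood(log_p_audio: float, log_p_visual: float,
                            lam_a: float = 0.6, lam_v: float = 0.4) -> float:
    """log p(o) = lam_a * log p(o_audio) + lam_v * log p(o_visual)."""
    return lam_a * log_p_audio + lam_v * log_p_visual

def identify(speakers: dict, lam_a: float = 0.6, lam_v: float = 0.4) -> str:
    """Closed-set identification: pick the speaker model with the highest
    combined log-likelihood. `speakers` maps a speaker id to the pair
    (audio_log_likelihood, visual_log_likelihood) for the test utterance."""
    return max(speakers,
               key=lambda s: combined_log_likelihood(*speakers[s],
                                                     lam_a, lam_v))
```

Because the streams are kept independent up to this combination step, a noisy audio stream can be down-weighted without retraining the visual models.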
Abstract:
To sustain the ongoing rapid growth of video information, there is an emerging demand for a sophisticated content-based video indexing system. However, current video indexing solutions are still immature and lack any standard. This doctoral thesis presents research based on an integrated multi-modal approach to sports video indexing and retrieval. By combining specific features extractable from multiple audio-visual modalities, generic structure and specific events can be detected and classified. During browsing and retrieval, users benefit from the integration of high-level semantics and descriptive mid-level features such as the whistle and close-up views of player(s).
Abstract:
The thesis is an examination of how Japanese popular culture products are remade (rimeiku). Adaptation of manga, anime and television drama, from one format to another, frequently occurs within Japan. The rights to these stories and texts are traded in South Korea and Taiwan. The ‘spin-off’ products form part of the Japanese content industry. When products are distributed and remade across geographical boundaries, they have a multi-dimensional aspect and potentially contribute to an evolving cultural re-engagement between Japan and East Asia. The case studies are the television dramas Akai Giwaku and Winter Sonata and two manga, Hana yori Dango and Janguru Taitei. Except for the television drama Winter Sonata, these texts originated in Japan. Each study shows how remaking occurs across geographical borders. The study argues that Japan has been slow to recognise the value of its popular culture through regional and international media trade. Japan is now taking steps to remedy this strategic shortfall to enable the long-term viability of the Japanese content industry. The study includes an examination of how remaking raises legal issues in the appropriation of media content. Unauthorised copying and piracy contribute to loss of financial value. To place the three Japanese cultural products into a historical context, the thesis includes an overview of Japanese copying culture from its early origins through to the present day. The thesis also discusses the Meiji restoration and the post-World War II restructuring that resulted in Japan becoming a regional media powerhouse. The localisation of Japanese media content in South Korea and Taiwan also brings with it significant cultural influences, which may be regarded as contributing to a better understanding of East Asian society in line with the idea of regional ‘harmony’.
The study argues that the commercial success of Japanese products beyond Japan is governed by perceptions of the quality of the story and by the cultural frames of the target audience. The thesis draws on audience research to illustrate the loss or reinforcement of national identity as a consequence of cross-cultural trade. The thesis also examines the contribution to Japanese ‘soft power’ (Nye, 2004, p. x). The study concludes with recommendations for the sustainability of the Japanese media industry.
Abstract:
Can China improve the competitiveness of its culture in world markets? Should it focus less on quantity and more on quality? How should Chinese cultural producers and distributors target audiences overseas? These are important questions facing policy makers today. In this paper I investigate how China might best deploy its soft power capabilities: for instance, should it try to demonstrate that it is a creative, innovative nation, capable of original ideas? Or should it put the emphasis on validating its credentials as an enduring culture and civilisation? In order to investigate these questions I introduce the cultural innovation timeline, a model that explains how China is adding value. There are six stages in the timeline but I will focus in particular on how the timeline facilitates cultural trade. In the second part of the paper I look at some of the challenges facing China, particularly the reception of its cultural products in international markets.
Abstract:
In 2011 Queensland suffered both floods and cyclones, leaving residents without homes and their communities in ruins. This paper presents how researchers from QUT, who are also members of the Oral History Association of Australia (OHAA) Queensland chapter, are using oral history, photographs, videography and digital storytelling to help heal and empower rural communities around the state, and how evaluation has become a key element of our research. QUT researchers ran storytelling workshops in the capital city of Brisbane in early 2011, after the city suffered severe flooding. Cyclone Yasi then struck the town of Cardwell (in February 2011), destroying its historical museum and recording equipment. We delivered an 'emergency workshop', offering participants hands-on use of the equipment together with ethics and interviewing theory, so that the community could start to build a new collection. We included oral history workshops as well as sessions on how best to use a video camera and digital camera, and creative writing sessions, so the community would also know how to make 'products' or exhibition pieces out of the interviews they were recording. We returned six months later to conduct follow-up workshops, and the material produced by and with the community was remarkable. More funding has now been secured to replicate the audio/visual/writing workshops in other remote rural Queensland communities, including Townsville, Mackay, Cunnamulla and Toowoomba in 2012, highlighting the need for a multimedia approach to leverage the most out of oral history interviews as a mechanism to restore and promote community resilience and pride.
Abstract:
“Supermassive” is a synchronised four-channel video installation with sound. Each video channel shows a different camera view of an animated three-dimensional scene, which visually references galactic or astral imagery. This scene comprises forty-four separate clusters of slowly orbiting white text. Each cluster refers to a different topic that has been sourced online. The topics are diverse, with recurring subjects relating to spirituality, science, popular culture, food and experiences of contemporary urban life. The slow movements of the text and camera views are reinforced through a rhythmic, contemplative soundtrack. As an immersive installation, “Supermassive” operates somewhere between a meditational mind map and a representation of a contemporary data stream. “Supermassive” contributes to studies in the field of contemporary art. It is particularly concerned with the ways that graphic representations of language can operate in the exploration of contemporary lived experiences, whether actual or virtual. Artists such as Ed Ruscha and Christopher Wool have long explored the emotive and psychological potentials of graphic text. Other artists such as Doug Aitken and Pipilotti Rist have engaged with the physical and spatial potentials of audio-visual installations to create emotive and symbolic experiences for their audiences. Using a practice-led research methodology, “Supermassive” extends these creative inquiries. By creating a reflective atmosphere in which divergent textual subjects are pictured together, the work explores not only how we navigate information, but also how such navigations inform understandings of our physical and psychological realities. “Supermassive” has been exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.
Abstract:
An experiment in large-scale, live game design and public performance, bringing together participants from across the creative arts to design, deliver and document a project that was both a cooperative learning experience and an experimental public performance. The four-month project, funded by the Edge Digital Centre, culminated in a 24-hour ARG event involving over 100 participants in December 2012. Using the premise of a viral outbreak, young enthusiasts auditioned for the roles of Survivor, Zombie, Medic and Military. The main objective was for the Survivors to complete a series of challenges over 24 hours, while the other characters fulfilled their opposing objectives of interference and sabotage, supported by both scripted and free-form scenarios staged in constructed scenes throughout the venues. The event was set in the State Library of Queensland and the Edge Digital Centre, which granted the project full access, night and day, to all areas, including public, office and underground areas. These venues were transformed into cinematic settings full of interactive props and various audio-visual effects. The ZomPoc Project was an innovative experiment in writing and directing a large-scale, live public performance, bringing together participants from across the creative industries. In order to design such an event, a number of innovative resources were developed, exploiting techniques of game design, theatre, film, television and tangible media production. A series of workshops invited local artists, scientists, technicians and engineers to find new ways of collaborating to create networked artifacts, experimental digital works, robotic props, modular set designs, sound effects and unique costuming, guided by an innovative multi-platform script developed by Deb Polson. The result of this collaboration was the creation of innovative game and set props, both atmospheric and interactive.
Such works animated the space, presented story clues and facilitated interactions between strangers who found themselves sharing a unique experience in unexpected places.