958 results for Audio director
Abstract:
The thesis examines the differences between working as a sound designer on smaller indie projects and on larger game productions. It also addresses the differences between working as an in-house sound designer and working as a consultant, as well as how other developers view audio in games. The method used was qualitative interviews, and three interviews were conducted. The analysis of the interviews is based on the thesis's aim and research question, together with previous research on audio in games and on organisational sociology. The study shows that there are major differences between the various productions, and even between different indie projects. Communication with the rest of the team and the administrative work differ between projects. The larger projects have an advantage here thanks to greater experience and the use of an audio director.
Improved speech recognition using adaptive audio-visual fusion via a stochastic secondary classifier
Abstract:
Critical skills such as identifying and appreciating issues that confront firms engaging in international business, and the ability to undertake creative decision-making, are considered fundamental to the study of International Business. It has been argued that using audio-visual case studies can help develop such skills. However, doing so is difficult due to a lack of Australian case studies. This paper reviews the literature outlining the advantages believed to result from the use of audio-visual case studies, describes a project implemented in a large cohort of students studying International Business, reports on a pilot evaluation of the project, and outlines the findings and conclusions of the survey.
Abstract:
Acoustically, car cabins are extremely noisy, and as a consequence audio-only in-car voice recognition systems perform poorly. As the visual modality is immune to acoustic noise, using the visual lip information from the driver is seen as a viable strategy for circumventing this problem via audio-visual automatic speech recognition (AVASR). However, implementing AVASR requires a system that can accurately locate and track the driver's face and lip area in real time. In this paper we present such an approach using the Viola-Jones algorithm. Using the AVICAR [1] in-car database, we show that the Viola-Jones approach is a suitable method for locating and tracking the driver's lips for audio-visual speech recognition, despite visual variability in illumination and head pose.
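For readers unfamiliar with how a Viola-Jones front-end is typically wired up, the sketch below uses OpenCV's Haar-cascade detector (an implementation of the Viola-Jones algorithm) to locate a face in each video frame and crop a rough lip region. This is not the system evaluated in the paper: the video file name and the lower-third lip heuristic are illustrative assumptions.

```python
# Minimal sketch: Viola-Jones style face detection with OpenCV's Haar cascades,
# followed by a crude lip-region crop. Assumes opencv-python is installed.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("driver_footage.avi")  # hypothetical in-car recording
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces with the boosted Haar-feature cascade (Viola-Jones).
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Take the lower third of the face box as a rough lip region of interest;
        # in an AVASR pipeline this crop would feed the visual front-end.
        lip_roi = frame[y + 2 * h // 3 : y + h, x : x + w]
        cv2.rectangle(frame, (x, y + 2 * h // 3), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("lip tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```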
Abstract:
Working with 12 journalism students and a research assistant, producer/director Romano conducted five community focus groups and discussions with 80 people on the street. These provided the themes, concepts, and creative approaches for each program. Each program was structured around one of the emergent themes; all programs offered different voices rather than coming to a single conclusion. New Horizons, New Homes aired over three weeks on Radio 4EB and was entered into the 2005 UN Media Peace Award, where it won the Best Radio category ahead of ABC and SBS. The UN commended the way in which the programs brought together a wide base of research to create a better understanding of this issue in the community. This project did not just improve the accuracy and social inclusiveness of reporting; it applied principles of deliberative democracy in the creation of journalism that enhances citizens' deliberative potential on complex social issues.
Abstract:
Acoustically, car cabins are extremely noisy, and as a consequence existing audio-only speech recognition systems, used for voice-based control of vehicle functions such as GPS-based navigation, perform poorly. Audio-only speech recognition systems fail to make use of the visual modality of speech (e.g., lip movements). As the visual modality is immune to acoustic noise, utilising this visual information in conjunction with an audio-only speech recognition system has the potential to improve the accuracy of the system. The field of recognising speech using both auditory and visual inputs is known as audio-visual speech recognition (AVSR). Research in the AVSR field has been ongoing for the past twenty-five years, with notable progress being made. However, the practical deployment of AVSR systems in a variety of real-world applications has not yet emerged, mainly because most research to date has neglected to address variabilities in the visual domain, such as illumination and viewpoint, in the design of the visual front-end of the AVSR system. In this paper we present an AVSR system for a real-world car environment using the AVICAR database [1], a publicly available in-car database, and we show that using visual speech in conjunction with the audio modality improves the robustness and effectiveness of voice-only recognition systems in car-cabin environments.
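As a rough illustration of the feature-level fusion idea behind AVSR, the sketch below concatenates frame-synchronised audio and visual feature vectors and scores them against per-word Gaussian mixture models. Everything here is an assumption for demonstration only: the feature dimensions, the word list, and the random placeholder data stand in for real MFCCs and lip features; the paper's actual front-end and classifier are not reproduced.

```python
# Minimal sketch of feature-level audio-visual fusion for an isolated-word
# recogniser, using random placeholder features and scikit-learn GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

def fuse(audio_feats, visual_feats):
    """Concatenate frame-synchronised audio and visual feature vectors."""
    n = min(len(audio_feats), len(visual_feats))  # crude frame alignment
    return np.hstack([audio_feats[:n], visual_feats[:n]])

rng = np.random.default_rng(0)
words = ["left", "right", "stop"]  # hypothetical in-car command vocabulary

# Train one GMM per word on fused (audio + visual) training observations.
# The 13-D "MFCCs" and 20-D "lip features" here are random placeholders.
models = {}
for i, word in enumerate(words):
    train = fuse(rng.normal(i, 1.0, (200, 13)), rng.normal(i, 1.0, (200, 20)))
    models[word] = GaussianMixture(n_components=2, random_state=0).fit(train)

# Recognise a test utterance as the word whose model gives the highest
# average log-likelihood on the fused observations.
test = fuse(rng.normal(1, 1.0, (50, 13)), rng.normal(1, 1.0, (50, 20)))
scores = {w: m.score(test) for w, m in models.items()}
print("recognised:", max(scores, key=scores.get))
```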