66 results for movie audio tracks


Relevance:

20.00%

Publisher:

Abstract:

The present paper explores extreme car audio systems and the culture and practices that surround car audio competitions. I begin by examining whether, and how, car audio can be thought of as a 'music scene' and in what ways the culture and practice of car audio may fit within post-subcultural discourses. Following this, I offer a description of car audio competitions, revealing some of the practices that define this aspect of car audio scenes. In particular, I concentrate on sound pressure level (SPL) competitions and some of the interesting aspects of the SPL scene. Finally, I briefly examine how the powerful effects (and affects) of bass frequencies are an important part of the attraction of loud car audio systems and how car audio systems contribute to the territorializing of urban spaces.

Relevance:

20.00%

Publisher:

Abstract:

This is an exploratory study into the effective use of embedded custom-made audiovisual case studies (AVCS) in enhancing the student learning experience. The paper describes a project that used AVCS with a large, diverse cohort of undergraduate students enrolled in an International Business course. The study makes a number of key contributions to advancing learning and teaching within the discipline. AVCS provide first-hand reporting of the case material, from which students can improve their understanding through both verbal and nonverbal cues. The paper demonstrates how AVCS can be embedded in a student-centred teaching approach to capture students' interest and to encourage a deep approach to learning by providing authentic, real-world experience.

Relevance:

20.00%

Publisher:

Abstract:

We propose a method of representing audience behavior through facial and body motions from a single video stream, and use these features to predict the rating for feature-length movies. This is a very challenging problem because: i) the movie-viewing environment is dark and contains views of people at different scales and viewpoints; ii) feature-length movies are long (80-120 minutes), so tracking people uninterrupted for this length of time remains an unsolved problem; and iii) the expressions and motions of audience members are subtle, short and sparse, making labeling of activities unreliable. To circumvent these issues, we use an infrared-illuminated test-bed to obtain visually uniform input. We then utilize motion-history features, which capture the subtle movements of a person within a pre-defined volume, and form a group representation of the audience as a histogram of pair-wise correlations over a small window of time. Using this group representation, we learn our movie rating classifier from crowd-sourced ratings collected by rottentomatoes.com and show our prediction capability on audiences from 30 movies across 250 subjects (> 50 hours).
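
As a rough illustration of the two core steps described above, the following Python sketch updates a motion-history image and builds the group representation as a histogram of pair-wise correlations. The window length, bin count, and decay duration are illustrative assumptions rather than values from the paper, and per-subject motion masks are assumed to be available already.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration=2.0):
    """Motion-history image: set moving pixels to the current timestamp,
    then zero out pixels whose last motion fell outside the time window."""
    mhi = np.where(motion_mask, timestamp, mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi

def group_representation(per_person_motion, window=30, bins=10):
    """per_person_motion: (n_people, n_frames) array of per-frame motion
    energy inside each person's pre-defined volume. Returns a histogram
    of pairwise correlations over small windows of time."""
    n_people, n_frames = per_person_motion.shape
    corrs = []
    for start in range(0, n_frames - window, window):
        win = per_person_motion[:, start:start + window]
        c = np.corrcoef(win)                 # (n_people, n_people) correlations
        iu = np.triu_indices(n_people, k=1)  # unique pairs only
        corrs.extend(c[iu])
    hist, _ = np.histogram(corrs, bins=bins, range=(-1.0, 1.0), density=True)
    return hist
```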

Relevance:

20.00%

Publisher:

Abstract:

Twelve original recordings curated by leading national industry figures. It's a 12-track album full of remixed, re-recorded and rejigged tracks from the project that were shortlisted by our friends at MGM Distribution, Music Sales, and EMI Music Australia. The TWELVE album is already receiving critical acclaim from Australia's music industry.

Relevance:

20.00%

Publisher:

Abstract:

Eleven original recordings curated by leading industry figures. This is a compilation album from QUT's 2012 100 Songs project, titled Eleven: Best of 100 Songs Project 2012 and released in May 2013. It's an 11-track album, plus a bonus track, full of remixed, re-recorded and rejigged tracks from the project that were shortlisted by our friends at MGM Distribution, Mushroom Music, Island Records and Music Sales Australia. The Eleven album is already receiving critical acclaim from Australia's music industry.

Relevance:

20.00%

Publisher:

Abstract:

An audio recording of 10 tracks funded by Legacy, the Australia Council, and Arts Queensland. The recordings explore the untold stories of soldiers' wives through song, featuring the vocal and songwriting talents of Jackie Marshall, Bertie Page, Sahara Beck, Emma Bosworth, Roz Pappalardo, and Kristy Apps. Recorded, mixed, mastered, and co-produced by Phil Graham.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a low-bandwidth multi-robot communication system designed to serve as a backup communication channel in the event that a robot suffers a network device fault. While much research has addressed distributing network communication across the multiple robots within a system, individual robots remain susceptible to hardware failure. In the past, such robots would simply be removed from service and their tasks re-allocated to other members. However, there are times when a faulty robot might be crucial to a mission, or able to contribute in a less communication-intensive role. By allowing robots to encode and decode messages as unique sequences of DTMF symbols, called words, our system facilitates continued low-bandwidth communication between robots without access to network communication. Our results show that the system permits robots to negotiate task initiation and termination, and is flexible enough to allow a pair of robots to perform a simple turn-taking task.
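
To make the encoding scheme concrete, here is a minimal Python sketch that renders a "word" (a sequence of DTMF symbols) as an audio signal using the standard DTMF frequency pairs. The sampling rate, tone and gap durations, and the example word are illustrative assumptions rather than the paper's actual parameters.

```python
import numpy as np

# Standard DTMF (low tone, high tone) frequency pairs in Hz.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_word(symbols, fs=8000, tone_s=0.1, gap_s=0.05):
    """Encode a 'word' (sequence of DTMF symbols) as audio samples:
    each symbol is the sum of its two sinusoids, followed by silence."""
    gap = np.zeros(int(fs * gap_s))
    chunks = []
    for s in symbols:
        lo, hi = DTMF[s]
        t = np.arange(int(fs * tone_s)) / fs
        tone = 0.5 * (np.sin(2 * np.pi * lo * t) + np.sin(2 * np.pi * hi * t))
        chunks.extend([tone, gap])
    return np.concatenate(chunks)

# A hypothetical word, e.g. signalling task initiation:
signal_out = dtmf_word("*13#")
```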

Relevance:

20.00%

Publisher:

Abstract:

In a nation of rampant illegal downloaders, a tax on movie and television downloads is the last thing we need. Australian consumers and content producers are among those likely to be worse off should Joe Hockey succeed in his efforts to extend the GST to online video-on-demand services like Netflix. It is easy to see why Mr Hockey and his state treasurer counterparts have reportedly agreed to this move. That doesn't mean it's a good idea.

Relevance:

20.00%

Publisher:

Abstract:

We propose a novel technique for conducting robust voice activity detection (VAD) in high-noise recordings. We use Gaussian mixture modeling (GMM) to train two generic models: speech and non-speech. We then score smaller segments of a given (unseen) recording against each of these GMMs to obtain two respective likelihood scores per segment. These scores are used to compute a dissimilarity measure between pairs of segments and to carry out complete-linkage clustering of the segments into speech and non-speech clusters. We compare the accuracy of our method against state-of-the-art and standardised VAD techniques, demonstrating an absolute improvement of 15% in half-total error rate (HTER) over the best-performing baseline system across the QUT-NOISE-TIMIT database. We then apply our approach to the Audio-Visual Database of American English (AVDBAE) to demonstrate the performance of our algorithm using visual, audio-visual, or a proposed fusion of these features.
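
A minimal sketch of the scoring-and-clustering stage, assuming frame-level features (e.g. MFCCs) have already been extracted, using scikit-learn and SciPy. The Euclidean distance between per-segment score pairs is an illustrative choice of dissimilarity measure; the abstract does not specify the paper's exact measure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def train_models(speech_feats, nonspeech_feats, n_components=32):
    """Train the two generic GMMs on frame-level features (e.g. MFCCs)."""
    return (GaussianMixture(n_components).fit(speech_feats),
            GaussianMixture(n_components).fit(nonspeech_feats))

def cluster_segments(segments, speech_gmm, nonspeech_gmm):
    """Score each segment against both GMMs, then complete-linkage
    cluster the (speech, non-speech) score pairs into two clusters."""
    scores = np.array([[speech_gmm.score(seg), nonspeech_gmm.score(seg)]
                       for seg in segments])       # seg: (n_frames, n_dims)
    # Illustrative dissimilarity: Euclidean distance between score pairs.
    z = linkage(pdist(scores), method="complete")
    labels = fcluster(z, t=2, criterion="maxclust")
    # The cluster containing the most speech-like segment is the speech cluster.
    speech_label = labels[np.argmax(scores[:, 0] - scores[:, 1])]
    return labels == speech_label                  # True where segment is speech
```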

Relevance:

20.00%

Publisher:

Abstract:

Visual information in the form of the speaker's lip movements has been shown to improve the performance of speech recognition and search applications. In our previous work, we proposed cross-database training of synchronous hidden Markov models (SHMMs) to make use of large, publicly available external audio databases in addition to the relatively small given audio-visual database. In this work, the cross-database training approach is improved by an additional audio adaptation step, which enables audio-visual SHMMs to benefit from the audio observations of the external audio models before the visual modality is added to them. The proposed approach outperforms the baseline cross-database training approach in clean and noisy environments in terms of both phone recognition accuracy and spoken term detection (STD) accuracy.
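
The adaptation in the paper operates on SHMMs; as a simplified stand-in, the sketch below shows mean-only MAP adaptation of a GMM, a standard way to bias externally trained audio models toward a smaller in-domain set. The relevance factor tau and the use of scikit-learn are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(external_gmm, in_domain_feats, tau=16.0):
    """Mean-only MAP adaptation: shift each external Gaussian mean toward
    the in-domain frames it is responsible for; tau is a relevance factor
    (larger tau = trust the external model more). Values are illustrative."""
    gamma = external_gmm.predict_proba(in_domain_feats)  # (n_frames, n_comp)
    n_k = gamma.sum(axis=0)                              # soft counts per component
    f_k = gamma.T @ in_domain_feats                      # weighted feature sums
    alpha = (n_k / (n_k + tau))[:, None]                 # adaptation coefficients
    adapted = external_gmm.means_.copy()
    nz = n_k > 0                                         # skip empty components
    adapted[nz] = (alpha[nz] * (f_k[nz] / n_k[nz][:, None])
                   + (1 - alpha[nz]) * external_gmm.means_[nz])
    external_gmm.means_ = adapted
    return external_gmm
```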

Relevance:

20.00%

Publisher:

Abstract:

Speech recognition can be improved by using visual information, in the form of the speaker's lip movements, in addition to audio information. To date, state-of-the-art techniques for audio-visual speech recognition train their models on audio and visual data from the same database. In this paper, we present a new approach that makes use of one modality of an external dataset in addition to a given audio-visual dataset. By doing so, it is possible to create more powerful models from other extensive audio-only databases and adapt them to our comparatively smaller multi-stream databases. Results show that the presented approach outperforms the widely adopted synchronous hidden Markov models (HMMs), trained jointly on the audio and visual data of a given audio-visual database, by 29% relative for phone recognition. It also outperforms external audio models trained on extensive external audio datasets, and internal audio models, by 5.5% and 46% relative, respectively. We also show that the proposed approach is beneficial in noisy environments where the audio source is affected by environmental noise.
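
One common way to combine the two modalities at decision time is stream-weighted log-likelihood fusion, sketched below with per-phone models exposing a .score() method (e.g. scikit-learn GaussianMixture standing in for the paper's HMMs). The 0.7/0.3 stream weighting is an illustrative assumption.

```python
def fused_phone_scores(audio_models, visual_models, audio_obs, visual_obs,
                       audio_weight=0.7):
    """Stream-weighted fusion: combine per-phone audio and visual
    log-likelihoods with a fixed stream weight (0.7/0.3 is purely
    illustrative). Each models dict maps a phone label to a fitted
    model exposing .score(), such as a scikit-learn GaussianMixture."""
    scores = {}
    for phone in audio_models:
        log_a = audio_models[phone].score(audio_obs)    # mean log-likelihood
        log_v = visual_models[phone].score(visual_obs)
        scores[phone] = audio_weight * log_a + (1 - audio_weight) * log_v
    return max(scores, key=scores.get)                  # best-scoring phone
```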

Relevance:

20.00%

Publisher:

Abstract:

Background: Strand-specific RNAseq data is now common in RNAseq projects, and visualizing RNAseq data has become an important part of analysing sequencing data. The most widely used visualization tool is the UCSC genome browser, which introduced the custom track concept that enables researchers to simultaneously visualize gene expression at a particular locus from multiple experiments. The objective of our software tool is to provide a friendly interface for the visualization of RNAseq datasets. Results: This paper introduces a visualization tool (RNASeqBrowser) that incorporates and extends the functionality of the UCSC genome browser. For example, RNASeqBrowser simultaneously displays read coverage, SNPs, InDels and raw read tracks alongside other BED and wiggle tracks, all built dynamically from the BAM file. Paired reads are connected in the browser to ease identification of novel exon/intron borders and chimaeric transcripts. Strand-specific RNAseq data is also supported: RNASeqBrowser displays reads above (positive-strand transcripts) or below (negative-strand transcripts) a central line. Finally, RNASeqBrowser was designed for ease of use by users with few bioinformatics skills, and incorporates the features of many genome browsers into one platform. Conclusions: The features of RNASeqBrowser are: (1) it integrates the UCSC genome browser and NGS visualization tools such as IGV, extending the UCSC genome browser with several new track types for NGS data such as individual raw reads, SNPs and InDels; (2) it can dynamically generate RNA secondary structure, which is useful for identifying non-coding RNA such as miRNA; (3) overlaying NGS wiggle data, helpful for displaying differential expression, is simple to implement; (4) because NGS data accumulates many raw reads, RNASeqBrowser collapses exact duplicate reads to reduce visualization space, so a normal PC can show many windows of individual raw reads without much delay; (5) multiple popup windows of individual raw reads give users more viewing space, avoiding existing approaches (such as IGV) that squeeze all raw reads into one window, which helps when visualizing multiple datasets simultaneously. RNASeqBrowser and its manual are freely available at http://www.australianprostatecentre.org/research/software/rnaseqbrowser or http://sourceforge.net/projects/rnaseqbrowser/
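
Two of the listed features, strand-specific coverage built dynamically from a BAM file and collapsing of exact duplicate reads, can be sketched with pysam as below. This is an illustration of the underlying operations, not RNASeqBrowser's actual code.

```python
import pysam
from collections import defaultdict

def strand_coverage(bam_path, chrom, start, end):
    """Per-base coverage split by strand, built dynamically from the BAM,
    mirroring the browser's above/below-the-central-line display."""
    fwd, rev = defaultdict(int), defaultdict(int)
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom, start, end):
            track = rev if read.is_reverse else fwd
            for pos in read.get_reference_positions():
                track[pos] += 1
    return fwd, rev

def collapse_exact_duplicates(bam_path, chrom, start, end):
    """Collapse reads with identical position, strand and sequence into
    one representative plus a count, to save visualization space."""
    seen = defaultdict(int)
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom, start, end):
            key = (read.reference_start, read.is_reverse, read.query_sequence)
            seen[key] += 1
    return seen
```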

Relevance:

20.00%

Publisher:

Abstract:

Frog species have been declining worldwide at unprecedented rates in recent decades, for many reasons including pollution, habitat loss, and invasive species [1]. To preserve, protect, and restore frog biodiversity, it is important to monitor and assess frog species. In this paper, a novel method using image processing techniques for analyzing Australian frog vocalisations is proposed. An FFT is applied to the audio data to produce a spectrogram, and acoustic events are then detected and isolated into corresponding segments through image processing techniques applied to the spectrogram. For each segment, spectral peak tracks are extracted with selected seeds, and a region-growing technique is utilised to obtain the contour of each frog vocalisation. Based on the spectral peak tracks and the contour of each vocalisation, six feature sets are extracted. Principal component analysis reduces each feature set to six principal components, which are tested for classification performance with a k-nearest neighbor classifier. The method is evaluated on fourteen frog species that are geographically well distributed throughout Queensland, Australia. The experimental results show that the best average classification accuracy for the fourteen frog species reaches 87%.
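
A condensed Python sketch of the pipeline's skeleton using SciPy and scikit-learn: a spectrogram, a crude thresholding-and-labelling stand-in for the paper's event detection and region growing, then PCA to six components feeding a k-nearest neighbor classifier. The threshold and FFT parameters are illustrative assumptions.

```python
import numpy as np
from scipy import signal, ndimage
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def acoustic_events(audio, fs, db_threshold=20.0):
    """Spectrogram via FFT, then isolate acoustic events as connected
    regions above an energy threshold - a crude stand-in for the paper's
    image-processing and region-growing steps."""
    f, t, sxx = signal.spectrogram(audio, fs, nperseg=512)
    db = 10 * np.log10(sxx + 1e-10)          # log-magnitude spectrogram
    mask = db > db.max() - db_threshold      # keep high-energy cells
    labels, n_events = ndimage.label(mask)   # connected-component segments
    return f, t, db, labels, n_events

def train_classifier(features, species_labels):
    """PCA down to six components feeding a k-NN classifier, as in the paper."""
    clf = make_pipeline(PCA(n_components=6), KNeighborsClassifier(n_neighbors=5))
    return clf.fit(features, species_labels)
```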

Relevance:

20.00%

Publisher:

Abstract:

Acoustic classification of anurans (frogs) has received increasing attention for its promising applications in biological and environmental studies. In this study, a novel feature extraction method for frog call classification is presented based on the analysis of spectrograms. The frog calls are first automatically segmented into syllables. Spectral peak tracks are then extracted to separate the desired signal (frog calls) from background noise, and are used to extract various syllable features, including syllable duration, dominant frequency, oscillation rate, frequency modulation, and energy modulation. Finally, a k-nearest neighbor classifier is used for classifying frog calls based on the results of principal component analysis. The experimental results show that the syllable features achieve an average classification accuracy of 90.5%, which outperforms Mel-frequency cepstral coefficient (MFCC) features (79.0%).
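
For example, two of the named syllable features, duration and dominant frequency, can be computed from a segmented syllable as in this SciPy-based sketch; the FFT window size is an illustrative assumption, and the remaining features would follow the same per-syllable pattern.

```python
import numpy as np
from scipy import signal

def syllable_features(syllable, fs):
    """Two of the paper's syllable features for one segmented call:
    duration (seconds) and dominant frequency (Hz), taken from the
    spectral peak of the syllable's spectrogram."""
    duration = len(syllable) / fs
    f, t, sxx = signal.spectrogram(syllable, fs, nperseg=256)
    # Dominant frequency: the frequency bin with the highest total energy.
    dominant = f[np.argmax(sxx.sum(axis=1))]
    return np.array([duration, dominant])
```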