15 results for automatic music analysis
in CentAUR: Central Archive University of Reading - UK
Abstract:
In a workshop setting, two pieces of recorded music were presented to a group of adult non-specialists; a key feature was to set up structured discussion within which the respondents considered each piece of music as a whole and not in its constituent parts. There were two areas of interest: to explore whether the respondents were likely to identify the musical features or to make extra-musical associations, and to establish the extent to which there would be commonality and difference in their approach to formulating the verbal responses. An inductive approach was used in the analysis of data to reveal some of the working theories underpinning the intuitive musicianship of the adult non-specialist listener. Findings have shown that, when unprompted by forced-choice responses, the listeners generated responses that could be said to be information-poor in terms of musical features but rich in terms of the level of personal investment they made in formulating their responses. This is evidenced in a number of connections they made between the discursive and the non-discursive, including those which are relational and mediated by their experiences. Implications for music education are considered.
Abstract:
Weather is frequently used in music to frame events and emotions, yet quantitative analyses are rare. From a collated base set of 759 weather-related songs, 419 were analysed based on listings from a karaoke database. This article analyses the 20 weather types described, frequency of occurrence, genre, keys, mimicry, lyrics and songwriters. Vocals were the principal means of communicating weather: sunshine was the most common, followed by rain, with weather depictions linked to the emotions of the song. Bob Dylan, John Lennon and Paul McCartney wrote the most weather-related songs, partly following their experiences at the time of writing.
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained from this method, using data from a run of the Universities Global Atmospheric Modelling Project GCM.
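The trajectory-forming step described above can be sketched with a greedy nearest-neighbour linker. This is a minimal stand-in for the paper's dynamic-scene-analysis technique (whose details the abstract does not give), and the feature points below are synthetic.

```python
import numpy as np

def link_feature_points(frames, max_dist=2.0):
    """Greedily link feature points across time levels into trajectories.

    frames: list of (n_i, 2) arrays, one per time level.
    Returns a list of trajectories (lists of (x, y) tuples).
    """
    tracks = [[tuple(p)] for p in frames[0]]
    for pts in frames[1:]:
        remaining = [tuple(p) for p in pts]
        for track in tracks:
            if not remaining:
                break
            last = np.asarray(track[-1])
            dists = [np.linalg.norm(last - np.asarray(p)) for p in remaining]
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:  # only link sufficiently close points
                track.append(remaining.pop(j))
    return tracks

# two synthetic "phenomena" drifting east at different speeds, four time levels
frames = [np.array([[0.0 + k, 0.0], [10.0 + 0.5 * k, 5.0]]) for k in range(4)]
tracks = link_feature_points(frames)
```

A production tracker would also handle track birth and death and resolve conflicting assignments, which the greedy loop above does not attempt.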
Abstract:
The externally recorded electroencephalogram (EEG) is contaminated with signals that do not originate from the brain, collectively known as artefacts. Thus, EEG signals must be cleaned prior to any further analysis. In particular, if the EEG is to be used in online applications such as Brain-Computer Interfaces (BCIs) the removal of artefacts must be performed in an automatic manner. This paper investigates the robustness of Mutual Information based features to inter-subject variability for use in an automatic artefact removal system. The system is based on the separation of EEG recordings into independent components using a temporal ICA method, RADICAL, and the utilisation of a Support Vector Machine for classification of the components into EEG and artefact signals. High accuracy and robustness to inter-subject variability is achieved.
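As a rough illustration of the final classification step, the sketch below separates a rhythmic "brain" component from a spiky "artefact" component using an excess-kurtosis threshold. This stands in for both RADICAL and the paper's mutual-information features with an SVM; all signals and the threshold are invented for the example.

```python
import numpy as np

t = np.linspace(0.0, 4.0, 4000)
brain = np.sin(2 * np.pi * 10 * t)   # smooth 10 Hz alpha-like rhythm
blink = np.zeros_like(t)
blink[::500] = 8.0                   # sparse, spiky "ocular artefact"

def excess_kurtosis(x):
    """Fourth standardised moment minus 3 (zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

# spiky artefact components have very heavy tails; rhythmic EEG does not
labels = {name: ("artefact" if excess_kurtosis(sig) > 5.0 else "eeg")
          for name, sig in (("c1", brain), ("c2", blink))}
```

In the paper's pipeline the components would come from temporal ICA of real recordings, and a trained SVM rather than a fixed threshold would make this decision.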
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system (Dynamic REtrieval Analysis and semantic metadata Management (DREAM)) designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test bed Partners' creative processes. (C) 2009 Published by Elsevier B.V.
Abstract:
We are developing computational tools supporting the detailed analysis of the dependence of neural electrophysiological response on dendritic morphology. We approach this problem by combining simulations of faithful models of neurons (experimental real-life morphological data with known models of channel kinetics) with algorithmic extraction of morphological and physiological parameters and statistical analysis. In this paper, we present a novel method for the automatic recognition of spike trains in voltage traces, which eliminates the need for human intervention. This enables classification of waveforms with consistent criteria across all the analyzed traces and so amounts to a reduction of noise in the data. This method allows for an automatic extraction of relevant physiological parameters necessary for further statistical analysis. In order to illustrate the usefulness of this procedure for analyzing voltage traces, we characterized the influence of the somatic current injection level on several electrophysiological parameters in a set of modeled neurons. This application suggests that such algorithmic processing of physiological data extracts parameters in a suitable form for further investigation of the structure-activity relationship in single neurons.
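A minimal threshold-crossing detector conveys the flavour of such automatic spike recognition. This is not the paper's method, and the trace, threshold and spike shape below are synthetic.

```python
import numpy as np

def detect_spikes(v, thresh):
    """Return sample indices where the trace crosses `thresh` upward."""
    above = v > thresh
    return np.flatnonzero(~above[:-1] & above[1:]) + 1

# synthetic "voltage trace": resting baseline with three brief depolarisations
v = np.full(300, -65.0)
for onset in (50, 150, 250):
    v[onset:onset + 5] = 20.0  # crude 5-sample spikes

spikes = detect_spikes(v, thresh=0.0)
```

From the detected indices, parameters such as inter-spike intervals and firing rate follow directly (e.g. `np.diff(spikes)` gives the intervals in samples).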
Abstract:
Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can either be because they represent what the author believes the paper is about not what it actually is, or because they include keyphrases which are more classificatory than explanatory e.g., “University of Poppleton” instead of “Knowledge Discovery in Databases”. Thus, there is a need for a system that can generate an appropriate and diverse range of keyphrases that reflect the document. This paper proposes a solution that examines the synonyms of words and phrases in the document to find the underlying themes, and presents these as appropriate keyphrases. The primary method explores taking n-grams of the source document phrases, and examining the synonyms of these, while the secondary considers grouping outputs by their synonyms. The experiments undertaken show the primary method produces good results and that the secondary method produces both good results and potential for future work.
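Restricting the idea to single words for brevity, the synonym-pooling step might be sketched as follows. The thesaurus, stop-word list and document are all invented; a real system would use a full thesaurus and n-grams of phrases, as the paper describes.

```python
from collections import Counter

# tiny hypothetical thesaurus; a real system would load a full resource
THESAURUS = {
    "car": {"automobile", "vehicle"},
    "automobile": {"car", "vehicle"},
    "vehicle": {"car", "automobile"},
    "fast": {"quick", "rapid"},
    "quick": {"fast", "rapid"},
}
STOPWORDS = {"the", "is", "a", "of"}

def theme_counts(text):
    """Pool each word's count with the counts of its synonyms,
    so terms sharing an underlying theme reinforce one another."""
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    counts = Counter(words)
    return {w: counts[w] + sum(counts[s] for s in THESAURUS.get(w, ()))
            for w in counts}

doc = "the car is fast the automobile is quick the vehicle is rapid"
pooled = theme_counts(doc)
```

Here "car", "automobile" and "vehicle" each pool to a count of 3, so the car/vehicle theme surfaces even though no single surface form dominates; a word with no thesaurus entry keeps only its own count.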
Abstract:
An automatic method for recognizing natively disordered regions from amino acid sequence is described and benchmarked against predictors that were assessed at the latest critical assessment of techniques for protein structure prediction (CASP) experiment. The method attains a Wilcoxon score of 90.0, which represents a statistically significant improvement on the methods evaluated on the same targets at CASP. The classifier, DISOPRED2, was used to estimate the frequency of native disorder in several representative genomes from the three kingdoms of life. Putative, long (>30 residue) disordered segments are found to occur in 2.0% of archaean, 4.2% of eubacterial and 33.0% of eukaryotic proteins. The function of proteins with long predicted regions of disorder was investigated using the gene ontology annotations supplied with the Saccharomyces genome database. The analysis of the yeast proteome suggests that proteins containing disorder are often located in the cell nucleus and are involved in the regulation of transcription and cell signalling. The results also indicate that native disorder is associated with the molecular functions of kinase activity and nucleic acid binding.
Abstract:
Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can either be because they represent what the author believes a paper is about not what it actually is, or because they include keyphrases which are more classificatory than explanatory e.g., “University of Poppleton” instead of “Knowledge Discovery in Databases”. Thus, there is a need for a system that can generate an appropriate and diverse range of keyphrases that reflect the document. This paper proposes two possible solutions that examine the synonyms of words and phrases in the document to find the underlying themes, and presents these as appropriate keyphrases. Using three different freely available thesauri, the work undertaken examines two different methods of producing keywords and compares the outcomes across multiple strands in the timeline. The primary method explores taking n-grams of the source document phrases, and examining the synonyms of these, while the secondary considers grouping outputs by their synonyms. The experiments undertaken show the primary method produces good results and that the secondary method produces both good results and potential for future work. In addition, the different qualities of the thesauri are examined and it is concluded that the more entries in a thesaurus, the better it is likely to perform. The age of the thesaurus or the size of each entry does not correlate to performance.
Abstract:
The auditory brainstem response (ABR) is of fundamental importance to the investigation of the auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analyses provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, aged 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between assessments provided by the examiners. For this, we used linear regression in conjunction with bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners but reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave latency differences: 18% of the investigated waves presented substantial differences (large and moderate) and, of these, 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
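The statistical step (linear regression with bootstrap resampling to relate two examiners' latency readings) can be sketched as follows. The latency values are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 160
ex_a = rng.normal(5.6, 0.3, n)         # examiner A latencies (ms), invented
ex_b = ex_a + rng.normal(0.0, 0.1, n)  # examiner B: A plus reading noise

def slope(x, y):
    """Least-squares regression slope of y on x."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc) / float(xc @ xc)

# bootstrap the slope relating the two examiners' readings
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)        # resample pairs with replacement
    boots.append(slope(ex_a[idx], ex_b[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
```

A bootstrap interval for the slope that sits close to 1 indicates agreement between the examiners; systematic disagreement would pull the interval away from 1.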
Abstract:
Television’s long-form storytelling has the potential to allow the rippling of music across episodes and seasons in interesting ways. In the integration of narrative, music and meaning found in The O.C. (FOX, 2003-7), popular song’s allusive and referential qualities are drawn upon to particularly televisual ends: at times embracing its ‘disruptive’ presence, at others suturing popular music into narrative, and at times doing both at once. With television studies largely lacking theories of music, this chapter draws on film music theory and close textual analysis to analyse some of the programme’s music moments in detail. In particular it considers the series-spanning use of Jeff Buckley’s cover of ‘Hallelujah’ (and its subsequent oppressive presence across multiple televisual texts), the end-of-episode musical montage and the use of recurring song fragments as theme within single episodes. In doing so it highlights music’s role in the fragmentation and flow of the television aesthetic and popular song’s structural presence in television narrative. Illustrating the multiplicity of popular song’s use in television, these moments demonstrate song’s ability to provide narrative commentary, yet also make particular use of what Ian Garwood describes as the ability of ‘a non-diegetic song to exceed the emotional range displayed by diegetic characters’ (2003: 115), to ‘speak’ for characters or to their feelings, contributing to both teen TV’s melodramatic affect and narrative expression.
Abstract:
The automatic transformation of sequential programs for efficient execution on parallel computers involves a number of analyses and restructurings of the input. Some of these analyses are based on computing array sections, a compact description of a range of array elements. Array sections describe the set of array elements that are either read or written by program statements. These sections can be compactly represented using shape descriptors such as regular sections, simple sections, or generalized convex regions. However, binary operations such as Union performed on these representations do not satisfy a straightforward closure property, e.g., if the operands to Union are convex, the result may be nonconvex. Approximations are resorted to in order to satisfy this closure property. These approximations introduce imprecision in the analyses and, furthermore, the imprecisions resulting from successive operations have a cumulative effect. Delayed merging is a technique suggested and used in some of the existing analyses to minimize the effects of approximation. However, this technique does not guarantee an exact solution in a general setting. This article presents a generalized technique to precisely compute Union which can overcome these imprecisions.
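The closure problem can be made concrete with 1-D regular sections lo:hi:step. The bounding-section Union below is the usual convex over-approximation, and the example shows it admitting elements that belong to neither operand; this illustrates the imprecision the article targets, not its exact-Union technique.

```python
from math import gcd

def elems(lo, hi, step):
    """Concrete elements of the 1-D regular section lo:hi:step."""
    return set(range(lo, hi + 1, step))

def approx_union(s1, s2):
    """Smallest regular section enclosing both operands: the step must
    divide both strides and the offset difference, so the result can
    cover far more elements than the exact union."""
    lo = min(s1[0], s2[0])
    hi = max(s1[1], s2[1])
    step = gcd(gcd(s1[2], s2[2]), abs(s1[0] - s2[0]))
    return (lo, hi, step if step > 0 else 1)

a, b = (0, 8, 2), (0, 9, 3)   # evens 0..8 and multiples of 3 up to 9
exact = elems(*a) | elems(*b)
approx = elems(*approx_union(a, b))
```

Here the strides 2 and 3 force the bounding step down to 1, so the approximate union also contains 1, 5 and 7, which lie in neither operand; chaining such Unions compounds the over-approximation, which is the cumulative effect described above.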
Abstract:
In Indian classical music, ragas constitute specific combinations of tonic intervals potentially capable of evoking distinct emotions. A raga composition is typically presented in two modes, namely alaap and gat. Alaap is the note-by-note delineation of a raga bound by a slow tempo, but not bound by a rhythmic cycle. Gat, on the other hand, is rendered at a faster tempo and follows a rhythmic cycle. Our primary objectives were to (1) discriminate the emotions experienced across alaap and gat of ragas, and (2) investigate the association of tonic intervals, tempo and rhythmic regularity with emotional response. 122 participants rated their experienced emotion across alaap and gat of 12 ragas. Analysis of the emotional responses revealed that (1) ragas elicit distinct emotions across the two presentation modes, (2) specific tonic intervals are robust predictors of emotional response; specifically, our results showed that the ‘minor second’ is a direct predictor of negative valence, and (3) tonality determines the emotion experienced for a raga, whereas rhythmic regularity and tempo modulate levels of arousal. Our findings provide new insights into the emotional response to Indian ragas and the impact of tempo, rhythmic regularity and tonality on it.
Abstract:
This paper aims to identify the circulation associated with Easterly Wave Disturbances (EWDs) that propagate toward Eastern Northeast Brazil (ENEB) and their impact on the rainfall over ENEB during the 2006 and 2007 rainy seasons (April–July). The identification and trajectories of EWDs are analyzed using an automatic tracking technique (TracKH). The EWD circulation patterns and their main features were obtained using the composite technique. To evaluate the TracKH efficiency, a validation was done by comparing the number of EWDs tracked against observed cases obtained from an observational analysis. The mean characteristics of the EWDs are a 5.5-day period, a propagation speed of ~9.5 m·s−1, and a 4500 km wavelength. A synoptic analysis shows that between days −2 and 0, the low-level winds presented cyclonic relative vorticity and convergence anomalies in both 2006 and 2007. The EWD signals are strongest at low levels. EWD propagation is associated with positive anomalies of relative humidity and precipitation and negative anomalies of OLR and omega. The EWD tracks are seen over all of ENEB and their lysis occurs between the ENEB and marginally inside the continent. The tracking captured 71% of EWDs in all periods, indicating that an objective analysis is a promising method for EWD detection.