984 results for music perception


Relevance: 20.00%

Abstract:

We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model; no a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which the instruments active at a given time can be determined. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
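The weight-estimation idea in this abstract can be illustrated with a minimal EM sketch: per-instrument models are held fixed and only the mixture weights over them are updated from the "polyphonic" data. For brevity, each instrument model below is a single fixed 1-D Gaussian rather than a trained Student's-t mixture, and the model means, the synthetic data, and the number of models are all assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, pre-trained per-"instrument" densities (means/stds are assumptions).
means = np.array([0.0, 5.0, 10.0])
stds = np.array([1.0, 1.0, 1.0])

def model_likelihoods(x):
    # p_k(x) for each model k; result has shape (N, K).
    return np.exp(-0.5 * ((x[:, None] - means) ** 2) / stds**2) / (
        stds * np.sqrt(2 * np.pi)
    )

# Synthetic "polyphonic" data: only two of the three models are active.
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 700)])

w = np.full(3, 1 / 3)  # uniform initial mixture weights
for _ in range(50):
    # E-step: responsibility of each fixed model for each frame.
    r = w * model_likelihoods(x)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: new weights are the average responsibilities.
    w = r.mean(axis=0)

print(np.round(w, 2))  # weights indicate which models are "active"
```

The estimated weights concentrate on the two generating models and go to (nearly) zero for the inactive one, which is the sense in which the weights indicate instrument activity.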

Relevance: 20.00%

Abstract:

The tonic is a fundamental concept in Indian art music. It is the base pitch that an artist chooses in order to construct the melodies during a rāg(a) rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts, such as the presence/absence of additional metadata, the quality of the audio data, the duration of the audio data, the music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average) and are robust across the aforementioned contexts, compared to the approaches based on expert knowledge. In addition, we show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of each approach.
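To make the task concrete, here is a deliberately naive histogram-peak tonic estimator, not the method of any of the reviewed papers: frame-wise pitch values are folded into one octave in cents relative to an arbitrary reference, and the most frequent pitch class is taken as the tonic candidate. The reference frequency, bin width and toy melody are all assumptions for this sketch; the approaches surveyed in the paper refine this idea with multi-pitch salience and classifiers.

```python
import numpy as np

REF_HZ = 55.0  # arbitrary fold reference (A1); an assumption for this sketch

def tonic_candidate(f0_hz):
    f0_hz = np.asarray(f0_hz, dtype=float)
    f0_hz = f0_hz[f0_hz > 0]                        # drop unvoiced frames
    cents = 1200 * np.log2(f0_hz / REF_HZ) % 1200   # fold to one octave
    hist, edges = np.histogram(cents, bins=120, range=(0, 1200))
    peak = edges[np.argmax(hist)] + 5               # bin centre (10-cent bins)
    return REF_HZ * 2 ** (peak / 1200)              # pitch class back in Hz

# Toy melody hovering around a tonic of ~146.8 Hz (D3) and its fifth.
rng = np.random.default_rng(1)
melody = np.concatenate([
    146.8 * 2 ** rng.normal(0, 0.005, 500),   # tonic region
    220.2 * 2 ** rng.normal(0, 0.005, 300),   # fifth
])
tonic = tonic_candidate(melody)
print(round(tonic, 1))
```

Because the estimate is octave-folded, it returns the tonic's pitch class (here near 73.4 Hz, an octave below D3) rather than an absolute octave; resolving the octave is one of the ambiguities the full methods must handle.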

Relevance: 20.00%

Abstract:

Shape and texture are both important properties of visual objects, but texture is relatively less understood. Here, we characterized neuronal responses to discrete textures in monkey inferotemporal (IT) cortex and asked whether they can explain classic findings in human texture perception. We focused on three classic findings on texture discrimination: 1) it can be easy or hard depending on the constituent elements; 2) it can have asymmetries; and 3) it is reduced for textures with randomly oriented elements. We recorded neuronal activity from monkey IT cortex and measured texture perception in humans for a variety of textures. Our main findings are as follows: 1) IT neurons show congruent selectivity for textures across array size; 2) textures that were easy for humans to discriminate also elicited distinct patterns of neuronal activity in monkey IT; 3) texture pairs with asymmetries in humans also exhibited asymmetric variation in firing rate across monkey IT; and 4) neuronal responses to randomly oriented textures were explained by an average of the responses to homogeneous textures, which rendered them less discriminable. This reduction in discriminability in monkey IT neurons predicted the reduced discriminability in humans during texture discrimination. Taken together, our results suggest that texture perception in humans is likely based on neuronal representations similar to those in monkey IT.
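Finding 4 above carries a simple quantitative consequence that a toy simulation can show: if a population's response to a randomly oriented texture is the average of its responses to the single-orientation (homogeneous) versions, the population vectors for two random textures end up closer together, i.e. less discriminable. The response values below are synthetic assumptions, not recorded data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 100

# Synthetic population responses to two textures, each at two orientations.
tex_a = {"0deg": rng.gamma(2, 1, n_neurons), "90deg": rng.gamma(2, 1, n_neurons)}
tex_b = {"0deg": rng.gamma(2, 1, n_neurons), "90deg": rng.gamma(2, 1, n_neurons)}

def dist(u, v):
    # Euclidean distance between population response vectors,
    # used here as a proxy for discriminability.
    return np.linalg.norm(u - v)

# Discriminability of the homogeneous textures (same orientation).
d_homog = dist(tex_a["0deg"], tex_b["0deg"])

# Randomly oriented textures modeled as the orientation average.
rand_a = (tex_a["0deg"] + tex_a["90deg"]) / 2
rand_b = (tex_b["0deg"] + tex_b["90deg"]) / 2
d_rand = dist(rand_a, rand_b)

print(d_rand < d_homog)  # averaging shrinks the population-level separation
```

Averaging independent response patterns reduces the variance of their difference, so the averaged (randomly oriented) textures are systematically harder to tell apart, matching the reduced discriminability reported for both monkey IT and human observers.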

Relevance: 20.00%

Abstract:

We formulate the problem of detecting the constituent instruments in a polyphonic music piece as a joint decoding problem. From monophonic data, parametric Gaussian Mixture Hidden Markov Models (GM-HMMs) are obtained for each instrument. We propose a method to use these models in a factorial framework, termed the Factorial GM-HMM (F-GM-HMM). The states are jointly inferred to explain the evolution of each instrument in the mixture observation sequence, with the dependencies decoupled using a variational inference technique. We show that the joint time evolution of all instruments' states can be captured using the F-GM-HMM. We compare the performance of the proposed method with that of the Student's-t mixture model (tMM) and the GM-HMM in an existing latent variable framework. Experiments on polyphonies of two to five instruments, with 8 instrument models trained on the RWC dataset and tested on the RWC and TRIOS datasets, show that the F-GM-HMM has an advantage over the other considered models in segments containing co-occurring instruments.
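The factorial idea can be sketched in miniature: each of two "instrument" chains evolves under its own Markov dynamics, while the observation at each frame depends on the pair of hidden states. The paper decouples the chains with variational inference; for a toy two-chain, two-state case we can instead afford exact joint Viterbi decoding over the product state space. All transition probabilities, emission means and observations below are made-up assumptions.

```python
import numpy as np
from itertools import product

n_states = 2                       # states per chain ("off"/"on")
A = np.array([[0.9, 0.1],          # shared transition matrix (assumption)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])

# Joint emission: observation mean is the sum of the per-chain state means.
state_means = np.array([0.0, 3.0])

def log_emit(obs, s1, s2):
    mu = state_means[s1] + state_means[s2]
    return -0.5 * (obs - mu) ** 2   # unnormalized Gaussian log-likelihood

def joint_viterbi(obs_seq):
    # Exact Viterbi over all (s1, s2) pairs of the two chains.
    states = list(product(range(n_states), repeat=2))
    logd = {s: np.log(pi[s[0]]) + np.log(pi[s[1]]) + log_emit(obs_seq[0], *s)
            for s in states}
    back = []
    for obs in obs_seq[1:]:
        new, ptr = {}, {}
        for s in states:
            best_prev = max(
                states,
                key=lambda p: logd[p] + np.log(A[p[0], s[0]]) + np.log(A[p[1], s[1]]),
            )
            new[s] = (logd[best_prev] + np.log(A[best_prev[0], s[0]])
                      + np.log(A[best_prev[1], s[1]]) + log_emit(obs, *s))
            ptr[s] = best_prev
        logd, back = new, back + [ptr]
    # Trace back the jointly best state pair for every frame.
    s = max(states, key=logd.get)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]

obs = [0.1, 0.0, 3.1, 3.0, 6.1, 5.9]   # chains switch "on" one after the other
path = joint_viterbi(obs)
print(path)
```

The decoded path moves from neither chain active, through exactly one active chain, to both active, which is the kind of joint time evolution the F-GM-HMM captures; the variational decoupling in the paper is what makes this tractable when the product state space is too large to enumerate.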