4 results for Trios (Flute, harp, viola)

at Indian Institute of Science - Bangalore - India


Relevance:

20.00%

Publisher:

Abstract:

We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which the instruments active at a given time can be identified. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
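The core of the latent-variable weight estimation can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: it assumes the per-frame likelihoods under each pre-trained instrument model have already been computed, and the function and variable names are hypothetical.

```python
import numpy as np

def estimate_activity_weights(frame_likelihoods, n_iters=50):
    """Estimate mixture weights of fixed instrument models via EM.

    frame_likelihoods: (T, K) array, likelihood of each of T frames
    under each of K pre-trained instrument models.
    Returns a (K,) weight vector; larger weights indicate more
    active instruments (one column of an Instrument Activity Graph).
    """
    T, K = frame_likelihoods.shape
    w = np.full(K, 1.0 / K)  # start from uniform weights
    for _ in range(n_iters):
        # E-step: posterior responsibility of each model for each frame
        joint = frame_likelihoods * w                     # (T, K)
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: new weights are the average responsibilities
        w = resp.mean(axis=0)
    return w
```

Because the instrument models themselves stay fixed, each EM iteration only re-balances the mixture weights, so the procedure is cheap even for long recordings.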

Relevance:

10.00%

Publisher:

Abstract:

We propose a simple speech-music discriminator that uses features based on the HILN (Harmonics, Individual Lines and Noise) model. We tested the strength of the feature set on a standard database of 66 files and obtained an accuracy of around 97%. We also tested it on sung queries and on polyphonic music, with very good results. The current algorithm is being used to discriminate between sung queries and played queries (using an instrument such as a flute) for a Query by Humming (QBH) system currently under development in the lab.
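The abstract does not spell out the HILN-based feature set, so the following is only a generic stand-in sketch of frame-based speech/music discrimination, using spectral flatness (ratio of geometric to arithmetic mean of the power spectrum) as a single illustrative feature; all names and the threshold are assumptions.

```python
import numpy as np

def spectral_flatness(frame):
    """Geometric mean / arithmetic mean of the frame's power spectrum.
    Near 0 for tonal (harmonic) content, closer to 1 for noise."""
    spec = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(spec))) / np.mean(spec)

def discriminate(signal, frame_len=1024, threshold=0.05):
    """Toy frame-based discriminator: high average flatness is
    treated as speech-like, low flatness as music-like."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len)]
    flatness = np.array([spectral_flatness(f) for f in frames])
    return "speech" if flatness.mean() > threshold else "music"
```

A real HILN-style discriminator would instead decompose each frame into harmonics, individual spectral lines, and a noise residual, and use the relative energies of those components as features.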

Relevance:

10.00%

Publisher:

Abstract:

The clever designs of natural transducers are a great source of inspiration for man-made systems. At small length scales, there are many transducers in nature that we are now beginning to understand and learn from. Here, we present an example of such a transducer that is used by field crickets to produce their characteristic song. This transducer uses two distinct components: a file of discrete teeth and a plectrum that engage intermittently to produce a series of impulses forming the loading, and an approximately triangular membrane, called the harp, that acts as a resonator and vibrates in response to the impulse-train loading. The file and plectrum act as a frequency multiplier, taking the low wing-beat frequency as the input and converting it into an impulse train of sufficiently high frequency, close to the resonant frequency of the harp. The forced vibration response results in beats, producing the characteristic sound of the cricket song. With careful measurements of the harp geometry and experimental measurements of its mechanical properties (Young's modulus determined from nanoindentation tests), we construct a finite element (FE) model of the harp and carry out modal analysis to determine its natural frequency. We fine-tune the model with appropriate elastic boundary conditions to match the natural frequency of the harp of a particular species, Gryllus bimaculatus. We model impulsive loading based on a loading scheme reported in the literature and predict the transient response of the harp. We show that the harp indeed produces beats and that its frequency content closely matches that of the recorded song. Subsequently, we use our FE model to show that the natural design is quite robust to perturbations in the file: the characteristic song frequency produced is unaffected by variations in the spacing of file teeth, and even by larger gaps.
Based on this understanding of how the natural transducer works, one can design and fabricate efficient microscale acoustic devices such as microelectromechanical systems (MEMS) loudspeakers.
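The frequency-multiplication and beating mechanism described above reduces to simple arithmetic. The numbers below are purely illustrative placeholders, not the measured values for Gryllus bimaculatus:

```python
# Hypothetical numbers for illustration only.
wing_rate_hz = 30.0        # assumed low wing-closing rate (the input)
teeth_per_stroke = 160     # assumed number of file teeth struck per closure

# The file-and-plectrum mechanism multiplies the slow wing beat into a
# high-rate impulse train driving the harp:
impulse_rate_hz = wing_rate_hz * teeth_per_stroke   # 4800 Hz carrier

# Forcing a lightly damped resonator slightly off its natural frequency
# produces amplitude modulation (beats) at the difference frequency:
f_res_hz = 4700.0          # assumed harp resonance
beat_hz = abs(f_res_hz - impulse_rate_hz)           # 100 Hz beat rate
```

The point of the sketch is the structure of the calculation: the tooth count sets the multiplication factor, and the mismatch between the impulse rate and the harp resonance sets the beat frequency heard in the song.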

Relevance:

10.00%

Publisher:

Abstract:

We formulate the problem of detecting the constituent instruments in a polyphonic music piece as a joint decoding problem. From monophonic data, parametric Gaussian Mixture Hidden Markov Models (GM-HMMs) are obtained for each instrument. We propose a method to use these models in a factorial framework, termed Factorial GM-HMM (F-GM-HMM). The states are jointly inferred to explain the evolution of each instrument in the mixture observation sequence. The dependencies are decoupled using a variational inference technique. We show that the joint time evolution of all instruments' states can be captured using F-GM-HMM. We compare the performance of the proposed method with that of the Student's-t mixture model (tMM) and GM-HMM in an existing latent variable framework. Experiments on polyphonies of two to five instruments, with 8 instrument models trained on the RWC dataset and tested on the RWC and TRIOS datasets, show that F-GM-HMM gives an advantage over the other considered models in segments containing co-occurring instruments.
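To make the "joint decoding" idea concrete, here is a sketch of exact joint Viterbi decoding of two independent Markov chains over the product state space. This is not the paper's variational algorithm: the product-space approach is feasible only for tiny models (the state space grows as K^N for N chains), which is precisely the blow-up that F-GM-HMM's variational decoupling avoids. All names are hypothetical.

```python
import numpy as np

def joint_viterbi(logA1, logA2, log_emit, logpi1, logpi2):
    """Jointly decode two independent K-state Markov chains.

    logA1, logA2: (K, K) log transition matrices for each chain.
    logpi1, logpi2: (K,) log initial-state distributions.
    log_emit: (T, K, K) log-likelihood of each observation given the
    pair of hidden states (s1, s2).
    Returns the most likely sequence of state pairs.
    """
    T, K, _ = log_emit.shape
    # Flatten the pair (s1, s2) into one product state s1 * K + s2.
    logA = (logA1[:, None, :, None]
            + logA2[None, :, None, :]).reshape(K * K, K * K)
    logpi = (logpi1[:, None] + logpi2[None, :]).ravel()
    emit = log_emit.reshape(T, K * K)

    delta = logpi + emit[0]                 # best score ending in each state
    back = np.zeros((T, K * K), dtype=int)  # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + emit[t]

    # Backtrack the best product-state path, then unflatten it.
    path = [int(delta.argmax())]
    for t in range(T - 1, 1 - 1, -1):
        if t > 0:
            path.append(int(back[t][path[-1]]))
    path = path[:T][::-1]
    return [(s // K, s % K) for s in path]
```

In the factorial setting, the emission term couples the chains (the observation is a mixture of both instruments), which is why the states must be inferred jointly rather than by decoding each instrument's HMM independently.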