13 results for Organ music, Arranged
at Indian Institute of Science - Bangalore - India
Abstract:
Among the various amines administered to excised Cucumis sativus cotyledons in short-term organ culture, agmatine (AGM) inhibited arginine decarboxylase (ADC) activity to around 50%, and putrescine was the most potent entity in this regard. Homoarginine (HARG) dramatically stimulated (3- to 4-fold) the enzyme activity. Both the AGM inhibition and the HARG stimulation of ADC were transient, the maximum response being elicited at 12 h of culture. Mixing experiments ruled out the involvement of a macromolecular effector in the observed modulation of ADC. HARG-stimulated ADC activity was completely abolished by cycloheximide, whereas AGM-mediated inhibition was unaffected. The half-life of the enzyme did not alter on treatment with either HARG or AGM. The observed alterations in ADC activity were accompanied by a change in the Km of the enzyme. HARG-stimulated ADC activity was additive to that induced by benzyladenine (BA), whereas in the presence of KCl, HARG failed to enhance ADC activity, demonstrating the overriding influence of K+ on amine metabolism.
Abstract:
The problem of automatic melody line identification in a MIDI file plays an important role in taking QBH systems to the next level. We present a novel algorithm to identify the melody line in a polyphonic MIDI file, using note pruning and track/channel ranking. We draw on results from musicology to derive simple heuristics for the note pruning stage, which improves the robustness of the algorithm by discarding "spurious" notes. A ranking based on the melodic information in each track/channel enables us to choose the melody line accurately. Our algorithm makes no assumptions about performer-specific MIDI parameters, is simple, and identifies the melody line with 97% accuracy. It is currently used in a QBH system built in our lab.
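The prune-then-rank idea from the abstract can be sketched as follows; the specific thresholds and the melodic-information score here are illustrative stand-ins, not the paper's musicological heuristics.

```python
# Hypothetical sketch of note pruning and track/channel ranking for
# melody line selection. Thresholds and the scoring rule are
# illustrative assumptions, not the published heuristics.

def prune_notes(track, min_dur=0.05, lo=48, hi=96):
    """Discard 'spurious' notes: too short, or outside a typical melody range."""
    return [(p, d) for (p, d) in track if d >= min_dur and lo <= p <= hi]

def melodic_score(track):
    """Toy melodic-information score: note count plus pitch spread."""
    if not track:
        return 0.0
    pitches = [p for p, _ in track]
    return len(track) + 0.5 * (max(pitches) - min(pitches))

def pick_melody_track(tracks):
    """Return the index of the track ranked most melodic after pruning."""
    pruned = [prune_notes(t) for t in tracks]
    return max(range(len(pruned)), key=lambda i: melodic_score(pruned[i]))

# Tracks as lists of (MIDI pitch, duration in seconds).
tracks = [
    [(50, 1.0), (50, 1.0), (52, 1.0)],             # bass: few notes, small spread
    [(72, 0.4), (74, 0.3), (76, 0.4), (79, 0.5)],  # melody: more notes, wider spread
    [(60, 0.01), (90, 0.02)],                      # very short notes, pruned away
]
print(pick_melody_track(tracks))  # → 1
```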
Abstract:
We propose a simple speech/music discriminator that uses features based on the HILN (Harmonics, Individual Lines and Noise) model. We tested the strength of the feature set on a standard database of 66 files, obtaining an accuracy of around 97%, and also obtained very good results on sung queries and polyphonic music. The algorithm is currently used to discriminate between sung queries and played queries (on an instrument such as the flute) for a Query by Humming (QBH) system under development in our lab.
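A minimal stand-in for a harmonicity-based feature of the kind the HILN model provides: a normalized autocorrelation peak, which is high for sustained tonal frames and low for noise-like frames. The lag range and threshold below are assumptions for illustration, not the paper's feature set.

```python
# Illustrative harmonicity feature (a stand-in for HILN-style
# harmonic/noise decomposition): the maximum normalized
# autocorrelation over candidate pitch lags. Threshold is hypothetical.
import math
import random

def harmonicity(frame, min_lag=20, max_lag=200):
    """Max of autocorrelation r(lag)/r(0) over the candidate lag range."""
    energy = sum(x * x for x in frame)
    if energy == 0:
        return 0.0
    best = 0.0
    for lag in range(min_lag, max_lag):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        best = max(best, r / energy)
    return best

def is_music(frame, threshold=0.6):
    return harmonicity(frame) >= threshold

sr = 8000
tone = [math.sin(2 * math.pi * 200 * n / sr) for n in range(800)]  # periodic
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(800)]                # aperiodic
print(is_music(tone), is_music(noise))  # → True False
```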
Abstract:
In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform under such conditions. We analyze the finite data performance of the multiple signal classification (MUSIC) and minimum norm (min-norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived for an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When further simplified for the cases of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations verify the closeness between the predicted and simulated values of the MSE.
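To make the setting concrete, here is a minimal finite-data MUSIC sketch for a uniform linear array with half-wavelength spacing; the array size, source angle, SNR, and snapshot count are illustrative choices, not values from the paper, and no sensor gain/phase errors are modeled.

```python
# Minimal MUSIC DOA sketch: one narrowband source, uniform linear
# array, half-wavelength spacing. All parameter values below are
# illustrative assumptions.
import numpy as np

def steering(theta_deg, m):
    phase = np.pi * np.sin(np.deg2rad(theta_deg))  # d = lambda/2
    return np.exp(1j * phase * np.arange(m))

m, true_doa, snr_db, n_snap = 8, 20.0, 20.0, 200
rng = np.random.default_rng(0)
a = steering(true_doa, m)
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 10 ** (-snr_db / 20) * (rng.standard_normal((m, n_snap))
                                + 1j * rng.standard_normal((m, n_snap)))
x = np.outer(a, s) + noise                     # finite-data snapshots
R = x @ x.conj().T / n_snap                    # sample covariance

w, v = np.linalg.eigh(R)                       # eigenvalues ascending
En = v[:, :-1]                                 # noise subspace (one source)
grid = np.arange(-90.0, 90.0, 0.1)
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2 for t in grid]
est = grid[int(np.argmax(spec))]
print(round(est, 1))  # close to 20.0 degrees
```

With finite snapshots the peak deviates slightly from the true angle; the MSE of that deviation is what the derived expressions predict.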
Abstract:
We analyze the AlApana of a Carnatic music piece without prior knowledge of the singer or the rAga. The AlApana is a means of communicating to the audience the flavor, or bhAva, of the rAga through the permitted notes and their phrases. The input to our analysis is a recording of a vocal AlApana along with the accompanying instrument. The AdhAra shadja (base note) of the singer for that AlApana is estimated through a stochastic model of note frequencies. Based on the shadja, we identify the notes (swaras) used in the AlApana using a semi-continuous GMM, recognizing swaras from the probabilities of each note interval. For sampurNa rAgas, we can then identify the possible rAga based on the swaras. We achieve correct shadja identification, which is crucial to all further steps, in 88.8% of 55 AlApanas. Among these (48 AlApanas of 7 rAgas), we obtain 91.5% correct swara identification and 62.13% correct rAga identification.
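A toy version of the swara-labelling step, assuming the shadja frequency is already known: map each pitch to the nearest of the 12 semitone positions above the shadja. The nearest-semitone rule and the swara naming convention here are simplifying assumptions; the paper uses a semi-continuous GMM over note intervals.

```python
# Toy swara labelling given a known shadja (tonic). The 12-name list
# follows one common Carnatic convention and the nearest-semitone
# rule is an illustrative simplification.
import math

SWARAS = ["S", "R1", "R2", "G2", "G3", "M1", "M2", "P", "D1", "D2", "N2", "N3"]

def swara_of(freq_hz, shadja_hz):
    """Map a frequency to the nearest semitone position above the shadja."""
    cents = 1200.0 * math.log2(freq_hz / shadja_hz)
    return SWARAS[round(cents / 100.0) % 12]

shadja = 146.8  # e.g. a singer's base note near D3 (illustrative)
print([swara_of(f, shadja) for f in [146.8, 164.8, 220.2]])
# → ['S', 'R2', 'P']
```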
Abstract:
We present an analytical effective theory for the magnetic phase diagram for zigzag-edge terminated honeycomb nanoribbons described by a Hubbard model with an interaction parameter U. We show that the edge magnetic moment varies as ln U and uncover its dependence on the width W of the ribbon. The physics of this owes its origin to the sensory-organ-like response of the nanoribbons, demonstrating that considerations beyond the usual Stoner-Landau theory are necessary to understand the magnetism of these systems. A first-order magnetic transition from an antiparallel orientation of the moments on opposite edges to a parallel orientation occurs upon doping with holes or electrons. The critical doping for this transition is shown to depend inversely on the width of the ribbon. Using variational Monte Carlo calculations, we show that magnetism is robust to fluctuations. Additionally, we show that the magnetic phase diagram is generic to zigzag-edge terminated nanostructures such as nanodots. Furthermore, we perform first-principles modeling to show how such magnetic transitions can be realized in substituted graphene nanoribbons. DOI: 10.1103/PhysRevB.87.085412
Abstract:
Compressive Sensing (CS) is a sensing paradigm which permits sampling of a signal at its intrinsic information rate, which can be much lower than the Nyquist rate, while guaranteeing good quality reconstruction for signals sparse in a linear transform domain. We explore the application of the CS formulation to music signals. Since music signals comprise both tonal and transient components, we examine several transforms, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the Fourier basis, and non-orthogonal warped transforms, to explore the effectiveness of CS theory and the reconstruction algorithms. We show that for a given sparsity level, the DCT, overcomplete, and warped Fourier dictionaries give better reconstruction, with the warped Fourier dictionary giving perceptually better reconstruction. MUSHRA test results show that a moderate quality reconstruction is possible with about half the Nyquist sampling rate.
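The CS pipeline described above can be sketched end to end on a toy signal sparse in a DCT dictionary: random projections for sensing, and orthogonal matching pursuit (one standard CS reconstruction algorithm, named here as an example rather than the paper's method) for recovery. Sizes, sparsity, and coefficient values are arbitrary.

```python
# Toy compressed sensing: a DCT-sparse signal recovered from random
# projections via orthogonal matching pursuit (OMP). All dimensions
# and coefficients are illustrative.
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II matrix; rows are frequency atoms."""
    k = np.arange(n)
    B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    B[0] *= 1 / np.sqrt(2)
    return B * np.sqrt(2.0 / n)

def omp(A, y, k):
    """Greedy sparse recovery: pick k atoms, least-squares refit each step."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 48, 2
Psi = dct_basis(n).T                    # columns are DCT atoms
coeffs = np.zeros(n)
coeffs[[5, 40]] = [3.0, 2.0]            # k-sparse in the DCT domain
signal = Psi @ coeffs
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ signal                        # m < n measurements
rec = Psi @ omp(Phi @ Psi, y, k)
print(np.allclose(rec, signal, atol=1e-8))  # True when the support is found
```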
Abstract:
Music signals comprise atomic notes drawn from a musical scale. The creation of musical sequences often involves splicing the notes in a constrained way, resulting in aesthetically appealing patterns. We develop an approach to music signal representation based on symbolic dynamics, translating the lexicographic rules over a musical scale into constraints on a Markov chain. This source representation is useful for machine-based music synthesis in a way similar to a musician producing original music. To mathematically quantify the user listening experience, we study the correlation between the maximum entropy rate of a musical scale and the subjective aesthetic component. We present our analysis with examples from the South Indian classical music system.
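For a constrained Markov source, the maximum achievable entropy rate equals log2 of the largest eigenvalue of the allowed-transition matrix (the topological entropy of the constraint). The toy 5-note scale and its step-wise transition rule below are assumptions for illustration.

```python
# Maximum entropy rate of note sequences under a splicing constraint:
# log2 of the spectral radius of the allowed-transition matrix.
# The 5-note scale and one-step-motion rule are illustrative.
import math

# Entry (i, j) = 1 if note j may follow note i: repeat or move one degree.
A = [[1 if abs(i - j) <= 1 else 0 for j in range(5)] for i in range(5)]

def spectral_radius(M, iters=200):
    """Largest eigenvalue of a nonnegative matrix by power iteration."""
    v = [1.0] * len(M)
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

lam = spectral_radius(A)          # here lam = 1 + sqrt(3)
print(math.log2(lam))             # ≈ 1.45 bits per note
```

An unconstrained 5-note scale would allow log2(5) ≈ 2.32 bits per note, so the constraint costs roughly 0.87 bits of entropy rate.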
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs), and a mixture of these instrument models is imposed on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to determine the instruments that are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
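The weight-estimation idea can be sketched with a simplified model: keep the per-instrument densities fixed and run EM only over the mixture weights, so that a large weight flags an active instrument. Here 1-D Gaussians stand in for the paper's t-mixtures over audio features; all numbers are illustrative.

```python
# Weight-only EM sketch: fixed per-instrument models (1-D Gaussians
# standing in for t-mixtures over features); only mixture weights are
# estimated from the "polyphonic" data. Large weight = active.
import math
import random

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

models = [(0.0, 1.0), (5.0, 1.0), (10.0, 1.0)]  # three fixed instrument models

random.seed(0)
# Mixture data: instruments 0 and 2 active, instrument 1 silent.
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(10, 1) for _ in range(300)]

w = [1 / 3] * 3
for _ in range(50):                              # EM iterations (weights only)
    counts = [0.0] * 3
    for x in data:
        p = [wk * gauss(x, mu, sd) for wk, (mu, sd) in zip(w, models)]
        z = sum(p)
        for j in range(3):
            counts[j] += p[j] / z                # E-step responsibilities
    w = [c / len(data) for c in counts]          # M-step weight update

print([round(wk, 2) for wk in w])  # ≈ [0.5, 0.0, 0.5]: instruments 0, 2 active
```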
Abstract:
The tonic is a fundamental concept in Indian art music. It is the base pitch that an artist chooses in order to construct the melodies during a rAga rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rAga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts, such as the presence or absence of additional metadata, the quality of the audio data, the duration of the audio data, the music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average) and are robust across the aforementioned contexts, compared to the approaches based on expert knowledge. In addition, we show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
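A crude baseline for tonic identification, far simpler than the multi-pitch plus machine-learning approaches reviewed above: fold pitch estimates into one octave of cent bins and take the most populated bin, relying on the drone making the tonic the most frequent pitch class. The reference frequency, bin width, and input pitches are all illustrative assumptions.

```python
# Crude pitch-class-histogram tonic baseline (illustrative only; the
# reviewed methods use multi-pitch salience and learned classifiers).
import math
from collections import Counter

def tonic_candidate(freqs_hz, ref_hz=110.0, bin_cents=20):
    """Most populated octave-folded cent bin, returned as a frequency."""
    bins = Counter()
    for f in freqs_hz:
        cents = (1200.0 * math.log2(f / ref_hz)) % 1200.0
        bins[int(cents // bin_cents)] += 1
    best = bins.most_common(1)[0][0]
    return ref_hz * 2 ** ((best * bin_cents + bin_cents / 2) / 1200.0)

# A drone near 146.8 Hz dominates; a few melody pitches occur elsewhere.
pitches = [146.8] * 50 + [165.0, 196.0, 220.0, 246.9] * 5
print(round(tonic_candidate(pitches), 1))  # → 146.0 (the drone, to bin resolution)
```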
Abstract:
Recent advances in nanotechnology have paved the way for various techniques for designing and fabricating novel nanostructures incorporating noble metal nanoparticles, for a wide range of applications. The interaction of light with metal nanoparticles (NPs) can generate strongly localized electromagnetic fields (localized surface plasmon resonance, LSPR) at certain wavelengths of the incident beam. In assemblies or structures where the nanoparticles are placed in close proximity, the plasmons of individual metallic NPs can be strongly coupled to each other via Coulomb interactions. By arranging the metallic NPs in a chiral (e.g. helical) geometry, it is possible to induce collective excitations which lead to a differential optical response of the structures to right- and left-circularly polarized light (circular dichroism, CD). Earlier reports in this field include novel techniques for synthesizing metallic nanoparticles on biological helical templates made from DNA, proteins, etc. In the present work, we have developed new ways of fabricating chiral complexes made of metallic NPs which demonstrate a very strong chiro-optical response in the visible region of the electromagnetic spectrum. Using discrete dipole approximation (DDA) simulations, we theoretically studied the conditions responsible for a large and broadband chiro-optical response. This system may be used in various applications, for example the polarization control of visible light and the sensing of proteins and other chiral biomolecules.
Abstract:
We formulate the problem of detecting the constituent instruments in a polyphonic music piece as a joint decoding problem. From monophonic data, parametric Gaussian Mixture Hidden Markov Models (GM-HMMs) are obtained for each instrument. We propose a method to use these models in a factorial framework, termed the Factorial GM-HMM (F-GM-HMM). The states are jointly inferred to explain the evolution of each instrument in the mixture observation sequence, with the dependencies decoupled using a variational inference technique. We show that the joint time evolution of all instruments' states can be captured using the F-GM-HMM. We compare the performance of the proposed method with that of the Student's-t mixture model (tMM) and the GM-HMM in an existing latent variable framework. Experiments on polyphonies of two to five instruments, with 8 instrument models trained on the RWC dataset and tested on the RWC and TRIOS datasets, show that F-GM-HMM has an advantage over the other considered models in segments containing co-occurring instruments.
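The joint-decoding idea can be illustrated exactly on a tiny case: two 2-state chains (instrument off/on) whose emissions add in the mixture, decoded by Viterbi over the 4-state product space. The paper decouples the chains with variational inference instead of enumerating the product space; the means, transition matrix, and observations below are illustrative.

```python
# Exact joint (product-space) Viterbi for two toy 2-state instrument
# chains whose emission means add in the mixture. Illustrative
# parameters; the paper uses variational decoupling, not enumeration.
import math
from itertools import product

means = [(0.0, 3.0), (0.0, 5.0)]   # per-instrument state means (off, on)
trans = [[0.8, 0.2], [0.2, 0.8]]   # shared 2-state transition matrix
sd = 0.5

def loglik(x, s1, s2):
    mu = means[0][s1] + means[1][s2]          # emissions add in the mix
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def joint_viterbi(obs):
    states = list(product((0, 1), (0, 1)))    # product state space
    delta = {s: loglik(obs[0], *s) for s in states}
    back = []
    for x in obs[1:]:
        nd, bp = {}, {}
        for s in states:
            prev, score = max(((p, delta[p]
                                + math.log(trans[p[0]][s[0]])
                                + math.log(trans[p[1]][s[1]]))
                               for p in states), key=lambda t: t[1])
            nd[s], bp[s] = score + loglik(x, *s), prev
        delta, back = nd, back + [bp]
    path = [max(delta, key=delta.get)]        # backtrack from the best end state
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

obs = [0.1, 3.2, 2.9, 8.1, 7.8, 5.1]          # inst 1 enters, then inst 2
print(joint_viterbi(obs))
# → [(0, 0), (1, 0), (1, 0), (1, 1), (1, 1), (0, 1)]
```

With N instruments of K states each the product space has K^N states, which is exactly why the factorial formulation resorts to variational inference for larger polyphonies.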