17 results for American music
Abstract:
The Tambura is an essential drone accompaniment used in Indian music concerts. It serves as an immediate pitch reference for both artists and listeners. The four strings of the Tambura are tuned to the frequency ratio ¾:1:1:½. Careful listening to the Tambura sound reveals that its tonal spectrum is not stationary but time varying. The object of this study is a detailed spectrum analysis to determine the nature of the temporal variation of the tonal spectrum of Tambura sound. Results of the analysis are correlated with perceptual evaluation conducted in a controlled acoustic environment. A significant result of this study is the demonstration of several notes that are normally not noticed even by a professional artist. The effect of the Tambura's bridge in producing the so-called "live tone" is explained through time and frequency parameters of Tambura sounds.
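A minimal sketch, not the authors' analysis pipeline, of how the time variation of a drone's tonal spectrum can be examined with a short-time Fourier transform; the file name, fundamental frequency, and analysis parameters are illustrative assumptions.

```python
# Sketch: short-time spectrum analysis of a tambura recording to observe
# how the strengths of individual partials vary over time.
# "tambura.wav", the 150 Hz fundamental and the window sizes are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

fs, x = wavfile.read("tambura.wav")
if x.ndim == 2:
    x = x.mean(axis=1)                       # fold stereo to mono
x = x.astype(float) / np.max(np.abs(x))      # normalise

# Short-time Fourier transform: ~93 ms windows (at 44.1 kHz) with 75% overlap.
f, t, Z = stft(x, fs=fs, nperseg=4096, noverlap=3072)
mag = np.abs(Z)

# Track the amplitude envelope of the first few partials of an assumed
# fundamental (e.g. Sa tuned near 150 Hz) across analysis frames.
f0 = 150.0
for k in range(1, 6):
    bin_k = np.argmin(np.abs(f - k * f0))
    env = 20 * np.log10(mag[bin_k] + 1e-12)  # dB envelope of the k-th partial
    print(f"partial {k}: peak-to-trough variation {env.max() - env.min():.1f} dB")
```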
Abstract:
Automatic melody line identification in a MIDI file plays an important role in taking QBH systems to the next level. We present a novel algorithm to identify the melody line in a polyphonic MIDI file. A note-pruning and track/channel-ranking method is used to identify the melody line. We use results from musicology to derive simple heuristics for the note-pruning stage, which makes the algorithm more robust by discarding "spurious" notes. A ranking based on the melodic information in each track/channel enables us to choose the melody line accurately. Our algorithm makes no assumptions about performer-specific MIDI parameters, is simple, and identifies the melody line correctly with 97% accuracy. The algorithm is currently used in a QBH system built in our lab.
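A sketch of one possible track-ranking heuristic for this kind of task; it is not the paper's pruning rules or ranking score. The pitch-entropy proxy, the minimum-note threshold, and the file name are assumptions.

```python
# Sketch: rank MIDI tracks by a simple melodic-information heuristic.
# Not the paper's scheme; the entropy score and thresholds are assumptions.
import math
from collections import Counter
import mido  # third-party MIDI library (assumed available)

def track_score(track):
    pitches = [m.note for m in track
               if m.type == "note_on" and m.velocity > 0]
    if len(pitches) < 10:                    # prune near-empty tracks
        return -1.0
    counts = Counter(pitches)
    total = sum(counts.values())
    # Higher pitch entropy is taken here as a crude proxy for melodic content.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

mid = mido.MidiFile("song.mid")              # file name is an assumption
best = max(range(len(mid.tracks)), key=lambda i: track_score(mid.tracks[i]))
print("candidate melody track:", best)
```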
Abstract:
We propose a simple speech/music discriminator that uses features based on the HILN (Harmonics, Individual Lines and Noise) model. We tested the strength of the feature set on a standard database of 66 files and obtained an accuracy of around 97%. We also tested it on sung queries and polyphonic music with very good results. The algorithm is currently used to discriminate between sung queries and played queries (using an instrument such as the flute) for a Query by Humming (QBH) system under development in the lab.
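A generic sketch of a feature-based speech/music discriminator. The paper's HILN-derived features are not reproduced here; spectral flatness and zero-crossing rate stand in as illustrative features, and the SVM classifier is an assumption.

```python
# Sketch: a two-class speech/music discriminator with stand-in features.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def features(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    pxx = pxx + 1e-12
    flatness = np.exp(np.mean(np.log(pxx))) / np.mean(pxx)   # tonal vs noisy
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2.0         # zero-crossing rate
    return [flatness, zcr]

# X_train: list of (signal, fs) pairs; y_train: 0 = speech, 1 = music (assumed data)
def train(X_train, y_train):
    feats = [features(x, fs) for x, fs in X_train]
    return SVC(kernel="rbf").fit(feats, y_train)
```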
Abstract:
The absorption produced by the audience in concert halls is considered a random variable. Beranek's proposal [L. L. Beranek, Music, Acoustics and Architecture (Wiley, New York, 1962), p. 543] that audience absorption is proportional to the area the audience occupies and not to its number is subjected to a statistical hypothesis test. A two-variable linear regression model of the absorption, with audience area and residual area as regressor variables, is postulated for concert halls without added absorptive materials. Since Beranek's contention amounts to the statement that audience absorption is independent of seating density, the test of the hypothesis lies in categorizing halls by seating density and examining for significant differences among the slopes of the regression planes of the different categories. Such a test shows that Beranek's hypothesis can be accepted. It is also shown that audience area is a better predictor of the absorption than audience number. The absorption coefficients and their 95% confidence limits are given for the audience and residual areas. A critique of the regression model is presented.
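A minimal sketch of how such a test of equal slopes across seating-density categories could be run; the data file, column names, and categorization scheme are assumptions, not the paper's.

```python
# Sketch: do regression slopes of absorption on audience area differ
# across seating-density categories?
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# halls: one row per hall with total absorption A, audience area Sa,
# residual area Sr, and a seating-density category label (assumed file).
halls = pd.read_csv("halls.csv")

# Pooled two-variable model with common slopes.
pooled = smf.ols("A ~ Sa + Sr", data=halls).fit()

# Model allowing the slopes to vary with density category.
by_cat = smf.ols("A ~ Sa * C(density) + Sr * C(density)", data=halls).fit()

# F-test: do the category-specific slope terms improve the fit significantly?
print(anova_lm(pooled, by_cat))
```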
Abstract:
In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform under such conditions. We analyze the finite-data performance of the multiple signal classification (MUSIC) and minimum-norm (min. norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived for an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When they are further simplified for the cases of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations verify the closeness between the predicted and simulated values of the MSE.
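A small simulation sketch of MUSIC on a uniform linear array with random sensor gain and phase errors, the kind of setup whose MSE the analysis predicts. The array size, SNR, and error levels are illustrative assumptions, not the paper's configuration.

```python
# Sketch: MUSIC DOA estimation with per-sensor gain/phase calibration errors.
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 200, 0.5                 # sensors, snapshots, spacing (wavelengths)
theta_true = np.deg2rad(10.0)

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Perturbed array response: small random gain and phase errors per sensor.
g = 1.0 + 0.05 * rng.standard_normal(M)
p = 0.05 * rng.standard_normal(M)
a_true = g * np.exp(1j * p) * steering(theta_true)

s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a_true, s) + n

R = X @ X.conj().T / N
w, V = np.linalg.eigh(R)
En = V[:, :-1]                        # noise subspace (one source assumed)

grid = np.deg2rad(np.linspace(-90, 90, 3601))
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
theta_hat = grid[int(np.argmax(spec))]
print(f"DOA error: {np.rad2deg(theta_hat - theta_true):.3f} deg")
```

Repeating this over many noise realizations and averaging the squared DOA error gives the empirical MSE to compare against the derived expressions.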
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using the fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out in which a narrow-band signal source was considered and Gaussian noise sources, producing a spatially correlated background noise, were distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared to the evaluation of the covariance function, there are more cross terms which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
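A short sketch of the sample fourth-order cumulant of zero-mean real data, illustrating the point made above: the cumulant is a fourth moment minus products of second moments, and those cross terms only cancel the Gaussian contribution in the large-sample limit.

```python
# Sketch: sample fourth-order cumulant estimate for zero-mean real data.
# cum(a,b,c,d) = E[abcd] - E[ab]E[cd] - E[ac]E[bd] - E[ad]E[bc]
import numpy as np

def cum4(a, b, c, d):
    E = lambda *z: np.mean(np.prod(z, axis=0))
    return E(a, b, c, d) - E(a, b) * E(c, d) - E(a, c) * E(b, d) - E(a, d) * E(b, c)

rng = np.random.default_rng(1)
for n in (1_000, 10_000, 100_000):
    g = rng.standard_normal(n)                   # Gaussian: cumulant should tend to 0
    u = rng.uniform(-1, 1, n) * np.sqrt(3)       # non-Gaussian, unit variance
    print(n, f"gaussian {cum4(g, g, g, g):+.4f}", f"uniform {cum4(u, u, u, u):+.4f}")
```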
Abstract:
We analyze the AlApana of a Carnatic music piece without prior knowledge of the singer or the rAga. AlApana is a means of communicating to the audience the flavor, or bhAva, of the rAga through the permitted notes and their phrases. The input to our analysis is a recording of the vocal AlApana along with the accompanying instrument. The AdhAra shadja (base note) of the singer for that AlApana is estimated through a stochastic model of note frequencies. Based on the shadja, we identify the notes (swaras) used in the AlApana using a semi-continuous GMM. Using the probabilities of each note interval, we recognize the swaras of the AlApana. For sampurNa rAgas, we can identify the possible rAga based on the swaras. We achieve correct shadja identification, which is crucial to all further steps, in 88.8% of 55 AlApanas. Among these (48 AlApanas of 7 rAgas), we obtain 91.5% correct swara identification and 62.13% correct rAga identification.
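A simplified sketch of swara identification from a pitch track with a Gaussian mixture over pitch values; the paper's stochastic shadja model and semi-continuous GMM are not reproduced here, and the function names and thresholds are assumptions.

```python
# Sketch: locate swara positions (in cents above the shadja) from a pitch track.
import numpy as np
from sklearn.mixture import GaussianMixture

# pitch_hz: voiced pitch estimates (Hz) extracted from the AlApana (assumed input)
def find_swaras(pitch_hz, shadja_hz, n_components=12):
    # Fold pitch into cents relative to the assumed shadja, within one octave.
    cents = 1200 * np.log2(np.asarray(pitch_hz) / shadja_hz) % 1200
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(cents.reshape(-1, 1))
    # Mixture components with enough weight are taken as swaras in use.
    means = gmm.means_.ravel()
    used = means[gmm.weights_ > 1.0 / (2 * n_components)]
    return np.sort(used)

# e.g. find_swaras(pitch_track, shadja_hz=146.8)
```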
Abstract:
Compressive Sensing (CS) is a new sensing paradigm which permits sampling of a signal at its intrinsic information rate, which can be much lower than the Nyquist rate, while guaranteeing good-quality reconstruction for signals sparse in a linear transform domain. We explore the application of the CS formulation to music signals. Since music signals comprise both tonal and transient components, we examine several transforms, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the Fourier basis, and also non-orthogonal warped transforms, to explore the effectiveness of CS theory and the reconstruction algorithms. We show that for a given sparsity level, the DCT, overcomplete, and warped Fourier dictionaries result in better reconstruction, and the warped Fourier dictionary gives perceptually better reconstruction. "MUSHRA" test results show that moderate-quality reconstruction is possible at about half the Nyquist sampling rate.
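A toy sketch of the CS formulation for one audio frame: random Gaussian measurements of a DCT-sparse frame, reconstructed with orthogonal matching pursuit. The frame length, measurement ratio, sparsity level, and the use of a synthetic DCT-sparse signal are all assumptions; the paper's warped dictionaries and solvers are not reproduced.

```python
# Sketch: compressive measurements + sparse reconstruction in a DCT dictionary.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 512, 256, 40                       # frame length, measurements, sparsity

Psi = idct(np.eye(n), axis=0, norm="ortho")  # DCT synthesis dictionary (x = Psi c)

# Synthetic frame that is exactly k/2-sparse in the DCT (stand-in for a tonal frame).
c0 = np.zeros(n)
support = rng.choice(n, size=k // 2, replace=False)
c0[support] = rng.standard_normal(k // 2)
x = Psi @ c0

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # compressive measurements (m < n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)
x_hat = Psi @ omp.coef_

snr = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
print(f"reconstruction SNR: {snr:.1f} dB")
```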
Abstract:
The empirical research available on technology transfer initiatives is either North American or European. The literature over the last two decades shows various research objectives, such as identifying the variables to be measured and the statistical methods to be used in studying university-based technology transfer initiatives. AUTM survey data from 1996 to 2008 provide insightful patterns about North American technology transfer initiatives; we use these data in our paper. This paper has three sections: a comparison of North American universities with (n=1129) and without (n=786) medical schools, an analysis of the top 75th percentile of these samples, and a DEA analysis of these samples. We use 20 variables. Researchers have attempted to classify university-based technology transfer variables into multiple stages, namely disclosures, patents, and license agreements. Using the same approach, with minor variations, three stages are defined in this paper. The first stage takes R&D expenditure as input and invention disclosures as output. The second stage takes invention disclosures as input and patents issued as output. The third stage takes patents issued as input and technology transfers as outcomes.
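A minimal sketch of one standard way to compute DEA efficiency scores (input-oriented CCR, multiplier form) for stage-wise input/output data such as R&D expenditure versus invention disclosures; the data layout and model choice are assumptions and not necessarily the paper's specification.

```python
# Sketch: input-oriented CCR efficiency via the multiplier-form LP, per DMU.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns efficiency per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u.y_o
        A_ub = np.hstack([Y, -X])                             # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]   # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
        scores.append(-res.fun)
    return np.array(scores)

# e.g. first stage: X = R&D expenditure (one column), Y = invention disclosures
```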
Abstract:
Music signals comprise atomic notes drawn from a musical scale. The creation of musical sequences often involves splicing the notes in a constrained way, resulting in aesthetically appealing patterns. We develop an approach to music signal representation based on symbolic dynamics, by translating the lexicographic rules over a musical scale into constraints on a Markov chain. This source representation is useful for machine-based music synthesis, in a way similar to a musician producing original music. In order to mathematically quantify the user's listening experience, we study the correlation between the max-entropic rate of a musical scale and the subjective aesthetic component. We present our analysis with examples from the south Indian classical music system.
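A small sketch of the quantity involved: for a set of allowed note-to-note transitions, the maximum achievable entropy rate of the constrained Markov source is the logarithm of the spectral radius of the transition-constraint matrix. The 7x7 constraint matrix below is an illustrative assumption, not an actual scale's grammar.

```python
# Sketch: max entropy rate of a note sequence under transition constraints.
import numpy as np

A = np.array([                      # A[i, j] = 1 if note j may follow note i
    [0, 1, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 0],
], dtype=float)

lam = np.max(np.abs(np.linalg.eigvals(A)))   # Perron (largest) eigenvalue
print(f"max entropy rate: {np.log2(lam):.3f} bits per note")
```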
Abstract:
The forces that cause deformation of western North America have been debated for decades. Recent studies, primarily based on analysis of crustal stresses in the western United States, have suggested that the deformation of the region is mainly controlled by gravitational potential energy (GPE) variations and boundary loads, with basal tractions due to mantle flow playing a relatively minor role. We address these issues by modelling the deviatoric stress field over western North America from a 3-D finite element mantle circulation model with lateral viscosity variations. Our approach takes into account the contribution from both topography and shallow lithosphere structure (GPE) as well as that from deeper mantle flow in a single model, as opposed to the separate lithosphere and circulation models used so far. In addition to predicting the deviatoric stresses, we also jointly fit the constraints of geoid, dynamic topography and plate motion both globally and over North America, in order to ensure that the forces that arise in our models are dynamically consistent. We examine the sensitivity of the dynamic models to different lateral viscosity variations. We find that circulation models that include upper mantle slabs yield a better fit to observed plate velocities. Our results indicate that a model of GPE variations coupled with mantle convection gives the best fit to the observational constraints. We argue that although GPE variations control a large part of the deformation of the western United States, deeper mantle tractions also play a significant role. The average deviatoric stress magnitudes in the western United States range from 30 to 40 MPa. The cratonic region exhibits higher coupling to mantle flow than the rest of the continent. We find that a relatively strong San Andreas fault gives a better fit to the observational constraints, especially that of plate velocity in western North America.
Abstract:
1. Resilience-based approaches are increasingly being called upon to inform ecosystem management, particularly in arid and semi-arid regions. This requires management frameworks that can assess ecosystem dynamics, both within and between alternative states, at relevant time scales. 2. We analysed long-term vegetation records from two representative sites in the North American sagebrush-steppe ecosystem, spanning nine decades, to determine if empirical patterns were consistent with resilience theory, and to determine if cheatgrass Bromus tectorum invasion led to thresholds as currently envisioned by expert-based state-and-transition models (STM). These data span the entire history of cheatgrass invasion at these sites and provide a unique opportunity to assess the impacts of biotic invasion on ecosystem resilience. 3. We used univariate and multivariate statistical tools to identify unique plant communities and document the magnitude, frequency and directionality of community transitions through time. Community transitions were characterized by 37-47% dissimilarity in species composition, they were not evenly distributed through time, their frequency was not correlated with precipitation, and they could not be readily attributed to fire or grazing. Instead, at both sites, the majority of community transitions occurred within an 8-10 year period of increasing cheatgrass density; after cheatgrass density peaked, transitions became infrequent and their frequency declined thereafter. 4. Greater cheatgrass density, replacement of native species and indications of asymmetry in community transitions suggest that thresholds may have been exceeded in response to cheatgrass invasion at one site (more arid), but not at the other site (less arid). Asymmetry in the direction of community transitions also identified communities that were 'at risk' of cheatgrass invasion, as well as potential restoration pathways for recovery of pre-invasion states. 5. Synthesis and applications. These results illustrate the complexities associated with threshold identification, and indicate that criteria describing the frequency, magnitude, directionality and temporal scale of community transitions may provide greater insight into resilience theory and its application for ecosystem management. These criteria are likely to vary across biogeographic regions that are susceptible to cheatgrass invasion, and necessitate more in-depth assessments of thresholds and alternative states than are currently available.
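A minimal sketch of how percentage dissimilarity in species composition between two censuses can be computed, using Bray-Curtis as one common choice of measure; the study's exact metric and data are not reproduced here, and the cover values below are invented for illustration.

```python
# Sketch: compositional dissimilarity between two sampled communities.
import numpy as np
from scipy.spatial.distance import braycurtis

# Species cover values for the same species list in two census years (assumed data).
year_a = np.array([40.0, 12.0, 5.0, 0.0, 3.0])
year_b = np.array([10.0, 15.0, 2.0, 30.0, 1.0])

print(f"compositional dissimilarity: {100 * braycurtis(year_a, year_b):.0f}%")
```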
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to find out which instruments are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
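A sketch of the general idea of estimating mixture weights over fixed, pre-trained source models by EM; the instrument models themselves (tMMs in the paper) are not reproduced, and `loglik` is an assumed array of per-model, per-frame log-likelihoods of the polyphonic signal.

```python
# Sketch: EM update of mixture weights given frame-wise log-likelihoods
# loglik of shape (n_frames, n_instruments) from fixed instrument models.
import numpy as np

def estimate_activity(loglik, n_iter=50):
    n_frames, n_models = loglik.shape
    w = np.full(n_models, 1.0 / n_models)            # uniform initial weights
    for _ in range(n_iter):
        # E-step: responsibility of each instrument model for each frame.
        log_r = np.log(w) + loglik
        log_r -= log_r.max(axis=1, keepdims=True)    # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights are the average responsibilities.
        w = r.mean(axis=0)
    return w                                         # interpreted as instrument activity

# Thresholding these weights over successive windows yields an activity graph.
```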