Abstract:
The roots of hip-hop trace back to the United States of the 1970s, but hip-hop culture and its musical style have since spread around the world as part of globalization. Many young Muslims now make hip-hop music, and an increasing number also bring influences from Islam and from their lives as Muslims into their lyrics. The status of music in Islam is quite contested, and it is not clearly permitted (halal) or forbidden (haram) for Muslims. Hip-hop made by Muslims has so far received very little study. Using discourse analysis, this thesis examines how conceptions of Muslim identity are argued for and constructed on the website muslimhiphop.com, which serves as a case study. The theoretical and analytical framework is a social constructionist view of identity as context-dependent and as discursively constructed in speech and texts. The thesis questions the assumption of earlier research that hip-hop is exclusively positive in its effects on Muslim identity and a unifying factor among Muslims. The data were collected from the aforementioned website during the autumn of 2010 and the spring of 2011, and the analysis is limited to the site itself and its sections dealing with hip-hop. The site and its founder are American, but the backgrounds of the artists featured there are highly multicultural; many are also immigrants in their current home countries. The themes and discourses appearing in the data are identified and analyzed in the thesis with the help of quotations. The site's principles and its criteria for featuring artists define its stance on combining Islam and music precisely: if the lyrics and the artists follow the teachings of Islam, it is permissible for a Muslim to make and listen to such music. Islam-themed hip-hop is justified above all as an alternative to mainstream hip-hop, which is constructed in the data as morally questionable.
Alongside the halal-haram discourse, which separates elements permitted and forbidden to hip-hop and to Muslims, a teaching discourse emerges from the data. Hip-hop made by Muslims is justified as advancing the teaching of Islam, especially to young Muslims, and thus as strengthening a positive Muslim identity. A discourse of positive change is also used extensively in the data, relating among other things to Muslim communities and to the economic and socio-political injustices and stereotypes directed at Muslims; the content of the music and the act of making it are justified by its power to change things for the better. Many Muslim artists struggle to reconcile Islam with artistic creativity and freedom of expression on the one hand, and to succeed commercially without abandoning their religious convictions on the other. For many of them hip-hop was strongly present in the environment they grew up in, but combining it with the principles of Islam raises questions and challenges about the form and content of their own musical and religious identity. Based on the data, many Muslim artists and Muslims who listen to Islam-themed hip-hop must constantly defend the music both to Muslims who view it negatively and to non-Muslims who are uneasy with its religiousness. Muslim identity is continually negotiated, and it appears in the data as multidimensional and situationally constructed. Keywords: Muslims, Islam, hip-hop, identity, Internet, discourse, discourse analysis
Abstract:
In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform in such conditions. We analyze the finite data performance of the multiple signal classification (MUSIC) and minimum norm (min-norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived assuming an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When they are further simplified for the cases of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations are used to verify the closeness between the predicted and simulated values of the MSE.
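As background for the MUSIC side of the analysis, here is a minimal simulation sketch of the MUSIC pseudospectrum on a half-wavelength uniform linear array; the array size, snapshot count, noise level, and scan grid are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def music_doa(X, n_sources, scan_deg=np.arange(-90, 90.5, 0.5)):
    # Sample covariance from snapshots X (sensors x snapshots)
    M, N = X.shape
    R = X @ X.conj().T / N
    # Eigendecomposition; noise subspace spans the M - n_sources
    # eigenvectors with the smallest eigenvalues (eigh sorts ascending)
    w, V = np.linalg.eigh(R)
    En = V[:, :M - n_sources]
    # Pseudospectrum over the scan grid (half-wavelength ULA steering)
    m = np.arange(M)
    spec = []
    for th in scan_deg:
        a = np.exp(1j * np.pi * m * np.sin(np.radians(th)))
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return scan_deg, np.array(spec)

# Simulate one source at 20 degrees on an 8-sensor ULA
rng = np.random.default_rng(0)
M, N, theta = 8, 200, 20.0
a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(theta)))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise
grid, P = music_doa(X, n_sources=1)
est = grid[np.argmax(P)]
```

The paper's contribution is the analytical MSE of `est` under sensor gain/phase perturbations; a simulation like this is what would be compared against those expressions.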
Abstract:
We analyze the AlApana of a Carnatic music piece without prior knowledge of the singer or the rAga. AlApana is a means of communicating to the audience the flavor, or bhAva, of the rAga through the permitted notes and their phrases. The input to our analysis is a recording of the vocal AlApana along with the accompanying instrument. The AdhAra shadja (base note) of the singer for that AlApana is estimated through a stochastic model of note frequencies. Based on the shadja, we identify the notes (swaras) used in the AlApana using a semi-continuous GMM. Using the probabilities of each note interval, we recognize the swaras of the AlApana. For sampurNa rAgas, we can identify the possible rAga based on the swaras. We achieve correct shadja identification, which is crucial to all further steps, in 88.8% of 55 AlApanas. Among them (48 AlApanas of 7 rAgas), we obtain 91.5% correct swara identification and 62.13% correct rAga identification.
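The note-labeling step can be sketched in a much-simplified form. The paper fits a semi-continuous GMM over note intervals; as a stand-in, the sketch below simply quantizes each octave-folded pitch frame to the nearest swara of an assumed seven-swara (sampUrNa) scale with equal-tempered cent offsets. Both the scale and the offsets are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Assumed cent offsets for one sampUrNa scale (S R2 G3 M1 P D2 N3),
# equal-tempered values chosen purely for illustration
SWARAS = {"S": 0, "R2": 200, "G3": 400, "M1": 500, "P": 700, "D2": 900, "N3": 1100}

def label_swaras(f0_hz, shadja_hz):
    # Assign each voiced pitch frame to the nearest swara,
    # measuring distance in octave-folded (circular) cents
    names = list(SWARAS)
    grid = np.array([SWARAS[s] for s in names])
    out = []
    for f in f0_hz:
        c = 1200 * np.log2(f / shadja_hz) % 1200
        d = np.minimum(np.abs(grid - c), 1200 - np.abs(grid - c))
        out.append(names[int(d.argmin())])
    return out
```

In the paper the hard quantization is replaced by per-interval probabilities from the GMM, which is what makes the subsequent rAga inference probabilistic rather than rule-based.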
Abstract:
Compressive sensing (CS) is a new sensing paradigm which permits sampling of a signal at its intrinsic information rate, which can be much lower than the Nyquist rate, while guaranteeing good-quality reconstruction for signals sparse in a linear transform domain. We explore the application of the CS formulation to music signals. Since music signals comprise both tonal and transient components, we examine several transforms, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the Fourier basis, and also non-orthogonal warped transforms, to explore the effectiveness of CS theory and the reconstruction algorithms. We show that for a given sparsity level, the DCT, overcomplete, and warped Fourier dictionaries result in better reconstruction, and the warped Fourier dictionary gives perceptually better reconstruction. "MUSHRA" test results show that moderate-quality reconstruction is possible with about half the Nyquist sampling rate.
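A small self-contained sketch of the CS recipe described here, using a DCT dictionary and orthogonal matching pursuit (OMP) as the reconstruction algorithm; the dimensions, the synthetic signal, and the choice of OMP are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as columns
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[:, 0] /= np.sqrt(2)
    return D

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily select k atoms of A,
    # re-fitting by least squares after each selection
    norms = np.linalg.norm(A, axis=0)
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r) / norms)))
        x_ls, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ x_ls
    x = np.zeros(A.shape[1])
    x[idx] = x_ls
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5                 # measure at half the "Nyquist" count
D = dct_matrix(n)
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = D @ coeffs                  # tonal signal: sparse in the DCT domain
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ signal
x_hat = omp(Phi @ D, y, k)           # recover sparse coefficients
rec = D @ x_hat
err = np.linalg.norm(rec - signal) / np.linalg.norm(signal)
```

Swapping `dct_matrix` for a wavelet, Fourier, or warped dictionary is what the transform comparison in the abstract amounts to.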
Abstract:
Music signals comprise atomic notes drawn from a musical scale. The creation of musical sequences often involves splicing the notes in a constrained way, resulting in aesthetically appealing patterns. We develop an approach to music signal representation based on symbolic dynamics, translating the lexicographic rules over a musical scale into constraints on a Markov chain. This source representation is useful for machine-based music synthesis, in a way similar to a musician producing original music. In order to mathematically quantify the user listening experience, we study the correlation between the max-entropic rate of a musical scale and the subjective aesthetic component. We present our analysis with examples from the south Indian classical music system.
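The max-entropic rate of a constrained scale can be computed directly from the constraint graph: it equals log2 of the Perron eigenvalue of the adjacency matrix, and it is achieved by the Parry (entropy-maximizing) Markov chain. A toy sketch with an invented five-note scale and a stepwise-motion constraint (repeats and moves of one scale degree allowed):

```python
import numpy as np

# Hypothetical 5-note scale; allowed transitions: repeat or step +/-1
n = 5
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) <= 1:
            A[i, j] = 1

# Perron eigenvalue and eigenvector of the constraint graph
vals, vecs = np.linalg.eig(A)
i = int(np.argmax(vals.real))
lam = vals.real[i]
v = np.abs(vecs[:, i].real)

# Max-entropic rate of the constrained source, in bits per note
h_max = np.log2(lam)

# Parry measure: the Markov chain on this graph attaining rate h_max,
# P[i, j] = A[i, j] * v[j] / (lam * v[i]); rows sum to 1 by construction
P = A * v[None, :] / (lam * v[:, None])
```

Sampling note sequences from `P` gives the "most random" source compatible with the scale's rules, which is the quantity the abstract correlates with subjective aesthetics.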
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to determine the instruments that are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
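The weight-estimation idea (fixed per-instrument densities, EM over the mixture weights only) can be sketched as follows. The tMM densities are abstracted here into a precomputed per-frame likelihood matrix, and the data are synthetic; only the E/M updates correspond to the latent-variable scheme the abstract describes:

```python
import numpy as np

def mixture_weights(P, n_iter=100):
    # P[n, k]: likelihood of frame n under (fixed) instrument model k.
    # EM over the mixture weights only; the converged weights serve
    # as the per-segment instrument activity indicators.
    n, k = P.shape
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        g = w * P
        g /= g.sum(axis=1, keepdims=True)   # E-step: responsibilities
        w = g.mean(axis=0)                  # M-step: re-estimate weights
    return w

# Synthetic 3-model example: model 0 explains ~70% of frames,
# model 1 the rest, model 2 is never active
P = np.array([[0.9, 0.05, 0.05]] * 70 + [[0.05, 0.9, 0.05]] * 30)
w = mixture_weights(P)
```

Thresholding such weights over sliding windows is, in spirit, how an Instrument Activity Graph is read off.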
Abstract:
The tonic is a fundamental concept in Indian art music. It is the base pitch, which an artist chooses in order to construct the melodies during a rāga rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts such as the presence/absence of additional metadata, the quality of audio data, the duration of audio data, music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average), and are robust across the aforementioned contexts compared to the approaches based on expert knowledge. In addition, we also show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
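The expert-knowledge approaches reviewed here often reduce to pitch-histogram peak picking. A minimal single-pitch sketch of that baseline (the better-performing multi-pitch, machine-learning methods are considerably more involved; the reference frequency and bin width below are arbitrary illustrative choices):

```python
import numpy as np

def tonic_from_pitch(f0_hz, f_ref=55.0, bins_per_octave=120):
    # Fold the predominant-pitch track into one octave of cents above
    # f_ref, histogram it, and take the most salient bin as the tonic
    # pitch class (returned as a frequency in the reference octave)
    f0 = f0_hz[f0_hz > 0]                         # drop unvoiced frames
    cents = 1200 * np.log2(f0 / f_ref) % 1200
    hist, edges = np.histogram(cents, bins=bins_per_octave, range=(0, 1200))
    peak = hist.argmax()
    tonic_cents = (edges[peak] + edges[peak + 1]) / 2
    return f_ref * 2 ** (tonic_cents / 1200)
```

A real system must still resolve the octave and, as the paper shows, benefits from drone information via multi-pitch analysis; this sketch only illustrates the folding-and-peak step.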
Abstract:
We formulate the problem of detecting the constituent instruments in a polyphonic music piece as a joint decoding problem. From monophonic data, parametric Gaussian Mixture Hidden Markov Models (GM-HMM) are obtained for each instrument. We propose a method to use the above models in a factorial framework, termed Factorial GM-HMM (F-GM-HMM). The states are jointly inferred to explain the evolution of each instrument in the mixture observation sequence, and the dependencies are decoupled using a variational inference technique. We show that the joint time evolution of all instruments' states can be captured using F-GM-HMM. We compare the performance of the proposed method with that of the Student's-t mixture model (tMM) and GM-HMM in an existing latent variable framework. Experiments on two- to five-instrument polyphonies, with 8 instrument models trained on the RWC dataset and tested on the RWC and TRIOS datasets, show that F-GM-HMM gives an advantage over the other considered models in segments containing co-occurring instruments.
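Exact joint decoding of a factorial HMM is exponential in the number of chains (the product state space has S^K states), which is why the paper resorts to variational decoupling. For two two-state chains the exact product-state Viterbi is still tractable, and it illustrates what "jointly inferred states" means; the two-state off/on chains, additive Gaussian emissions, and all parameter values below are invented for illustration:

```python
import numpy as np

def viterbi(logA, logpi, logB):
    # Max-product decoding in the log domain; logB is (T, states)
    T, S = logB.shape
    dp = logpi + logB[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + logA
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + logB[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two 2-state chains (instrument off/on); product state s = 2*i + j
A1 = A2 = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
pi = np.log(np.full(4, 0.25))
Aprod = np.array([[A1[i, ip] + A2[j, jp]
                   for ip in range(2) for jp in range(2)]
                  for i in range(2) for j in range(2)])
# Emissions add across chains: chain 1 contributes 1.0 when on, chain 2
# contributes 2.0, so the four product states are acoustically distinct
means = np.array([1.0 * i + 2.0 * j for i in range(2) for j in range(2)])
obs = np.array([0.0, 2.0, 3.0, 1.0])   # planted: (0,0) (0,1) (1,1) (1,0)
logB = -(obs[:, None] - means[None, :]) ** 2 / (2 * 0.1 ** 2)
states = [(s // 2, s % 2) for s in viterbi(Aprod, pi, logB)]
```

With K instruments the product alphabet explodes, so F-GM-HMM replaces this exact pass with per-chain variational updates that approximate the same joint inference.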
Abstract:
This paper proposes a new method for local key and chord estimation from audio signals. The method relies primarily on principles from music theory and does not require any training on a corpus of labelled audio files. The harmonic content of the musical piece is first extracted by computing a set of chroma vectors. A set of chord/key pairs is selected for every frame by correlation with fixed chord and key templates. An acyclic harmonic graph is constructed with these pairs as vertices, using a musical distance to weight its edges. Finally, the sequences of chords and keys are obtained by finding the best path in the graph using dynamic programming. The proposed method allows mutual chord and key estimation. It is evaluated on a corpus composed of Beatles songs for both the local key estimation and chord recognition tasks, as well as on a larger corpus composed of songs taken from the Billboard dataset.
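The template-correlation and best-path steps can be sketched as follows. The binary triad templates, the chord-only state space (no keys), and a constant switch penalty standing in for the paper's musical distance are all simplifying assumptions, so the harmonic graph degenerates here to a plain Viterbi-style pass:

```python
import numpy as np

# Chord names: 12 major triads followed by 12 minor triads, roots from C
NAMES = [n + q for q in ("", "m") for n in
         ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]]

def templates():
    # Binary chroma templates for the 24 major/minor triads
    T = np.zeros((24, 12))
    for root in range(12):
        T[root, [root, (root + 4) % 12, (root + 7) % 12]] = 1       # major
        T[12 + root, [root, (root + 3) % 12, (root + 7) % 12]] = 1  # minor
    return T

def estimate_chords(chroma, switch_penalty=0.5):
    # chroma: (frames, 12). Frame scores are cosine correlations with the
    # templates; a DP pass penalizes every chord change by a constant cost
    T = templates()
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    C = chroma / (np.linalg.norm(chroma, axis=1, keepdims=True) + 1e-9)
    score = C @ T.T                                   # (frames, 24)
    n, k = score.shape
    dp = score[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        best_prev = dp.max()
        best_idx = int(dp.argmax())
        stay_better = dp >= best_prev - switch_penalty
        back[t] = np.where(stay_better, np.arange(k), best_idx)
        dp = np.maximum(dp, best_prev - switch_penalty) + score[t]
    path = [int(dp.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [NAMES[i] for i in path[::-1]]
```

In the paper the vertices are chord/key pairs and the edge weights come from a music-theoretic distance, which is what enables the mutual chord-and-key estimation.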