773 results for Generative Music


Relevance: 100.00%

Abstract:

iGrooving is a generative music mobile application designed specifically for runners. The application’s foundation is a step counter implemented using the iPhone’s built-in accelerometer. The runner’s steps set the tempo of the performance by mapping each step to trigger a kick-drum sound file. Additionally, different sound files are triggered at specific step counts to generate the musical performance, allowing the runner a level of compositional autonomy. The sonic elements are chosen to promote a meditative aspect of running. iGrooving is conceived as a biofeedback-stimulated musical instrument and an environment for creating generative music processes with everyday technologies, inspiring us to rethink our everyday notions of musical performance as a shared experience. Isolation, dynamic changes, and music generation are detailed to show how iGrooving facilitates novel methods for music composition, performance, and audience participation.
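The step-to-tempo mapping described above can be sketched in a few lines. This is an illustrative reconstruction, not iGrooving’s actual code: the sound-file names and the step-count thresholds in `STEP_TRIGGERS` are invented for the example.

```python
# Sketch of a step-driven generative mapping: step timestamps (seconds) from a
# hypothetical accelerometer step counter set the tempo, and cumulative step
# counts unlock additional sound layers. All names and thresholds are invented.

def tempo_from_steps(step_times):
    """Estimate tempo (BPM) from the mean interval between successive steps."""
    if len(step_times) < 2:
        return None
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval  # one kick-drum trigger per step

# Sound files unlocked at specific cumulative step counts (illustrative).
STEP_TRIGGERS = {100: "pad_layer.wav", 500: "melody_loop.wav", 1000: "drone.wav"}

def triggered_files(step_count):
    """Return the sound files whose step-count thresholds have been reached."""
    return [f for n, f in sorted(STEP_TRIGGERS.items()) if step_count >= n]
```

For example, steps half a second apart yield a 120 BPM pulse, and a runner at 600 steps would hear the first two unlocked layers.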

Relevance: 30.00%

Abstract:

Fado was listed as UNESCO Intangible Cultural Heritage in 2011. This dissertation describes a theoretical model, as well as an automatic system, able to generate instrumental music based on the musics and vocal sounds typically associated with fado’s practice. A description of the phenomenon of fado, its musics, and its vocal sounds is presented, based on ethnographic and historical sources and on empirical data. The data include a digital corpus of musical transcriptions identified as fado and its statistical analysis via music information retrieval techniques. The second part consists of the formulation of a theory and the coding of a symbolic model, as a proof of concept, for the automatic generation of instrumental music based on that in the corpus.
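One common form the corpus-statistics step could take is estimating pitch-transition frequencies across the transcriptions, a standard music information retrieval technique. The sketch below is a stand-in assumption for illustration, not the dissertation’s actual model.

```python
# Sketch of corpus statistics for symbolic generation: count first-order
# pitch-to-pitch transitions (MIDI note numbers) across a corpus of melodies.
# Illustrative only; the dissertation's model and features may differ.
from collections import Counter, defaultdict

def transition_counts(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(counts, pitch):
    """Most frequent successor of a pitch in the corpus, or None if unseen."""
    return counts[pitch].most_common(1)[0][0] if counts[pitch] else None
```

Such counts can then drive generation by sampling successors in proportion to their corpus frequency.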

Relevance: 30.00%

Abstract:

Music is an immensely powerful affective medium that pervades our everyday life. With ever-advancing technology, the reproduction and application of music for emotive and information-transfer purposes has never been more prevalent. In this paper we introduce a rule-based engine for influencing the perceived emotions of music. Based on empirical music psychology, we attempt to formalise the relationship between musical elements and their perceived emotion. We examine modifications to structural aspects of music that allow for a graduated transition between perceived emotive states. This engine is intended to provide music reproduction systems with finer-grained control over this affective medium, so that perceived musical emotion can be influenced with intent. This intent comes from both an external application and the audience. Using a series of affective computing technologies, an audience’s response metrics and attitudes can be incorporated to model this intent. A generative feedback loop is set up between the external application, the influencing process, and the audience’s response, which together shape the modification of musical structure. The effectiveness of our rule system for influencing perceived musical emotion was examined in earlier work, with a small test study providing generally encouraging results.
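A minimal sketch of what such a rule-based mapping could look like, assuming illustrative rules drawn from common music-psychology findings (faster tempo for higher arousal, major mode for positive valence) rather than the authors’ actual rule set:

```python
# Sketch of a rule-based mapping from a target emotion to structural music
# parameters. The rules, ranges, and the +/-40% tempo swing are illustrative
# assumptions, not the engine described in the paper.

def rules_for_emotion(valence, arousal, base_tempo=100):
    """Map a (valence, arousal) target in [-1, 1] to tempo and mode.

    Arousal scales tempo; valence selects mode. A graduated transition
    between emotive states is achieved by interpolating the inputs.
    """
    tempo = base_tempo * (1.0 + 0.4 * arousal)   # higher arousal -> faster
    mode = "major" if valence >= 0 else "minor"  # positive valence -> major
    return {"tempo_bpm": round(tempo), "mode": mode}
```

An external application (or audience feedback) would supply the (valence, arousal) target, and smoothly varying it over time yields the graduated transitions the abstract describes.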

Relevance: 30.00%

Abstract:

We create and study a generative model for Irish traditional music based on Variational Autoencoders and analyze the learned latent space, looking for musically significant correlations in the distributions of the latent codes that would support musical analysis of the data. We train two kinds of models: one on a dataset of Irish folk melodies and one on bars extracted from that dataset, each in five variations of increasing size. We conduct the following experiments: we inspect the latent space of tunes and bars in relation to key, time signature, and estimated harmonic function of bars; we search for links between tunes in a particular style (i.e., "reels") and their positioning in latent space relative to other tunes; and we compute distances between embedded bars in a tune to gain insight into the model's understanding of the similarity between bars. Finally, we show and evaluate generative examples. We find that the learned latent space does not explicitly encode musical information and is thus unusable for musical analysis of the data, while generative results are generally good and not strictly dependent on the musical coherence of the model's internal representation.
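One of the probes described, bar-to-bar distances in latent space, can be sketched as follows. The latent vectors here are hypothetical stand-ins for the outputs of the trained VAE encoder.

```python
# Sketch of the bar-similarity probe: pairwise Euclidean distances between the
# latent codes of a tune's bars, to see which bars the model embeds close
# together. Latent vectors are placeholders for real VAE encoder outputs.
import math

def euclidean(u, v):
    """Euclidean distance between two latent vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def bar_distance_matrix(latents):
    """Pairwise distance matrix over the latent codes of a tune's bars."""
    n = len(latents)
    return [[euclidean(latents[i], latents[j]) for j in range(n)]
            for i in range(n)]
```

Small distances between repeated bars (and large ones between contrasting sections) would indicate that the embedding tracks musical similarity; the abstract reports that this is not cleanly the case.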

Relevance: 20.00%

Abstract:

In this paper, we initially present an algorithm for automatic composition of melodies using chaotic dynamical systems. Afterward, we characterize chaotic music in a comprehensive way as comprising three perspectives: musical discrimination, dynamical influence on musical features, and musical perception. With respect to the first perspective, the coherence between generated chaotic melodies (continuous as well as discrete chaotic melodies) and a set of classical reference melodies is characterized by statistical descriptors and melodic measures. The significant differences among the three types of melodies are determined by discriminant analysis. Regarding the second perspective, the influence of dynamical features of chaotic attractors, e.g., Lyapunov exponent, Hurst coefficient, and correlation dimension, on melodic features is determined by canonical correlation analysis. The last perspective is related to perception of originality, complexity, and degree of melodiousness (Euler's gradus suavitatis) of chaotic and classical melodies by nonparametric statistical tests. (c) 2010 American Institute of Physics. [doi: 10.1063/1.3487516]
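A minimal sketch of chaotic melody generation, assuming the logistic map as the dynamical system and a simple quantisation of its iterates to scale degrees; the paper’s specific systems and pitch mappings may differ.

```python
# Sketch of melody generation from a chaotic dynamical system: iterate the
# logistic map x -> r*x*(1-x) (chaotic for r near 4) and quantise each state
# in [0, 1] to a pitch in a scale. Parameters and scale are illustrative.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, one octave

def logistic_melody(x0=0.123, r=3.99, length=16, scale=C_MAJOR):
    """Generate a melody by mapping logistic-map iterates onto a scale."""
    x, melody = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        melody.append(scale[min(int(x * len(scale)), len(scale) - 1)])
    return melody
```

Varying `r` moves the system between periodic and chaotic regimes, which is one way dynamical features such as the Lyapunov exponent come to influence melodic features.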

Relevance: 20.00%

Abstract:

This text discusses the phonographic segment of religious music in Brazil in its two main manifestations, linked respectively to the Catholic and Protestant traditions. The text offers a brief history of both traditions, as well as a description of their main recording companies and most prominent artists. In its final part, the text presents the strategies that bring together recording companies and independent artists, and reflects on Brazil's independent musical production as a whole.

Relevance: 20.00%

Abstract:

This study compared the effects of live, taped, and no music on agitation and orientation levels of people experiencing posttraumatic amnesia (PTA). Participants (N = 22) were exposed to all 3 conditions, twice over 6 consecutive days. Songs used in the live and taped music conditions were identical and were selected based on participants' own preferred music. Pre- and post-testing was conducted for each condition using the Agitated Behavior Scale (Corrigan, 1989) and the Westmead PTA Scale (Shores, Marosszeky, Sandanam, & Batchelor, 1986). Participants' memory for the music used was also tested and compared with their memory for pictorial material presented in the Westmead PTA Scale. Results indicate that music significantly reduced agitation (p

Relevance: 20.00%

Abstract:

Australia struggles to achieve economic competitiveness, prevent expansion of the trade deficit and develop value-added production despite applications of policy strategies from protectionism to trade liberalisation. This article argues that these problems were emerging at the turn of the century, and that an investigation of music technology manufacturing in the first two decades of this century reveals fundamental problems in the conduct of relevant policy analysis. Analysis has focused on the trade or technology gap which is only symptomatic of an underlying knowledge gap. The article calls for a knowledge policy approach which can allow protection without the negative effects of isolation from global markets and without having to resort to unworkable utopian free-trade dogma. A shift of focus from a 'goods traded' view to a knowledge transaction (or diffusion) perspective is advocated.

Relevance: 20.00%

Abstract:

The present study used a temporal bisection task to investigate whether music affects time estimation differently from a matched auditory neutral stimulus, and whether the emotional valence of the musical stimuli (i.e., sad vs. happy music) modulates this effect. The results showed that, compared to sine wave control music, music presented in a major (happy) or a minor (sad) key shifted the bisection function toward the right, thus increasing the bisection point value (point of subjective equality). This indicates that the duration of a melody is judged shorter than that of a non-melodic control stimulus, thus confirming that "time flies" when we listen to music. Nevertheless, sensitivity to time was similar for all the auditory stimuli. Furthermore, the temporal bisection functions did not differ as a function of musical mode. (C) 2010 Elsevier B.V. All rights reserved.
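The bisection point (point of subjective equality) used above can be estimated from the proportion of "long" responses at each probe duration; a minimal sketch with illustrative data, using linear interpolation at the 50% level:

```python
# Sketch of estimating the bisection point in a temporal bisection task: the
# probe duration (ms) at which "long" responses reach 50%, found by linear
# interpolation between adjacent probes. Data values are illustrative.

def bisection_point(durations, p_long):
    """Interpolate the duration where the proportion of 'long' responses = 0.5.

    `durations` must be sorted ascending; `p_long` holds the matching
    proportions of 'long' responses. Returns None if 0.5 is never crossed.
    """
    pairs = list(zip(durations, p_long))
    for (d0, p0), (d1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            return d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0)
    return None
```

A rightward shift of the bisection function raises this value, meaning a stimulus must last longer before it feels "long", which is how the abstract's finding that melodies are judged shorter is expressed numerically.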