994 results for music perception


Relevance:

100.00%

Publisher:

Abstract:

We have developed a new experimental method for interrogating statistical theories of music perception by implementing these theories as generative music algorithms. We call this method Generation in Context. It differs from most experimental techniques in music perception in that it incorporates aesthetic judgments. Generation in Context is designed to measure percepts for which the musical context is suspected to play an important role; in particular, the method is suited to studying perceptual parameters that are temporally dynamic. We outline a use of this approach to investigate David Temperley’s (2007) probabilistic melody model, and provide some provisional insights into what it reveals about the model. We suggest that Temperley’s model could be improved by dynamically modulating its probability distributions according to the changing musical context.
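
The following is a minimal Python sketch of the kind of generative implementation the method calls for: each successive pitch is sampled from the product of a range profile, a proximity profile, and a key profile, in the spirit of Temperley's model, and the commented line in generate_melody shows one hypothetical way the distributions could be modulated by the unfolding context. All weights and parameters are illustrative assumptions, not Temperley's published values.

import math
import random

# Sketch of a probabilistic melody generator in the spirit of Temperley (2007).
# The key profile and standard deviations below are illustrative, not the
# published model parameters.
MAJOR_KEY_PROFILE = [0.24, 0.01, 0.11, 0.01, 0.13, 0.10,
                     0.01, 0.21, 0.01, 0.08, 0.01, 0.08]

def gaussian(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2)

def next_pitch(prev_pitch, centre, tonic=60, range_sd=7.0, prox_sd=4.0):
    """Sample the next MIDI pitch from the product of three profiles."""
    candidates = list(range(tonic - 24, tonic + 25))
    weights = [gaussian(p, centre, range_sd)          # stay near the melodic centre
               * gaussian(p, prev_pitch, prox_sd)     # prefer small intervals
               * MAJOR_KEY_PROFILE[(p - tonic) % 12]  # prefer in-key pitch classes
               for p in candidates]
    return random.choices(candidates, weights=weights)[0]

def generate_melody(length=16, tonic=60):
    pitches = [tonic]
    centre = float(tonic)
    for _ in range(length - 1):
        pitches.append(next_pitch(pitches[-1], centre, tonic))
        # Hypothetical "dynamic context" modification suggested above: let the
        # melodic centre drift toward recent pitches instead of staying fixed.
        centre = 0.9 * centre + 0.1 * pitches[-1]
    return pitches

print(generate_melody())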

Relevance:

100.00%

Publisher:

Abstract:

Music plays an important role in the daily life of cochlear implant (CI) users, but electrical hearing and speech processing pose challenges for enjoying music. Studies of unilateral CI (UCI) users' music perception have found that these subjects have little difficulty recognizing tempo and rhythm but great difficulty with pitch, interval, and melody. The present study is an initial step towards understanding music perception in bilateral CI (BCI) users. The Munich Music Questionnaire was used to investigate music listening habits and enjoyment in 23 BCI users compared with two control groups: 23 UCI users and 23 normal-hearing (NH) listeners. Bilateral users appeared to have a number of advantages over unilateral users, though their enjoyment of music did not reach the level of NH listeners.

Relevance:

70.00%

Publisher:

Abstract:

This project investigates machine listening and improvisation in interactive music systems, with the goal of improvising musically appropriate accompaniment to an audio stream in real time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation that allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually, the agent’s functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains. In the reception domain I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum-kit, with sufficiently low latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate on complex polyphonic audio. In the analysis domain I present novel beat-tracking and metre-induction algorithms that operate in real time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms. In the generation domain I present a novel improvisatory architecture, drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting. These innovations are combined in the Jambot, a computational agent capable of producing improvised percussive accompaniment to an audio stream in real time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussion of the Jambot and its component algorithms, along with comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
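
As a concrete illustration of the reception domain described above, the sketch below implements a generic spectral-flux onset detector with a running-median threshold in Python. It is a standard textbook technique under assumed parameter values, not the Jambot's actual onset-detection suite, which additionally discriminates kick, snare and hi-hat attacks.

import numpy as np

def spectral_flux_onsets(signal, sr=44100, frame=1024, hop=512, k=1.5):
    """Return onset times (in seconds) where the half-wave rectified spectral
    flux exceeds a running-median threshold and forms a local peak."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    prev_mag = np.zeros(frame // 2 + 1)
    flux = np.zeros(n_frames)
    for i in range(n_frames):
        chunk = signal[i * hop:i * hop + frame] * window
        mag = np.abs(np.fft.rfft(chunk))
        flux[i] = np.sum(np.maximum(mag - prev_mag, 0.0))  # only energy increases count
        prev_mag = mag
    onsets = []
    for i in range(1, n_frames - 1):
        threshold = k * np.median(flux[max(0, i - 8):i + 1])
        if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]:
            onsets.append(i * hop / sr)
    return onsets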

Relevance:

70.00%

Publisher:

Abstract:

Listening to music involves a widely distributed bilateral network of brain regions that controls many auditory perceptual, cognitive, emotional, and motor functions. Exposure to music can also temporarily improve mood, reduce stress, and enhance cognitive performance, as well as promote neural plasticity. However, very little is currently known about the relationship between music perception and auditory and cognitive processes, or about the potential therapeutic effects of listening to music after neural damage. This thesis explores the interplay of auditory, cognitive, and emotional factors related to music processing after a middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 MCA stroke patients were randomly assigned to a music listening group, an audio book listening group, or a control group. All patients underwent neuropsychological assessments, magnetoencephalography (MEG) measurements, and magnetic resonance imaging (MRI) scans repeatedly during a six-month post-stroke period. The results revealed that amusia, a deficit of music perception, is common and persistent after a stroke, especially if the stroke affects the frontal and temporal brain areas of the right hemisphere. Amusia is clearly associated with deficits both in auditory encoding, as indicated by the magnetic mismatch negativity (MMNm) response, and in domain-general cognitive processes, such as attention, working memory, and executive functions. Furthermore, both music and audio book listening increased the MMNm, whereas only music listening improved the recovery of verbal memory and focused attention and prevented a depressed and confused mood during the first post-stroke months. These findings indicate a close link between musical, auditory, and cognitive processes in the brain. Importantly, they also encourage the use of music listening as a rehabilitative leisure activity after a stroke and suggest that the auditory environment can induce long-term plastic changes in the recovering brain.

Relevance:

70.00%

Publisher:

Abstract:

In a recent study, we reported that the accurate perception of beat structure in music ('perception of musical meter') accounted for over 40% of the variance in single word reading in children with and without dyslexia (Huss et al., 2011). Performance in the musical task was most strongly associated with the auditory processing of rise time, even though beat structure was varied by manipulating the duration of the musical notes.

Relevance:

70.00%

Publisher:

Abstract:

The effect of restructuring the form of three unfamiliar pop/rock songs was investigated in two experiments. In the first experiment, listeners' judgements of the likely location of sections of novel popular songs were explored by asking participants to place the eight sections of each song (Intro - Verse 1 - Chorus 1 - Verse 2 - Chorus 2 - Bridge (solo) - Chorus 3 - Extro) into the locations where they thought each was most likely to occur within the song. Results revealed that participants were able to place the sections in approximately the right locations with some accuracy, though they were unable to differentiate between choruses. In Experiment 2, three versions of each song were presented: intact (original form), medium restructured (sections in a moderately changed order), and highly restructured (sections in a more severely changed order). The results show that listeners' judgements of predictability and liking were largely uninfluenced by the restructuring of the songs, in line with findings for classical music. Moment-by-moment liking judgements of the songs changed with repeated exposure, though the trend was downwards rather than upwards. Detailed analysis of moment-by-moment judgements at the ends and beginnings of sections showed that listeners were able to respond quickly to intact songs, but not to restructured songs. The results suggest that concatenism prevails in listening to popular song at the expense of attention to larger structural features.

Relevance:

70.00%

Publisher:

Abstract:

Most cochlear implant (CI) users perceive music poorly. Little is known, however, about the musical enjoyment CI users experience. The author examined possible relationships between musical enjoyment and performance on music perception tasks through the use of 1) multiple musical tests and 2) two groups of listeners: normal-hearing (NH) listeners presented with a CI simulation and actual CI users. The two groups' performances were compared to determine whether NH participants listening to music via CI-simulation software are a good model of actual CI users' music perception.

Relevance:

60.00%

Publisher:

Abstract:

Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions, including the 2003 International Computer Music Conference (Singapore), where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit the perceptual ambiguities of, de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music, and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically, the work investigates Paul Virilio’s theory that technology ‘collapses space’ through the relationship of technology to speed. Technically, this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997) by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.
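
The shift from ‘causal’ towards abstract listening can be illustrated with a simple frequency-domain treatment. The Python sketch below is a generic example of my own, not the processing network used in Sleeper: it keeps each analysis frame's magnitude spectrum but randomises its phases, so a sound retains its spectral colour while losing the transient cues that identify its physical cause.

import numpy as np

def phase_randomise(signal, frame=2048, hop=512):
    """Overlap-add STFT resynthesis with random phases, obscuring the
    causal identity of the source while preserving its spectral envelope."""
    window = np.hanning(frame)
    out = np.zeros(len(signal) + frame)
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.fft.rfft(signal[start:start + frame] * window)
        random_phase = np.exp(1j * np.random.uniform(0, 2 * np.pi, len(spectrum)))
        resynth = np.fft.irfft(np.abs(spectrum) * random_phase, n=frame)
        out[start:start + frame] += resynth * window
    return out[:len(signal)]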

Relevance:

60.00%

Publisher:

Abstract:

This paper describes the use of the Chimera Architecture as the basis for a generative rhythmic improvisation system that is intended for use in ensemble contexts. This interactive software system learns in real time based on an audio input from live performers. The paper describes the components of the Chimera Architecture including a novel analysis engine that uses prediction to robustly assess the rhythmic salience of the input stream. Analytical results are stored in a hierarchical structure that includes multiple scenarios which allow abstracted and alternate interpretations of the current metrical context. The system draws upon this Chimera Architecture when generating a musical response. The generated rhythms are intended to have a particular ambiguity in relation to the music performance by other members of the ensemble. Ambiguity is controlled through alternate interpretations of the Chimera. We describe an implementation of the Chimera Architecture that focuses on rhythmic material, and present and discuss initial experimental results of the software system playing along with recordings of a live performance.
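
A hypothetical Python sketch of the central idea of keeping several concurrent metrical interpretations and drawing on lower-ranked alternates to control ambiguity. The class and field names are assumptions for illustration, not the published implementation.

from dataclasses import dataclass, field

@dataclass
class MetricalScenario:
    period: float       # estimated beat period in seconds
    phase: float        # offset of the first beat
    confidence: float   # running score assigned by the analysis engine

@dataclass
class ChimeraState:
    scenarios: list = field(default_factory=list)

    def update(self, onset_time):
        """Reward scenarios whose predicted beats fall near a new onset."""
        for s in self.scenarios:
            error = (onset_time - s.phase) % s.period
            error = min(error, s.period - error)   # distance to nearest predicted beat
            s.confidence = 0.95 * s.confidence + (1.0 if error < 0.05 else 0.0)

    def interpretation(self, ambiguity=0.0):
        """ambiguity=0 returns the most confident scenario; values nearer 1
        deliberately select lower-ranked alternates."""
        ranked = sorted(self.scenarios, key=lambda s: s.confidence, reverse=True)
        return ranked[min(int(ambiguity * (len(ranked) - 1)), len(ranked) - 1)]

state = ChimeraState([MetricalScenario(0.50, 0.00, 1.0),
                      MetricalScenario(0.75, 0.25, 1.0)])
state.update(onset_time=1.0)
print(state.interpretation(ambiguity=0.5).period)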

Relevance:

60.00%

Publisher:

Abstract:

This paper discusses a method, Generation in Context, for interrogating theories of music analysis and music perception. Given an analytic theory, the method consists of creating a generative process that implements the theory in reverse: instead of using the theory to create analyses from scores, the theory is used to generate scores from analyses. Subjective evaluation of the quality of the musical output provides a mechanism for testing the theory in a contextually robust fashion. The method is exploratory, meaning that in addition to testing extant theories it provides a general mechanism for generating new theoretical insights. We outline our initial explorations in the use of generative processes for music research, and discuss how generative processes provide evidence about the veracity of theories of how music is experienced, offer insights into how those theories may be improved, and, concurrently, supply new techniques for music creation. We conclude that Generation in Context will help reveal new perspectives on our understanding of music.
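
A toy Python illustration of running an analytic theory 'in reverse', using an example of my own rather than the authors' system: a harmonic analysis, here just a list of Roman-numeral chord labels, is turned back into a very plain score whose musical quality listeners could then judge.

import random

# Pitch classes of three triads in a C major context (illustrative only).
CHORD_TONES = {
    "I": [0, 4, 7],
    "IV": [5, 9, 0],
    "V": [7, 11, 2],
}

def generate_bar(label, beats=4, octave=60):
    """Fill one bar with chord tones of the analysed harmony."""
    return [octave + random.choice(CHORD_TONES[label]) for _ in range(beats)]

def generate_score(analysis):
    """Analysis in, score out: the reverse of the usual analytic direction."""
    return [generate_bar(label) for label in analysis]

# A four-bar "analysis" turned back into MIDI pitches, one list per bar:
print(generate_score(["I", "IV", "V", "I"]))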

Relevance:

60.00%

Publisher:

Abstract:

When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion, which modifies features at the levels of both score and performance in real time. CMERS was evaluated in two rounds of perceptual testing. In Experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three musical works were used, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work's original emotion.
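
A Python sketch of what a rule system of this general kind can look like: each target emotion maps to a score-level adjustment (mode) and performance-level adjustments (tempo, dynamics, articulation) applied note by note. The rule values and the crude major-to-minor mapping are illustrative assumptions, not the published CMERS rules.

# Illustrative emotion-to-feature rules; the values are assumptions, not CMERS's.
RULES = {
    "happy":  {"mode": "major", "tempo": 1.15, "dynamics": +6, "articulation": 0.70},
    "sad":    {"mode": "minor", "tempo": 0.80, "dynamics": -6, "articulation": 1.00},
    "angry":  {"mode": "minor", "tempo": 1.20, "dynamics": +9, "articulation": 0.50},
    "tender": {"mode": "major", "tempo": 0.90, "dynamics": -3, "articulation": 0.95},
}

def apply_rules(note, emotion):
    """Return a modified (MIDI pitch, duration in beats, velocity) tuple."""
    rule = RULES[emotion]
    pitch, duration, velocity = note
    if rule["mode"] == "minor" and pitch % 12 in (4, 9, 11):
        pitch -= 1                          # lower scale degrees 3, 6, 7 (C major assumed)
    duration = duration / rule["tempo"] * rule["articulation"]
    velocity = max(1, min(127, velocity + rule["dynamics"]))
    return (pitch, duration, velocity)

print(apply_rules((64, 0.5, 80), "sad"))    # E4 becomes E-flat4, slower and quieter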

Relevance:

60.00%

Publisher:

Abstract:

In a musical context, the pitch of sounds is encoded according to domain-general principles not confined to music or even to audition overall but common to other perceptual and cognitive processes (such as multiple pattern encoding and feature integration), and to domain-specific and culture-specific properties related to a particular musical system only (such as the pitch steps of the Western tonal system). The studies included in this thesis shed light on the processing stages during which pitch encoding occurs on the basis of both domain-general and music-specific properties, and elucidate the putative brain mechanisms underlying pitch-related music perception. Study I showed, in subjects without formal musical education, that the pitch and timbre of multiple sounds are integrated as unified object representations in sensory memory before attentional intervention. Similarly, multiple pattern pitches are simultaneously maintained in non-musicians' sensory memory (Study II). These findings demonstrate the degree of sophistication of pitch processing at the sensory memory stage, requiring neither attention nor any special expertise of the subjects. Furthermore, music- and culture-specific properties, such as the pitch steps of the equal-tempered musical scale, are automatically discriminated in sensory memory even by subjects without formal musical education (Studies III and IV). The cognitive processing of pitch according to culture-specific musical-scale schemata hence occurs as early as at the sensory-memory stage of pitch analysis. Exposure and cortical plasticity seem to be involved in musical pitch encoding. For instance, after only one hour of laboratory training, the neural representations of pitch in the auditory cortex are altered (Study V). However, faulty brain mechanisms for attentive processing of fine-grained pitch steps lead to inborn deficits in music perception and recognition such as those encountered in congenital amusia (Study VI). These findings suggest that predispositions for exact pitch-step discrimination together with long-term exposure to music govern the acquisition of the automatized schematic knowledge of the music of a particular culture that even non-musicians possess.

Relevance:

60.00%

Publisher:

Abstract:

Congenital amusia is a neurogenetic disorder characterized by an inability to acquire basic musical abilities, such as normal music perception and music recognition, despite normal hearing, language development, and intelligence (Ayotte, Peretz & Hyde, 2002). Recently, a familial aggregation study showed that 39% of the family members of amusic individuals exhibit the disorder, compared with 3% of the family members of normal individuals (Peretz et al., 2007). This finding is interesting because it demonstrates a prevalence of congenital amusia in the normal population. Kalmus and Fry (1980) estimated this prevalence at 4%, using the Distorted Tunes Test (DTT). However, that test has methodological and statistical shortcomings, such as a pronounced ceiling effect and the use of folk melodies, which disadvantages amusic individuals because they cannot assimilate these melodies correctly. The present study aimed to re-evaluate the prevalence of congenital amusia using an online test recently validated by Peretz and colleagues (2008). Eleven hundred participants from a homogeneous sample completed the online test. The results show an overall prevalence of 11.6% and four distinct performance profiles: pitch deafness (1.5%), pitch memory amusia (3.2%), pitch perception amusia (3.3%), and beat deafness (3.3%). The variability of the results obtained with the online test demonstrates the existence of four types of amusia, each with its own prevalence, indicating a heterogeneity in the expression of congenital amusia that will need to be explored in future work.

Relevance:

60.00%

Publisher:

Abstract:

The full version of this thesis is available for individual consultation only at the Music Library of the Université de Montréal (http://www.bib.umontreal.ca/MU).

Relevance:

60.00%

Publisher:

Abstract:

English version available from the department; thesis completed jointly with the School of Communication Sciences of McGill University (Drs. K. Steinhauer and J.E. Drury).