982 results for Musical perception
Abstract:
Graduate Program in Music - IA
Abstract:
The full version of this thesis is available only for on-site consultation at the Music Library of the Université de Montréal (www.bib.umontreal.ca/MU).
Abstract:
Music is universal, and singing is the most widely accessible means of musical expression. Children sing spontaneously between the ages of one and one and a half (Ostwald, 1973). Yet the development of this ability has received very little attention in neuropsychology, despite the fact that it represents an immense source of information about how the brain processes music. The studies presented here aimed to better understand the normal and pathological development of perceptual and vocal functions. First, a study of normal singing in children aged 6 to 11 is presented, in which the singing development of 79 school-aged children is analysed systematically and objectively. This study focuses in particular on the influence of age and of other factors (gender, musical perception, the presence of lyrics, and the presence of vocal accompaniment) on singing quality. The young participants sang a familiar song under different conditions: with and without lyrics, after a model, and in unison with that model. Following acoustic analysis of the performances, several melodic and rhythmic variables were computed, such as the number of interval errors, the number of contour errors, the size of interval deviations, the number of rhythmic errors, the size of temporal deviations, and tempo. The results show that certain basic singing abilities are still developing after age 6. Rhythm, however, is mastered earlier, and school-aged children sometimes outperform adults rhythmically. Moreover, children find it harder to sing with lyrics than on a single syllable, and singing in unison is a greater challenge than singing after a model.
Furthermore, the numbers of contour, interval, and rhythmic errors, as well as the size of rhythmic errors, are related to our measures of musical perception. The second study presents the first documented case of congenital amusia in a child. It involves the analysis of the musical perception and singing of a 10-year-old girl referred to us by her choir director. Severe deficits were found, and a diagnosis of congenital amusia was made. Her results on tests of musical perception indicate major difficulties both in discriminating melodic and rhythmic differences and in memory for melodies. The girl shows clear deficits in perceiving fine pitch differences. Analysis of event-related brain potentials suggests that her deficits arise early in auditory processing, as shown by the absence of a mismatch negativity (MMN). Her singing is also impaired, particularly with respect to the number of interval errors and their size. In conclusion, our studies show that singing abilities are still developing during the first years of schooling, and that this development can be hindered by a deficit specific to musical perception. For the first time, congenital amusia is described in a child.
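Concretely, the melodic measures used in this abstract (interval errors, contour errors, interval deviation size) can be sketched as comparisons between a sung pitch sequence and the target melody, both in semitones. This is a minimal illustrative reconstruction, not the authors' analysis pipeline; the one-semitone error threshold and the helper names are assumptions.

```python
def intervals(pitches):
    """Successive pitch intervals in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def sign(x):
    """-1, 0, or +1: direction of an interval (down, same, up)."""
    return (x > 0) - (x < 0)

def melodic_errors(sung, target, threshold=1.0):
    """Compare a sung rendition with the target melody (both lists of
    pitches in semitones).  Returns the number of interval errors
    (interval off by more than `threshold` semitones), the number of
    contour errors (interval direction differs), and the mean size of
    interval deviations."""
    si, ti = intervals(sung), intervals(target)
    interval_errors = sum(abs(s - t) > threshold for s, t in zip(si, ti))
    contour_errors = sum(sign(s) != sign(t) for s, t in zip(si, ti))
    deviation = sum(abs(s - t) for s, t in zip(si, ti)) / len(ti)
    return interval_errors, contour_errors, deviation
```

Rhythmic errors and temporal deviations could be counted the same way over inter-onset intervals instead of pitch intervals.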
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The aim of the present work is a historical survey of Gestalt trends in psychological research between the late 19th century and the first half of the 20th, with particular reference to sound and musical perception, through a reconsideration of the experimental and theoretical literature. Ernst Mach and Christian von Ehrenfels gave rise to the debate about Gestaltqualität, which grew notably thanks to the 'Graz School' (Alexius Meinong, Stephan Witasek, Anton Faist, Vittorio Benussi), where the object theory and the production theory of perception were worked out. Carl Stumpf's research on Tonpsychologie and Franz Brentano's tradition of 'act psychology' were directly involved in this debate, in opposition to Wilhelm Wundt's conception of the discipline; this came clearly to light in Stumpf's controversy with Carl Lorenz and Wundt over Tondistanzen. Stumpf's concept of Verschmelzung and his views on consonance and concordance led him into disputes with Theodor Lipps and Felix Krueger that lasted more than two decades. Stumpf was also responsible for the education of a new generation of scholars during his teaching at the Berlin University: his pupils Wolfgang Köhler, Kurt Koffka and Max Wertheimer established the so-called 'Berlin School' and promoted the official Gestalt theory from the 1910s onward. From 1922 until 1938 they founded and led, together with other distinguished scientists, the «Psychologische Forschung», a scientific journal in which the 'Gestalt laws' and many other acoustical studies on various themes (such as sound localization, successive comparison, and phonetic phenomena) were presented. During the 1920s Erich Moritz von Hornbostel made important contributions towards the definition of an organic Tonsystem in which sound phenomena could find an adequate arrangement. The last section of the work describes Albert Wellek's studies, Kurt Huber's vowel research, and aspects of melody perception, apparent movement and the phi-phenomenon in the acoustical field.
The work also contains some considerations on the relationships among tone psychology, musical psychology, Gestalt psychology, musical aesthetics and musical theory. Finally, the way Gestalt psychology changed earlier interpretations is exemplified by the decisive renewal of perception theory, the abandonment of the Konstanzannahme, and repercussions on the theory of meaning as organization and on feelings in musical experience.
Abstract:
This article adopts an ecological view of digital musical interactions, first considering the relationship between performers and digital systems, and then spectators' perception of these interactions. We provide evidence that the relationships between performers and digital music systems are not necessarily instrumental in the same way as they are with acoustic systems, nor should they always strive to be. Furthermore, we report results of a study suggesting that spectators may not perceive such interactions in the same way as performances with musical instruments. We present implications for the design of digital musical interactions, suggesting that designers should embrace the reality that digital systems are malleable and dynamic, and may engage performers and spectators in different modalities, sometimes simultaneously.
Abstract:
Introduction: Rhythm organises musical events into patterns and forms, and rhythm perception in music is usually studied by using metrical tasks. Metrical structure also plays an organisational function in the phonology of language, via speech prosody, and there is evidence for rhythmic perceptual difficulties in developmental dyslexia. Here we investigate the hypothesis that the accurate perception of musical metrical structure is related to basic auditory perception of rise time, and also to phonological and literacy development in children. Methods: A battery of behavioural tasks was devised to explore relations between musical metrical perception, auditory perception of amplitude envelope structure, phonological awareness (PA) and reading in a sample of 64 typically-developing children and children with developmental dyslexia. Results: We show that individual differences in the perception of amplitude envelope rise time are linked to musical metrical sensitivity, and that musical metrical sensitivity predicts PA and reading development, accounting for over 60% of variance in reading along with age and I.Q. Even the simplest metrical task, based on a duple metrical structure, was performed significantly more poorly by the children with dyslexia. Conclusions: The accurate perception of metrical structure may be critical for phonological development and consequently for the development of literacy. Difficulties in metrical processing are associated with basic auditory rise time processing difficulties, suggesting a primary sensory impairment in developmental dyslexia in tracking the lower-frequency modulations in the speech envelope.
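Amplitude envelope rise time, the auditory cue at the centre of these results, can be estimated directly from a waveform. The sketch below synthesizes a tone with a linear onset ramp and measures the 10%-90% rise time of a crude rectified, block-maximum envelope; the ramp shape, window size, and thresholds are illustrative assumptions, not the study's stimuli or procedure.

```python
import numpy as np

def tone_with_rise(rise_ms, dur_ms=300, freq=440.0, sr=44100):
    """Sine tone whose amplitude ramps linearly over rise_ms."""
    n = int(sr * dur_ms / 1000)
    t = np.arange(n) / sr
    env = np.minimum(t / (rise_ms / 1000), 1.0)
    return env * np.sin(2 * np.pi * freq * t), sr

def rise_time_ms(signal, sr, lo=0.1, hi=0.9):
    """10%-90% rise time of the amplitude envelope, in ms."""
    rectified = np.abs(signal)
    # crude envelope: running maximum over 5 ms blocks
    win = int(sr * 0.005)
    env = np.array([rectified[i:i + win].max()
                    for i in range(0, len(rectified), win)])
    peak = env.max()
    t_lo = np.argmax(env >= lo * peak) * win / sr
    t_hi = np.argmax(env >= hi * peak) * win / sr
    return (t_hi - t_lo) * 1000
```

With a 100 ms linear ramp, the 10%-90% measure recovers roughly 80 ms, as expected, to within the 5 ms block resolution.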
Abstract:
In a recent study, we reported that the accurate perception of beat structure in music ('perception of musical meter') accounted for over 40% of the variance in single word reading in children with and without dyslexia (Huss et al., 2011). Performance in the musical task was most strongly associated with the auditory processing of rise time, even though beat structure was varied by manipulating the duration of the musical notes.
Abstract:
This thesis explores the possibilities of spatial hearing in relation to sound perception, and presents three acousmatic compositions based on a musical aesthetic that emphasizes this relation in musical discourse. The first important characteristic of these compositions is the exclusive use of sine waves and other time-invariant sound signals. Even though these types of sound signals present no variations in time, it is possible to perceive pitch, loudness, and tone colour variations as soon as they move in space, due to the acoustic processes involved in spatial hearing. To emphasize the perception of such variations, this thesis proposes dividing a tone into multiple sound units and spreading them in space using several loudspeakers arranged around the listener. In addition to the perception of sound-attribute variations, it is also possible to create rhythm and texture variations that depend on how the sound units are arranged in space. This strategy makes it possible to overcome the so-called "sound surrogacy" implicit in acousmatic music, as cause-effect relations can be established between sound movement and the perception of sound-attribute, rhythm, and texture variations. Another important consequence of combining sound fragmentation with sound spatialization is the possibility of producing diffuse sound fields independently of the level of reverberation of the room, and of creating sound spaces with a certain spatial depth without using any kind of artificial delay or reverberation.
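The fragmentation strategy described here, dividing a tone into short units and distributing them across loudspeakers, can be sketched as slicing a sine wave into grains and routing each grain to one channel of a multichannel buffer. The grain length, channel count, and round-robin routing below are illustrative assumptions, not the thesis's actual spatialization scheme.

```python
import numpy as np

def fragment_tone(freq=440.0, dur=2.0, grain_ms=50, n_speakers=8, sr=44100):
    """Split a sine tone into grains and route them round-robin
    across n_speakers channels (rows = loudspeaker channels)."""
    n = int(sr * dur)
    tone = np.sin(2 * np.pi * freq * np.arange(n) / sr)
    grain = int(sr * grain_ms / 1000)
    out = np.zeros((n_speakers, n))
    for k, start in enumerate(range(0, n, grain)):
        ch = k % n_speakers  # next loudspeaker around the circle
        out[ch, start:start + grain] = tone[start:start + grain]
    return out
```

Summing the channels reconstructs the original tone exactly: fragmentation redistributes the signal in space without altering its content, which is what lets spatial hearing, rather than the signal itself, produce the perceived variations.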
Abstract:
Neuropsychological studies have suggested that imagery processes may be mediated by neuronal mechanisms similar to those used in perception. To test this hypothesis, and to explore the neural basis for song imagery, 12 normal subjects were scanned using the water bolus method to measure cerebral blood flow (CBF) during the performance of three tasks. In the control condition subjects saw pairs of words on each trial and judged which word was longer. In the perceptual condition subjects also viewed pairs of words, this time drawn from a familiar song; simultaneously they heard the corresponding song, and their task was to judge the change in pitch of the two cued words within the song. In the imagery condition, subjects performed precisely the same judgment as in the perceptual condition, but with no auditory input. Thus, to perform the imagery task correctly an internal auditory representation must be accessed. Paired-image subtraction of the resulting pattern of CBF, together with matched MRI for anatomical localization, revealed that both perceptual and imagery tasks produced similar patterns of CBF changes, as compared to the control condition, in keeping with the hypothesis. More specifically, both perceiving and imagining songs are associated with bilateral neuronal activity in the secondary auditory cortices, suggesting that processes within these regions underlie the phenomenological impression of imagined sounds. Other CBF foci elicited in both tasks include areas in the left and right frontal lobes and in the left parietal lobe, as well as the supplementary motor area. This latter region implicates covert vocalization as one component of musical imagery. Direct comparison of imagery and perceptual tasks revealed CBF increases in the inferior frontal polar cortex and right thalamus. We speculate that this network of regions may be specifically associated with retrieval and/or generation of auditory information from memory.
Abstract:
Drawing from ethnographic, empirical, and historical/cultural perspectives, we examine the extent to which visual aspects of music contribute to the communication that takes place between performers and their listeners. First, we introduce a framework for understanding how media and genres shape aural and visual experiences of music. Second, we present case studies of two performances, and describe the relation between visual and aural aspects of performance. Third, we report empirical evidence that visual aspects of performance reliably influence perceptions of musical structure (pitch-related features) and affective interpretations of music. Finally, we trace new and old media trajectories of aural and visual dimensions of music, and highlight how our conceptions, perceptions and appreciation of music are intertwined with technological innovation and media deployment strategies.
Abstract:
This paper discusses a method, Generation in Context, for interrogating theories of music analysis and music perception. Given an analytic theory, the method consists of creating a generative process that implements the theory in reverse. Instead of using the theory to create analyses from scores, the theory is used to generate scores from analyses. Subjective evaluation of the quality of the musical output provides a mechanism for testing the theory in a contextually robust fashion. The method is exploratory, meaning that in addition to testing extant theories it provides a general mechanism for generating new theoretical insights. We outline our initial explorations in the use of generative processes for music research, and we discuss how generative processes provide evidence as to the veracity of theories about how music is experienced, with insights into how these theories may be improved and, concurrently, provide new techniques for music creation. We conclude that Generation in Context will help reveal new perspectives on our understanding of music.
Abstract:
When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion that modifies features at the levels of score and performance in real time. CMERS's performance was evaluated in two rounds of perceptual testing. In Experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three musical works were used, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work's original emotion.
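A rule system of this kind can be pictured as a mapping from a target emotion to signed adjustments of score- and performance-level features. The rule table below is a toy illustration loosely inspired by common findings in the music-emotion literature (e.g. happy tends toward major mode and faster tempo); it is not CMERS's actual rule set, and the specific values are invented for the example.

```python
# Toy emotion -> feature rule table; tempo/loudness values are
# relative adjustments (fractions), invented for illustration.
RULES = {
    "happy":  {"mode": "major", "tempo": +0.15, "loudness": +0.10, "articulation": "staccato"},
    "sad":    {"mode": "minor", "tempo": -0.20, "loudness": -0.15, "articulation": "legato"},
    "angry":  {"mode": "minor", "tempo": +0.20, "loudness": +0.20, "articulation": "staccato"},
    "tender": {"mode": "major", "tempo": -0.10, "loudness": -0.10, "articulation": "legato"},
}

def apply_rules(performance, emotion):
    """Return a modified copy of a performance-parameter dict with the
    target emotion's feature adjustments applied."""
    rules = RULES[emotion]
    out = dict(performance)
    out["mode"] = rules["mode"]
    out["articulation"] = rules["articulation"]
    out["tempo_bpm"] = round(performance["tempo_bpm"] * (1 + rules["tempo"]))
    out["loudness_db"] = performance["loudness_db"] + 20 * rules["loudness"]
    return out
```

A real-time system would apply such adjustments continuously during playback rather than once per piece, but the score/performance split is the same.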
Abstract:
We have developed a new experimental method for interrogating statistical theories of music perception by implementing these theories as generative music algorithms. We call this method Generation in Context. This method differs from most experimental techniques in music perception in that it incorporates aesthetic judgments. Generation in Context is designed to measure percepts for which the musical context is suspected to play an important role. In particular, the method is suitable for the study of perceptual parameters which are temporally dynamic. We outline a use of this approach to investigate David Temperley's (2007) probabilistic melody model, and provide some provisional insights as to what is revealed about the model. We suggest that Temperley's model could be improved by dynamically modulating the probability distributions according to the changing musical context.
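The suggested improvement, modulating the model's probability distributions with the evolving context, can be sketched as a pitch sampler whose proximity profile is re-centred on each newly generated note. This is a simplified stand-in for a probabilistic melody model of Temperley's kind, not a faithful implementation; the Gaussian profile, pitch range, and parameter values are assumptions.

```python
import math
import random

def proximity_weights(prev_pitch, pitches, sigma=2.0):
    """Gaussian-shaped weights favouring pitches near the previous note."""
    return [math.exp(-((p - prev_pitch) ** 2) / (2 * sigma ** 2))
            for p in pitches]

def generate_melody(start=60, length=8, lo=55, hi=67, seed=1):
    """Sample a melody whose pitch distribution is re-centred on each
    new note, i.e. modulated by the changing musical context."""
    rng = random.Random(seed)
    pitches = list(range(lo, hi + 1))
    melody = [start]
    for _ in range(length - 1):
        weights = proximity_weights(melody[-1], pitches)
        melody.append(rng.choices(pitches, weights=weights)[0])
    return melody
```

In the static version of such a model the weights would be computed once from the starting pitch; re-computing them from the running context is the dynamic modulation proposed above.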
Abstract:
Gesture in performance is widely acknowledged in the literature as an important element in making a performance expressive and meaningful, and the body has been shown to play an important role in the production and perception of vocal performance in particular. This paper examines the role of gesture in creative works that seek to extend vocal performance via technology. A creative work for vocal performer, laptop computer and a human-computer interface called the eMic (Extended Microphone Stand Interface controller) is presented as a case study to explore the relationships between movement, voice production, and musical expression. The eMic is an interface for live vocal performance that allows the singer's gestures and interactions with a sensor-based microphone stand to be captured and mapped to musical parameters. The creative work discussed in this paper presents a new compositional approach for the eMic: movement serves as the starting point for the composition, and choreographed gesture forms the basis for musical structures. By foregrounding the body and movement in the creative process, the aim is to create a more visually engaging performance in which the performer can use the body more effectively to express their musical objectives.
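Mapping captured gestures to musical parameters, as the eMic does, typically reduces to scaling a sensor reading into a parameter range. The sketch below shows the standard linear mapping with clamping; the sensor range and the target parameter are hypothetical, since the eMic's actual sensors and mappings are not specified here.

```python
def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into a musical parameter range,
    clamping out-of-range readings to the ends of the scale."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)
    return out_lo + t * (out_hi - out_lo)

# e.g. a hypothetical 10-bit tilt sensor (0-1023) controlling
# a vocal-effect depth expressed as a percentage (0-100).
```

More expressive mappings (curved, multi-sensor, or time-dependent) build on the same idea of normalising the raw reading before rescaling it.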