971 results for Musical pitch.
Abstract:
In a musical context, the pitch of sounds is encoded both according to domain-general principles that are not confined to music, or even to audition in general, but are common to other perceptual and cognitive processes (such as multiple pattern encoding and feature integration), and according to domain-specific and culture-specific properties tied to a particular musical system (such as the pitch steps of the Western tonal system). The studies included in this thesis shed light on the processing stages during which pitch encoding occurs on the basis of both domain-general and music-specific properties, and elucidate the putative brain mechanisms underlying pitch-related music perception. Study I showed, in subjects without formal musical education, that the pitch and timbre of multiple sounds are integrated as unified object representations in sensory memory before attentional intervention. Similarly, the pitches of multiple patterns are simultaneously maintained in non-musicians' sensory memory (Study II). These findings demonstrate the sophistication of pitch processing at the sensory-memory stage, which requires neither attention nor any special expertise on the part of the subjects. Furthermore, music- and culture-specific properties, such as the pitch steps of the equal-tempered musical scale, are automatically discriminated in sensory memory even by subjects without formal musical education (Studies III and IV). The cognitive processing of pitch according to culture-specific musical-scale schemata hence occurs as early as the sensory-memory stage of pitch analysis. Exposure and cortical plasticity seem to be involved in musical pitch encoding: after only one hour of laboratory training, the neural representations of pitch in the auditory cortex are altered (Study V). However, faulty brain mechanisms for the attentive processing of fine-grained pitch steps lead to inborn deficits in music perception and recognition, such as those encountered in congenital amusia (Study VI).
These findings suggest that predispositions for exact pitch-step discrimination together with long-term exposure to music govern the acquisition of the automatized schematic knowledge of the music of a particular culture that even non-musicians possess.
Abstract:
While the origins of consonance and dissonance in acoustics, psychoacoustics and physiology have been debated for centuries, their plausible effects on movement synchronization have largely been ignored. The present study addresses this gap by investigating whether, and if so how, consonant and dissonant pitch intervals affect the spatiotemporal properties of regular reciprocal aiming movements. We compared movements synchronized either to consonant or to dissonant sounds, and showed that they were differently influenced by the degree of consonance of the sound presented. Interestingly, the difference persisted after the sound stimulus was removed: performance measured after exposure to consonant sounds was more stable and accurate, with a higher percentage of information/movement coupling (tau-coupling) and a higher degree of movement circularity than performance measured after exposure to dissonant sounds. We infer that the neural resonance representing consonant tones leads to finer perception/action coupling, which in turn may help explain the prevailing preference for these types of tones.
Abstract:
This study introduces a new tool for assessing disorders of music perception and memory in children aged six to eight years. The proposed test battery is an adaptation of the Montreal Battery of Evaluation of Amusia (MBEA) for use with children, regardless of their native language and culture. In Experiment 1, the battery, which assesses the following musical components: scale, contour, interval, rhythm, and incidental memory, was administered to 258 children in Montreal and 91 in Beijing. In Experiment 2, an abbreviated version of the battery was administered to 86 children in Montreal. Both versions proved sensitive to individual differences and to musical training. Performance did not appear to be influenced by learning to read and write, but rather by culture: children whose native language is Mandarin (a tonal language) outperformed their Canadian counterparts on discrimination tasks involving the melodic component. In both groups, children identified as potentially amusic had difficulties mainly, but not exclusively, in perceiving fine pitch variations. The predominance of the melodic-processing deficit was less distinctive with the abbreviated version. Moreover, the results suggest different developmental trajectories for the processing of melody, rhythm, and memory.
Accordingly, the child-adapted version of the MBEA, renamed the Montreal Battery of Evaluation of Musical Potential (MBEMP), is a new tool for identifying musical processing disorders in children, while also allowing examination of the typical and atypical development of musical abilities and their presumed relationship to other cognitive functions.
Abstract:
Also published as thesis (Ph.D.), Columbia University, 1922.
Abstract:
Cover title.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
Extracted from Sammelbände der Internationalen Musikgesellschaft, v. 3.
Abstract:
This paper discusses two pitch detection algorithms (PDAs) for simple audio signals, based on the zero-cross rate (ZCR) and the autocorrelation function (ACF). As is well known, pitch detection methods based on ZCR and ACF are widely used in signal processing. This work examines the features and pitfalls of these methods, as well as some improvements developed to increase their performance. © 2008 IEEE.
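The two approaches the abstract contrasts can be sketched in a few lines. The following is a minimal, hypothetical illustration for a clean sine tone, not the paper's implementation (real signals need windowing, filtering and the peak-picking refinements the paper alludes to; the function names are my own):

```python
import numpy as np

def pitch_zcr(signal, sr):
    """ZCR estimate: a periodic signal crosses zero twice per cycle,
    so f0 ~ crossings / (2 * duration)."""
    signs = np.signbit(signal).astype(int)
    crossings = np.sum(np.abs(np.diff(signs)))  # count sign changes
    return crossings * sr / (2.0 * len(signal))

def pitch_acf(signal, sr, fmin=50.0, fmax=1000.0):
    """ACF estimate: f0 from the lag of the autocorrelation peak
    within the plausible period range [sr/fmax, sr/fmin]."""
    acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag = lo + int(np.argmax(acf[lo:hi]))
    return sr / best_lag

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)   # 1 s, 440 Hz test tone
f_zcr = pitch_zcr(tone, sr)
f_acf = pitch_acf(tone[:2048], sr)     # short frame keeps the O(n^2) ACF cheap
```

Both estimators recover roughly 440 Hz here; their characteristic weaknesses (ZCR's sensitivity to noise and harmonics, ACF's lag-quantized frequency resolution) are exactly the kinds of problems the paper's improvements target.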
Abstract:
We previously observed that mental manipulation of the pitch level or temporal organization of melodies results in functional activation in the human intraparietal sulcus (IPS), a region also associated with visuospatial transformation and numerical calculation. Two outstanding questions about these musical transformations are whether pitch and time depend on separate or common processing in IPS, and whether IPS recruitment in melodic tasks varies depending upon the degree of transformation required (as it does in mental rotation). In the present study we sought to answer these questions by applying functional magnetic resonance imaging while musicians performed closely matched mental transposition (pitch transformation) and melody reversal (temporal transformation) tasks. A voxel-wise conjunction analysis showed that in individual subjects, both tasks activated overlapping regions in bilateral IPS, suggesting that a common neural substrate subserves both types of mental transformation. Varying the magnitude of mental pitch transposition resulted in variation of IPS BOLD signal in correlation with the musical key-distance of the transposition, but not with the pitch distance, indicating that the cognitive metric relevant for this type of operation is an abstract one, well described by music-theoretic concepts. These findings support a general role for the IPS in systematically transforming auditory stimulus representations in a nonspatial context. (C) 2013 Elsevier Inc. All rights reserved.
Abstract:
Pitch discrimination skills are important for general musicianship. The ability to name musical notes, or to orally produce any named note, without the benefit of a known reference is called Absolute Pitch (AP) and is comparatively rare. Relative Pitch (RP) is the ability to name notes when a known reference is available. AP has historically been regarded as innate. This paper will examine the notion that pitch discrimination skill is based on knowledge constructed through a suite of experiences; that is, it is learnt. In particular, it will be argued that early experiences promote the development of AP. Second, it will argue that AP and RP represent different types of knowledge, and that this knowledge emerges from different experiences. AP is a unique research phenomenon because it spans the fields of cognition and perception, in that it links verbal labels with physiological sensations, and because of its rarity. It may provide a vantage point for investigating the nature/nurture of musicianship; expertise; knowledge structure development; and the role of knowledge in perception. The study of AP may inform educational practice and curriculum design, both in music and cross-curricular. This paper will report an initial investigation into the similarities and differences between the musical experiences of AP possessors and the manifestation of their AP skill. Interview and questionnaire data will be used for the development and proposal of a preliminary model of AP development.
Abstract:
Most advanced musicians are able to identify and label a heard pitch if given an opportunity to compare it to a known reference note. This is called ‘relative pitch’ (RP). A much rarer skill is the ability to identify and label a heard pitch without the need for a reference. This is colloquially referred to as ‘perfect pitch’, but appears in the academic literature as ‘absolute pitch’ (AP). AP is considered by many to be a remarkable skill. As people do not seem able to develop it intentionally, it is generally regarded as innate. It is often seen as a unitary skill, one for which a set of identifiable criteria can distinguish those who possess it from those who do not. However, few studies have interrogated these notions. The present study developed and applied an interactive computer program to map pitch-labelling responses to various tonal stimuli without a known reference tone available to participants. This approach enabled identification of the elements of sound that impact on AP. The pitch-labelling responses of 14 participants with AP were recorded and scored for accuracy. Each participant’s response to the stimuli was unique: labelling accuracy varied across dimensions such as timbre, range and tonality. The diversity of performance between individuals appeared to reflect their personal musical experience histories.
Abstract:
When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion, which modifies features at the levels of both score and performance in real time. CMERS was evaluated in two rounds of perceptual testing. In Experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS: three musical works, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work's original emotion.
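The abstract describes CMERS only at a high level. As a purely hypothetical sketch of the rule-system idea (the emotions are those listed above, but the rules, feature offsets and magnitudes below are invented for illustration, not the published system's):

```python
# Hypothetical emotion-to-feature rule table in the spirit of a music
# emotion rule system: each target emotion maps to performance-level
# feature adjustments. All values here are illustrative assumptions.
RULES = {
    "happy":  {"tempo_scale": 1.15, "loudness_db": +3.0, "articulation": "staccato"},
    "sad":    {"tempo_scale": 0.80, "loudness_db": -4.0, "articulation": "legato"},
    "angry":  {"tempo_scale": 1.10, "loudness_db": +6.0, "articulation": "marcato"},
    "tender": {"tempo_scale": 0.90, "loudness_db": -2.0, "articulation": "legato"},
    "normal": {"tempo_scale": 1.00, "loudness_db":  0.0, "articulation": "normal"},
}

def apply_rules(base_tempo_bpm, base_loudness_db, emotion):
    """Apply one rule set to a baseline performance and return the
    adjusted (tempo, loudness, articulation) parameters."""
    rule = RULES[emotion]
    return (base_tempo_bpm * rule["tempo_scale"],
            base_loudness_db + rule["loudness_db"],
            rule["articulation"])

tempo, loudness, articulation = apply_rules(100.0, 60.0, "happy")
```

A real-time system of this kind would re-apply such a table continuously as the target emotion changes, which is what lets listeners' valence and arousal ratings be shifted independently of a work's original character.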