975 results for musical tune
Abstract:
It has been suggested that different pathways through the brain are followed depending on the type of information being processed. Although it is now known that there is a continuous exchange of information between both hemispheres, language is considered to be processed by the left hemisphere, where Broca's and Wernicke's areas are located, whereas music is thought to be processed mainly by the right hemisphere. According to Sininger and Cone-Wesson (2004), there is a similar but contralateral specialization of the human ears, because the auditory pathways cross over at the brainstem. A previous study showed an effect of musical imagery on spontaneous otoacoustic emissions (SOAEs) (Perez-Acosta and Ramos-Amezquita, 2006), providing evidence of an efferent influence from the auditory cortex on the basilar membrane. Based on these results, the present work is a comparative study between the left and right ears of a population of eight musicians who presented SOAEs. A familiar musical tune was chosen, and the subjects were trained in the task of evoking it after having heard it. Samples of ear-canal signals were obtained and processed to extract frequency and amplitude data on the SOAEs. This procedure was carried out before, during and after the musical image creation task. Results were then analyzed to compare the SOAE responses of the left and right ears. A clearly asymmetrical SOAE response to musical imagery tasks between left and right ears was obtained: significant changes of SOAE amplitude related to musical imagery tasks were observed only in the right ear of the subjects. These results may suggest predominant left-hemisphere activity related to a melodic image creation task.
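The peak-extraction step described above (frequency and amplitude of SOAEs from ear-canal recordings) can be sketched as a windowed FFT followed by local-maximum picking. This is a minimal illustration, not the study's actual signal chain; the function name and detection threshold are assumptions.

```python
import numpy as np

def soae_peaks(signal, fs, min_db=0.0):
    """Estimate candidate SOAE peaks as (frequency_hz, amplitude_db) pairs
    from an ear-canal recording, via an FFT magnitude spectrum.
    `min_db` is an illustrative detection threshold, not a clinical one."""
    n = len(signal)
    window = np.hanning(n)                      # reduce spectral leakage
    spectrum = np.fft.rfft(signal * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    peaks = []
    for i in range(1, len(mag_db) - 1):
        # keep local maxima that rise above the threshold
        if mag_db[i] > min_db and mag_db[i] > mag_db[i - 1] and mag_db[i] > mag_db[i + 1]:
            peaks.append((freqs[i], mag_db[i]))
    return peaks
```

Comparing the peak lists obtained before, during and after the imagery task would then reveal amplitude shifts per ear.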
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014
Abstract:
This study looked at how people store and retrieve tonal music explicitly and implicitly using a production task. Participants completed an implicit task (tune stem completion) followed by an explicit task (cued recall). The tasks were identical except for the instructions at test time. They listened to tunes and were then presented with tune stems from previously heard tunes and novel tunes. For the implicit task, they were asked to sing a note they thought would come next musically. For the explicit task, they were asked to sing the note they remembered as coming next. Experiment 1 found that people correctly completed significantly more old stems than new stems. Experiment 2 investigated the characteristics of music that fuel retrieval by varying a surface feature of the tune (same timbre or different timbre) from study to test and the encoding task (semantic or nonsemantic). Although we did not find that implicit and explicit memory for music were significantly dissociated for levels of processing, we did find that surface features of music affect semantic judgments and subsequent explicit retrieval.
When that tune runs through your head: A PET investigation of auditory imagery for familiar melodies
Abstract:
The present study used positron emission tomography (PET) to examine the cerebral activity pattern associated with auditory imagery for familiar tunes. Subjects either imagined the continuation of nonverbal tunes cued by their first few notes, listened to a short sequence of notes as a control task, or listened and then reimagined that short sequence. Subtraction of the activation in the control task from that in the real-tune imagery task revealed primarily right-sided activation in frontal and superior temporal regions, plus supplementary motor area (SMA). Isolating retrieval of the real tunes by subtracting activation in the reimagine task from that in the real-tune imagery task revealed activation primarily in right frontal areas and right superior temporal gyrus. Subtraction of activation in the control condition from that in the reimagine condition, intended to capture imagery of unfamiliar sequences, revealed activation in SMA, plus some left frontal regions. We conclude that areas of right auditory association cortex, together with right and left frontal cortices, are implicated in imagery for familiar tunes, in accord with previous behavioral, lesion and PET data. Retrieval from musical semantic memory is mediated by structures in the right frontal lobe, in contrast to results from previous studies implicating left frontal areas for all semantic retrieval. The SMA seems to be involved specifically in image generation, implicating a motor code in this process.
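The subtraction logic underlying such PET contrasts can be illustrated with a voxel-wise sketch: per-condition activation maps are averaged, differenced, and thresholded, and the surviving voxels form the reported activation. The arrays, threshold, and normalization below are hypothetical, not the study's actual pipeline.

```python
import numpy as np

def subtraction_map(task_scans, control_scans, z_thresh=2.0):
    """Voxel-wise subtraction of the mean control activation from the mean
    task activation, thresholded in z units (illustrative sketch only).
    Both inputs are arrays of shape (n_scans, x, y, z)."""
    task_mean = np.mean(task_scans, axis=0)
    control_mean = np.mean(control_scans, axis=0)
    diff = task_mean - control_mean
    # standardize the difference image so the threshold is in z units
    z = (diff - diff.mean()) / (diff.std() + 1e-12)
    return np.argwhere(z > z_thresh)  # coordinates of suprathreshold voxels
```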
Abstract:
The authors examined the effects of age, musical experience, and characteristics of musical stimuli on a melodic short-term memory task in which participants had to recognize whether a tune was an exact transposition of another tune recently presented. Participants were musicians and nonmusicians between ages 18 and 30 or 60 and 80. In 4 experiments, the authors found that age and experience affected different aspects of the task, with experience becoming more influential when interference was provided during the task. Age and experience interacted only weakly, and neither age nor experience influenced the superiority of tonal over atonal materials. Recognition memory for the sequences did not reflect the same pattern of results as the transposition task. The implications of these results for theories of aging, experience, and music cognition are discussed.
Abstract:
Most people intuitively understand what it means to “hear a tune in your head.” Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.
Abstract:
Two voice parts with keyboard accompaniment.
The role of musical aptitude in the pronunciation of English vowels among Polish learners of English
Abstract:
It has long been held that people who have musical training or talent acquire L2 pronunciation more successfully than those who do not, and there have been empirical studies supporting this hypothesis (Pastuszek-Lipińska 2003, Fonseca-Mora et al. 2011, Zatorre and Baum 2012). However, in many such studies, subjects' musical abilities were mostly verified through questionnaires rather than tested in a reliable, empirical manner. We therefore ran three different musical hearing tests, i.e. a pitch perception test, a musical memory test, and a rhythm perception test (Mandell 2009), to measure the actual musical aptitude of our subjects. The main research question is whether a better musical ear correlates with a higher rate of acquisition of English vowels in Polish EFL learners. Our group consists of 40 Polish university students majoring in English who learn the British pronunciation model during an intensive pronunciation course. The 10 male and 30 female subjects, with a mean age of 20.1, were recorded in a recording studio. The procedure comprised spontaneous conversations, reading passages and reading words in isolation. Vowel measurements were conducted in Praat in all three speech styles and several consonantal contexts. The assumption was that participants who performed better in the musical tests would produce vowels closer to the Southern British English model. We plotted the vowels onto vowel charts and calculated the Euclidean distances. Preliminary results show a potential correlation between specific aspects of musical hearing and different elements of pronunciation. The study is a longitudinal project that will run for two more years, during which we will repeat the recording procedure twice to measure the participants' progress in mastering English pronunciation and compare it with their musical aptitude.
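The distance computation mentioned above (a produced vowel versus a reference model vowel on a vowel chart) reduces to a Euclidean distance in formant space. A minimal sketch, with hypothetical (F1, F2) values in Hz; the function name and reference values are not from the study:

```python
import math

def vowel_distance(produced, model):
    """Euclidean distance between two vowels in (F1, F2) formant space, in Hz.
    A smaller distance means the produced vowel lies closer to the reference
    (here, an assumed Southern British English model value)."""
    f1_p, f2_p = produced
    f1_m, f2_m = model
    return math.hypot(f1_p - f1_m, f2_p - f2_m)

# Hypothetical example: a learner's KIT vowel vs. an assumed model value.
d = vowel_distance((400.0, 2000.0), (396.0, 2058.0))
```

In practice, formant values would first be normalized across speakers before distances are compared, a step omitted here for brevity.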
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Several models of time estimation have been developed in psychology; a few have been applied to music. In the present study, we assess the influence of the distances travelled through pitch space on retrospective time estimation. Participants listened to an isochronous chord sequence of 20-s duration. They were unexpectedly asked to reproduce the time interval of the sequence. The harmonic structure of the stimulus was manipulated so that the sequence either remained in the same key (CC) or travelled through a closely related key (CFC) or distant key (CGbC). Estimated times were shortened when the sequence modulated to a very distant key. This finding is discussed in light of Lerdahl's Tonal Pitch Space Theory (2001), Firmino and Bueno's Expected Development Fraction Model (in press), and models of time estimation.
Abstract:
Online music databases have grown significantly as a consequence of the rapid growth of the Internet and digital audio, requiring the development of faster and more efficient tools for music content analysis. Musical genres are widely used to organize music collections. In this paper, the problem of automatic single- and multi-label music genre classification is addressed by exploring rhythm-based features obtained from a complex network representation. A Markov model is built in order to analyse the temporal sequence of rhythmic notation events. Feature analysis is performed using two multivariate statistical approaches: principal components analysis (unsupervised) and linear discriminant analysis (supervised). Similarly, two classifiers are applied in order to identify the category of rhythms: a parametric Bayesian classifier under the Gaussian hypothesis (supervised) and agglomerative hierarchical clustering (unsupervised). Results obtained with the kappa coefficient, together with the resulting clusters, corroborated the effectiveness of the proposed method.
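The Markov-model step described above can be sketched as a normalized transition-count matrix over a sequence of discrete rhythmic-notation events; the flattened matrix then serves as a feature vector for the statistical analyses. The event encoding and function name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def transition_matrix(events, n_states):
    """First-order Markov transition matrix over a sequence of discrete
    rhythmic events encoded as integers 0..n_states-1. Rows are normalized
    to probabilities; unseen states keep all-zero rows."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(events, events[1:]):     # count consecutive event pairs
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0            # avoid division by zero
    return counts / row_sums

# Hypothetical encoding: 0 = quarter note, 1 = eighth note.
m = transition_matrix([0, 1, 0, 1, 1], n_states=2)
```

Flattening `m` (e.g. `m.ravel()`) would yield one feature vector per piece, suitable as input to PCA, LDA, or a Gaussian Bayesian classifier.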
Abstract:
This work proposes an association between musical analysis techniques developed during the twentieth and twenty-first centuries, presented by authors such as Felix Salzer and Joseph Straus, and the music theory concepts presented by Olivier Messiaen, for the analysis of Prelude no. 1, La Colombe. The analysis helps to broaden the theoretical concepts presented by the composer. In the conclusion, we trace the outlines of an authorial sonority in Olivier Messiaen.