14 results for Jazz musicians.
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
There is a range of tempos within which listeners can identify familiar tunes (around 0.8 to 6.0 notes/s); tunes presented faster or slower than this range are difficult to identify. The authors assessed fast and slow melody-identification thresholds for 80 listeners ages 17–79 years with expertise varying from musically untrained to professional. On fast-to-slow (FS) trials the tune started at a very fast tempo and slowed until the listener identified it; slow-to-fast (SF) trials started slow and accelerated. Tunes either retained their natural rhythms or were stylized isochronous versions. Increased expertise led to better performance on both FS and SF thresholds (r = .45). Performance declined uniformly across the 62-year age range in the FS condition (r = .27), whereas SF performance was unaffected by age. Although early encoding processes may slow with age, expertise has a greater effect. Musical expertise involves perceptual learning with melodies at a wide range of tempos.
Abstract:
Two fMRI experiments explored the neural substrates of a musical imagery task that required manipulation of the imagined sounds: temporal reversal of a melody. Musicians were presented with the first few notes of a familiar tune (Experiment 1) or its title (Experiment 2), followed by a string of notes that was either an exact or an inexact reversal. The task was to judge whether the second string was correct or not by mentally reversing all its notes, thus requiring both maintenance and manipulation of the represented string. Both experiments showed considerable activation of the superior parietal lobe (intraparietal sulcus) during the reversal process. Ventrolateral and dorsolateral frontal cortices were also activated, consistent with the memory load required during the task. We also found weaker evidence for some activation of right auditory cortex in both studies, congruent with results from previous simpler music imagery tasks. We interpret these results in the context of other mental transformation tasks, such as mental rotation in the visual domain, which are known to recruit the intraparietal sulcus region, and we propose that this region subserves general computations that require transformations of a sensory input. Mental imagery tasks may thus have both task- or modality-specific components and components that supersede any specific codes and instead represent amodal mental manipulation.
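The judgment the task requires (is the probe string an exact note-for-note reversal of the opening?) can be sketched in a few lines of Python; the note names and the particular lure are invented for illustration:

```python
# Illustrative sketch of the reversal judgment. The "correct" probe is
# the opening string played backwards; the "lure" alters one note.
opening = ["C4", "E4", "G4", "E4", "D4"]

def is_exact_reversal(opening, probe):
    """True if probe is the opening string in reverse note order."""
    return probe == opening[::-1]

correct_probe = ["D4", "E4", "G4", "E4", "C4"]
lure_probe = ["D4", "E4", "A4", "E4", "C4"]

print(is_exact_reversal(opening, correct_probe))  # True
print(is_exact_reversal(opening, lure_probe))     # False
```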
Abstract:
We studied the emotional responses by musicians to familiar classical music excerpts both when the music was sounded and when it was imagined. We used continuous response methodology to record response profiles on the dimensions of valence and arousal simultaneously, and then on the single dimension of emotionality. The response profiles were compared using cross-correlation analysis and an analysis of responses to musical feature turning points, which isolate instances of change in musical features thought to influence valence and arousal responses. We found strong similarity in the use of the emotionality and arousal scales across the stimuli, regardless of condition (imagined or sounded). A majority of participants were able to create emotional response profiles while imagining the music that were similar in timing to the profiles created while listening to the sounded music. We conclude that similar mechanisms may be involved in the processing of emotion in music when the music is sounded and when it is imagined.
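The cross-correlation step can be illustrated numerically. The sketch below is not the authors' analysis pipeline; `best_lag`, the synthetic arousal traces, and the 5-sample delay are all assumptions of the example. It recovers the lag at which a simulated "imagined" profile best aligns with a "sounded" one:

```python
import numpy as np

def best_lag(a, b, max_lag):
    """Lag (in samples) at which profile b best matches profile a;
    a positive result means b trails a."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = list(range(-max_lag, max_lag + 1))
    scores = []
    for lag in lags:
        if lag > 0:
            r = np.mean(a[:-lag] * b[lag:])   # a[i] vs b[i + lag]
        elif lag < 0:
            r = np.mean(a[-lag:] * b[:lag])   # a[i - lag] vs b[i]
        else:
            r = np.mean(a * b)
        scores.append(r)
    return lags[int(np.argmax(scores))]

# Hypothetical profiles: an "imagined" arousal trace that follows the
# "sounded" trace with a 5-sample delay.
t = np.linspace(0, 10, 200)
sounded = np.sin(t) + 0.5 * np.sin(3 * t)
imagined = np.roll(sounded, 5)

print(best_lag(sounded, imagined, max_lag=20))  # 5
```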
Abstract:
Composers commonly use major or minor scales to create different moods in music. Nonmusicians show poor discrimination and classification of this musical dimension; however, they can perform these tasks if the decision is phrased as happy vs. sad. We created pairs of melodies identical except for mode; the first major or minor third or sixth was the critical note that distinguished major from minor mode. Musicians and nonmusicians judged each melody as major vs. minor or happy vs. sad. We collected ERP waveforms, triggered to the onset of the critical note. Musicians showed a late positive component (P3) to the critical note only for the minor melodies, and in both tasks. Nonmusicians could adequately classify the melodies as happy or sad but showed little evidence of processing the critical information. Major appears to be the default mode in music, and musicians and nonmusicians apparently process mode differently.
Abstract:
The two modes most widely used in Western music today convey opposite moods—a distinction that nonmusicians and even young children are able to make. However, the current studies provide evidence that, despite a strong link between mode and affect, mode perception is problematic. Nonmusicians found mode discrimination to be harder than discrimination of other melodic features, and they were not able to accurately classify major and minor melodies with these labels. Although nonmusicians were able to classify major and minor melodies using affective labels, they performed at chance in mode discrimination. Training, in the form of short lessons given to nonmusicians and the natural musical experience of musicians, improved performance, but not to ceiling levels. Tunes with high note density were classified as major, and tunes with low note density as minor, even though these features were actually unrelated in the experimental material. Although these findings provide support for the importance of mode in the perception of emotion, they clearly indicate that these mode perceptions are inaccurate, even in trained individuals, without the assistance of affective labeling.
Abstract:
An increased leftward asymmetry of the planum temporale (PT) in absolute-pitch (AP) musicians has been previously reported, with speculation that early exposure to music influences the degree of PT asymmetry. To test this hypothesis and to determine whether a larger left PT or a smaller right PT actually accounts for the increased overall PT asymmetry in AP musicians, anatomical magnetic resonance images were taken from a right-handed group of 27 AP musicians, 27 nonmusicians, and 22 non-AP musicians. A significantly greater leftward PT asymmetry and a significantly smaller right absolute PT size for the AP musicians compared to the two control groups were found, while the left PT was only marginally larger in the AP group. The absolute size of the right PT and not the left PT was a better predictor of music group membership, possibly indicating "pruning" of the right PT rather than expansion of the left underlying the increased PT asymmetry in AP musicians. Although early exposure to music may be a prerequisite for acquiring AP, the increased PT asymmetry in AP musicians may be determined in utero, implicating possible genetic influences on PT asymmetry. This may explain why the increased PT asymmetry of AP musicians was not seen in the group of early-beginning non-AP musicians.
Abstract:
We explored the ability of older (60-80 years old) and younger (18-23 years old) musicians and nonmusicians to judge the similarity of transposed melodies varying on rhythm, mode, and/or contour (Experiment 1) and to discriminate among melodies differing only in rhythm, mode, or contour (Experiment 2). Similarity ratings did not vary greatly among groups, with tunes differing only by mode being rated as most similar. In the same/different discrimination task, musicians performed better than nonmusicians, but we found no age differences. We also found that discrimination of major from minor tunes was difficult for everyone, even for musicians. Mode is apparently a subtle dimension in music, despite its deliberate use in composition and despite people's ability to label minor as "sad" and major as "happy."
Abstract:
Two experiments explored the representation of the tonal hierarchy in Western music among older (aged 60 to 80) and younger (aged 15 to 22) musicians and nonmusicians. A probe tone technique was used: 4 notes from the major triad were presented, followed by 1 note chosen from the 12 notes of the chromatic scale. Whereas musicians had a better sense of the tonal hierarchy than nonmusicians, older adults were no worse than younger adults in differentiating the notes according to musical principles. However, older adults were more prone than younger adults to classify the notes by frequency proximity (pitch height) when proximity was made more salient, as were nonmusicians compared with musicians. With notes having ambiguous pitch height, pitch height effects disappeared among older adults but not nonmusicians. Older adults seem to have internalized tonal structure, but they sometimes fail to inhibit less musically relevant information.
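The probe-tone logic can be made concrete with a small simulation (illustrative only; the simulated listener and the crude pitch-height template are assumptions of this sketch, and the hierarchy template uses the often-cited Krumhansl–Kessler major-key values):

```python
import numpy as np

# Template 1: Krumhansl–Kessler major-key probe-tone profile
# (chromatic order, tonic first).
hierarchy = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                      2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

# Template 2: a crude "pitch height" template in which ratings simply
# fall off with distance in semitones above the tonic.
pitch_height = -np.arange(12, dtype=float)

# Hypothetical listener: ratings driven by the tonal hierarchy plus noise.
rng = np.random.default_rng(0)
ratings = hierarchy + rng.normal(0.0, 0.5, size=12)

r_hier = np.corrcoef(ratings, hierarchy)[0, 1]
r_height = np.corrcoef(ratings, pitch_height)[0, 1]
print(f"hierarchy fit r = {r_hier:.2f}, pitch-height fit r = {r_height:.2f}")
```

Comparing which template better fits a listener's ratings is the kind of contrast the study draws between responding by musical principles and responding by pitch proximity.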
Abstract:
The authors examined the effects of age, musical experience, and characteristics of musical stimuli on a melodic short-term memory task in which participants had to recognize whether a tune was an exact transposition of another tune recently presented. Participants were musicians and nonmusicians between ages 18 and 30 or 60 and 80. In 4 experiments, the authors found that age and experience affected different aspects of the task, with experience becoming more influential when interference was provided during the task. Age and experience interacted only weakly, and neither age nor experience influenced the superiority of tonal over atonal materials. Recognition memory for the sequences did not reflect the same pattern of results as the transposition task. The implications of these results for theories of aging, experience, and music cognition are discussed.
Abstract:
Four experiments investigated perception of major and minor thirds whose component tones were sounded simultaneously. Effects akin to categorical perception of speech sounds were found. In the first experiment, musicians demonstrated relatively sharp category boundaries in identification and peaks near the boundary in discrimination tasks of an interval continuum where the bottom note was always an F and the top note varied from A to A flat in seven equal logarithmic steps. Nonmusicians showed these effects only to a small extent. The musicians showed higher than predicted discrimination performance overall, and reaction time increases at category boundaries. In the second experiment, musicians failed to consistently identify or discriminate thirds which varied in absolute pitch, but retained the proper interval ratio. In the last two experiments, using selective adaptation, consistent shifts were found in both identification and discrimination, similar to those found in speech experiments. Manipulations of adapting and test stimuli showed that the mechanism underlying the effect appears to be centrally mediated and confined to a frequency-specific level. A multistage model of interval perception, where the first stages deal only with specific pitches, may account for the results.
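The continuum itself is easy to reconstruct arithmetically. A sketch under stated assumptions: equal temperament with A4 = 440 Hz, bottom note F4, and "seven equal logarithmic steps" read as seven stimuli:

```python
import math

# Seven-stimulus continuum: bottom note fixed at F4, top note spanning
# A4 (major third above F4) down to A-flat4 (minor third above F4) in
# equal log-frequency steps.
A4 = 440.0
F4 = A4 * 2 ** (-4 / 12)    # four semitones below A4
Ab4 = A4 * 2 ** (-1 / 12)   # one semitone below A4

n = 7
tops = [A4 * (Ab4 / A4) ** (i / (n - 1)) for i in range(n)]
cents = [1200 * math.log2(f / F4) for f in tops]  # interval above F4

for f, c in zip(tops, cents):
    print(f"top note {f:6.2f} Hz -> interval {c:5.1f} cents")
```

The interval shrinks from 400 cents (major third) to 300 cents (minor third) in equal steps of 100/6, about 16.7 cents.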
Abstract:
Four experiments were conducted to examine the ability of people without "perfect pitch" to retain the absolute pitch of familiar tunes. In Experiment 1, participants imagined given tunes, and then hummed their first notes four times either between or within sessions. The variability of these productions was very low. Experiment 2 used a recognition paradigm, with results similar to those in Experiment 1 for musicians, but with some additional variability shown for unselected subjects. In Experiment 3, subjects rated the suitability of various pitches to start familiar tunes. Previously given preferred notes were rated high, as were notes three or four semitones distant from the preferred notes, but not notes one or two semitones distant. In Experiment 4, subjects mentally transformed the pitches of familiar tunes to the highest and lowest levels possible. These experiments suggest some retention of the absolute pitch of tunes despite a paucity of verbal or visual cues for the pitch.
Abstract:
Two experiments plus a pilot investigated the role of melodic structure on short-term memory for musical notation by musicians and nonmusicians. In the pilot experiment, visually similar melodies that had been rated as either "good" or "bad" were presented briefly, followed by a 15-sec retention interval and then recall. Musicians remembered good melodies better than they remembered bad ones; nonmusicians did not distinguish between them. In the second experiment, good, bad, and random melodies were briefly presented, followed by immediate recall. The advantage of musicians over nonmusicians decreased as the melody type progressed from good to bad to random. In the third experiment, musicians and nonmusicians divided the stimulus melodies into groups. For each melody, the consistency of grouping was correlated with memory performance in the first two experiments. Evidence was found for use of musical groupings by musicians and for use of a simple visual strategy by nonmusicians. The nature of these musical groupings and how they may be learned are considered. The relation of this work to other studies of comprehension of symbolic diagrams is also discussed.
Abstract:
We previously observed that mental manipulation of the pitch level or temporal organization of melodies results in functional activation in the human intraparietal sulcus (IPS), a region also associated with visuospatial transformation and numerical calculation. Two outstanding questions about these musical transformations are whether pitch and time depend on separate or common processing in IPS, and whether IPS recruitment in melodic tasks varies depending upon the degree of transformation required (as it does in mental rotation). In the present study we sought to answer these questions by applying functional magnetic resonance imaging while musicians performed closely matched mental transposition (pitch transformation) and melody reversal (temporal transformation) tasks. A voxel-wise conjunction analysis showed that in individual subjects, both tasks activated overlapping regions in bilateral IPS, suggesting that a common neural substrate subserves both types of mental transformation. Varying the magnitude of mental pitch transposition resulted in variation of IPS BOLD signal in correlation with the musical key-distance of the transposition, but not with the pitch distance, indicating that the cognitive metric relevant for this type of operation is an abstract one, well described by music-theoretic concepts. These findings support a general role for the IPS in systematically transforming auditory stimulus representations in a nonspatial context. (C) 2013 Elsevier Inc. All rights reserved.
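The dissociation between key distance and pitch distance can be made concrete in a few lines; the function below is our illustration using the standard circle-of-fifths metric, not the study's code:

```python
# Key distance vs. pitch distance for a transposition of a major-key
# melody, using the circle of fifths.

def key_distance(semitones: int) -> int:
    """Minimal steps around the circle of fifths between a major key
    and its transposition by `semitones`."""
    pos = (7 * semitones) % 12   # a perfect fifth spans 7 semitones
    return min(pos, 12 - pos)

for st in (1, 2, 5, 7):
    print(f"pitch distance {st:2d} semitone(s) -> key distance {key_distance(st)}")
```

Note the dissociation: transposing up a fifth (7 semitones) is a large pitch distance but the smallest nonzero key distance (1), while a one-semitone transposition is the reverse (key distance 5); the study reports that IPS activity tracked the key-distance metric, not the pitch distance.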
Abstract:
Hemisity refers to binary thinking and behavioral style differences between right and left brain-oriented individuals. The inevitability of hemisity became clear when it was discovered by magnetic resonance imaging (MRI) that an anatomical element of the executive system was unilaterally embedded in either the right or the left side of the ventral gyrus of the anterior cingulate cortex in an idiosyncratic manner that was congruent with an individual's inherent hemisity subtype. Based upon the MRI-calibrated hemisity of many individuals, a set of earlier biophysical and questionnaire hemisity assays was calibrated for accuracy and found appropriate for use in the investigation of the hemisity of individuals and groups. It had been reported that a partial sorting of individuals into hemisity right and left brain-oriented subgroups occurred during the process of higher education and professional development. Here, these results were extended by comparison of the hemisity of a putative unsorted population of 1,049 high school upperclassmen with that of 228 university freshmen. These hemisity outcomes were further compared with those of 15 university librarians, here found to be predominantly left brain-oriented, and 91 academically trained musicians, including 47 professional pianists, here found to be mostly right brain-oriented. The results further supported the existence of substantial hemisity selection occurring during the process of higher education and in professional development.