9 results for cognitive neuroscience
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
We examined age differences in the effectiveness of multiple repetitions and providing associative facts on tune memory. For both tune and fact recognition, three presentations were beneficial. Age did not affect fact recognition, but older adults were less successful than younger adults in tune recognition. The associative fact did not affect young adults' performance. Among older people, the neutral association harmed performance; the emotional fact mitigated performance back to baseline. Young adults seemed to rely solely on procedural memory, or repetition, to learn tunes. Older adults benefitted by using emotional associative information to counteract memory burdens imposed by neutral associative information.
Abstract:
We describe some characteristics of persistent musical and verbal retrieval episodes, commonly known as "earworms." In Study 1, participants first filled out a survey summarizing their earworm experiences retrospectively. This was followed by a diary study to document each experience as it happened. Study 2 was an extension of the diary study with a larger sample and a focus on triggering events. Consistent with popular belief, these persistent musical memories were common across people, occurred frequently for most respondents, and were often linked to recent exposure to preferred music. Contrary to popular belief, the large majority of such experiences were not unpleasant. Verbal earworms were uncommon. These memory experiences provide an interesting example of extended memory retrieval for music in a naturalistic situation.
Abstract:
Eighty-one listeners defined by three age ranges (18–30, 31–59, and over 60 years) and three levels of musical experience performed an immediate recognition task requiring the detection of alterations in melodies. On each trial, a brief melody was presented, followed 5 sec later by a test stimulus that either was identical to the target or had two pitches changed, for a same–different judgment. Each melody pair was presented at 0.6 note/sec, 3.0 notes/sec, or 6.0 notes/sec. Performance was better with familiar melodies than with unfamiliar melodies. Overall performance declined slightly with age and improved substantially with increasing experience, in agreement with earlier results in an identification task. Tempo affected performance on familiar tunes (moderate was best), but not on unfamiliar tunes. We discuss these results in terms of theories of dynamic attending, cognitive slowing, and working memory in aging.
Abstract:
We investigated the effect of level-of-processing manipulations on “remember” and “know” responses in episodic melody recognition (Experiments 1 and 2) and how this effect is modulated by item familiarity (Experiment 2). In Experiment 1, participants performed 2 conceptual and 2 perceptual orienting tasks while listening to familiar melodies: judging the mood, continuing the tune, tracing the pitch contour, and counting long notes. The conceptual mood task led to higher d' rates for “remember” but not “know” responses. In Experiment 2, participants either judged the mood or counted long notes of tunes with high and low familiarity. A level-of-processing effect emerged again in participants’ “remember” d' rates regardless of melody familiarity. Results are discussed within the distinctive processing framework.
Abstract:
Two fMRI experiments explored the neural substrates of a musical imagery task that required manipulation of the imagined sounds: temporal reversal of a melody. Musicians were presented with the first few notes of a familiar tune (Experiment 1) or its title (Experiment 2), followed by a string of notes that was either an exact or an inexact reversal. The task was to judge whether the second string was correct or not by mentally reversing all its notes, thus requiring both maintenance and manipulation of the represented string. Both experiments showed considerable activation of the superior parietal lobe (intraparietal sulcus) during the reversal process. Ventrolateral and dorsolateral frontal cortices were also activated, consistent with the memory load required during the task. We also found weaker evidence for some activation of right auditory cortex in both studies, congruent with results from previous simpler music imagery tasks. We interpret these results in the context of other mental transformation tasks, such as mental rotation in the visual domain, which are known to recruit the intraparietal sulcus region, and we propose that this region subserves general computations that require transformations of a sensory input. Mental imagery tasks may thus have both task- or modality-specific components and components that supersede any specific codes and instead represent amodal mental manipulation.
Abstract:
Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive "anticipatory imagery" at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in "training" frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift from caudal PFC earlier in learning to rostral PFC later. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.
Abstract:
We investigated the effects of different encoding tasks and of manipulations of two supposedly surface parameters of music on implicit and explicit memory for tunes. In two experiments, participants were first asked to either categorize the instrument or judge the familiarity of 40 unfamiliar short tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, which included the 40 previously heard ones. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) in comparison with the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest, first, that the encoding task made no difference for either memory mode. Second, timbre and tempo change both impaired explicit memory, whereas tempo change additionally impaired implicit tune recognition. Results are discussed in the context of implicit memory for nonsemantic materials and the possible differences between timbre and tempo in musical representations.
Abstract:
We studied the emotional responses by musicians to familiar classical music excerpts both when the music was sounded, and when it was imagined. We used continuous response methodology to record response profiles for the dimensions of valence and arousal simultaneously, and then on the single dimension of emotionality. The response profiles were compared using cross-correlation analysis, and an analysis of responses to musical feature turning points, which isolate instances of change in musical features thought to influence valence and arousal responses. We found strong similarity between the use of an emotionality arousal scale across the stimuli, regardless of condition (imagined or sounded). A majority of participants were able to create emotional response profiles while imagining the music that were similar in timing to the response profiles created while listening to the sounded music. We conclude that similar mechanisms may be involved in the processing of emotion in music when the music is sounded and when it is imagined.