Abstract:
We explored the functional organization of semantic memory for music by comparing priming across familiar songs both within modalities (Experiment 1, tune to tune; Experiment 3, category label to lyrics) and across modalities (Experiment 2, category label to tune; Experiment 4, tune to lyrics). Participants judged whether or not the target tune or lyrics were real (akin to lexical decision tasks). We found significant priming, analogous to linguistic associative-priming effects, in reaction times for related primes as compared to unrelated primes, but primarily for within-modality comparisons. Reaction times to tunes (e.g., "Silent Night") were faster following related tunes ("Deck the Hall") than following unrelated tunes ("God Bless America"). However, a category label (e.g., Christmas) did not prime tunes from within that category. Lyrics were primed by a related category label, but not by a related tune. These results support the conceptual organization of music in semantic memory, but with potentially weaker associations across modalities.
Abstract:
We used fMRI to investigate the neuronal correlates of encoding and recognizing heard and imagined melodies. Ten participants were shown lyrics of familiar verbal tunes; they either heard the tune along with the lyrics, or they had to imagine it. In a subsequent surprise recognition test, they had to identify the titles of tunes that they had heard or imagined earlier. The functional data showed substantial overlap during melody perception and imagery, including secondary auditory areas. During imagery compared with perception, an extended network including pFC, SMA, intraparietal sulcus, and cerebellum showed increased activity, in line with the increased processing demands of imagery. Functional connectivity of anterior right temporal cortex with frontal areas was increased during imagery compared with perception, indicating that these areas form an imagery-related network. Activity in right superior temporal gyrus and pFC was correlated with the subjective rating of imagery vividness. Similar to the encoding phase, the recognition task recruited overlapping areas, including inferior frontal cortex associated with memory retrieval, as well as left middle temporal gyrus. The results present new evidence for the cortical network underlying goal-directed auditory imagery, with a prominent role of the right pFC both for the subjective impression of imagery vividness and for on-line mental monitoring of imagery-related activity in auditory areas.
Abstract:
We examined age differences in the effectiveness of multiple repetitions and of providing associative facts on tune memory. For both tune and fact recognition, three presentations were beneficial. Age was irrelevant in fact recognition, but older adults were less successful than younger adults in tune recognition. The associative fact did not affect young adults' performance. Among older adults, the neutral association harmed performance, whereas the emotional fact restored performance to baseline. Young adults seemed to rely solely on procedural memory, or repetition, to learn tunes. Older adults benefited by using emotional associative information to counteract memory burdens imposed by neutral associative information.
Abstract:
We describe some characteristics of persistent musical and verbal retrieval episodes, commonly known as "earworms." In Study 1, participants first filled out a survey summarizing their earworm experiences retrospectively. This was followed by a diary study to document each experience as it happened. Study 2 was an extension of the diary study with a larger sample and a focus on triggering events. Consistent with popular belief, these persistent musical memories were common across people and occurred frequently for most respondents, and were often linked to recent exposure to preferred music. Contrary to popular belief, the large majority of such experiences were not unpleasant. Verbal earworms were uncommon. These memory experiences provide an interesting example of extended memory retrieval for music in a naturalistic situation.
Abstract:
Eighty-one listeners defined by three age ranges (18–30, 31–59, and over 60 years) and three levels of musical experience performed an immediate recognition task requiring the detection of alterations in melodies. On each trial, a brief melody was presented, followed 5 sec later by a test stimulus that either was identical to the target or had two pitches changed, for a same–different judgment. Each melody pair was presented at 0.6 note/sec, 3.0 notes/sec, or 6.0 notes/sec. Performance was better with familiar melodies than with unfamiliar melodies. Overall performance declined slightly with age and improved substantially with increasing experience, in agreement with earlier results in an identification task. Tempo affected performance on familiar tunes (moderate was best), but not on unfamiliar tunes. We discuss these results in terms of theories of dynamic attending, cognitive slowing, and working memory in aging.
Abstract:
We investigated the effect of level-of-processing manipulations on “remember” and “know” responses in episodic melody recognition (Experiments 1 and 2) and how this effect is modulated by item familiarity (Experiment 2). In Experiment 1, participants performed 2 conceptual and 2 perceptual orienting tasks while listening to familiar melodies: judging the mood, continuing the tune, tracing the pitch contour, and counting long notes. The conceptual mood task led to higher d' rates for “remember” but not “know” responses. In Experiment 2, participants either judged the mood or counted long notes of tunes with high and low familiarity. A level-of-processing effect emerged again in participants’ “remember” d' rates regardless of melody familiarity. Results are discussed within the distinctive processing framework.
Abstract:
Two fMRI experiments explored the neural substrates of a musical imagery task that required manipulation of the imagined sounds: temporal reversal of a melody. Musicians were presented with the first few notes of a familiar tune (Experiment 1) or its title (Experiment 2), followed by a string of notes that was either an exact or an inexact reversal. The task was to judge whether the second string was correct or not by mentally reversing all its notes, thus requiring both maintenance and manipulation of the represented string. Both experiments showed considerable activation of the superior parietal lobe (intraparietal sulcus) during the reversal process. Ventrolateral and dorsolateral frontal cortices were also activated, consistent with the memory load required during the task. We also found weaker evidence for some activation of right auditory cortex in both studies, congruent with results from previous simpler music imagery tasks. We interpret these results in the context of other mental transformation tasks, such as mental rotation in the visual domain, which are known to recruit the intraparietal sulcus region, and we propose that this region subserves general computations that require transformations of a sensory input. Mental imagery tasks may thus have both task- or modality-specific components as well as components that supersede any specific codes and instead represent amodal mental manipulation.
Abstract:
Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive “anticipatory imagery” at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in “training” frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift between caudal PFC earlier to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.
Abstract:
The study of musical timbre by Bailes (2007) raises important questions concerning the relative ease of imaging complex perceptual attributes such as timbre, compared to more unidimensional attributes. I also raise the issue of individual differences in auditory imagery ability, especially for timbre.
Abstract:
We investigated the effects of different encoding tasks and of manipulations of two supposedly surface parameters of music on implicit and explicit memory for tunes. In two experiments, participants were first asked either to categorize the instrument or to judge the familiarity of 40 unfamiliar short tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, which included the 40 previously heard. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) in comparison with the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest, first, that the encoding task made no difference for either memory mode and, second, that timbre and tempo change both impaired explicit memory, whereas tempo change additionally impaired implicit tune recognition. Results are discussed in the context of implicit memory for nonsemantic materials and the possible differences between timbre and tempo in musical representations.
Abstract:
Two studies explored the stability of art preference in patients with Alzheimer’s disease and age-matched control participants. Preferences for three different styles of paintings, displayed on art postcards, were examined over two sessions. Preference for specific paintings differed among individuals but AD and non-AD groups maintained about the same stability in terms of preference judgments across two weeks, even though the AD patients did not have explicit memory for the paintings. We conclude that aesthetic responses can be preserved in the face of cognitive decline. This should encourage caregivers and family to engage in arts appreciation activities with patients, and reinforces the validity of a preference response as a dependent measure in testing paradigms.
Abstract:
We studied the emotional responses by musicians to familiar classical music excerpts both when the music was sounded, and when it was imagined. We used continuous response methodology to record response profiles for the dimensions of valence and arousal simultaneously and then on the single dimension of emotionality. The response profiles were compared using cross-correlation analysis, and an analysis of responses to musical feature turning points, which isolate instances of change in musical features thought to influence valence and arousal responses. We found strong similarity between the use of an emotionality arousal scale across the stimuli, regardless of condition (imagined or sounded). A majority of participants were able to create emotional response profiles while imagining the music, which were similar in timing to the response profiles created while listening to the sounded music. We conclude that similar mechanisms may be involved in the processing of emotion in music when the music is sounded and when imagined.
Abstract:
This study looked at how people store and retrieve tonal music explicitly and implicitly using a production task. Participants completed an implicit task (tune stem completion) followed by an explicit task (cued recall). The tasks were identical except for the instructions at test time. They listened to tunes and were then presented with tune stems from previously heard tunes and novel tunes. For the implicit task, they were asked to sing a note they thought would come next musically. For the explicit task, they were asked to sing the note they remembered as coming next. Experiment 1 found that people correctly completed significantly more old stems than new stems. Experiment 2 investigated the characteristics of music that fuel retrieval by varying a surface feature of the tune (same timbre or different timbre) from study to test and the encoding task (semantic or nonsemantic). Although we did not find that implicit and explicit memory for music were significantly dissociated for levels of processing, we did find that surface features of music affect semantic judgments and subsequent explicit retrieval.
Abstract:
Composers commonly use major or minor scales to create different moods in music. Nonmusicians show poor discrimination and classification of this musical dimension; however, they can perform these tasks if the decision is phrased as happy vs. sad. We created pairs of melodies identical except for mode; the first major or minor third or sixth was the critical note that distinguished major from minor mode. Musicians and nonmusicians judged each melody as major vs. minor or happy vs. sad. We collected ERP waveforms, triggered to the onset of the critical note. Musicians showed a late positive component (P3) to the critical note only for the minor melodies, and in both tasks. Nonmusicians could adequately classify the melodies as happy or sad but showed little evidence of processing the critical information. Major appears to be the default mode in music, and musicians and nonmusicians apparently process mode differently.