19 results for Consonance dissonance sounds

at Duke University


Relevance: 10.00%

Abstract:

While cochlear implants (CIs) usually provide high levels of speech recognition in quiet, speech recognition in noise remains challenging. To overcome these difficulties, it is important to understand how implanted listeners separate a target signal from interferers. Stream segregation has been studied extensively in both normal and electric hearing, as a function of place of stimulation. However, the effects of pulse rate, independent of place, on the perceptual grouping of sequential sounds in electric hearing have not yet been investigated. A rhythm detection task was used to measure stream segregation. The results of this study suggest that while CI listeners can segregate streams based on differences in pulse rate alone, the amount of stream segregation observed decreases as the base pulse rate increases. Further investigation of the perceptual dimensions encoded by the pulse rate and the effect of sequential presentation of different stimulation rates on perception could be beneficial for the future development of speech processing strategies for CIs.

Relevance: 10.00%

Abstract:

The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding, that when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sound as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.

Relevance: 10.00%

Abstract:

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
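Score-based structure inference of the kind described can be illustrated with a toy example: each candidate parent's lagged activity is scored by how well it predicts a target node, and the better-scoring edges form the inferred network. The sketch below (plain Python, binarized firing traces, all names hypothetical) uses a simple maximum-likelihood score; the study's actual dynamic Bayesian network algorithm is far more elaborate.

```python
import math
from collections import Counter

def lagged_loglik(target, parents, lag=1):
    """Maximum-likelihood score: log-probability of target[t] given the
    parents' states at t - lag, using empirical conditional frequencies."""
    n = len(target)
    pairs = [(tuple(p[t - lag] for p in parents), target[t]) for t in range(lag, n)]
    joint = Counter(pairs)
    marg = Counter(ctx for ctx, _ in pairs)
    return sum(c * math.log(c / marg[ctx]) for (ctx, _), c in joint.items())

# Toy binarized firing traces: A drives B with a one-step delay; C is unrelated.
A = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
B = [0] + A[:-1]                 # B copies A with lag 1
C = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]

score_A = lagged_loglik(B, [A])  # candidate edge A -> B
score_C = lagged_loglik(B, [C])  # candidate edge C -> B
print(score_A > score_C)         # → True: the true parent scores higher
```

A real DBN search would compare many candidate parent sets per node and penalize model complexity, but the principle is the same: directed edges are kept when lagged activity of the parent genuinely improves prediction of the child.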

Relevance: 10.00%

Abstract:

The ability to imitate complex sounds is rare, and among birds has been found only in parrots, songbirds, and hummingbirds. Parrots exhibit the most advanced vocal mimicry among non-human animals. A few studies have noted differences in connectivity, brain position, and shape in the vocal learning systems of parrots relative to songbirds and hummingbirds. However, only one parrot species, the budgerigar, has been examined, and no differences in the presence of song system structures were found relative to other avian vocal learners. Motivated by questions of whether there are important differences in the vocal systems of parrots relative to other vocal learners, we used specialized constitutive gene expression, singing-driven gene expression, and neural connectivity tracing experiments to further characterize the song system of budgerigars and other parrots. We found that the parrot brain uniquely contains a song system within a song system. The parrot "core" song system is similar to the song systems of songbirds and hummingbirds, whereas the "shell" song system is unique to parrots. The core, with only rudimentary shell regions, was found in the New Zealand kea, one of the few living species at a basal divergence from all other parrots, implying that parrots evolved vocal learning systems at least 29 million years ago. Relative size differences in the core and shell regions occur among species, which we suggest could be related to species differences in vocal and cognitive abilities.

Relevance: 10.00%

Abstract:

BACKGROUND: Parrots belong to a group of behaviorally advanced vertebrates and have an advanced ability of vocal learning relative to other vocal-learning birds. They can imitate human speech, synchronize their body movements to a rhythmic beat, and understand complex concepts of referential meaning to sounds. However, little is known about the genetics of these traits. Elucidating the genetic bases would require whole genome sequencing and a robust assembly of a parrot genome. FINDINGS: We present a genomic resource for the budgerigar, an Australian parakeet (Melopsittacus undulatus) -- the most widely studied parrot species in neuroscience and behavior. We present genomic sequence data that includes over 300× raw read coverage from multiple sequencing technologies and chromosome optical maps from a single male animal. The reads and optical maps were used to create three hybrid assemblies representing some of the largest genomic scaffolds to date for a bird, two of which were annotated based on similarities to reference sets of non-redundant human, zebra finch and chicken proteins, and budgerigar transcriptome sequence assemblies. The sequence reads for this project were in part generated and used for both the Assemblathon 2 competition and the first de novo assembly of a giga-scale vertebrate genome utilizing PacBio single-molecule sequencing. CONCLUSIONS: Across several quality metrics, these budgerigar assemblies are comparable to or better than the chicken and zebra finch genome assemblies built from traditional Sanger sequencing reads, and are sufficient to analyze regions that are difficult to sequence and assemble, including those not yet assembled in prior bird genomes, and promoter regions of genes differentially regulated in vocal learning brain regions. This work provides valuable data and material for genome technology development and for investigating the genomics of complex behavioral traits.
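One of the standard assembly quality metrics alluded to here is scaffold N50. A minimal sketch of how it is computed (the lengths are hypothetical, not the budgerigar data):

```python
def n50(scaffold_lengths):
    """N50: the length L such that scaffolds of length >= L together
    cover at least half of the total assembly size."""
    lengths = sorted(scaffold_lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# Hypothetical scaffold lengths in bp (illustration only)
print(n50([100, 80, 60, 40, 20]))  # → 80 (100 + 80 covers half of 300)
```

Larger N50 values indicate that more of the assembly sits in long scaffolds, which is one reason hybrid assemblies built from long reads and optical maps can outperform Sanger-era assemblies on such metrics.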

Relevance: 10.00%

Abstract:

Contrary Voices examines composer Hanns Eisler’s settings of nineteenth-century poetry under changing political pressures from 1925 to 1962. The poets’ ideologically fraught reception histories, both under Nazism and in East Germany, led Eisler to intervene in this reception and voice dissent by radically fragmenting the texts. His musical settings both absorb and disturb the charisma of nineteenth-century sound materials, through formal parody, dissonance, and interruption. Eisler’s montage-like work foregrounds the difficult position of a modernist artist speaking both to and against political demands placed on art. Often the very charisma the composer seeks to expose for its power to sway the body politic exerts a force of its own. At the same time, his text-settings resist ideological rigidity in their polyphonic play. A dialogic approach to musical adaptation shows that, as Eisler seeks to resignify Heine’s problematic status in the Weimar Republic, Hölderlin’s appropriation under Nazism, and Goethe’s status as a nationalist symbol in the nascent German Democratic Republic, his music invests these poetic voices with surprising fragility and multivalence. It also destabilizes received gender tropes, in the masculine vulnerability of Eisler’s Heine choruses from 1925 and in the androgynous voices of his 1940s Hölderlin exile songs and later Goethe settings. Cross-reading the texts after hearing such musical treatment illuminates faultlines and complexities less obvious in text-only analysis. Ultimately Eisler’s music translates canonical material into a form as paradoxically faithful as it is violently fragmented.

Relevance: 10.00%

Abstract:

Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response, or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change in pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.

Relevance: 10.00%

Abstract:

Sound is a key sensory modality for Hawaiian spinner dolphins. Like many other marine animals, these dolphins rely on sound and their acoustic environment for many aspects of their daily lives, making it essential to understand the soundscape in areas that are critical to their survival. Hawaiian spinner dolphins rest during the day in shallow coastal areas and forage offshore at night. In my dissertation I focus on the soundscape of the bays where Hawaiian spinner dolphins rest, taking a soundscape ecology approach. I primarily relied on passive acoustic monitoring, using four DSG-Ocean acoustic loggers in four Hawaiian spinner dolphin resting bays on the Kona Coast of Hawai‛i Island. 30-second recordings were made every four minutes in each of the bays for 20 to 27 months between January 8, 2011 and March 30, 2013. I also utilized concomitant vessel-based visual surveys in the four bays to provide context for these recordings. In my first chapter I used the contributions of the dolphins to the soundscape to monitor presence in the bays and found that the degree of presence varied greatly, from less than 40% to nearly 90% of days monitored with dolphins present. Having established these bays as important to the animals, in my second chapter I explored the many components of their resting bay soundscape and evaluated the influence of natural and human events on the soundscape. I characterized the overall soundscape in each of the four bays, used the tsunami event of March 2011 to approximate a natural soundscape, and identified all loud daytime outliers. Overall, sound levels were consistently louder at night and quieter during the daytime due to the sounds from snapping shrimp. In fact, peak Hawaiian spinner dolphin resting time co-occurs with the quietest part of the day. However, I also found that humans drastically alter this daytime soundscape with sound from offshore aquaculture, vessel sound and military mid-frequency active sonar.
During one recorded mid-frequency active sonar event in August 2011, sound pressure levels in the 3.15 kHz 1/3-octave band were as high as 45.8 dB above median ambient noise levels. Human activity both inside (vessels) and outside (sonar and aquaculture) the bays significantly altered the resting bay soundscape. Inside the bays there are high levels of human activity, including vessel-based tourism directly targeting the dolphins. The interactions between humans and dolphins in their resting bays are of concern; therefore, my third chapter aimed to assess the acoustic response of the dolphins to human activity. Using days where acoustic recordings overlapped with visual surveys, I found the greatest response in a bay with dolphin-centric activities, not in the bay with the most vessel activity, indicating that it is not the magnitude of the activity that elicits a response but its focus. In my fourth chapter I summarize the key results from my first three chapters to illustrate the power of a multiple-site design to prioritize action to protect Hawaiian spinner dolphins in their resting bays, a chapter I hope will be useful for managers should they take further action to protect the dolphins.
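Band levels like the 3.15 kHz 1/3-octave figure can be sketched as follows. This is a simplified, uncalibrated FFT-based estimate applied to synthetic signals, not the calibrated processing used in the dissertation; standard fractional-octave analysis defines the band edges as the center frequency divided and multiplied by 2^(1/6).

```python
import numpy as np

np.random.seed(0)  # deterministic illustration

def third_octave_level(signal, fs, fc):
    """Level (dB, relative units) in the 1/3-octave band centered at fc,
    estimated from an FFT magnitude spectrum -- a simplified stand-in
    for a true fractional-octave filter bank."""
    f_lo, f_hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges
    spec = np.fft.rfft(signal * np.hanning(len(signal)))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10 * np.log10(np.sum(np.abs(spec[band]) ** 2))

# Synthetic signals: a 3.15 kHz tone added to weak background noise
fs = 48000
t = np.arange(fs) / fs
ambient = 1e-3 * np.random.randn(fs)
sonar_like = ambient + 0.5 * np.sin(2 * np.pi * 3150 * t)
excess = third_octave_level(sonar_like, fs, 3150) - third_octave_level(ambient, fs, 3150)
print(excess > 40)  # → True: the tone dominates its 1/3-octave band
```

In field practice the same comparison is made against the median ambient level across many recordings rather than a single noise snapshot.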

Relevance: 10.00%

Abstract:

1. nowhere landscape, for clarinets, trombones, percussion, violins, and electronics

nowhere landscape is an eighty-minute work for nine performers, composed of acoustic and electronic sounds. Its fifteen movements invoke a variety of listening strategies, using slow change, stasis, layering, coincidence, and silence to draw attention to the sonic effects of the environment—inside the concert hall as well as the world outside of it. The work incorporates a unique stage set-up: the audience sits in close proximity to the instruments, facing in one of four different directions, while the musicians play from a number of constantly-shifting locations, including in front of, next to, and behind the audience.

Much of nowhere landscape’s material is derived from a collection of field recordings made by the composer during a road trip from Springfield, MA to Douglas, WY along US-20, a cross-country route made effectively obsolete by the completion of I-90 in the mid-20th century. In an homage to artist Ed Ruscha’s 1963 book Twentysix Gasoline Stations, the composer made twenty-six recordings at gas stations along US-20. Many of the movements of nowhere landscape examine the musical potential of these captured soundscapes: familiar and anonymous, yet filled with poignancy and poetic possibility.

2. “The Map and the Territory: Documenting David Dunn’s Sky Drift”

In 1977, David Dunn recruited twenty-six musicians to play his work Sky Drift in the Anza-Borrego Desert in Southern California. This outdoor performance was documented with photos and recorded to tape with four stationary microphones. A year later, Dunn presented the work in New York City as a “performance/documentation,” playing back the audio recording and projecting slides. In this paper I examine the consequences of this kind of act: what does it mean for a recording of an outdoor work to be shared at an indoor concert event? Can such a complex and interactive experience be successfully flattened into some kind of re-playable documentation? What can a recording capture and what must it exclude?

This paper engages with these questions as they relate to David Dunn’s Sky Drift and to similar works by Karlheinz Stockhausen and John Luther Adams. These case studies demonstrate different solutions to the difficulty of documenting outdoor performances. Because this music is often heard from a variety of equally valid perspectives—and because any single microphone only captures sound from one of these perspectives—the physical set-up of these kinds of pieces complicates what it means to even “hear the music” at all. To this end, I discuss issues around the “work itself” and “aura” as well as “transparency” and “liveness” in recorded sound, bringing in thoughts and ideas from Walter Benjamin, Howard Becker, Joshua Glasgow, and others. In addition, the artist Robert Irwin and the composer Barry Truax have written about the conceptual distinctions between “the work” and “not-the-work”; these distinctions are complicated by documentation and recording. Without the context, the being-there, the music is stripped of much of its ability to communicate meaning.

Relevance: 10.00%

Abstract:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include integrating stimuli of different modalities, such as visual and auditory; integrating multiple stimuli of the same modality, such as two concurrent sounds; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine visually guided auditory learning, a problem with implications for the general problem in learning of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
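The ‘guess and check’ hypothesis can be caricatured as an error-driven update: after each auditory-guided saccade, visual feedback pulls the internal auditory estimate toward the visual location by a small fraction of the remaining error. A toy sketch, where the learning rate and trial count are illustrative assumptions rather than fitted values:

```python
def guess_and_check(sound_loc, visual_loc, n_trials=20, rate=0.015):
    """Toy error-driven rule: after each auditory-guided saccade, visual
    feedback shifts the auditory estimate by a fixed fraction of the
    remaining visual-auditory error (illustrative parameters only)."""
    estimate = sound_loc
    for _ in range(n_trials):
        error = visual_loc - estimate   # post-saccade visual feedback
        estimate += rate * error
    return estimate - sound_loc         # net shift of auditory localization

# A 6-degree visual-auditory mismatch, as in the experiment described
shift = guess_and_check(0.0, 6.0)
print(round(shift / 6.0, 2))  # → 0.26, about a quarter of the mismatch
```

With these assumed parameters the rule adopts roughly a quarter of the mismatch, in the same ballpark as the 22-28% shift reported, though nothing about the sketch should be read as a fit to the data.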

My next line of research examines how electrical stimulation of the inferior colliculus influences perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs into the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for understanding stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
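Entrainment of spike timing to an amplitude-modulation frequency, the basis of the frequency-tagging approach, is commonly quantified with vector strength. A minimal sketch using synthetic spike trains (not the study's data):

```python
import math

def vector_strength(spike_times, mod_freq):
    """Vector strength: 1.0 means spikes are perfectly phase-locked to
    the modulation cycle of mod_freq (Hz); 0.0 means no phase locking."""
    phases = [2 * math.pi * mod_freq * t for t in spike_times]
    n = len(phases)
    x = sum(math.cos(p) for p in phases) / n
    y = sum(math.sin(p) for p in phases) / n
    return math.hypot(x, y)

# Spikes locked to a 40 Hz AM cycle (one spike per 25 ms period)...
locked = [i * 0.025 for i in range(40)]
# ...versus spikes at an unrelated regular interval
unlocked = [i * 0.0137 for i in range(40)]
print(vector_strength(locked, 40.0) > 0.99)    # → True
print(vector_strength(unlocked, 40.0) < 0.3)   # → True
```

Tagging each concurrent sound with a different AM frequency and asking which frequency a neuron's spikes entrain to is one way to attribute activity to a particular source in a multi-sound scene.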

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevance: 10.00%

Abstract:

Involuntary episodic memories are memories that come into consciousness without preceding retrieval effort. These memories are commonplace and are relevant to multiple mental disorders. However, they are vastly understudied. We use a novel paradigm to elicit involuntary memories in the laboratory so that we can study their neural basis. In session one, an encoding session, sounds are presented either paired with pictures or alone. In session two, in the scanner, sound-picture pairs and unpaired sounds are re-encoded. Immediately following, participants are split into two groups: a voluntary and an involuntary group. Both groups perform a sound localization task in which they hear the sounds and indicate the side from which they are coming. The voluntary group additionally tries to remember the pictures that were paired with the sounds. Looking at neural activity, we find a main effect of condition (paired vs. unpaired sounds) showing similar activity in both groups for voluntary and involuntary memories in regions typically associated with retrieval. There is also a main effect of group (voluntary vs. involuntary) in the dorsolateral prefrontal cortex, a region typically associated with cognitive control. Turning to connectivity, there is again a main effect of condition: paired > unpaired sounds are associated with a recollection network. In addition, three group differences were found: (1) increased connectivity between the pulvinar nucleus of the thalamus and the recollection network for the voluntary group, (2) a higher association between the voluntary group and a network that includes regions typically found in frontoparietal and cingulo-opercular networks, and (3) shorter path length for about half of the nodes in these networks for the voluntary group. Finally, we use the same paradigm to compare involuntary memories in people with posttraumatic stress disorder (PTSD) to trauma-controls.
This study also included the addition of emotional pictures. There were two main findings. (1) A similar pattern of activity was found for paired > unpaired sounds for both groups, but this activity was delayed in the PTSD group. (2) A similar pattern of activity was found for high > low emotion stimuli, but it occurred earlier in the PTSD group than in the control group. Our results suggest that involuntary and voluntary memories share the same neural representation but that voluntary memories are associated with additional cognitive control processes. They also suggest that disorders associated with cognitive deficits, like PTSD, can affect the processing of involuntary memories.

Relevance: 10.00%

Abstract:

A class of multi-process models is developed for collections of time indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
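A Gaussian approximation of this kind can be built from the known moments of a PG(b, c) variable (Polson, Scott & Windle, 2013): E[ω] = (b / 2c) tanh(c/2) and Var[ω] = b (sinh(c) − c) sech²(c/2) / (4c³). For large counts b, a normal draw with these moments can stand in for an exact Pólya-Gamma draw. A minimal sketch of the idea with illustrative values (not the dissertation's actual approximation scheme):

```python
import math
import random

def pg_moments(b, c):
    """Mean and variance of a Polya-Gamma PG(b, c) random variable
    (moment formulas from Polson, Scott & Windle, 2013)."""
    mean = b / (2.0 * c) * math.tanh(c / 2.0)
    var = b * (math.sinh(c) - c) / (4.0 * c ** 3 * math.cosh(c / 2.0) ** 2)
    return mean, var

def approx_pg_draw(b, c, rng=random):
    """Gaussian stand-in for an exact PG(b, c) draw; reasonable when the
    count b is large -- the high-count regime described above."""
    mean, var = pg_moments(b, c)
    return rng.gauss(mean, math.sqrt(var))

# Illustrative values only: a binomial trial count of 200
mean, var = pg_moments(200, 1.5)
print(round(mean, 3), round(var, 3))
```

Inside a Gibbs sampler, such a draw replaces the exact Pólya-Gamma sampling step for the latent ω, leaving the rest of the conditionally Gaussian update unchanged.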

Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.

The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.

The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age specific latent natural ability class and a performance enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.

All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.

Relevance: 10.00%

Abstract:

“Globalizing the Sculptural Landscape of Isis and Sarapis Cults in Roman Greece” asks questions of cross-cultural exchange and viewership of sculptural assemblages set up in sanctuaries to the Egyptian gods. Focusing on cognitive dissonance, cultural imagining, and manipulations of time and space, I theorize ancient globalization as a set of loosely related processes that shifted a community's connections with place. My case studies range from the 3rd century BCE to the 2nd century CE, including sanctuaries at Rhodes, Thessaloniki, Dion, Marathon, Gortyna, and Delos. At these sites, devotees combined mainstream Greco-Roman sculptures, Egyptian imports, and locally produced imitations of Egyptian artifacts. In the last case, local sculptors represented Egyptian subjects with Greco-Roman naturalistic styles, creating an exoticized visual ideal that had both local and global resonance. My dissertation argues that the sculptural assemblages set up in Egyptian sanctuaries allowed each community to construct complex narratives about the nature of the Egyptian gods. Further, these images participated in a form of globalization that motivated local communities to adopt foreign gods and reinterpret them to suit local needs.

I begin my dissertation by examining how Isis and Sarapis were represented in Greece. My first chapter focuses on single statues of Egyptian gods, describing their iconographies and stylistic tendencies through examples from Corinth and Gortyna. By comparing Greek examples with images of Sarapis, Isis, and Harpokrates from around the Mediterranean, I demonstrate that Greek communities relied on globally available visual tropes rather than creating site or region-specific interpretations. In the next section, I examine what other sources viewers drew upon to inform their experiences of Egyptian sculpture. In Chapter 3, I survey the textual evidence for Isiac cult practice in Greece as a way to reconstruct devotees’ expectations of sculptures in sanctuary contexts. At the core of this analysis are Apuleius’ Metamorphoses and Plutarch’s De Iside et Osiride, which offer a Greek perspective on the cult’s theology. These literary works rely on a tradition of aretalogical inscriptions—long hymns produced from roughly the late 4th century B.C.E. into the 4th century C.E. that describe the expansive syncretistic powers of Isis, Sarapis, and Harpokrates. This chapter argues that the textual evidence suggests that devotees may have expected their images to be especially miraculous and likely to intervene on their behalf, particularly when involved in ritual activity inside the sanctuary.

In the final two chapters, I consider sculptural programs and ritual activity in concert with sanctuary architecture. My fourth chapter focuses on sanctuaries where large amounts of sculpture were found in underground water crypts: Thessaloniki and Rhodes. These groups of statues can be connected to a particular sanctuary space, but their precise display contexts are not known. By reading these images together, I argue that local communities used these globally available images to construct new interpretations of these gods, ones that explored the complex intersections of Egyptian, Greek, and Roman identities in a globalized Mediterranean. My final chapter explores the Egyptian sanctuary at Marathon, a site where exceptional preservation allows us to study how viewers would have experienced images in architectural space. Using the Isiac visuality established in Chapter 3, I reconstruct the viewer's experience, arguing that the patron, Herodes Atticus, intended viewers to inform their experience with the complex theology of Middle Platonism and prevailing elite attitudes about Roman imperialism.

Throughout my dissertation, I diverge from traditional approaches to culture change that center on the concepts of Romanization and identity. In order to access local experiences of globalization, I examine viewership on a micro-scale. I argue that viewers brought their concerns about culture change into dialogue with elements of cult, social status, art, and text to create new interpretations of Roman sculpture sensitive to the challenges of a highly connected Mediterranean world. In turn, these transcultural perspectives motivated Isiac devotees to create assemblages that combined elements from multiple cultures. These expansive attitudes also inspired Isiac devotees to commission exoticized images that brought together disparate cultures and styles in an eclectic manner that mirrored the haphazard way that travel brought change to the Mediterranean world. My dissertation thus offers a more theoretically rigorous way of modeling culture change in antiquity that recognizes local communities’ agency in producing their cultural landscapes, reconciling some of the problems of scale that have plagued earlier approaches to provincial Roman art.

These case studies demonstrate that cultural anxieties played a key role in how viewers experienced artistic imagery in the Hellenistic and Roman Mediterranean. This dissertation thus offers a new component in our understanding of ancient visuality, and, in turn, a better way to analyze how local communities dealt with the rise of connectivity and globalization.


Into the Bends of Time is a 40-minute work in seven movements for a large chamber orchestra with electronics, utilizing real-time computer-assisted processing of music performed by live musicians. The piece explores various combinations of interactive relationships between players and electronics, ranging from relatively basic processing effects to musical gestures achieved through stages of computer analysis, in which resulting sounds are crafted according to parameters of the incoming musical material. Additionally, some elements of interaction are multi-dimensional, in that they rely on the participation of two or more performers fulfilling distinct roles in the interactive process with the computer in order to generate musical material. Through processes of controlled randomness, several electronic effects induce elements of chance into their realization so that no two performances of this work are exactly alike. The piece gets its name from the notion that real-time computer-assisted processing, in which sound pressure waves are transduced into electrical energy, converted to digital data, artfully modified, converted back into electrical energy and transduced into sound waves, represents a “bending” of time.

The Bill Evans Trio featuring bassist Scott LaFaro and drummer Paul Motian is widely regarded as one of the most important and influential piano trios in the history of jazz, lauded for its unparalleled level of group interaction. Most analyses of Bill Evans’ recordings, however, focus on his playing alone and fail to take group interaction into account. This paper examines one performance in particular: the trio’s 1961 live recording of Victor Young’s “My Foolish Heart.” In Part One, I discuss Steve Larson’s theory of musical forces (expanded by Robert S. Hatten) and its applicability to jazz performance. I examine other recordings of ballads by this same trio in order to draw observations about normative ballad performance practice. I discuss meter and phrase structure and show how the relationship between the two is fixed in a formal structure of repeated choruses. I then develop a model of perpetual motion based on the musical forces inherent in this structure. In Part Two, I offer a full transcription and close analysis of “My Foolish Heart,” showing how elements of group interaction work with and against the musical forces inherent in the model of perpetual motion to achieve an unconventional, dynamic use of double-time. I explore the concept of a unified agential persona and discuss its role in imparting the song’s inherent rhetorical tension to the instrumental musical discourse.


It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 months of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 months onward. We presented 6- to 9-month-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 months, or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.