Abstract:
Objective: The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Methods: Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Results: Comparison of the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes differed between the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. Conclusions: The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of cortical auditory evoked responses to rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of cortical neurons responding to rapidly presented sounds. Significance: This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds, and it offers a potential new biomarker for the discrimination of rapid sound transitions.
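The core of the LS deconvolution approach can be sketched briefly: the continuous recording is modeled as a superposition of lagged, event-locked response templates, and the templates are recovered by solving a linear least-squares system built from the known stimulus onsets. The following Python sketch illustrates this general idea on synthetic data; it is a minimal illustration only, not the authors' MEG pipeline, and the sampling rate, kernel length and toy signal are hypothetical choices.

    import numpy as np

    # Hedged sketch of least-squares (LS) deconvolution of overlapping
    # evoked responses: model the recording as y = X b, where X is a
    # binary lagged design matrix built from the known event onsets, and
    # solve for b, which holds one response template per event type.
    # The sampling rate, kernel length and toy data are hypothetical.

    fs = 250                      # sampling rate (Hz), assumed
    kernel_len = int(0.4 * fs)    # recover 400 ms of response per event

    def design_matrix(onsets_by_type, n_samples, kernel_len):
        """One block of lagged indicator columns per event type."""
        X = np.zeros((n_samples, len(onsets_by_type) * kernel_len))
        for t, onsets in enumerate(onsets_by_type):
            for s in onsets:
                for lag in range(kernel_len):
                    if s + lag < n_samples:
                        X[s + lag, t * kernel_len + lag] = 1.0
        return X

    # toy data: two event types whose responses overlap in time
    rng = np.random.default_rng(0)
    n = 20 * fs
    templates = [np.hanning(kernel_len), -0.5 * np.hanning(kernel_len)]
    onsets = [rng.choice(n - kernel_len, 40, replace=False)
              for _ in templates]
    y = np.zeros(n)
    for k, ons in zip(templates, onsets):
        for s in ons:
            y[s:s + kernel_len] += k
    y += 0.1 * rng.standard_normal(n)

    X = design_matrix(onsets, n, kernel_len)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    recovered = b.reshape(len(onsets), kernel_len)  # one row per type

With sufficiently jittered onsets the design matrix is well conditioned, which is why the recovered short-SOA templates can be compared directly against responses measured at long SOAs.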
Abstract:
This paper presents a harmony search metaheuristic for the resource-constrained project scheduling problem with discounted cash flows. In the proposed approach, a resource-constrained project is characterized by its “best” schedule, where best means a makespan-minimal resource-constrained schedule for which the net present value (NPV) is maximal. Theoretically, finding the optimal schedule amounts to solving two zero-one integer programming problems: the first phase determines the minimal makespan of the resource-constrained schedules, and the second phase, treating that makespan as a constraint, maximizes the NPV over the set of makespan-minimal resource-constrained schedules. This two-phase mixed integer linear programming (MILP) formulation can be solved in reasonable time only for small-scale projects, owing to the NP-hard nature of the problem. The applied metaheuristic is based on the “conflict repairing” version of the “Sounds of Silence” harmony search metaheuristic developed by Csébfalvi (2007) for the resource-constrained project scheduling problem (RCPSP), which resolves resource-usage conflicts by inserting precedence relations. In order to illustrate the essence and viability of the proposed harmony search metaheuristic, we present computational results for the J30 subset of the well-known and popular PSPLIB test library. To generate the exact solutions, a state-of-the-art MILP solver (CPLEX) was used.
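As background, the generic harmony search loop referenced here can be sketched in a few lines: candidate solutions ("harmonies") are kept in a memory, new candidates are assembled by drawing components from memory (with occasional "pitch adjustment") or at random, and the worst member of the memory is replaced whenever a better candidate appears. The following Python skeleton is written for a generic continuous minimization problem; it is a hedged illustration of the metaheuristic family only and does not reproduce the RCPSP encoding or the "conflict repairing" step of Csébfalvi (2007).

    import random

    # Minimal harmony search skeleton for a generic continuous
    # minimization problem. This sketches only the metaheuristic family
    # named above; the RCPSP encoding and the "conflict repairing" step
    # of Csébfalvi (2007) are not reproduced here.

    def harmony_search(objective, lower, upper, dim,
                       hms=20, hmcr=0.9, par=0.3, iters=5000, seed=1):
        rng = random.Random(seed)
        # harmony memory: a pool of candidate solutions and their scores
        memory = [[rng.uniform(lower, upper) for _ in range(dim)]
                  for _ in range(hms)]
        scores = [objective(h) for h in memory]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if rng.random() < hmcr:            # memory consideration
                    v = rng.choice(memory)[d]
                    if rng.random() < par:         # pitch adjustment
                        v += rng.uniform(-1, 1) * 0.01 * (upper - lower)
                else:                              # random selection
                    v = rng.uniform(lower, upper)
                new.append(min(max(v, lower), upper))
            s = objective(new)
            worst = max(range(hms), key=lambda i: scores[i])
            if s < scores[worst]:                  # replace worst harmony
                memory[worst], scores[worst] = new, s
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]

    # usage: minimize a simple sphere function in four dimensions
    best, val = harmony_search(lambda x: sum(v * v for v in x),
                               lower=-5.0, upper=5.0, dim=4)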
Abstract:
The purpose of this thesis was to build the Guitar Application ToolKit (GATK), a series of applications used to expand the sonic capabilities of the acoustic/electric stereo guitar. Furthermore, the goal of the GATK was to extend the improvisational capabilities and compositional techniques generated by this innovative instrument. During the GATK creation process, current guitar production techniques and the overall sonic result were enhanced by planning and implementing a personalized electro-acoustic performance setup, designing custom-made performance interfaces, creating interactive compositional strategies, crafting non-standardized sounds, and controlling various music parameters in real time using the Max/MSP programming environment. This was the first thesis project of its kind. It is expected that this thesis will be useful as a reference paper for electronic musicians and music technology students; as a product demonstration for companies that manufacture the relevant software; and as a personal portfolio for future technology-related jobs.
Abstract:
In his study “The IRS Collection Division: Contacts and Settlements,” John M. Tarras, Assistant Professor, School of Hotel, Restaurant and Institutional Management, Michigan State University, initially states: “The collection division of the Internal Revenue Service is often the point of contact for many hospitality businesses. The author describes how the division operates, what the hospitality firm can expect when contacted by it, and what types of strategies firms might find helpful when negotiating a settlement with the IRS.” The author will have you know that even though most chance meetings with the IRS Collection Division are due to unfortunate tax payment circumstances, there are actually more benign reasons for close encounters of the IRS kind. This does not mean, however, that brushes with the IRS Collection Division will end on a friendly note. “…the Tax Reform Act of 1986 with its added complexity will cause some hospitality firms to inadvertently fail to make proper payments on a timely basis,” Tarras offers in illustrating a perhaps less pugnacious side of IRS relations. Should a hospitality business owner represent himself/herself before the IRS? Never, says Tarras. “Too many taxpayers ruin their chances of a fair settlement by making what to them seem innocent remarks, but ones that turn out to be far different,” warns Professor Tarras. Tarras makes the distinction between IRS the Collection Division and IRS the Audit Division. “While the Audit Division is interested in how the tax liability arose, the Collection Division is generally only interested in collecting the liability,” he informs you. Either one sounds firmly like hostile territory. They don’t bluff: Tarras does want you to know that when the IRS threatens to levy on the assets of a hospitality business, it will do so. Those assets may extend to personal and real property as well, he says. The levy action is generally the final resort in an IRS collection effort. Professor Tarras explains the lien process and the due process attached to that IRS collection tactic. “The IRS can also levy a hospitality firm owner's wages. In this case, it is important to realize that you are allowed to exempt from levy $75 per week, along with $25 per week for each of your dependents (unless your spouse works),” Professor Tarras says with the appropriate citation. What are the options available to the hospitality business owner who finds himself on the wrong side of the IRS Collection Division? Negotiate in good faith, says Professor Tarras. “In many cases, a visit to the IRS office will greatly reduce the chances that a simple problem will turn into a major one,” Tarras advises. He dedicates the last pages of the discussion to negotiation strategies.
Abstract:
One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by the acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head and pinnae, or outer ears. As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features, obtained from 3D images of the 15 experimental subjects modeled in the project. These equations were used to yield “Model HRTFs” that can create elevation effects. Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs when it comes to localizing sounds in the frontal plane. The number of reversals (perception of a sound source above the horizontal plane when it is actually below the plane, and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple, yet versatile, because it relies on easy-to-measure parameters to create an individualized HRTF. This low-order parameterized model also reduces the computational and storage demands, while maintaining a sufficient number of perceptually relevant spectral cues.
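Prony's method itself is compact enough to sketch: fit linear-prediction coefficients to the impulse response, take the roots of the resulting polynomial as damped-exponential poles, then solve a second least-squares problem for the amplitudes. The Python sketch below illustrates the method on a synthetic resonance; it is not the dissertation's HRIR pipeline, and the model order and test signal are hypothetical.

    import numpy as np

    # Hedged sketch of Prony's method: approximate an impulse response
    # h[n] as a sum of p damped complex exponentials. This illustrates
    # the kind of low-order modeling named in the abstract; it is not
    # the dissertation's actual HRIR pipeline, and the test signal is
    # synthetic.

    def prony(h, p):
        n = len(h)
        # 1) linear prediction: find a with h[k] ~ -sum_i a[i] * h[k-i]
        H = np.column_stack([h[p - 1 - i : n - 1 - i] for i in range(p)])
        a, *_ = np.linalg.lstsq(H, -h[p:], rcond=None)
        poles = np.roots(np.r_[1.0, a])        # damped-exponential poles
        # 2) amplitudes: least-squares fit of the exponentials to data
        V = np.vander(poles, n, increasing=True).T  # V[k,i] = poles[i]**k
        amps, *_ = np.linalg.lstsq(V, h.astype(complex), rcond=None)
        return poles, amps

    # synthetic "HRIR-like" test: a single decaying resonance
    fs = 44100
    t = np.arange(256) / fs
    h = np.exp(-3000 * t) * np.cos(2 * np.pi * 4000 * t)
    poles, amps = prony(h, p=4)
    h_hat = (np.vander(poles, len(h), increasing=True).T @ amps).real
    print("max abs error:", np.max(np.abs(h - h_hat)))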
Abstract:
This study analyzed the reader's relationship to the sounds embedded in a written text for the purpose of identifying those sounds' contribution to the reader's interpretation of that text. To achieve this objective, this study negotiated Heideggerian phenomenology, Freudian and Lacanian psychoanalysis, linguistics, and musicology into a reader response theory, which was then applied to Edgar Allan Poe's "The Raven." This study argues that the orchestration of sounds in "The Raven" forces its reader into a regression, which the reader then represses, only to carry the resulting sound-image // away from the poem as a psychic scar.
Abstract:
Reverberation is caused by the reflection of sound off surfaces near the sound source as it propagates to the listener. The impulse response of an environment represents its reverberation characteristics. Being dependent on the environment, reverberation conveys to the listener the characteristics of the space where the sound originates, and its absence commonly does not sound "natural". When recording sounds, it is not always possible to obtain the desired reverberation characteristics of an environment, so methods for artificial reverberation have been developed, always seeking implementations that are more efficient and more faithful to real environments. This work presents an implementation in FPGAs (Field Programmable Gate Arrays) of a classic digital audio reverberation structure, based on a proposal by Manfred Schroeder, using sets of all-pass and comb filters. The developed system exploits the use of reconfigurable hardware as a platform for the development and implementation of digital audio effects, focusing on modularity and reuse.
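The Schroeder structure mentioned above is classic and easy to sketch in software: several feedback comb filters in parallel, followed by all-pass filters in series. The Python sketch below shows the signal flow that such an FPGA design realizes in hardware; the delay lengths and gains are illustrative textbook-style defaults, not the values of the implementation described.

    import numpy as np

    # Sketch of a classic Schroeder reverberator: four parallel feedback
    # comb filters followed by two all-pass filters in series. Delay
    # lengths and gains are illustrative defaults, not the values of the
    # FPGA implementation described above.

    def comb(x, delay, g):
        """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
        return y

    def allpass(x, delay, g):
        """All-pass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = -g * x[n] + xd + g * yd
        return y

    def schroeder(x, fs=44100):
        combs = [(0.0297, 0.805), (0.0371, 0.827),
                 (0.0411, 0.783), (0.0437, 0.764)]
        wet = sum(comb(x, int(fs * d), g) for d, g in combs) / len(combs)
        for d, g in [(0.005, 0.7), (0.0017, 0.7)]:
            wet = allpass(wet, int(fs * d), g)
        return wet

    # usage: feed an impulse through to inspect the impulse response
    fs = 44100
    x = np.zeros(fs); x[0] = 1.0
    ir = schroeder(x, fs)

The mutually prime delay lengths are the standard way to avoid coincident comb peaks; in hardware each delay line maps naturally to a block RAM, which is what makes the structure attractive for FPGAs.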
Abstract:
The study aimed to analyze the nursing diagnoses of the nutrition domain from NANDA International in patients undergoing hemodialysis. This is a cross-sectional study conducted in a university hospital in northeastern Brazil with 50 hemodialysis patients. Data were collected with an interview form and a physical examination, in digital format, between December 2013 and May 2014. Data analysis was divided into two stages. In the first, the defining characteristics, related factors and risk factors were judged as to their presence by the researcher, according to the data collected. In the second stage, based on data from the first, diagnostic inference was carried out by experts. The results were organized in tables and analyzed using descriptive and inferential statistics for the diagnoses with frequencies higher than 50%. The project was approved by the Ethics Committee of the research institution (protocol number 392 535; Certificate of Presentation for Ethics Assessment number 18710613.4.00005537). The results indicate a median of 7 (± 1.51) nursing diagnoses of the nutrition domain per patient. Six diagnoses with frequencies greater than 50% were identified, namely: Risk for electrolyte imbalance, Risk for unstable blood glucose level, Excess fluid volume, Readiness for enhanced fluid balance, Readiness for enhanced nutrition and Risk for deficient fluid volume. The defining characteristics, related factors and risk factors presented means of 34.78 (± 6.86) and 15.50 (± 3.40) and a median of 4 (± 1.93), respectively, and 11 of these components had statistically significant associations with the respective diagnoses. Associations were identified between adventitious breath sounds, edema and pulmonary congestion and the diagnosis Excess fluid volume; between expressed desire to enhance fluid balance and the diagnosis Readiness for enhanced fluid balance; and between eats regularly, attitude toward food consistent with health goals, consumes adequate food, expresses knowledge of healthy food choices, expresses desire to enhance nutrition, expresses knowledge of healthier fluid choices and follows an appropriate standard for intake, and the diagnosis Readiness for enhanced nutrition. It is concluded that nutrition domain diagnoses related to electrolyte problems are prevalent in clients undergoing hemodialysis. The identification of these diagnoses contributes to the development of a plan of care targeted to the needs of these clients, providing better quality of life and advancing the practice of care.
Abstract:
Recognized for his relevant writing for the cello, Silvio Ferraz wrote Segundo Responsório for cello solo and chamber group in 2012, which followed Responsório ao Vento, a version of the same piece for solo cello. The work is characterized by the idea of a continuity of sound moving through different textures, timbres, dynamics and musical gestures. The composer uses extended techniques, such as large sections in sul tasto played on three strings simultaneously, trills of natural harmonics, muffled trills with natural harmonics, col legno battuto, different types of glissando and simultaneous soundings of harmonic and non-harmonic notes, which contribute to a wealth of sounds and layers that create different textures. This article investigates the relationship of the composer with the cello, relates Responsório ao Vento to his other works, and studies the influences on the composer, addressing technical and interpretive aspects of the piece drawn from performance experiences.
Abstract:
The proposed research-creation project surveys Québécité (Quebec identity) through a major strand of the study of the Americas: the territory. The adoption of the Quebec territory, its space and its density, approached along both a southern axis (the St. Lawrence) and a northern one (the North), takes place in my acousmatic corpus through the use of theoretical concepts established by several Quebec and international figures. The description of the sources of inspiration for the proposed cycle of acousmatic works, drawn mainly from extramusical spheres, holds an important place here: the approaches of various Quebec artists and researchers who contributed to the poetic emergence of my corpus, such as Pierre Perrault, René Derouin, Daniel Chartier, and Louis-Edmond Hamelin. The musical portion is carried out analytically using two methods specific to the electroacoustic genre, Pierre Schaeffer's typological analysis and Stéphane Roy's functional analysis, which, through the work of certain international electronic music composers, make it possible to highlight the plurality of territorial conceptions and the underlying universal semantic network, leaving room for a broader reading of this theme. The proposed methodology thus makes it possible to identify the universal (natural models, psychoacoustic references), the local (the use of Quebec poems, precise animal or anecdotal referents such as bird calls and sound recordings of the St. Lawrence), and the dichotomous relationship between nature and culture in my corpus, so that a coherent musical discourse based on the Quebec territory may emerge.
Abstract:
The student's musical compositions accompanying this thesis, in the form of an audio disc, are available at the Music Library desk under the title: Cristina García Islas (https://umontreal.on.worldcat.org/oclc/1135201695).
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies. 3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
Abstract:
Sound is a key sensory modality for Hawaiian spinner dolphins. Like many other marine animals, these dolphins rely on sound and their acoustic environment for many aspects of their daily lives, making it essential to understand the soundscape in areas that are critical to their survival. Hawaiian spinner dolphins rest during the day in shallow coastal areas and forage offshore at night. In my dissertation I focus on the soundscape of the bays where Hawaiian spinner dolphins rest, taking a soundscape ecology approach. I primarily relied on passive acoustic monitoring using four DSG-Ocean acoustic loggers in four Hawaiian spinner dolphin resting bays on the Kona Coast of Hawai‛i Island. 30-second recordings were made every four minutes in each of the bays for 20 to 27 months between January 8, 2011 and March 30, 2013. I also utilized concomitant vessel-based visual surveys in the four bays to provide context for these recordings. In my first chapter I used the contributions of the dolphins to the soundscape to monitor presence in the bays and found the degree of presence varied greatly, from less than 40% to nearly 90% of days monitored with dolphins present. Having established these bays as important to the animals, in my second chapter I explored the many components of their resting bay soundscape and evaluated the influence of natural and human events on the soundscape. I characterized the overall soundscape in each of the four bays, used the tsunami event of March 2011 to approximate a natural soundscape, and identified all loud daytime outliers. Overall, sound levels were consistently louder at night and quieter during the daytime due to the sounds from snapping shrimp. In fact, peak Hawaiian spinner dolphin resting time co-occurs with the quietest part of the day. However, I also found that humans drastically alter this daytime soundscape with sound from offshore aquaculture, vessel sound and military mid-frequency active sonar. During one recorded mid-frequency active sonar event in August 2011, sound pressure levels in the 3.15 kHz 1/3rd-octave band were as high as 45.8 dB above median ambient noise levels. Human activity both inside (vessels) and outside (sonar and aquaculture) the bays significantly altered the resting bay soundscape. Inside the bays there are high levels of human activity, including vessel-based tourism directly targeting the dolphins. The interactions between humans and dolphins in their resting bays are of concern; therefore, my third chapter aimed to assess the acoustic response of the dolphins to human activity. Using days where acoustic recordings overlapped with visual surveys, I found the greatest response in a bay with dolphin-centric activities, not in the bay with the most vessel activity, indicating that it is not the magnitude of activity that elicits a response but its focus. In my fourth chapter I summarize the key results from my first three chapters to illustrate the power of a multiple-site design to prioritize action to protect Hawaiian spinner dolphins in their resting bays, a chapter I hope will be useful for managers should they take further action to protect the dolphins.
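As background, the kind of band-level measurement quoted above (e.g., the 3.15 kHz 1/3rd-octave band during the sonar event) can be sketched as follows: band-pass the recording around the nominal centre frequency and express the level in dB relative to an ambient reference. The Python sketch below illustrates this on synthetic noise; it omits hydrophone calibration and is not the dissertation's analysis chain.

    import numpy as np
    from scipy.signal import butter, sosfilt

    # Sketch of a 1/3-octave band level measurement: band-pass around a
    # nominal centre frequency, then report the level in dB relative to
    # an ambient reference. Calibration is omitted; signals are synthetic.

    def third_octave_level_db(x, fs, fc):
        """Uncalibrated RMS level (dB) in the 1/3-octave band around fc."""
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)   # band edges
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        y = sosfilt(sos, x)
        return 10 * np.log10(np.mean(y ** 2))

    fs = 32000
    rng = np.random.default_rng(0)
    ambient = rng.standard_normal(fs)             # 1 s of "ambient" noise
    tone = 50 * np.sin(2 * np.pi * 3150 * np.arange(fs) / fs)
    event = ambient + tone                        # a loud 3.15 kHz event

    delta = (third_octave_level_db(event, fs, 3150)
             - third_octave_level_db(ambient, fs, 3150))
    print(f"band level {delta:.1f} dB above ambient")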
Abstract:
1. nowhere landscape, for clarinets, trombones, percussion, violins, and electronics
nowhere landscape is an eighty-minute work for nine performers, composed of acoustic and electronic sounds. Its fifteen movements invoke a variety of listening strategies, using slow change, stasis, layering, coincidence, and silence to draw attention to the sonic effects of the environment—inside the concert hall as well as the world outside of it. The work incorporates a unique stage set-up: the audience sits in close proximity to the instruments, facing in one of four different directions, while the musicians play from a number of constantly-shifting locations, including in front of, next to, and behind the audience.
Much of nowhere landscape’s material is derived from a collection of field recordings made by the composer during a road trip from Springfield, MA to Douglas, WY along US-20, a cross-country route made effectively obsolete by the completion of I-90 in the mid-20th century. In an homage to artist Ed Ruscha’s 1963 book Twentysix Gasoline Stations, the composer made twenty-six recordings at gas stations along US-20. Many of the movements of nowhere landscape examine the musical potential of these captured soundscapes: familiar and anonymous, yet filled with poignancy and poetic possibility.
2. “The Map and the Territory: Documenting David Dunn’s Sky Drift”
In 1977, David Dunn recruited twenty-six musicians to play his work Sky Drift in the Anza-Borrego Desert in Southern California. This outdoor performance was documented with photos and recorded with four stationary microphones to tape. A year later, Dunn presented the work in New York City as a “performance/documentation,” playing back the audio recording and projecting slides. In this paper I examine the consequences of this kind of act: what does it mean for a recording of an outdoor work to be shared at an indoor concert event? Can such a complex and interactive experience be successfully flattened into some kind of re-playable documentation? What can a recording capture and what must it exclude?
This paper engages with these questions as they relate to David Dunn’s Sky Drift and to similar works by Karlheinz Stockhausen and John Luther Adams. These case studies demonstrate different solutions to the difficulty of documenting outdoor performances. Because this music is often heard from a variety of equally valid perspectives, and because any single microphone captures sound from only one of those perspectives, the physical set-up of these kinds of pieces complicates what it means to even “hear the music” at all. To this end, I discuss issues around the “work itself” and “aura” as well as “transparency” and “liveness” in recorded sound, bringing in thoughts and ideas from Walter Benjamin, Howard Becker, Joshua Glasgow, and others. In addition, the artist Robert Irwin and the composer Barry Truax have written about the conceptual distinctions between “the work” and “not-the-work”; these distinctions are complicated by documentation and recording. Without the context, the being-there, the music is stripped of much of its ability to communicate meaning.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually-guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
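The logic of measuring a stimulation-induced perceptual bias can be illustrated compactly: fit psychometric functions to the "probe higher?" judgments on stimulated and unstimulated trials and compare their midpoints. The Python sketch below does this on simulated trial data; it illustrates the analysis logic only, not the study's code, and all task parameters are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch: quantify a stimulation-induced bias by fitting logistic
    # psychometric curves to stimulated vs. unstimulated trials and
    # comparing their 50% points. Trial data are simulated; all task
    # parameters are hypothetical, not the study's values.

    def logistic(f, mid, slope):
        return 1.0 / (1.0 + np.exp(-(f - mid) / slope))

    rng = np.random.default_rng(0)
    freqs = np.linspace(-1.0, 1.0, 9)   # probe minus reference (octaves)
    n_trials = 60

    def simulate(bias):
        """Proportion of 'higher' choices at each probe frequency."""
        p = logistic(freqs, mid=bias, slope=0.25)
        return rng.binomial(n_trials, p) / n_trials

    p_off = simulate(bias=0.0)      # unstimulated trials
    p_on = simulate(bias=-0.15)     # stimulation shifts judgments

    (mid_off, _), _ = curve_fit(logistic, freqs, p_off, p0=[0.0, 0.2])
    (mid_on, _), _ = curve_fit(logistic, freqs, p_on, p0=[0.0, 0.2])
    print(f"psychometric shift: {mid_on - mid_off:.3f} octaves")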
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
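Phase locking to an AM "tag" is typically quantified with vector strength, which suffices to sketch the frequency-tagging logic: a neuron entrained to one sound's modulation rate yields a vector strength near 1 at that rate and near 0 at the other sound's rate. The Python sketch below demonstrates this on a simulated spike train; it is illustrative only and does not reproduce the recorded data or analysis pipeline.

    import numpy as np

    # Sketch of the frequency-tagging logic: when two simultaneous sounds
    # are AM-modulated at different rates, the rate that spikes phase-lock
    # to indicates which source drives the neuron. Vector strength is a
    # standard phase-locking metric; the spike train here is simulated.

    def vector_strength(spike_times, mod_freq):
        """Vector strength of spike times (s) at an AM rate (Hz)."""
        phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
        return np.abs(np.mean(np.exp(1j * phases)))

    rng = np.random.default_rng(0)
    f_a, f_b = 20.0, 28.0   # hypothetical AM rates tagging sounds A and B

    # simulate spikes locked to sound A: jittered around peaks of f_a
    spikes = np.arange(1, 61) / f_a + rng.normal(0, 0.002, size=60)

    print("VS at f_a:", vector_strength(spikes, f_a))  # near 1: entrained
    print("VS at f_b:", vector_strength(spikes, f_b))  # near 0: untagged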
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.