9 results for Recording and registration
at Duke University
Abstract:
The spiking activity of nearby cortical neurons is correlated on both short and long time scales. Understanding this shared variability in firing patterns is critical for appreciating the representation of sensory stimuli in ensembles of neurons, the coincident influences of neurons on common targets, and the functional implications of microcircuitry. Our knowledge about neuronal correlations, however, derives largely from experiments that used different recording methods, analysis techniques, and cortical regions. Here we studied the structure of neuronal correlation in area V4 of alert macaques using recording and analysis procedures designed to match those used previously in primary visual cortex (V1), the major input to V4. We found that the spatial and temporal properties of correlations in V4 were remarkably similar to those of V1, with two notable differences: correlated variability in V4 was approximately one-third the magnitude of that in V1 and synchrony in V4 was less temporally precise than in V1. In both areas, spontaneous activity (measured during fixation while viewing a blank screen) was approximately twice as correlated as visual-evoked activity. The results provide a foundation for understanding how the structure of neuronal correlation differs among brain regions and stages in cortical processing and suggest that it is likely governed by features of neuronal circuits that are shared across the visual cortex.
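The correlated variability discussed here is typically quantified as the Pearson correlation of trial-by-trial spike counts between pairs of simultaneously recorded neurons. A minimal sketch of that measure in Python, using simulated counts in place of the recorded data (an illustration of the general measure, not the study's analysis code):

    import numpy as np

    # Spike-count correlation (r_sc): Pearson correlation of trial-by-trial spike counts.
    def spike_count_correlation(counts_a, counts_b):
        return np.corrcoef(counts_a, counts_b)[0, 1]

    # Toy example: two neurons whose counts share a common source of trial-to-trial variability.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=500)                              # shared fluctuation across 500 trials
    counts_a = rng.poisson(np.maximum(10 + 2 * shared, 0))     # neuron A spike counts
    counts_b = rng.poisson(np.maximum(12 + 2 * shared, 0))     # neuron B spike counts
    print(spike_count_correlation(counts_a, counts_b))         # positive but well below 1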
Abstract:
Phenomenologically, humans effectively label and report feeling distinct emotions, yet the extent to which emotions are represented categorically in nervous system activity is controversial. Theoretical accounts differ in this regard: some posit that distinct emotional experiences emerge from a dimensional representation (e.g., along axes of valence and arousal), whereas others propose that emotions are natural categories, with dedicated neural bases and associated response profiles. This dissertation aims to empirically assess these theoretical accounts by examining how emotions are represented (either as disjoint categories or as points along continuous dimensions) in autonomic and central nervous system activity, integrating psychophysiological recording and functional neuroimaging with machine-learning-based analytical methods. Results demonstrate that, experientially, emotional events are well characterized by both dimensional and categorical frameworks. Measures of central and peripheral responding discriminate among emotion categories but are largely independent of valence and arousal. These findings suggest that dimensional and categorical aspects of emotional experience are driven by separable neural substrates and demonstrate that emotional states can be objectively quantified on the basis of nervous system activity.
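In general terms, the machine-learning analyses referenced here are multivariate pattern classifiers trained to predict emotion category from physiological or neuroimaging features. A minimal sketch under placeholder data (the feature matrix X, labels y, and logistic-regression classifier below are assumptions for illustration, not the dissertation's actual pipeline):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    # Placeholder data: one row per trial of physiological/neuroimaging features,
    # one integer label per trial giving the induced emotion category.
    rng = np.random.default_rng(0)
    n_trials, n_features, n_categories = 300, 50, 5
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, n_categories, size=n_trials)
    X[np.arange(n_trials), y] += 1.0      # inject a weak category-specific signal

    # Cross-validated decoding accuracy; chance level is 1 / n_categories.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print("decoding accuracy: %.2f (chance = %.2f)" % (scores.mean(), 1 / n_categories))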
Abstract:
1. nowhere landscape, for clarinets, trombones, percussion, violins, and electronics
nowhere landscape is an eighty-minute work for nine performers, composed of acoustic and electronic sounds. Its fifteen movements invoke a variety of listening strategies, using slow change, stasis, layering, coincidence, and silence to draw attention to the sonic effects of the environment—inside the concert hall as well as the world outside of it. The work incorporates a unique stage set-up: the audience sits in close proximity to the instruments, facing in one of four different directions, while the musicians play from a number of constantly-shifting locations, including in front of, next to, and behind the audience.
Much of nowhere landscape’s material is derived from a collection of field recordings made by the composer during a road trip from Springfield, MA to Douglas, WY along US-20, a cross-country route made effectively obsolete by the completion of I-90 in the mid-20th century. In an homage to artist Ed Ruscha’s 1963 book Twentysix Gasoline Stations, the composer made twenty-six recordings at gas stations along US-20. Many of the movements of nowhere landscape examine the musical potential of these captured soundscapes: familiar and anonymous, yet filled with poignancy and poetic possibility.
2. “The Map and the Territory: Documenting David Dunn’s Sky Drift”
In 1977, David Dunn recruited twenty-six musicians to play his work Sky Drift in the Anza-Borrego Desert in Southern California. This outdoor performance was documented with photos and recorded to tape with four stationary microphones. A year later, Dunn presented the work in New York City as a “performance/documentation,” playing back the audio recording and projecting slides. In this paper I examine the consequences of this kind of act: what does it mean for a recording of an outdoor work to be shared at an indoor concert event? Can such a complex and interactive experience be successfully flattened into some kind of re-playable documentation? What can a recording capture and what must it exclude?
This paper engages with these questions as they relate to David Dunn’s Sky Drift and to similar works by Karlheinz Stockhausen and John Luther Adams. These case studies demonstrate different solutions to the difficulty of documenting outdoor performances. Because this music is often heard from a variety of equally valid perspectives—and because any single microphone captures sound from only one of these perspectives—the physical set-up of these kinds of pieces complicates what it means to even “hear the music” at all. To this end, I discuss issues around the “work itself” and “aura” as well as “transparency” and “liveness” in recorded sound, bringing in thoughts and ideas from Walter Benjamin, Howard Becker, Joshua Glasgow, and others. In addition, the artist Robert Irwin and the composer Barry Truax have written about the conceptual distinctions between “the work” and “not-the-work”; these distinctions are complicated by documentation and recording. Without the context, the being-there, the music is stripped of much of its ability to communicate meaning.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining multiple stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent sounds; and integrating stimuli arriving at the sensory organs (i.e., the ears) with stimuli delivered through brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided learning of sound localization, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
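For concreteness, the percentages above are simply the measured shift expressed as a fraction of the imposed offset; a trivial check of that arithmetic (the variable names are mine):

    mismatch_deg = 6.0                      # imposed visual-auditory offset
    for shift_deg in (1.3, 1.7):            # observed shift in auditory-only saccade endpoints
        pct = 100 * shift_deg / mismatch_deg
        print(f"{shift_deg} deg shift = {pct:.0f}% of the {mismatch_deg:.0f}-degree mismatch")
    # prints 22% and 28%, the range quoted above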
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information to the forebrain: nearly all auditory signals pass through it on their way there. This makes the inferior colliculus an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and therefore an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site relative to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted by the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement of the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, enlarging the region that can be electrically activated and providing a greater range of evoked percepts.
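One simple way to quantify the kind of stimulation-induced bias described here is to compare the proportion of "probe higher" reports on stimulated versus unstimulated trials at each site, then relate that bias to the site's best frequency relative to the reference. The sketch below uses placeholder data and assumed variable names; it is not the study's actual analysis code:

    import numpy as np
    from scipy import stats

    def stimulation_bias(report_higher, stimulated):
        """Difference in P('probe higher') between stimulated and unstimulated trials at one IC site."""
        report_higher = np.asarray(report_higher, bool)
        stimulated = np.asarray(stimulated, bool)
        return report_higher[stimulated].mean() - report_higher[~stimulated].mean()

    # Across sites, relate the per-site bias to best frequency (BF) relative to the reference tone.
    # Real values would come from the recordings; these placeholders only show the correlation step.
    rng = np.random.default_rng(0)
    bf_minus_ref_octaves = rng.uniform(-1.5, 1.5, size=172)
    biases = 0.1 * bf_minus_ref_octaves + rng.normal(scale=0.15, size=172)
    r, p = stats.pearsonr(bf_minus_ref_octaves, biases)
    print(f"r = {r:.2f}, p = {p:.3g}")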
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds across a very broad region of space, and many are entirely spatially insensitive, so it is unknown how these neurons will respond when more than one sound is present. I use multiple AM stimuli of different modulation frequencies, which the inferior colliculus represents using a spike timing code. This allows me to use spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining their spikes to stimuli from a smaller region of space. I will examine the possibility of a conceptual link between this finding and receptive field shifts observed in the visual system.
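A sketch of the frequency-tagging logic (the function and toy spike train below are illustrative assumptions, not the dissertation's code): because each source is amplitude-modulated at its own rate, the degree to which a neuron's spikes phase-lock to each modulation frequency, for example its vector strength, indicates which source is driving the neuron.

    import numpy as np

    def vector_strength(spike_times, mod_freq_hz):
        """Phase-locking of spikes to one amplitude-modulation frequency (0 = none, 1 = perfect)."""
        phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times)
        return np.abs(np.mean(np.exp(1j * phases)))

    # Toy spike train locked to a 40 Hz modulation but not to a 55 Hz one.
    rng = np.random.default_rng(0)
    cycle_starts = np.arange(0, 1.0, 1 / 40.0)                    # 1 s of 40 Hz cycles
    spike_times = cycle_starts + rng.normal(scale=0.002, size=cycle_starts.size)
    print(vector_strength(spike_times, 40.0))   # high: spikes entrained to this source
    print(vector_strength(spike_times, 55.0))   # low: little entrainment to the other source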
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
PURPOSE: The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of computed radiography (CR) images by correcting for spatial gain variations specific to individual imaging plates. CR devices have not traditionally employed gain-map corrections, unlike the case with flat-panel detectors, because of the multiplicity of plates used with each reader. The lack of gain-map correction has limited the DQE(f) at higher exposures with CR. This current work describes a feasible solution to generating plate-specific gain maps.
METHODS: Ten high-exposure open field images were taken with an RQA5 spectrum, using a sixth generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates registered using fiducial dots on the plate, the ten images averaged, and then high-pass filtered to remove low frequency contributions from field inhomogeneity. A gain-map was then produced by converting all pixel values in the average into fractions with mean of one. The resultant gain-map of the plate was used to normalize subsequent single images to correct for spatial gain fluctuation. To validate performance, the normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. Variations in the quality of correction due to exposure levels, beam voltage/spectrum, CR reader used, and registration were investigated.
RESULTS: The NNPS with plate-specific gain-map correction showed improvement over the noncorrected case over the range of frequencies from 0.15 to 2.5 mm⁻¹. At high exposure (40 mR), NNPS was 50%-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain-map with subsequent images using small fiducial dots, because of slight misregistration during scanning. Further improvement was seen in the NNPS from scaling the gain map about the mean to account for different beam spectra.
CONCLUSIONS: This study demonstrates that a simple gain-map can be used to correct for the fixed-pattern noise in a given plate and thus improve the DQE of CR imaging. Such a method could easily be implemented by manufacturers because each plate has a unique bar code and the gain-map for all plates associated with a reader could be stored for future retrieval. These experiments indicated that an improvement in NPS (and hence, DQE) is possible, depending on exposure level, over a wide range of frequencies with this technique.
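A schematic sketch of the gain-map procedure as described above. Exposure conversion and fiducial-dot registration are assumed to have been done already, and the Gaussian high-pass filter is a stand-in for whatever filter the authors used; the arrays are placeholders:

    import numpy as np
    from scipy import ndimage

    def make_gain_map(open_field_images, highpass_sigma_px=50):
        """Build a plate-specific gain map from several registered, exposure-converted open-field images."""
        avg = np.mean(open_field_images, axis=0)
        # High-pass: remove low-frequency field inhomogeneity, keep the plate's fixed pattern.
        low_freq = ndimage.gaussian_filter(avg, sigma=highpass_sigma_px)
        highpassed = avg - low_freq + low_freq.mean()
        return highpassed / highpassed.mean()          # fractions with a mean of one

    def apply_gain_map(image, gain_map):
        """Normalize a single image by the plate's gain map to suppress fixed-pattern noise."""
        return image / gain_map

    # Usage with placeholder arrays standing in for ten registered open-field exposures:
    rng = np.random.default_rng(0)
    plate_gain = 1 + 0.02 * rng.normal(size=(256, 256))           # fixed pattern of the plate
    exposures = [rng.poisson(5000 * plate_gain) for _ in range(10)]
    gain_map = make_gain_map(exposures)
    corrected = apply_gain_map(exposures[0], gain_map)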
Abstract:
Perceiving or producing complex vocalizations such as speech and birdsong requires the coordinated activity of neuronal populations, and these activity patterns can vary over space and time. How learned communication signals are represented by populations of sensorimotor neurons essential to vocal perception and production remains poorly understood. Using a combination of two-photon calcium imaging, intracellular electrophysiological recording and retrograde tracing methods in anesthetized adult male zebra finches (
Abstract:
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF’s local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Abstract:
Transcranial magnetic stimulation (TMS) is a widely used, noninvasive method for stimulating nervous tissue, yet its mechanisms of effect are poorly understood. Here we report new methods for studying the influence of TMS on single neurons in the brain of alert non-human primates. We designed a TMS coil that focuses its effect near the tip of a recording electrode and recording electronics that enable direct acquisition of neuronal signals at the site of peak stimulus strength, minimally perturbed by stimulation artifact, in awake monkeys (Macaca mulatta). We recorded action potentials within ∼1 ms after 0.4-ms TMS pulses and observed changes in activity that differed significantly for active stimulation as compared with sham stimulation. This methodology is compatible with standard equipment in primate laboratories, allowing easy implementation. Application of these tools will facilitate the refinement of next-generation TMS devices, experiments, and treatment protocols.
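As a rough illustration of the comparison described (not the authors' hardware or analysis code), spikes can be counted in a short window after each pulse, skipping a brief artifact-contaminated period, and the counts compared between active and sham stimulation; the window lengths and data below are assumptions:

    import numpy as np

    def post_pulse_spike_counts(spike_times_s, pulse_times_s, blank_ms=0.5, window_ms=10.0):
        """Spikes per pulse in a post-pulse window, skipping a brief artifact blanking period."""
        spikes = np.asarray(spike_times_s)
        counts = []
        for t in pulse_times_s:
            start, stop = t + blank_ms / 1000.0, t + window_ms / 1000.0
            counts.append(np.sum((spikes >= start) & (spikes < stop)))
        return np.array(counts)

    # Comparing active vs. sham pulse trains with placeholder spike and pulse times:
    rng = np.random.default_rng(0)
    pulses_active = np.arange(1.0, 61.0, 1.0)              # one pulse per second for a minute
    pulses_sham = pulses_active + 0.5
    spikes = np.sort(rng.uniform(0, 61, size=3000))        # background spiking
    active = post_pulse_spike_counts(spikes, pulses_active)
    sham = post_pulse_spike_counts(spikes, pulses_sham)
    print(active.mean(), sham.mean())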