4 results for Independent Eye Movement
at Duke University
Abstract:
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. 
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
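The coordinate-frame logic described above can be sketched in one dimension as follows. This is a minimal illustration with hypothetical function names and values, not the model's actual code: eye-centered (retinal) target location plus eye position yields a head-centered location suitable for guiding reaches, while corollary discharge allows the post-saccadic retinal location to be predicted before the eye lands.

```python
# Minimal 1-D sketch of the coordinate transformations described above.
# Function names and numbers are illustrative assumptions, not the model's code.

def head_centered(target_retinal_deg, eye_position_deg):
    """Combine a retinal target location with eye position to obtain
    the target's head-centered location (usable for guiding a reach)."""
    return target_retinal_deg + eye_position_deg

def remapped_retinal(target_head_deg, predicted_eye_position_deg):
    """Presaccadic remapping: use corollary discharge (the predicted
    post-saccadic eye position) to anticipate where the target will
    fall on the retina after the saccade."""
    return target_head_deg - predicted_eye_position_deg

# Target 10 deg right of fixation while the eye is at 0 deg:
head = head_centered(10.0, 0.0)       # 10 deg, head-centered
# A 10-deg rightward saccade is imminent:
post = remapped_retinal(head, 10.0)   # 0 deg: target will land on the fovea
```

Because the head-centered estimate is unchanged by the saccade, a pointer driven by it can remain continuously accurate despite the abrupt change in retinal input, which is the "useful stability" criterion above.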
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous subcortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and the middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through the indirect mechanical coupling of the OHCs and MEMs to the eardrum, the motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique in three experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the oscillatory saccadic eardrum associated response (OSEAR), depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) In the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change in ear-canal pressure levels when a sound is expected versus when one is not. Specifically, we observe a drop in spectral power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) In the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we propose that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds, from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
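The band-power comparison underlying experiment 2 can be sketched as follows. This is a naive-DFT illustration with made-up signal parameters, not the study's analysis code: power in a frequency band of the ear-canal recording is summed and compared across bands (or, in the experiment, across time windows).

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x (sampled at fs Hz), summed over the DFT bins
    whose frequencies fall within [f_lo, f_hi]. Naive DFT for clarity."""
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n**2
    return total

# Illustrative ear-canal signal: a pure 1 kHz component sampled at 16 kHz.
fs = 16000
sig = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(320)]
low = band_power(sig, fs, 100, 4000)    # the band where power drops in the task
high = band_power(sig, fs, 4000, 8000)  # comparison band
```

In the actual experiment the comparison of interest is the same band measured before versus around the expected sound time; the sketch only shows the band-power computation itself.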
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent sounds; and integrating stimuli arriving at the sensory organs (i.e., the ears) with stimuli delivered through brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided calibration of auditory localization, a problem with implications for the general question in learning of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, which is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
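The adaptation percentages quoted above follow directly from the observed shifts and the imposed offset; a worked check (values taken from the text):

```python
# Shift of 1.3-1.7 deg relative to the imposed 6-deg visual-auditory mismatch.
mismatch_deg = 6.0
shifts_deg = [1.3, 1.7]
percent_shift = [round(100 * s / mismatch_deg, 1) for s in shifts_deg]
# roughly 22% and 28% of the imposed mismatch
```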
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information before it reaches the forebrain: almost all ascending auditory signals pass through it. The inferior colliculus is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for studying stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
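One simple way to express the judgment bias measured above is the difference in the proportion of "higher" reports on stimulated versus non-stimulated trials. The sketch below is an illustration of that measure with hypothetical toy data, not the dissertation's analysis code:

```python
def stimulation_bias(stim_choices, nostim_choices):
    """Bias = P('higher' | stimulation) - P('higher' | no stimulation).
    Each list holds 1 for a 'higher' report and 0 for a 'lower' report.
    Positive values mean stimulation pushed judgments toward 'higher'."""
    p_stim = sum(stim_choices) / len(stim_choices)
    p_nostim = sum(nostim_choices) / len(nostim_choices)
    return p_stim - p_nostim

# Hypothetical trial outcomes for one IC site:
bias = stimulation_bias([1, 1, 1, 0], [1, 0, 0, 0])  # 0.75 - 0.25 = 0.5
```

In the study, the sign and size of such a bias were compared with whether the site's best frequency lay above or below the reference frequency.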
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds across a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple amplitude-modulated (AM) stimuli with different modulation frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
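Entrainment to a given AM frequency is commonly quantified by vector strength, which underlies the frequency-tagging idea: each spike is assigned a phase relative to one stimulus's modulation cycle, and strong phase clustering identifies that stimulus as the driver. The sketch below uses hypothetical spike times and is an illustration of the measure, not the actual analysis code:

```python
import math

def vector_strength(spike_times_s, mod_freq_hz):
    """Phase locking of spikes to one AM frequency: 1.0 means every
    spike lands at the same modulation phase (perfect entrainment),
    values near 0 mean no entrainment."""
    phases = [2 * math.pi * mod_freq_hz * t for t in spike_times_s]
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

# Hypothetical neuron firing one spike per cycle of a 20 Hz modulation:
spikes = [k / 20 for k in range(20)]
vs_tagged = vector_strength(spikes, 20.0)    # near 1.0: entrained to this sound
vs_other = vector_strength(spikes, 30.0)     # near 0.0: not entrained
```

Comparing vector strength at each concurrent sound's tag frequency indicates which source a neuron's spikes follow in the dual-sound condition.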
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
While it is well known that exposure to radiation can result in cataract formation, questions still remain about the presence of a dose threshold in radiation cataractogenesis. Since the exposure history from diagnostic CT exams is well documented in a patient’s medical record, the population of patients chronically exposed to radiation from head CT exams may be an interesting area to explore for further research in this area. However, there are some challenges in estimating lens dose from head CT exams. An accurate lens dosimetry model would have to account for differences in imaging protocols, differences in head size, and the use of any dose reduction methods.
The overall objective of this dissertation was to develop a comprehensive method to estimate radiation dose to the lens of the eye for patients receiving CT scans of the head. This research is comprised of a physics component, in which a lens dosimetry model was derived for head CT, and a clinical component, which involved the application of that dosimetry model to patient data.
The physics component includes experiments related to the physical measurement of the radiation dose to the lens by various types of dosimeters placed within anthropomorphic phantoms. These dosimeters include high-sensitivity MOSFETs, TLDs, and radiochromic film. The six anthropomorphic phantoms used in these experiments range in age from newborn to adult.
First, the lens dose from five clinically relevant head CT protocols was measured in the anthropomorphic phantoms with MOSFET dosimeters on two state-of-the-art CT scanners. The volume CT dose index (CTDIvol), which is a standard CT output index, was compared to the measured lens doses. Phantom age-specific CTDIvol-to-lens dose conversion factors were derived using linear regression analysis. Since head size can vary among individuals of the same age, a method was derived to estimate the CTDIvol-to-lens dose conversion factor using the effective head diameter. These conversion factors were derived for each scanner individually, but also were derived with the combined data from the two scanners as a means to investigate the feasibility of a scanner-independent method. Using the scanner-independent method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter, most of the fitted lens dose values fell within 10-15% of the measured values from the phantom study, suggesting that this is a fairly accurate method of estimating lens dose from the CTDIvol with knowledge of the patient’s head size.
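The size-specific conversion described above amounts to regressing the measured conversion factor (lens dose divided by CTDIvol) on effective head diameter, then applying the fitted line to a new patient. The sketch below uses made-up phantom values for illustration; the dissertation's actual fitted coefficients are not reproduced here:

```python
# Hypothetical sketch of a size-specific CTDIvol-to-lens dose conversion.
# All numeric values are illustrative, not the measured phantom data.

def fit_line(x, y):
    """Ordinary least-squares fit of y ~ a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Effective head diameter (cm) vs. measured conversion factor
# (lens dose / CTDIvol) across phantoms -- illustrative values:
diam_cm = [12.0, 14.0, 16.0, 18.0, 20.0]
conv_factor = [1.10, 1.02, 0.95, 0.88, 0.80]
a, b = fit_line(diam_cm, conv_factor)

def lens_dose_mgy(ctdi_vol_mgy, head_diam_cm):
    """Estimate lens dose from the scanner-reported CTDIvol and head size."""
    return ctdi_vol_mgy * (a + b * head_diam_cm)
```

The negative slope reflects the expectation that larger heads attenuate more, lowering the lens dose delivered per unit CTDIvol.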
Second, the dose reduction potential of organ-based tube current modulation (OB-TCM) and its effect on the CTDIvol-to-lens dose estimation method was investigated. The lens dose was measured with MOSFET dosimeters placed within the same six anthropomorphic phantoms. The phantoms were scanned with the five clinical head CT protocols with OB-TCM enabled on the one scanner model at our institution equipped with this software. The average decrease in lens dose with OB-TCM ranged from 13.5 to 26.0%. Using the size-specific method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter for protocols with OB-TCM, the majority of the fitted lens dose values fell within 15-18% of the measured values from the phantom study.
Third, the effect of gantry angulation on lens dose was investigated by measuring the lens dose with TLDs placed within the six anthropomorphic phantoms. The 2-dimensional spatial distribution of dose within the areas of the phantoms containing the orbit was measured with radiochromic film. A method was derived to determine the CTDIvol-to-lens dose conversion factor based upon distance from the primary beam scan range to the lens. The average dose to the lens region decreased substantially for almost all the phantoms (ranging from 67 to 92%) when the orbit was exposed to scattered radiation compared to the primary beam. The effectiveness of this method to reduce lens dose is highly dependent upon the shape and size of the head, which influences whether or not the angled scan range coverage can include the entire brain volume and still avoid the orbit.
The clinical component of this dissertation involved performing retrospective patient studies in the pediatric and adult populations, and reconstructing the lens doses from head CT examinations with the methods derived in the physics component. The cumulative lens doses in the patients selected for the retrospective study ranged from 40 to 1020 mGy in the pediatric group, and 53 to 2900 mGy in the adult group.
This dissertation represents a comprehensive approach to lens of the eye dosimetry in CT imaging of the head. The collected data and derived formulas can be used in future studies on radiation-induced cataracts from repeated CT imaging of the head. Additionally, it can be used in the areas of personalized patient dose management, and protocol optimization and clinician training.