12 results for planes of vision
at Duke University
Abstract:
Simultaneous neural recordings taken from multiple areas of the rodent brain are garnering growing interest due to the insight they can provide about spatially distributed neural circuitry. The promise of such recordings has inspired great progress in methods for surgically implanting large numbers of metal electrodes into intact rodent brains. However, methods for determining the precise location of these electrodes have remained severely lacking. Traditional histological techniques that require slicing and staining of physical brain tissue are cumbersome, and become increasingly impractical as the number of implanted electrodes increases. Here we address this problem by describing a method that registers 3-D computerized tomography (CT) images of intact rat brains implanted with metal electrode bundles to a Magnetic Resonance Imaging Histology (MRH) Atlas. Our method allows accurate visualization of each electrode bundle's trajectory and location without removing the electrodes from the brain or surgically implanting external markers. In addition, unlike physical brain slices, once the 3-D images of the electrode bundles and the MRH atlas are registered, it is possible to verify electrode placements from many angles by "re-slicing" the images along different planes of view. Further, our method can be fully automated and easily scaled to applications with large numbers of specimens. Our digital imaging approach to efficiently localizing metal electrodes offers a substantial addition to currently available methods, which, in turn, may help accelerate the rate at which insights are gleaned from rodent network neuroscience.
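For readers who want a concrete picture of the registration step, here is a minimal sketch of rigid cross-modal (CT-to-MRI) volume registration using SimpleITK. The file names, metric choice, and optimizer settings are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: rigidly register an implanted-brain CT to an MRH atlas.
# Assumes SimpleITK is installed; "ct_rat.nii.gz" and "mrh_atlas.nii.gz"
# are hypothetical file names, and all parameters are illustrative.
import SimpleITK as sitk

fixed = sitk.ReadImage("mrh_atlas.nii.gz", sitk.sitkFloat32)  # atlas volume
moving = sitk.ReadImage("ct_rat.nii.gz", sitk.sitkFloat32)    # implanted-brain CT

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # cross-modal metric
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)
# Resample the CT into atlas space so the electrode bundles can be
# "re-sliced" along any plane of view together with the atlas.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "ct_in_atlas_space.nii.gz")
```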
Gene loss, adaptive evolution and the co-evolution of plumage coloration genes with opsins in birds.
Abstract:
BACKGROUND: The wide range of complex photic systems observed in birds exemplifies one of their key evolutionary adaptations, a well-developed visual system. However, genomic approaches have yet to be used to disentangle the evolutionary mechanisms that govern the evolution of avian visual systems. RESULTS: We performed comparative genomic analyses across 48 avian genomes that span extant bird phylogenetic diversity to assess evolutionary changes in the 17 representatives of the opsin gene family and five plumage coloration genes. Our analyses suggest that modern birds have maintained a repertoire of up to 15 opsins. Synteny analyses indicate that the PARA and PARIE pineal opsins were lost, probably in conjunction with the degeneration of the parietal organ. Eleven of the 15 avian opsins evolved in a non-neutral pattern, confirming the adaptive importance of vision in birds. The visual conopsins sw1, sw2 and lw evolved under negative selection, while the dim-light RH1 photopigment diversified. The evolutionary patterns of sw1 and of violet/ultraviolet sensitivity in birds suggest that avian ancestors had violet-sensitive vision. Additionally, we demonstrate an adaptive association between the RH2 opsin and the MC1R plumage color gene, suggesting that plumage coloration has been photically mediated. At the intra-avian level we observed some unique adaptive patterns. For example, the barn owl showed early signs of pseudogenization in RH2, perhaps in response to nocturnal behavior, and penguins had amino acid deletions in RH2 sites responsible for the red shift and retinal binding. These patterns in the barn owl and penguins were convergent with adaptive strategies in nocturnal and aquatic mammals, respectively. CONCLUSIONS: We conclude that birds have evolved diverse opsin adaptations through gene loss, adaptive selection and co-evolution with plumage coloration, and that differentiated selective patterns at the species level suggest that novel photic pressures have influenced the evolutionary patterns of more recent lineages.
Abstract:
Many neurons in the frontal eye field (FEF) exhibit visual responses and are thought to play important roles in visuosaccadic behavior. The FEF, however, is far removed from striate cortex. Where do the FEF's visual signals come from? Usually they are reasonably assumed to enter the FEF through afferents from extrastriate cortex. Here we show that, surprisingly, visual signals also enter the FEF through a subcortical route: a disynaptic, ascending pathway originating in the intermediate layers of the superior colliculus (SC). We recorded from identified neurons at all three stages of this pathway (n=30-40 in each sample): FEF recipient neurons, orthodromically activated from the SC; mediodorsal thalamus (MD) relay neurons, antidromically activated from FEF and orthodromically activated from SC; and SC source neurons, antidromically activated from MD. We studied the neurons while monkeys performed delayed saccade tasks designed to temporally resolve visual responses from presaccadic discharges. We found, first, that most neurons at every stage in the pathway had visual responses, presaccadic bursts, or both. Second, we found marked similarities between the SC source neurons and MD relay neurons: in both samples, about 15% of the neurons had only a visual response, 10% had only a presaccadic burst, and 75% had both. In contrast, FEF recipient neurons tended to be more visual in nature: 50% had only a visual response, none had only a presaccadic burst, and 50% had both a visual response and a presaccadic burst. This suggests that in addition to their subcortical inputs, these FEF neurons also receive other visual inputs, e.g. from extrastriate cortex. We conclude that visual activity in the FEF results not only from cortical afferents but also from subcortical inputs. Intriguingly, this implies that some of the visual signals in FEF are pre-processed by the SC.
Abstract:
Complement factor H (CFH) is a major susceptibility gene for age-related macular degeneration (AMD); however, its impact on AMD pathobiology is unresolved. Here, the role of CFH in the development of AMD pathology in vivo was interrogated by analyzing aged Cfh+/- and Cfh-/- mice fed a high-fat, cholesterol-enriched diet. Strikingly, decreased levels of CFH led to increased sub-retinal pigmented epithelium (sub-RPE) deposit formation, specifically basal laminar deposits, following the high-fat diet. Mechanistically, our data show that deposits are due to CFH competition for lipoprotein binding sites in Bruch's membrane. Interestingly, despite sub-RPE deposit formation occurring in both Cfh+/- and Cfh-/- mice, RPE damage accompanied by loss of vision occurred only in old Cfh+/- mice. We demonstrate that such pathology is a function of excess complement activation and C5a production, associated with monocyte recruitment, in Cfh+/- mice versus complement deficiency in Cfh-/- animals. Given the CFH-dependent increase in sub-RPE deposit height, we interrogated the potential of CFH as a novel regulator of Bruch's membrane lipoprotein binding and show, using human Bruch's membrane explants, that CFH removes endogenous human lipoproteins in aged donors. Interestingly, although the CFH H402 variant shows altered binding to Bruch's membrane, this does not affect its ability to remove endogenous lipoproteins. This new understanding of the complicated interactions of CFH in AMD-like pathology provides an improved foundation for the development of targeted therapies for AMD.
Abstract:
The goal of my Ph.D. thesis is to enhance the visualization of the peripheral retina using wide-field optical coherence tomography (OCT) in a clinical setting.
OCT has gained widespread adoption in clinical ophthalmology due to its ability to visualize diseases of the macula and central retina in three dimensions; however, clinical OCT has a limited field of view of 30°. There has been increasing interest in obtaining high-resolution images outside of this narrow field of view, because three-dimensional imaging of the peripheral retina may prove to be important in the early detection of neurodegenerative diseases, such as Alzheimer's disease and dementia, and in the monitoring of known ocular diseases, such as diabetic retinopathy, retinal vein occlusions, and choroidal masses.
Before attempting to build a wide-field OCT system, we need to better understand the peripheral optics of the human eye. Shack-Hartmann wavefront sensors are commonly used tools for measuring the optical imperfections of the eye, but their acquisition speed is limited by their underlying camera hardware. The first aim of my thesis research is to create a fast method of ocular wavefront sensing such that we can measure the wavefront aberrations at numerous points across a wide visual field. In order to address aim one, we will develop a sparse Zernike reconstruction technique (SPARZER) that will enable Shack-Hartmann wavefront sensors to use as little as 1/10th of the data that would normally be required for an accurate wavefront reading. If less data needs to be acquired, then we can increase the speed at which wavefronts can be recorded.
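To make aim one concrete, the toy sketch below recovers a sparse vector of Zernike coefficients from roughly 1/10th of the available slope measurements by L1-regularized regression. This is a generic compressed-sensing illustration, not the SPARZER algorithm itself; the slope-influence matrix is a random stand-in for the true Zernike-derivative matrix.

```python
# Toy sparse wavefront recovery: solve s = A @ c for sparse Zernike
# coefficients c from a small subset of Shack-Hartmann slopes s.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_modes, n_slopes = 60, 400                         # Zernike modes, slope measurements
A_full = rng.standard_normal((n_slopes, n_modes))   # stand-in influence matrix

c_true = np.zeros(n_modes)                          # ground truth: only 5 active modes
c_true[[1, 3, 4, 7, 12]] = [0.8, -0.5, 0.3, 0.2, -0.1]

keep = rng.choice(n_slopes, size=n_slopes // 10, replace=False)  # ~1/10 of the data
A, s = A_full[keep], A_full[keep] @ c_true

c_hat = Lasso(alpha=1e-3, max_iter=50_000).fit(A, s).coef_
print("coefficient recovery error:", np.linalg.norm(c_hat - c_true))
```

With 40 measurements for 60 unknowns an ordinary least-squares fit is underdetermined, but the sparsity prior makes recovery well-posed, which is what permits the claimed acquisition speed-up.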
For my second aim, we will create a sophisticated optical model that reproduces the measured aberrations of the human eye. If we know how the average eye's optics distort light, then we can engineer ophthalmic imaging systems that preemptively cancel inherent ocular aberrations. This invention will help the retinal imaging community to design systems that are capable of acquiring high resolution images across a wide visual field. The proposed model eye is also of interest to the field of vision science as it aids in the study of how anatomy affects visual performance in the peripheral retina.
Using the optical model from aim two, we will design and reduce to practice a clinical OCT system that is capable of imaging a large (80°) field of view with enhanced visualization of the peripheral retina. A key aspect of this third and final aim is to make the imaging system compatible with standard clinical practices. To this end, we will incorporate sensorless adaptive optics in order to correct the inter- and intra-patient variability in ophthalmic aberrations. Sensorless adaptive optics will improve both the brightness (signal) and clarity (resolution) of features in the peripheral retina without affecting the size of the imaging system.
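As a rough illustration of how sensorless adaptive optics proceeds, the sketch below scans low-order Zernike modes on a deformable mirror and keeps the amplitude that maximizes an image-quality metric. The apply_mode and grab_frame hooks are hypothetical placeholders for mirror and OCT hardware, not a real device API.

```python
# Metric-based ("sensorless") adaptive optics sketch: no wavefront sensor,
# just iterative optimization of image sharpness over mirror modes.
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Toy image-quality metric: normalized sum of squared intensities."""
    img = img.astype(float)
    return float(img.size * np.sum(img ** 2) / (np.sum(img) + 1e-12) ** 2)

def optimize_mode(apply_mode, grab_frame, mode, amplitudes=np.linspace(-0.5, 0.5, 7)):
    """Scan one Zernike mode's amplitude; leave the mirror at the best value."""
    scores = []
    for a in amplitudes:
        apply_mode(mode, a)          # command the deformable mirror (placeholder)
        scores.append(sharpness(grab_frame()))
    best = float(amplitudes[int(np.argmax(scores))])
    apply_mode(mode, best)
    return best

# Hypothetical usage: correct defocus and astigmatism (Noll modes 4-6),
# repeating the loop since modes interact on a real mirror.
# for _ in range(3):
#     for mode in (4, 5, 6):
#         optimize_mode(dm.apply_mode, oct_engine.grab_frame, mode)
```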
The proposed work should not only be a noteworthy contribution to the ophthalmic and engineering communities, but should also strengthen our existing collaborations with the Duke Eye Center by advancing their capability to diagnose pathologies of the peripheral retina.
Abstract:
PURPOSE: To develop a mathematical model that can predict refractive changes after Descemet stripping endothelial keratoplasty (DSEK). METHODS: A mathematical formula based on the Gullstrand eye model was generated to estimate the change in refractive power of the eye after DSEK. This model was applied retrospectively to four DSEK cases to compare measured and predicted refractive changes after DSEK. RESULTS: The refractive change after DSEK is determined by calculating the difference in the power of the eye before and after DSEK surgery. The power of the eye after DSEK surgery can be calculated with modified Gullstrand eye model equations that incorporate the change in the posterior radius of curvature and the change in the distance between the principal planes of the cornea and lens after DSEK. Analysis of this model suggests that the ratio of central to peripheral graft thickness (CP ratio) and the central thickness can have a significant effect on refractive change, with smaller CP ratios and larger graft thicknesses resulting in larger hyperopic shifts. When this model was applied to four patients, the average predicted hyperopic shift in the overall power of the eye was 0.83 D, corresponding to a mean of 93% (range, 75%-110%) of the patients' measured refractive shifts. CONCLUSIONS: This simplified DSEK mathematical model can be used as a first step for estimating the hyperopic shift after DSEK. Further studies are necessary to refine and validate this model.
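The direction of the predicted shift follows from the thick-lens form of corneal power used in Gullstrand-style models, P = P_ant + P_post - (t/n)·P_ant·P_post: a DSEK graft steepens the posterior curvature (more negative P_post) and thickens the cornea, lowering total power and producing hyperopia. The sketch below works through this with standard schematic-eye constants; the graft geometry is made up for illustration, not patient data from the study.

```python
# Worked Gullstrand-style estimate of corneal power change after DSEK.
N_AIR, N_CORNEA, N_AQUEOUS = 1.000, 1.376, 1.336

def surface_power(n1, n2, r_mm):
    """Power (diopters) of a spherical refracting surface of radius r (mm)."""
    return (n2 - n1) / (r_mm / 1000.0)

def corneal_power(r_ant_mm, r_post_mm, thickness_um):
    p_ant = surface_power(N_AIR, N_CORNEA, r_ant_mm)
    p_post = surface_power(N_CORNEA, N_AQUEOUS, r_post_mm)
    t = thickness_um * 1e-6                     # meters
    return p_ant + p_post - (t / N_CORNEA) * p_ant * p_post

pre = corneal_power(7.7, 6.8, 550)              # schematic pre-op cornea
post = corneal_power(7.7, 6.3, 550 + 150)       # graft steepens posterior surface
print(f"power change: {post - pre:+.2f} D")     # negative = hyperopic shift
```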
Abstract:
We perceive a stable visual world even though saccades often move our retinas. One way the brain may achieve a stable visual percept is through predictive remapping of visual receptive fields: just before a saccade, the receptive field of many neurons moves from its current location ("current receptive field") to the location it is expected to occupy after the saccade ("future receptive field"). Goldberg and colleagues found such remapping in cortical areas, e.g. in the frontal eye field (FEF), as well as in the intermediate layers of the superior colliculus (SC). In the present study we investigated the source of the SC's remapped visual signals. Do some of them come from the FEF? We identified FEF neurons that project to the SC using antidromic stimulation. For neurons with a visual response, we tested whether the receptive field shifted just prior to making a saccade. Saccadic amplitudes were chosen to be as small as possible while clearly separating the current and future receptive fields; they ranged from 5 to 30 deg and were directed contraversively. The saccadic target was a small red spot. We probed visual responsiveness at the current and future receptive field locations using a white spot flashed at various times before or after the saccade. Predictive remapping was indicated by a visual response to a probe flashed in the future receptive field just before the saccade began. We found that many FEF neurons projecting to the SC exhibited predictive remapping. Moreover, the remapping was as fast and strong as any previously reported for FEF or SC. It is clear, therefore, that remapped visual signals are sent from FEF to SC, providing direct evidence that the FEF is one source of the SC's remapped visual signals. Because remapping requires information about an imminent saccade, we hypothesize that remapping in FEF depends on corollary discharge signals such as those ascending from the SC through MD thalamus (Sommer and Wurtz 2002).
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining multiple stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually-guided auditory learning, a problem with implications for the general question of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information en route to the forebrain: nearly all auditory signals pass through it before reaching the forebrain. The inferior colliculus is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
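A standard way to quantify such a stimulation-induced bias, sketched below with fabricated response proportions rather than the study's data, is to fit logistic psychometric functions with and without stimulation and compare their 50% points (points of subjective equality, PSE).

```python
# Fit psychometric curves and measure the stimulation-induced PSE shift.
import numpy as np
from scipy.optimize import curve_fit

def logistic(f, pse, slope):
    """Proportion of 'probe higher' reports vs. probe frequency (octaves re: reference)."""
    return 1.0 / (1.0 + np.exp(-slope * (f - pse)))

probe = np.linspace(-0.5, 0.5, 9)                 # probe re: reference, octaves
p_ctrl = np.array([.05, .08, .15, .30, .50, .70, .86, .93, .97])  # sound only (made up)
p_stim = np.array([.09, .15, .27, .45, .66, .82, .92, .96, .98])  # sound + stimulation

popt_ctrl, _ = curve_fit(logistic, probe, p_ctrl, p0=[0.0, 5.0])
popt_stim, _ = curve_fit(logistic, probe, p_stim, p0=[0.0, 5.0])

# A leftward (negative) PSE shift means "higher" reports became more likely,
# i.e. stimulation biased the percept toward higher frequencies.
print(f"PSE shift: {popt_stim[0] - popt_ctrl[0]:+.3f} octaves")
```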
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds over a very broad region of space, and many are entirely spatially insensitive, so it is unknown how these neurons will respond to a situation with more than one sound. I use multiple amplitude-modulated (AM) stimuli of different modulation frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
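The entrainment measure implied by this approach is typically vector strength; the sketch below (an assumed illustration, not the thesis code) shows how spike times locked to one AM rate yield high vector strength at that rate and low vector strength at a competing rate, tagging the source that drives the neuron.

```python
# Attribute spikes to competing AM sound sources via vector strength.
import numpy as np

def vector_strength(spike_times_s, am_freq_hz):
    """Goldberg-Brown vector strength: 1 = perfect phase locking, 0 = none."""
    phases = 2.0 * np.pi * am_freq_hz * np.asarray(spike_times_s)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(1)
# Toy spike train locked to a 22 Hz AM source while a 28 Hz source also plays:
locked = np.arange(22) / 22.0 + rng.normal(0.0, 0.002, 22)

print("VS @ 22 Hz:", vector_strength(locked, 22.0))  # near 1: entrained
print("VS @ 28 Hz:", vector_strength(locked, 28.0))  # low: not entrained
```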
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
Dissertation
Abstract:
The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated which promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.
Abstract:
The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical and large population research purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms involves assessing head motion by tracking facial features, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.
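The facial-feature tracking underlying both abstracts can be approximated with off-the-shelf tools. The sketch below is a generic stand-in for the authors' algorithms: it tracks feature points with Lucas-Kanade optical flow and reads left/right head turns from their mean horizontal displacement. The video path and corner-based point selection are assumptions; a real pipeline would seed the tracker from a face/landmark detector.

```python
# Track facial feature points across frames and estimate horizontal head motion.
import cv2
import numpy as np

cap = cv2.VideoCapture("infant_session.mp4")        # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new, good_old = new_pts[status == 1], pts[status == 1]
    if len(good_new) == 0:
        break                                        # tracking lost
    dx = float(np.mean(good_new[:, 0] - good_old[:, 0]))
    print("horizontal head motion (px/frame):", dx)  # sign = turn direction
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
cap.release()
```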
Abstract:
Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for the recognition of geometric primitives (e.g., planes and natural quadrics) in a scanned point cloud. One of the most important and most frequent geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for the recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthonormality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of the planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with mesh-triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
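The "standard fitting procedures" used after segmentation are typically total-least-squares plane fits. The sketch below shows that fitting step via SVD on synthetic data; the paper's wavelet-based segmentation and z-buffer point-cloud structuring are not reproduced here.

```python
# Total-least-squares plane fit (n . x = d) for a segmented planar region.
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) array; returns unit normal n and offset d."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, float(normal @ centroid)

# Noisy samples of the plane z = 0.1x + 0.2y + 3:
rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3.0 + rng.normal(0.0, 0.01, 500)

n, d = fit_plane(np.column_stack([xy, z]))
print("normal:", n, "offset:", d)
```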