6 results for NON-HUMAN PRIMATE

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

The insula is a mammalian cortical structure that has been implicated in a wide range of low- and high-level functions governing one’s sensory, emotional, and cognitive experiences. One particular role attributed to this region is the processing of olfactory stimuli. The ability to detect and evaluate odors has significant effects on an organism’s eating behavior and survival and, in the case of humans, on complex decision making. Despite the importance of this function, the mechanism by which olfactory information is processed in the insula has not been thoroughly studied. Moreover, due to the structure’s close spatial relationship with the neighboring claustrum, it is not entirely clear whether the connectivity and olfactory functions attributed to the insula are truly those of the insula rather than of the claustrum. My graduate work, consisting of two studies, seeks to help fill these gaps. In the first study, the structural connectivity patterns of the insula and the claustrum in a non-human primate brain are assayed using an ultra-high-quality diffusion magnetic resonance image, and the results suggest a dissociation of connectivity, and hence of function, between the two structures. In the second study, a functional neuroimaging experiment investigates insular activity during odor evaluation tasks in humans and uncovers a potential spatial organization within the anterior portion of the insula for processing different aspects of odor characteristics.
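To make the notion of a connectivity dissociation concrete, the sketch below compares hypothetical tractography-derived connectivity profiles ("fingerprints") for two seed regions. The region names, target list, and streamline counts are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical connectivity fingerprints: for each seed region, streamline
# counts to a set of target regions, as might come from diffusion-MRI
# tractography. All numbers here are illustrative assumptions.
targets = ["OFC", "ACC", "amygdala", "piriform", "putamen", "parietal"]
fingerprints = {
    "insula":    np.array([120., 80., 60., 150., 30., 20.]),
    "claustrum": np.array([40., 30., 25., 10., 90., 140.]),
}

def normalize(v):
    """Convert raw streamline counts to a connectivity profile summing to 1."""
    return v / v.sum()

a = normalize(fingerprints["insula"])
b = normalize(fingerprints["claustrum"])

# Cosine similarity between the two profiles: values near 1 indicate similar
# connectivity, values near 0 indicate dissociated connectivity.
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity of connectivity profiles: {cosine:.2f}")
```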

Relevance:

100.00%

Publisher:

Abstract:

The visual system is a remarkable platform that evolved to solve difficult computational problems such as detection, recognition, and classification of objects. Of great interest is the face-processing network, a sub-system buried deep in the temporal lobe and dedicated to analyzing a specific type of object: faces. In this thesis, I focus on the problem of face detection by the face-processing network. Insights obtained from years of developing computer-vision algorithms for this task suggest that it may be efficiently and effectively solved by detection and integration of local contrast features. Does the brain use a similar strategy? To answer this question, I embark on a journey that takes me through the development and optimization of dedicated tools for targeting and perturbing deep brain structures. Data collected using MR-guided electrophysiology in early face-processing regions revealed strong selectivity for contrast features, similar to those used by artificial systems. While individual cells were tuned for only a small subset of features, the population as a whole encoded the full spectrum of features that are predictive of the presence of a face in an image. Together with additional evidence, my results suggest a possible computational mechanism for face detection in early face-processing regions. To move from correlation to causation, I focus on adopting an emerging technology for perturbing brain activity using light: optogenetics. While this technique has the potential to overcome problems associated with the de facto standard for brain stimulation (electrical microstimulation), many open questions remain about its applicability and effectiveness for perturbing the non-human primate (NHP) brain. In a set of experiments, I use viral vectors to deliver genetically encoded optogenetic constructs to the frontal eye field and face-selective regions in NHPs and examine their effects side by side with electrical microstimulation to assess their effectiveness in perturbing neural activity as well as behavior. Results suggest that cells are robustly and strongly modulated upon light delivery and that such perturbation can modulate and even initiate motor behavior, thus paving the way for future explorations that may apply these tools to study connectivity and information flow in the face-processing network.
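The contrast-feature strategy mentioned above can be illustrated with a small sketch in the spirit of ratio-template face detection: compare mean luminances of coarse face regions and count how many contrast polarities match a face-like pattern. The region layout, feature pairs, and threshold below are assumptions made for the example, not the features measured in the thesis.

```python
import numpy as np

def region_mean(img, box):
    """Mean luminance inside a (top, bottom, left, right) box of a grayscale image."""
    t, b, l, r = box
    return img[t:b, l:r].mean()

# Hypothetical face-part boxes for a 64x64 candidate window.
BOXES = {
    "left_eye":  (16, 24, 12, 28),
    "right_eye": (16, 24, 36, 52),
    "forehead":  (4, 14, 12, 52),
    "nose":      (26, 40, 26, 38),
    "mouth":     (44, 54, 20, 44),
}

# Contrast-polarity feature pairs: for a typical face, the first region tends
# to be darker than the second (e.g. eyes darker than forehead).
PAIRS = [("left_eye", "forehead"), ("right_eye", "forehead"),
         ("left_eye", "nose"), ("right_eye", "nose"), ("mouth", "nose")]

def contrast_features(window):
    """Binary vector: 1 if the expected darker region really is darker."""
    means = {name: region_mean(window, box) for name, box in BOXES.items()}
    return np.array([means[a] < means[b] for a, b in PAIRS], dtype=float)

def looks_like_face(window, threshold=4):
    """Crude detector: count how many contrast polarities match the face template."""
    return contrast_features(window).sum() >= threshold

rng = np.random.default_rng(0)
window = rng.random((64, 64))          # stand-in for a 64x64 grayscale patch
print(looks_like_face(window))
```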

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, research efforts into directly interfacing with the neurons of individuals with motor deficits have increased. The goal of such research is clear: enable individuals affected by paralysis or amputation to regain control of their environments by manipulating external devices with thought alone. Though the motor cortices are the usual brain areas upon which neural prosthetics depend, research into the parietal lobe and its subregions, primarily in non-human primates, has uncovered alternative areas that could also benefit neural interfaces. Similar to the motor cortical areas, parietal regions can supply information about the trajectories of movements. In addition, the parietal lobe also contains cognitive signals such as movement goals and intentions. However, these areas are also known to be tuned to saccadic eye movements, which could interfere with the function of a prosthetic designed to capture motor intentions only. In this thesis, we develop a neural prosthetic in a non-human primate model using the superior parietal lobe, and we evaluate the effectiveness of such an interface, along with the effects of unconstrained eye movements, in a task that more closely simulates clinical applications. Additionally, we examine methods for improving the usability of such interfaces.

The parietal cortex is also believed to contain neural signals related to monitoring the state of the limbs through visual and somatosensory feedback. In one of the world’s first clinical neural prosthetics based on the human parietal lobe, we examine the extent to which feedback regarding the state of a movement effector alters parietal neural signals, what the implications are for motor neural prosthetics, and how this informs our understanding of this area of the human brain.
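As a rough illustration of the kind of intention decoding such an interface relies on, the sketch below fits a simple least-squares linear decoder from simulated population firing rates to two-dimensional reach goals. The data are synthetic and the model is deliberately minimal; real parietal prosthetics use richer decoders (e.g., Kalman filters) and closed-loop recalibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: each neuron has a hidden linear tuning to the reach goal.
n_neurons, n_trials = 50, 400
true_weights = rng.normal(size=(n_neurons, 2))        # hidden tuning per neuron
goals = rng.uniform(-10, 10, size=(n_trials, 2))      # reach goals in cm
rates = goals @ true_weights.T + rng.normal(scale=2.0, size=(n_trials, n_neurons))

# Fit a linear (least-squares) decoder on a training split.
train, test = slice(0, 300), slice(300, None)
W, *_ = np.linalg.lstsq(rates[train], goals[train], rcond=None)

# Evaluate on held-out trials.
predicted = rates[test] @ W
rmse = np.sqrt(np.mean((predicted - goals[test]) ** 2))
print(f"decoding RMSE on held-out trials: {rmse:.2f} cm")
```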

Relevance:

30.00%

Publisher:

Abstract:

Using neuromorphic analog VLSI techniques to model large neural systems has several advantages over software techniques. Because they are built from massively parallel analog circuit arrays, like the architectures ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built that simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking the brainstem and cortical circuits that control eye movements. In this framework, a visually triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.
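A software caricature of the saccadic control problem the hardware addresses is sketched below: a sluggish first-order "eye plant" driven by a pulse-step innervation command, the classic description of the brainstem saccadic burst circuit. All parameter values are assumptions chosen for readability and do not describe the chip or the biological system.

```python
import numpy as np

# Minimal, purely illustrative simulation of a one-dimensional saccade.
dt = 0.001                   # 1 ms time step
t = np.arange(0.0, 0.4, dt)  # 400 ms of simulated time
tau = 0.2                    # assumed eye-plant time constant (s)

target = 10.0                # desired eye rotation (deg)
pulse_gain = 7.2             # how strongly the pulse overdrives the step (assumed)
pulse_dur = 0.030            # pulse duration (s)

# Pulse-step innervation: a brief strong pulse drives the eye quickly,
# then a sustained step holds it at the new position.
command = np.where(t < pulse_dur, pulse_gain * target, target)

# Integrate the first-order plant: tau * d(theta)/dt = command - theta.
theta = np.zeros_like(t)
for i in range(1, len(t)):
    theta[i] = theta[i - 1] + dt / tau * (command[i - 1] - theta[i - 1])

print(f"eye position at 50 ms: {theta[int(0.05 / dt)]:.1f} deg "
      f"(final: {theta[-1]:.1f} deg)")
```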

Relevance:

30.00%

Publisher:

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher-fidelity individual dynamics models or by heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal-driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
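A minimal sketch of the interacting-Gaussian-process idea follows: sample candidate futures for the robot and a pedestrian from independent predictive distributions, reweight the joint samples by an interaction potential that penalizes close approaches, and read the navigation decision off the highest-weight sample. The random-walk predictions and the form of the potential below are simplifications for illustration, not the models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

H, n_samples = 10, 500                    # horizon steps, joint samples
robot_start = np.array([0.0, 0.0])
robot_goal = np.array([5.0, 0.0])
ped_start = np.array([5.0, 0.5])
ped_goal = np.array([0.0, 0.5])           # pedestrian walking toward the robot

def sample_path(start, goal, noise=0.15):
    """Straight-line path to the goal plus Gaussian wander (stand-in for a GP draw)."""
    line = np.linspace(start, goal, H)
    return line + rng.normal(scale=noise, size=line.shape)

def interaction_potential(robot_path, ped_path, safety=0.6):
    """Weight near 1 when paths keep their distance, near 0 when they collide."""
    d = np.linalg.norm(robot_path - ped_path, axis=1)
    return np.exp(-np.sum(np.maximum(0.0, safety - d) ** 2) / 0.1)

# Keep the robot trajectory from the highest-weight joint sample.
best_w, best_path = -np.inf, None
for _ in range(n_samples):
    r = sample_path(robot_start, robot_goal)
    p = sample_path(ped_start, ped_goal)
    w = interaction_potential(r, p)
    if w > best_w:
        best_w, best_path = w, r

print("first planned robot waypoints:", np.round(best_path[:3], 2))
```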

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground-truth pedestrian crowd data. We make this database publicly available for further scientific study of crowd prediction models, learning-from-demonstration algorithms, and human-robot interaction models in general.

Relevance:

30.00%

Publisher:

Abstract:

A study of human eye movements was made in order to elucidate the nature of the control mechanism in the binocular oculomotor system.

We first examined spontaneous eye movements during monocular and binocular fixation in order to determine the corrective roles of flicks and drifts. It was found that both types of motion correct fixational errors, although flicks are somewhat more active in this respect. Vergence error is a stimulus for correction by drifts but not by flicks, while binocular vertical discrepancy of the visual axes does not trigger corrective movements.

Second, we investigated the non-linearities of the oculomotor system by examining the eye movement responses to point targets moving in two dimensions in a subjectively unpredictable manner. Such motions consisted of band-limited Gaussian random motion and also of the sum of several non-integrally related sinusoids. We found that there is no direct relationship between the phase and the gain of the oculomotor system. Delay of eye movements relative to target motion is determined by the necessity of generating a minimum afferent (input) signal at the retina in order to trigger corrective eye movements. The amplitude of the response is a function of the biological constraints of the efferent (output) portion of the system: for target motions of narrow bandwidth, the system responds preferentially to the highest frequency; for large-bandwidth motions, the system distributes the available energy equally over all frequencies.

Third, the power spectra of spontaneous eye movements were compared with the spectra of tracking eye movements for Gaussian random target motions of varying bandwidths. It was found that there is essentially no difference among the various curves. The oculomotor system tracks a target not by increasing the mean rate of impulses along the motoneurons of the extra-ocular muscles, but rather by coordinating those spontaneous impulses which propagate along the motoneurons during stationary fixation. Thus, the system operates at full output at all times.
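The spectral comparison described above can be sketched as follows: estimate the power spectra of fixation and tracking eye-position records with Welch's method and compare their low-frequency power. The signals here are synthetic band-limited noise generated with identical statistics, which trivially echoes the finding that the spectra are essentially the same; the sampling rate, bandwidth, and 5 Hz comparison band are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                                # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
n = 8000

def lowpass_noise(cutoff_bins):
    """Band-limited Gaussian noise built by zeroing high-frequency FFT bins."""
    spec = np.fft.rfft(rng.normal(size=n))
    spec[cutoff_bins:] = 0.0
    return np.fft.irfft(spec, n)

fixation = lowpass_noise(200)             # stand-in for spontaneous eye movements
tracking = lowpass_noise(200)             # stand-in for tracking eye movements

# Welch power spectral density estimates.
f_fix, p_fix = welch(fixation, fs=fs, nperseg=1024)
f_trk, p_trk = welch(tracking, fs=fs, nperseg=1024)

# Compare total power below 5 Hz, where most oculomotor energy lies.
band = f_fix < 5.0
df = f_fix[1] - f_fix[0]
print(f"fixation power < 5 Hz: {p_fix[band].sum() * df:.3g}")
print(f"tracking power < 5 Hz: {p_trk[band].sum() * df:.3g}")
```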

Fourth, we examined the relative magnitude and phase of motions of the left and the right visual axes during monocular and binocular viewing. We found that the two visual axes move vertically in perfect synchronization at all frequencies for any viewing condition. This is not true for horizontal motions: the amount of vergence noise is highest for stationary fixation and diminishes for tracking tasks as the bandwidth of the target motion increases. Furthermore, movements of the occluded eye are larger than those of the seeing eye in monocular viewing. This effect is more pronounced for horizontal motions, for stationary fixation, and for lower frequencies.

Finally, we have related our findings to previously known facts about the pertinent nerve pathways in order to postulate a model for the neurological binocular control of the visual axes.