3 results for wavefront vergence

in Boston University Digital Common


Relevance:

10.00%

Publisher:

Abstract:

Acousto-optic imaging (AOI) in optically diffuse media is a hybrid imaging modality in which a focused ultrasound beam is used to locally phase modulate light inside turbid media. The modulated optical field carries with it information about the optical properties in the region where the light and sound interact. The motivation for the development of AOI systems is to measure optical properties at large depths within biological tissue with high spatial resolution. A photorefractive crystal (PRC) based interferometry system is developed for the detection of phase-modulated light in AOI applications. Two-wave mixing in the PRC creates a reference beam that is wavefront matched to the modulated optical field collected from the specimen. The phase modulation is converted to an intensity modulation at the optical detector when these two fields interfere. The interferometer has a high optical etendue, making it well suited for AOI, where the scattered light levels are typically low. A theoretical model for the detection of acoustically induced phase modulation in turbid media using PRC-based interferometry is detailed. An AOI system, using a single-element focused ultrasound transducer to pump the AO interaction and the PRC-based detection system, is fabricated and tested on tissue-mimicking phantoms. It is found that the system has sufficient sensitivity to detect broadband AO signals generated using pulsed ultrasound, allowing for AOI at low time-averaged ultrasound output levels. The spatial resolution of the AO imaging system is studied as a function of the ultrasound pulse parameters. A theoretical model of light propagation in turbid media is used to explore the dependence of the AO response on the experimental geometry, light collection aperture, and target optical properties. Finally, a multimodal imaging system combining pulsed AOI and conventional B-mode ultrasound imaging is developed. B-mode ultrasound and AO images of targets embedded in both highly diffuse phantoms and biological tissue ex vivo are obtained, and millimeter resolution is demonstrated in three dimensions. The AO images are intrinsically co-registered with the B-mode ultrasound images. The results suggest that AOI can be used to supplement conventional B-mode ultrasound imaging with optical information.
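The conversion of phase modulation to intensity modulation described above follows from ordinary two-beam interference: when the wavefront-matched reference beam and the signal beam carrying a small acoustic phase modulation interfere, the detected intensity acquires an AC component proportional to the modulation index. The sketch below illustrates only that generic relationship; the numerical values, parameter names, and quarter-wave phase bias are illustrative assumptions, not parameters from the actual system.

```python
import numpy as np

def detected_intensity(m, f_us=1e6, i_ref=1.0, i_sig=0.01,
                       phase_bias=np.pi / 2, n=1000):
    """Detector intensity when a reference beam interferes with a signal
    beam carrying a small ultrasound-induced phase modulation
    phi(t) = m * sin(2*pi*f_us*t).  All parameter values are illustrative."""
    t = np.linspace(0, 2 / f_us, n)          # two ultrasound periods
    phi = m * np.sin(2 * np.pi * f_us * t)   # acoustic phase modulation
    # Two-beam interference: I = I_r + I_s + 2*sqrt(I_r*I_s)*cos(bias + phi)
    return i_ref + i_sig + 2 * np.sqrt(i_ref * i_sig) * np.cos(phase_bias + phi)

# The AC (peak-to-peak) modulation grows with the acoustic modulation index m
for m in (0.01, 0.05, 0.1):
    i = detected_intensity(m)
    print(f"m={m:.2f}  peak-to-peak AC = {i.max() - i.min():.4f}")
```

With the quarter-wave bias, the small-signal AC amplitude is approximately 2*sqrt(I_r*I_s)*m, which is why weak phase modulations remain detectable when the reference beam is strong.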

Relevance:

10.00%

Publisher:

Abstract:

A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation, otherwise known as a parcellated distributed representation, of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored, and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors, which act as error signals that drive the learning process as well as control the on-line merging of multimodal information.
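The difference-vector learning scheme in this abstract can be illustrated with a deliberately reduced 1-D toy model: during a gaze-fixed head movement the body-centered target position is constant, so the stored pre-movement estimate minus each new estimate yields an error signal that adapts the neck-to-body transform. The scalar-gain form, variable names, and learning rule below are my own illustrative assumptions, not the model's actual equations.

```python
import numpy as np

def learn_neck_gain(thetas, body_pos=10.0, lr=0.5, gain=0.0):
    """Toy 1-D sketch of difference-vector learning: adapt the neck-angle
    gain mapping a head-centered target estimate into a body-centered one.
    thetas[0] is the pre-movement head angle; the estimate formed there is
    stored (VOR-gated) and compared against estimates during the movement."""
    # In this 1-D toy, the head-centered position shifts opposite to head rotation
    head_centered = body_pos - thetas
    stored = head_centered[0] + gain * thetas[0]   # stored pre-movement estimate
    for theta, hc in zip(thetas[1:], head_centered[1:]):
        estimate = hc + gain * theta               # current body-centered estimate
        error = stored - estimate                  # difference vector (error signal)
        gain += lr * error / theta                 # adapt the transform gain
    return gain

gain = learn_neck_gain(np.arange(0.0, 11.0))
print(round(gain, 3))   # converges toward 1.0, the correct neck-to-body gain
```

The point of the sketch is only that a constant body-centered target during self-motion provides a supervised error signal without any external teacher, which is the core of the learning scheme the abstract describes.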

Relevance:

10.00%

Publisher:

Abstract:

This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and spherical horizontal and vertical angles of the target with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution to the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.
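The opponent combination underlying the cyclopean coordinate system reduces, in the horizontal plane, to elementary geometry: the difference of the two eyes' horizontal rotation angles gives binocular vergence, and their mean gives the horizontal spherical (azimuthal) angle of the target. The 2-D reduction, sign convention, and function name below are illustrative assumptions rather than the article's full formulation.

```python
import math

def cyclopean_coordinates(theta_left, theta_right):
    """Combine the horizontal rotation angles of the two eyes (radians)
    into cyclopean coordinates: the difference of the angles approximates
    binocular vergence, and their mean approximates the horizontal
    spherical angle of the target.  Horizontal-plane sketch only."""
    vergence = theta_left - theta_right          # difference -> vergence coordinate
    azimuth = 0.5 * (theta_left + theta_right)   # sum (halved) -> angular coordinate
    return vergence, azimuth

# A near target slightly to the right: both eyes rotate toward it
v, a = cyclopean_coordinates(math.radians(5.0), math.radians(1.0))
print(round(math.degrees(v), 3), round(math.degrees(a), 3))  # 4.0 3.0
```

Vergence shrinks with target distance while azimuth tracks its direction, which is why this pair of opponent combinations serves as an implicit 3-D position code.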