3 results for MULTIMODAL ELUTION

in Boston University Digital Common


Relevance:

10.00%

Publisher:

Abstract:

Acousto-optic imaging (AOI) in optically diffuse media is a hybrid imaging modality in which a focused ultrasound beam is used to locally phase modulate light inside turbid media. The modulated optical field carries information about the optical properties of the region where the light and sound interact. The motivation for developing AOI systems is to measure optical properties at large depths within biological tissue with high spatial resolution. An interferometry system based on a photorefractive crystal (PRC) is developed for the detection of phase-modulated light in AOI applications. Two-wave mixing in the PRC creates a reference beam that is wavefront matched to the modulated optical field collected from the specimen. The phase modulation is converted to an intensity modulation at the optical detector when these two fields interfere. The interferometer has a high optical etendue, making it well suited for AOI, where the scattered light levels are typically low. A theoretical model for the detection of acoustically induced phase modulation in turbid media using PRC-based interferometry is detailed. An AOI system, using a single-element focused ultrasound transducer to pump the AO interaction and the PRC-based detection system, is fabricated and tested on tissue-mimicking phantoms. It is found that the system has sufficient sensitivity to detect broadband AO signals generated using pulsed ultrasound, allowing for AOI at low time-averaged ultrasound output levels. The spatial resolution of the AO imaging system is studied as a function of the ultrasound pulse parameters. A theoretical model of light propagation in turbid media is used to explore the dependence of the AO response on the experimental geometry, light collection aperture, and target optical properties. Finally, a multimodal imaging system combining pulsed AOI and conventional B-mode ultrasound imaging is developed. B-mode ultrasound and AO images of targets embedded in both highly diffuse phantoms and biological tissue ex vivo are obtained, and millimeter resolution is demonstrated in three dimensions. The AO images are intrinsically co-registered with the B-mode ultrasound images. The results suggest that AOI can be used to supplement conventional B-mode ultrasound imaging with optical information.
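
The phase-to-intensity conversion described in this abstract follows from ordinary two-beam interference. The short Python sketch below illustrates it for a sinusoidal acoustic phase modulation interfering with a wavefront-matched reference; all parameter values (f_us, m, the quadrature offset) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Minimal sketch: intensity at the detector when a wavefront-matched
# reference beam (produced by two-wave mixing in the PRC) interferes with
# the acoustically phase-modulated signal field.

f_us = 5e6           # ultrasound frequency [Hz] (assumed)
m = 0.05             # acoustically induced phase-modulation depth [rad]
I_s, I_r = 1.0, 1.0  # signal and reference intensities (arbitrary units)
phi0 = np.pi / 2     # static phase offset; quadrature maximizes sensitivity

t = np.linspace(0, 4 / f_us, 2000)           # a few ultrasound cycles
dphi = m * np.sin(2 * np.pi * f_us * t)      # AO-induced phase modulation
I_det = I_s + I_r + 2 * np.sqrt(I_s * I_r) * np.cos(phi0 + dphi)

# For small m at quadrature, cos(pi/2 + dphi) ~ -dphi, so the AC component
# is ~ 2*sqrt(I_s*I_r) * m * sin(2*pi*f_us*t): phase becomes intensity.
ac = I_det - I_det.mean()
print("peak AC amplitude:", ac.max(), "expected ~", 2 * np.sqrt(I_s * I_r) * m)
```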

Relevance:

10.00%

Publisher:

Abstract:

A neural network system, NAVITE, for incremental trajectory generation and obstacle avoidance is presented. Unlike other approaches, the system is effective in unstructured environments. Multimodal information from visual and range data is used for obstacle detection and to eliminate uncertainty in the measurements. Optimal paths are computed without explicitly optimizing cost functions, thereby reducing computational expense. Simulations of a planar mobile robot (including the dynamic characteristics of the plant) in obstacle-free and obstacle-avoidance trajectories are presented. The system can be extended to incorporate global map information into the local decision-making process.
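
The abstract does not give NAVITE's network equations, so the Python sketch below illustrates only the general idea of incremental trajectory generation with obstacle avoidance, using a generic attractor/repeller vector field. It is a stand-in illustration, not the NAVITE system; all gains, positions, and the reduction of multimodal sensing to a list of obstacle points are assumptions.

```python
import numpy as np

# Generic attractor/repeller sketch of incremental trajectory generation:
# at each step the commanded velocity is attraction toward the target plus
# repulsion from sensed obstacles (here, a single fused list of points).

def step(pos, target, obstacles, gain=1.0, repulse=0.05, dt=0.05, vmax=1.0):
    """Return the next 2-D position (kinematic sketch, no plant dynamics)."""
    v = gain * (target - pos)                 # attraction toward the target
    for ob in obstacles:                      # repulsion from each obstacle
        d = pos - ob
        r = np.linalg.norm(d)
        v += repulse * d / (r**3 + 1e-9)      # repulsion decays with distance
    speed = np.linalg.norm(v)
    if speed > vmax:                          # cap the commanded speed
        v *= vmax / speed
    return pos + dt * v

pos = np.array([0.0, 0.0])
target = np.array([1.0, 1.0])
obstacles = [np.array([0.45, 0.55])]          # slightly off the direct path
for _ in range(400):                          # incremental, local updates
    pos = step(pos, target, obstacles)
print("final position:", pos)                 # converges near the target
```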

Relevance:

10.00%

Publisher:

Abstract:

A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation, otherwise known as a parcellated distributed representation, of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored, and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors, which act as error signals that drive the learning process as well as control the on-line merging of multimodal information.
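
A minimal Python sketch of the opponent-processing and difference-vector ideas in this abstract, for the horizontal angles only: the difference of the two eyes' corollary-discharge signals gives vergence, their sum (here, the mean) gives a cyclopean direction angle, and differences between new and stored estimates act as error signals. The variable names, learning rate, and toy update rule are assumptions for illustration, not the model's actual equations.

```python
import numpy as np

# Opponent-processing stage: combine corollary discharges of the two eyes'
# horizontal rotation angles into head-centered vergence-spherical
# coordinates of a foveated target.

def vergence_spherical(theta_left, theta_right):
    """Head-centered (vergence, cyclopean angle) from eye rotations [rad]."""
    vergence = theta_left - theta_right           # difference -> vergence
    cyclopean = 0.5 * (theta_left + theta_right)  # sum/mean -> direction angle
    return np.array([vergence, cyclopean])

# Difference-vector learning during a gaze-maintaining head movement: the
# estimate stored before the movement is gated (held fixed), and differences
# between new estimates and the stored one act as error signals that train a
# toy body-centered correction term.
stored = vergence_spherical(0.12, 0.04)           # estimate before head move
lr = 0.1                                          # assumed learning rate
correction = np.zeros(2)                          # toy body-centered term
for theta_l, theta_r in [(0.10, 0.02), (0.13, 0.05), (0.11, 0.03)]:
    new = vergence_spherical(theta_l, theta_r)    # estimate during head move
    error = new - stored                          # difference vector
    correction += lr * error                      # error-driven update
print("learned correction:", correction)
```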