9 results for Optic nerve head
in Boston University Digital Common
Abstract:
Acousto-optic imaging (AOI) in optically diffuse media is a hybrid imaging modality in which a focused ultrasound beam is used to locally phase modulate light inside turbid media. The modulated optical field carries information about the optical properties in the region where the light and sound interact. The motivation for the development of AOI systems is to measure optical properties at large depths within biological tissue with high spatial resolution. A photorefractive crystal (PRC) based interferometry system is developed for the detection of phase-modulated light in AOI applications. Two-wave mixing in the PRC creates a reference beam that is wavefront matched to the modulated optical field collected from the specimen. The phase modulation is converted to an intensity modulation at the optical detector when these two fields interfere. The interferometer has a high optical etendue, making it well suited for AOI, where the scattered light levels are typically low. A theoretical model for the detection of acoustically induced phase modulation in turbid media using PRC-based interferometry is detailed. An AOI system, using a single-element focused ultrasound transducer to pump the AO interaction and the PRC-based detection system, is fabricated and tested on tissue-mimicking phantoms. It is found that the system has sufficient sensitivity to detect broadband AO signals generated using pulsed ultrasound, allowing for AOI at low time-averaged ultrasound output levels. The spatial resolution of the AO imaging system is studied as a function of the ultrasound pulse parameters. A theoretical model of light propagation in turbid media is used to explore the dependence of the AO response on the experimental geometry, light collection aperture, and target optical properties. Finally, a multimodal imaging system combining pulsed AOI and conventional B-mode ultrasound imaging is developed. B-mode ultrasound and AO images of targets embedded in both highly diffuse phantoms and biological tissue ex vivo are obtained, and millimeter resolution is demonstrated in three dimensions. The AO images are intrinsically co-registered with the B-mode ultrasound images. The results suggest that AOI can be used to supplement conventional B-mode ultrasound imaging with optical information.
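As a minimal illustration of the detection principle described above, the sketch below (Python; the frequency, amplitudes, and modulation depth are purely illustrative values, not taken from the thesis) simulates how a small acoustically induced phase modulation becomes an intensity modulation at the ultrasound frequency when the collected signal field interferes with a wavefront-matched reference at quadrature:

    import numpy as np

    f_us = 5e6                    # ultrasound frequency (Hz), illustrative
    m = 0.05                      # phase modulation depth (rad), illustrative
    t = np.arange(0, 4 / f_us, 1 / (100 * f_us))   # a few acoustic cycles

    A_s, A_r = 1.0, 10.0          # signal / reference field amplitudes (a.u.)
    phi0 = np.pi / 2              # quadrature operating point

    # Acoustically induced phase modulation on the diffuse signal field
    phase = phi0 + m * np.sin(2 * np.pi * f_us * t)

    # Detector intensity for two interfering fields: the phase modulation
    # appears as an AC intensity term oscillating at f_us
    I = A_s**2 + A_r**2 + 2 * A_s * A_r * np.cos(phase)

    ac = I - I.mean()
    print(ac.max(), 2 * A_s * A_r * m)   # small-m AC amplitude ~ 2*A_s*A_r*m

For small modulation depths the AC amplitude scales linearly with m and with the reference beam amplitude, which is what makes the interferometric scheme sensitive at low scattered-light levels.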
Abstract:
Acousto-optic (AO) sensing and imaging (AOI) is a dual-wave modality that combines ultrasound with diffusive light to measure and/or image the optical properties of optically diffusive media, including biological tissues such as breast and brain. Light passing through a focused ultrasound beam undergoes a phase modulation at the ultrasound frequency that is detected using an adaptive interferometer scheme employing a GaAs photorefractive crystal (PRC). The PRC-based AO system operating at 1064 nm is described, along with the underlying theory, validating experiments, characterization, and optimization of this sensing and imaging apparatus. The spatial resolution of AO sensing, which is determined by the spatial dimensions of the ultrasound beam or pulse, can be sub-millimeter for megahertz-frequency sound waves. A modified approach for quantifying the optical properties of diffuse media with AO sensing employs the ratio of AO signals generated at two different ultrasound focal pressures. The resulting “pressure contrast signal” (PCS), once calibrated for a particular set of pressure pulses, yields a direct measure of the spatially averaged optical transport attenuation coefficient within the interaction volume between light and sound. This is a significant improvement over current AO sensing methods, since it produces a quantitative measure of the optical properties of optically diffuse media without a priori knowledge of the background illumination. It can also be used to generate images based on spatial variations in both optical scattering and absorption. Finally, the AO sensing system is modified to monitor the irreversible optical changes associated with tissue heating from high intensity focused ultrasound (HIFU) therapy, providing a powerful method for noninvasively sensing the onset and growth of thermal lesions in soft tissues. A single HIFU transducer is used to simultaneously generate tissue damage and pump the AO interaction. Experiments performed in excised chicken breast demonstrate that AO sensing can identify the onset and growth of lesion formation in real time and, when used as feedback to guide exposure parameters, results in more predictable lesion formation.
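The PCS idea lends itself to a very small worked sketch (Python). The calibration numbers below are hypothetical placeholders standing in for a real phantom calibration; only the structure, a ratio of two AO signals followed by inversion of a calibration curve, reflects the description above:

    import numpy as np

    def pressure_contrast_signal(s_low, s_high):
        """Ratio of AO signals taken at two ultrasound focal pressures."""
        return s_high / s_low

    # Hypothetical calibration curve: PCS vs. transport attenuation
    # coefficient, measured once on phantoms with known optical properties.
    cal_pcs = np.array([1.8, 2.1, 2.5, 3.0])      # illustrative
    cal_mu = np.array([0.5, 1.0, 1.5, 2.0])       # cm^-1, illustrative

    pcs = pressure_contrast_signal(s_low=0.12, s_high=0.30)
    mu_eff = np.interp(pcs, cal_pcs, cal_mu)      # invert the calibration
    print(f"PCS = {pcs:.2f} -> mu_eff ~ {mu_eff:.2f} cm^-1")

Because the PCS is a ratio of two measurements taken with the same illumination, the unknown background light level cancels, which is the key advantage claimed above.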
Abstract:
Malignant or benign tumors may be ablated with high-intensity focused ultrasound (HIFU). This technique, known as focused ultrasound surgery (FUS), has been actively investigated for decades, but it has been slow to be implemented and difficult to control due to the lack of real-time feedback during ablation. Two methods of imaging and monitoring HIFU lesions during formation were implemented simultaneously, in order to investigate the efficacy of each and to increase confidence in the detection of the lesion. The first, acousto-optic imaging (AOI), detects the increasing optical absorption and scattering in the lesion: the intensity of a diffuse optical field in illuminated tissue is mapped at the spatial resolution of an ultrasound focal spot using the acousto-optic effect. The second, harmonic motion imaging (HMI), detects the changing stiffness in the lesion: the HIFU beam is modulated to force oscillatory motion in the tissue, and the amplitude of this motion, measured by ultrasound pulse-echo techniques, is influenced by the stiffness. Experiments were performed on store-bought chicken breast and freshly slaughtered bovine liver. The AOI results correlated with the onset and relative size of forming lesions much better than prior knowledge of the HIFU power and duration did. For HMI, a significant artifact due to acoustic nonlinearity was discovered; it was mitigated by adjusting the phase of the HIFU and imaging pulses. A more detailed model of the HMI process than previously published was built using finite element analysis. The model showed that the amplitude of harmonic motion was primarily affected by increases in acoustic attenuation and stiffness as the lesion formed, and that these effects interacted in complex ways, often counteracting each other. Furthermore, biological variability in tissue properties meant that changes in motion were masked by sample-to-sample variation. The HMI experiments predicted lesion formation in only about a quarter of the lesions made. In simultaneous AOI/HMI experiments, AOI appeared to be the more robust method for lesion detection.
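At its core, the HMI measurement described above reduces to tracking tissue displacement between successive pulse-echo RF lines while the modulated HIFU beam vibrates the focus. A coarse sketch of that step (Python; real HMI systems use windowed, sub-sample displacement estimators rather than a single whole-line correlation, so this is only the shape of the computation):

    import numpy as np

    def axial_displacement(rf_ref, rf_cur, fs, c=1540.0):
        """Displacement between two pulse-echo RF lines, from the lag that
        maximizes their cross-correlation (coarse, sample-resolution)."""
        xc = np.correlate(rf_cur, rf_ref, mode="full")
        lag = np.argmax(xc) - (len(rf_ref) - 1)   # samples
        return lag * c / (2.0 * fs)               # metres (two-way path)

    # Harmonic motion amplitude: track displacement across a sequence of
    # RF lines acquired during modulated-HIFU excitation, e.g.
    #   disps = [axial_displacement(rf[0], rf[k], fs) for k in range(1, n)]
    #   amplitude ~ (max(disps) - min(disps)) / 2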
Abstract:
A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure, which provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras, and experimental results are reported.
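The recursive appearance update described above amounts to blending newly observed face texture into a running mosaic in texture-map space. A sketch of that idea (Python; the per-texel confidence weighting here is an assumption for illustration, not the paper's exact scheme):

    import numpy as np

    def update_texture_map(tex, conf, new_tex, new_conf):
        """Blend a newly observed texture (warped from the current frame
        into the texture map) into the running mosaic, weighting each
        texel by an observation confidence such as how directly the
        surface faced the camera. tex: (H, W, 3); conf: (H, W)."""
        total = conf + new_conf + 1e-8
        tex = (conf[..., None] * tex
               + new_conf[..., None] * new_tex) / total[..., None]
        conf = np.maximum(conf, new_conf)
        return tex, conf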
Abstract:
An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, because a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases.
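A head-shake detector of the kind described, operating on one rotation parameter at a time, can be sketched in a few lines (Python; the thresholds are illustrative assumptions, not the paper's tuned values):

    import numpy as np

    def is_head_shake(yaw, fs, min_extrema=4, min_amp_deg=3.0, max_gap_s=0.6):
        """Flag a head shake: several alternating peaks and valleys in the
        yaw (left-right rotation) signal that are large enough in amplitude
        and close enough together in time. yaw: rotation angle per frame
        (degrees); fs: frame rate (Hz)."""
        yaw = np.asarray(yaw, dtype=float)
        d = np.diff(yaw)
        ext = np.where(np.diff(np.sign(d)) != 0)[0] + 1   # extrema indices
        if len(ext) < min_extrema:
            return False
        amps = np.abs(np.diff(yaw[ext]))                  # swing per half-cycle
        gaps = np.diff(ext) / fs                          # time between extrema
        return bool(np.all(amps >= min_amp_deg) and np.all(gaps <= max_gap_s))

Because the test runs on each rotation axis independently, the same routine applied to the pitch signal would flag "head nods" instead.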
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder, and tracking is formulated as an image registration problem in the cylinder's texture map image. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is then achieved via regularized, weighted least squares minimization of the registration error. The regularization term limits potential ambiguities that arise in the warping and illumination templates, enabling stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The warping templates are computed at the first frame of the sequence, while the illumination templates are precomputed off-line over a training set of face images collected under varying lighting conditions. Experiments in tracking are reported.
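The estimation step described above, fitting the residual with warping and illumination templates under regularized, weighted least squares, has a compact closed form. A sketch (Python; names and shapes are illustrative, not the authors' code):

    import numpy as np

    def solve_update(residual, B_warp, B_illum, lam, w):
        """One tracking update: fit the flattened registration residual
        (n,) with warping templates B_warp (n x k) and illumination
        templates B_illum (n x m), using robust per-pixel weights w (n,)
        and ridge regularization strength lam."""
        B = np.hstack([B_warp, B_illum])                  # (n, k + m) basis
        A = B.T @ (w[:, None] * B) + lam * np.eye(B.shape[1])
        b = B.T @ (w * residual)
        q = np.linalg.solve(A, b)                         # all coefficients
        k = B_warp.shape[1]
        return q[:k], q[k:]                               # motion, illumination

The lam * np.eye term plays the role the abstract assigns to regularization: damping the ambiguities that arise when warping and illumination templates are nearly collinear.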
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder, and tracking is formulated as an image registration problem in the cylinder's texture map image. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is achieved via regularized, weighted least squares minimization of the registration error. The regularization term limits potential ambiguities that arise in the warping and illumination templates and enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of texture-mapping hardware available in many workstations, PCs, and game consoles; the non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported. The sensitivity of the technique to illumination, regularization parameters, errors in initial positioning, and internal camera parameters is analyzed. Examples and applications of tracking are reported.
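A sketch of the geometric core of this formulation, the texture-mapped cylinder that links the rigid pose parameters being tracked to image coordinates (Python; the dimensions and focal length are illustrative assumptions):

    import numpy as np

    def cylinder_model(radius=0.09, height=0.22, n_theta=64, n_y=32):
        """Sample points on the cylindrical head model; each point's
        (theta, y) pair doubles as its texture-map coordinate."""
        theta = np.linspace(-np.pi, np.pi, n_theta)
        y = np.linspace(-height / 2, height / 2, n_y)
        T, Y = np.meshgrid(theta, y)
        pts = np.stack([radius * np.sin(T), Y, radius * np.cos(T)], axis=-1)
        return pts.reshape(-1, 3), np.stack([T, Y], axis=-1).reshape(-1, 2)

    def project(pts, R, t, f=500.0):
        """Rigid transform (rotation R, translation t, points in front of
        the camera) followed by perspective projection, focal length f
        in pixels."""
        P = pts @ R.T + t
        return f * P[:, :2] / P[:, 2:3]

Registering the current frame against the texture map then reduces to comparing image values at these projected locations with the stored texture, which is exactly the operation texture-mapping hardware accelerates.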
Abstract:
Accurate head tilt detection has great potential to aid people with disabilities in the use of human-computer interfaces and to provide universal access to communication software. We show how it can be used to tab through links on a web page or control a video game with head motions. It may also serve as a correction method for currently available video-based assistive technology that requires upright facial poses. Few of the existing computer vision methods that detect head rotations in and out of the image plane with reasonable accuracy can operate within a real-time communication interface, because they are too computationally expensive. Our method uses a variety of metrics to obtain a robust head tilt estimate without incurring the computational cost of previous methods. Our system runs in real time on a computer with a 2.53 GHz processor, 256 MB of RAM, and an inexpensive webcam, using only 55% of the processor cycles.
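One inexpensive metric a real-time system of this kind can combine with others is the angle of the line joining the two tracked eye positions, which gives in-plane tilt (roll) directly. A sketch (Python; the feature choice is illustrative, not necessarily the paper's exact metric set):

    import numpy as np

    def head_tilt_deg(left_eye, right_eye):
        """In-plane head tilt from two tracked eye centers in image
        coordinates (x right, y down): the angle of the inter-eye line."""
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        return np.degrees(np.arctan2(dy, dx))

    print(head_tilt_deg((100, 120), (160, 104)))   # ~ -14.9 deg of tilt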
Abstract:
This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and the spherical horizontal and vertical angles with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution to the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.
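The cyclopean variables mentioned above have a simple geometric reading: the difference of the two eyes' horizontal rotation angles (vergence) codes target distance, while their average (version) codes direction. A worked sketch under small-angle assumptions (Python; the sign convention and interocular distance are illustrative, not the article's notation):

    import numpy as np

    def cyclopean_coordinates(theta_l, theta_r, iod=0.065):
        """Head-centered variables from the two eyes' horizontal rotation
        angles (radians), with a convention in which vergence is positive
        for near targets; iod is the interocular distance in metres."""
        vergence = theta_l - theta_r           # opponent (difference) channel
        version = 0.5 * (theta_l + theta_r)    # conjugate (average) channel
        distance = iod / max(vergence, 1e-6)   # small-angle distance estimate
        return vergence, version, distance

    # A target 0.5 m straight ahead: each eye rotates inward by ~ iod/(2*0.5)
    v, s, d = cyclopean_coordinates(0.065, -0.065)
    print(d)   # -> 0.5

The difference and average operations are the kind of opponent combinations the article's two processing stages approximate neurally.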