972 results for extended depth of field


Relevance: 100.00%

Publisher:

Abstract:

We describe the use of a Wigner distribution function approach for exploring the problem of extending the depth of field in a hybrid imaging system. The Wigner distribution function, together with the phase-space curve, formulates a joint phase-space description of an optical field and is employed as a tool to display and characterize the evolving behavior of the amplitude point spread function as the wave propagates along the optical axis. It provides a comprehensive picture of the characteristics of the hybrid imaging system in extending the depth of field from both wave-optics and geometrical-optics perspectives. We use it to analyze several well-known extended-depth-of-field designs from a new viewpoint. The relationships between this approach and the earlier ambiguity function approach are also briefly investigated. (c) 2006 Optical Society of America.
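As a rough illustration of the joint phase-space description the abstract refers to, the sketch below numerically evaluates a discrete Wigner distribution of a 1-D field; the coarse integer-shift discretization is an assumption for brevity, not the paper's formulation.

```python
import numpy as np

# Discrete approximation of the Wigner distribution function
#   W(x, u) = ∫ E(x + s/2) E*(x - s/2) exp(-i 2π u s) ds
# using integer sample shifts (a coarse but standard discretization).
def wigner(E, dx):
    N = len(E)
    W = np.zeros((N, N))
    s = np.arange(-(N // 2), N - N // 2)          # shift coordinate
    for i in range(N):
        ip, im = i + s, i - s
        ok = (ip >= 0) & (ip < N) & (im >= 0) & (im < N)
        corr = np.zeros(N, dtype=complex)
        corr[ok] = E[ip[ok]] * np.conj(E[im[ok]])  # E(x+s) E*(x-s)
        # Fourier transform over s gives the spatial-frequency axis
        W[i] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr)))) * dx
    return W

# usage: the WDF of a Gaussian field is concentrated at the phase-space origin
x = np.linspace(-5, 5, 128)
E = np.exp(-x**2)
W = wigner(E, x[1] - x[0])
```

Propagation along the optical axis then corresponds to a shear of W in phase space, which is what makes the WDF convenient for tracking the point spread function's evolution with defocus.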



By properly designing a phase pupil mask to modulate or encode the optical images and then digitally restoring them, one can greatly extend the depth of field and improve image quality. The original work of Dowski and Cathey introduced the use of a cubic phase pupil mask to extend the depth of field, and both theoretical and experimental studies have verified its effectiveness. In this paper, we suggest the use of an exponential phase pupil mask to extend the depth of field. This mask has two free design parameters that control its shape, allowing the wavefront to be modulated more flexibly. We employ an optimization procedure based on the Fisher information metric to obtain the optimum parameter values for the exponential and cubic masks, respectively. A series of performance comparisons between the two optimized phase masks in extending the depth of field is then presented. The results show that the exponential phase mask provides a slight advantage over the cubic one in several aspects. (c) 2006 Elsevier B.V. All rights reserved.
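A minimal numerical sketch of the wavefront-coding principle behind this family of papers, using the cubic mask the abstract cites (the exponential mask's exact functional form is not reproduced here, so it is omitted); all parameter values are illustrative.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)                  # normalized 1-D pupil coordinate

def psf(phase, psi):
    """Incoherent PSF of a pupil with phase mask `phase` and defocus `psi` (radians)."""
    pupil = np.exp(1j * (phase + psi * x**2))      # defocus is a quadratic phase
    field = np.fft.fftshift(np.fft.fft(pupil, 8 * N))
    p = np.abs(field)**2
    return p / p.sum()

def similarity(phase, psi=10.0):
    """Normalized correlation of in-focus and defocused PSFs (1 = defocus invariant)."""
    p0, p1 = psf(phase, 0.0), psf(phase, psi)
    return float(np.dot(p0, p1) / (np.linalg.norm(p0) * np.linalg.norm(p1)))

cubic = 60.0 * x**3                        # Dowski-Cathey-style cubic mask

# The coded system's PSF changes far less with defocus than the clear aperture's,
# which is what allows a single digital deconvolution step to restore the image.
s_clear, s_cubic = similarity(np.zeros(N)), similarity(cubic)
```

A Fisher-information-based optimization like the one in the paper would tune the mask strength (here the fixed 60.0) to trade defocus invariance against restoration noise gain.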


Wavefront coding is a powerful technique that can be used to extend the depth of field of an incoherent imaging system. By adding a suitable phase mask to the aperture plane, the optical transfer function of a conventional imaging system can be made defocus invariant. Since 1995, when a cubic phase mask was first suggested, many kinds of phase masks have been proposed to achieve the goal of depth extension. In this Letter, a phase mask based on a sinusoidal function is designed to enrich the family of phase masks. Numerical evaluation demonstrates that the proposed mask is not only less sensitive to focus errors than the cubic, exponential, and modified logarithmic masks, but also has a smaller point-spread-function shifting effect. (C) 2010 Optical Society of America


Wave-front coding is a well-known technique used to extend the depth of field of an incoherent imaging system. The core of this technique lies in the design of suitable phase masks, the most important of which is the cubic phase mask suggested by Dowski and Cathey (1995) [1]. In this paper, we propose a new type, the cubic sinusoidal phase mask, generated by combining the cubic mask with a sinusoidal component. Numerical evaluations and real experimental results demonstrate that, with its parameters optimized, the composite phase mask is superior to the original cubic phase mask and provides another choice for achieving depth extension. (C) 2009 Elsevier Ltd. All rights reserved.


Retinal blurring resulting from the human eye's depth of focus has been shown to assist visual perception. Infinite focal depth within stereoscopically displayed virtual environments may cause undesirable effects; for instance, objects positioned at a distance in front of or behind the observer's fixation point will be perceived in sharp focus with large disparities, thereby causing diplopia. Although published research on the incorporation of synthetically generated Depth of Field (DoF) suggests that it might enhance perceived image quality, no quantitative evidence of perceptual performance gains exists. This may be due to the difficulty of dynamically generating synthetic DoF in which focal distance is actively linked to fixation distance. In this paper, such a system is described. A desktop stereographic display is used to project a virtual scene in which synthetically generated DoF is actively controlled from vergence-derived distance. A performance evaluation experiment was undertaken in which subjects carried out observations in a spatially complex virtual environment consisting of components interconnected by pipes on a distractive background. The subject was tasked with making an observation based on the connectivity of the components. The effects of focal depth variation under static and actively controlled focal distance conditions were investigated. The results and analysis presented show that performance gains may be achieved by the addition of synthetic DoF. The merits of applying synthetic DoF are discussed.
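The vergence-to-distance step underlying such a system is plain triangulation; the sketch below is illustrative (the function name and the interpupillary-distance default are assumptions, not taken from the paper).

```python
import math

def fixation_distance(vergence_deg, ipd_m=0.063):
    """Distance (meters) at which the two visual axes, separated by the
    interpupillary distance ipd_m, converge at the given vergence angle."""
    return ipd_m / (2.0 * math.tan(math.radians(vergence_deg) / 2.0))

# usage: ~3.6 degrees of vergence with a 63 mm IPD puts fixation near 1 m;
# this distance would then drive the synthetic depth-of-field's focal plane.
d = fixation_distance(3.6)
```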


The stretch zone width (SZW) of 15-5PH steel CTOD specimens fractured at temperatures from -150 degrees C to +23 degrees C was measured based on focused images and 3D maps obtained by extended depth-of-field reconstruction from light microscopy (LM) image stacks. This LM-based method, despite its coarser lateral resolution, appears to be as effective for quantitative SZW analysis as scanning electron microscopy (SEM) or confocal scanning laser microscopy (CSLM), permitting clear identification of stretch zone boundaries. Despite the lower sharpness of the focused images, a robust linear correlation was established between fracture toughness (KC) and SZW data measured at the center region of the tested 15-5PH specimens. The method is an alternative for evaluating the boundaries of stretched zones at a lower cost of implementation and training, since topographic data from elevation maps can be associated with the reconstructed image, which preserves the original contrast and brightness information. Finally, the extended depth-of-field method is presented here as a valuable tool for failure analysis and a cheaper alternative for investigating rough or fractured surfaces compared to scanning electron or confocal light microscopes. Microsc. Res. Tech. 75:1155-1158, 2012. (C) 2012 Wiley Periodicals, Inc.
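A minimal sketch of this kind of extended depth-of-field reconstruction (not the authors' implementation; the local-variance focus measure is one common choice among many): per pixel, pick the stack slice with the highest local sharpness, so the chosen slice indices form the elevation map and the chosen intensities form the all-in-focus image.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edf_reconstruct(stack, win=5):
    """stack: (Z, H, W) array of registered slices, each focused at a
    different depth z. Returns (focused image, elevation map)."""
    stack = stack.astype(float)
    # local variance of each slice as a simple per-pixel focus measure
    mean = uniform_filter(stack, size=(1, win, win))
    var = uniform_filter(stack**2, size=(1, win, win)) - mean**2
    elevation = np.argmax(var, axis=0)          # best-focused slice per pixel
    focused = np.take_along_axis(stack, elevation[None], axis=0)[0]
    return focused, elevation
```

Because each elevation value is a slice index, it maps directly to a physical height once the stage step between slices is known, which is how the 3D maps used for the SZW measurements can be obtained.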


In general optical systems, the range of distances over which the detector cannot detect any change in focus is called the depth-of-field. This may be specified by movement of either the object or the image plane, with the former referred to as depth-of-field and the latter as depth-of-focus (DOF). Either term can be used in vision science, where we refer to changes in vergence, which have the same value in both object and image space.



'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications.

Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
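A hedged sketch of the CACTI-style forward model described above (array shapes and the shifting pattern are illustrative, not the actual hardware): each of T frames is multiplied by a translated copy of one binary mask and summed into a single coded snapshot, embedding the temporal dimension into one (x,y) measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64
video = rng.random((T, H, W))               # the (x, y, t) video volume
mask = (rng.random((H, W)) > 0.5).astype(float)   # one random binary mask

snapshot = np.zeros((H, W))
for t in range(T):
    # the coded aperture is physically translated during the exposure,
    # so each sub-frame sees a shifted copy of the same mask
    coded = np.roll(mask, shift=t, axis=0)
    snapshot += coded * video[t]            # per-pixel coded exposure sum

# Reconstruction (not shown) would invert this measurement, e.g. by sparse
# optimization; the point here is that one snapshot now mixes all T frames.
```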

Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,\lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions.

Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research and engineering challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.

Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding, physical translation of the image plane from its nominal focal position, and show this technique's capability to generate arbitrary point spread functions.