4 results for Dynamic high-speed videokeratoscopy

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

Simultaneous measurements of high-altitude optical emissions and magnetic fields produced by sprite-associated lightning discharges enable a close examination of the link between low-altitude lightning processes and high-altitude sprite processes. We report results of a coordinated analysis of high-speed sprite video and wideband magnetic field measurements recorded simultaneously at Yucca Ridge Field Station and Duke University. From June to August 2005, sprites were detected following 67 lightning strokes, all of which had positive polarity. Our data showed that 46% of the 83 discrete sprite events in these sequences initiated more than 10 ms after the lightning return stroke, and we focus on these delayed sprites in this work. All delayed sprites were preceded by continuing current moments that averaged at least 11 kA km between the return stroke and the sprite. The total lightning charge moment change at sprite initiation varied from 600 to 18,600 C km, and the minimum value required to initiate long-delayed sprites ranged from 600 C km for a 15 ms delay to 2000 C km for delays of more than 120 ms. We numerically simulated electric fields at altitudes above these lightning discharges and found that the maximum normalized electric fields are essentially the same as the fields that produce short-delayed sprites. Both estimated and simulation-predicted sprite initiation altitudes indicate that long-delayed sprites generally initiate about 5 km lower than short-delayed sprites. The simulation results also reveal that slow (5-20 ms) intensifications in continuing current can play a major role in initiating delayed sprites. Copyright 2008 by the American Geophysical Union.
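The charge moment change referenced above is, by definition, the time integral of the vertical current moment (current times channel length). As a hedged illustration of how a sustained continuing current moment accumulates charge moment changes of the magnitude cited in the abstract, the sketch below integrates a hypothetical current moment waveform; the waveform shape and all numbers are assumptions for illustration only, not data from the study.

```python
import numpy as np

# Hypothetical current moment waveform (kA km) sampled every 1 ms:
# an impulsive return stroke followed by a ~11 kA km continuing current.
t_ms = np.arange(0, 120, 1.0)                      # time after return stroke (ms)
current_moment = np.where(t_ms < 2, 150.0, 11.0)   # kA km (assumed values)

# Charge moment change is the running time integral of the current moment.
# Units: 1 kA km * 1 ms = 1 C km, since 1 kA * 1 ms = 1 C.
charge_moment = np.cumsum(current_moment) * 1.0    # C km

for delay in (15, 60, 120):
    idx = np.searchsorted(t_ms, delay)
    print(f"Charge moment change after {delay:3d} ms: {charge_moment[idx - 1]:6.0f} C km")
```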

Relevance:

100.00%

Publisher:

Abstract:

We present fast functional photoacoustic microscopy (PAM) for three-dimensional, high-resolution, high-speed imaging of the mouse brain, complementary to other imaging modalities. We implemented a single-wavelength, pulse-width-based method with a one-dimensional imaging rate of 100 kHz to image blood oxygenation with capillary-level resolution. We applied PAM to image vascular morphology, blood oxygenation, blood flow, and oxygen metabolism in both resting and stimulated states in the mouse brain.

Relevance:

100.00%

Publisher:

Abstract:

The dynamic interaction between a laser-generated tandem bubble and individual polystyrene particles of 2 and 10 μm in diameter is studied in a microfluidic channel (25 μm in height) by high-speed imaging and particle image velocimetry. The asymmetric collapse of the tandem bubble produces a pair of microjets and associated long-lasting vortices that can propel a single particle to a maximum velocity of 1.4 m/s within 30 μs after the bubble collapse, with a resultant directional displacement of up to 60 μm within 150 μs. This method may be useful for high-throughput cell sorting in microfluidic devices.
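Particle image velocimetry, named above as the measurement technique, estimates local displacement by cross-correlating interrogation windows between consecutive high-speed frames. The sketch below is a minimal, generic illustration of that displacement estimate using FFT-based cross-correlation; the synthetic frames, window size, frame interval, and pixel size are placeholder assumptions, not parameters from the study.

```python
import numpy as np

def piv_displacement(window_a, window_b, dt_s, px_size_m):
    """Estimate displacement (m) and velocity (m/s) between two
    interrogation windows via FFT-based circular cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts (correlation is periodic).
    shift = np.array([p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)])
    displacement_m = shift * px_size_m
    return displacement_m, displacement_m / dt_s

# Synthetic example: a bright particle shifted by 3 pixels between frames.
frame_a = np.zeros((32, 32)); frame_a[10, 10] = 1.0
frame_b = np.zeros((32, 32)); frame_b[10, 13] = 1.0
disp, vel = piv_displacement(frame_a, frame_b, dt_s=5e-6, px_size_m=0.5e-6)
print(disp, vel)   # displacement in m, velocity in m/s
```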

Relevance:

100.00%

Publisher:

Abstract:

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This dissertation explores systems and methods capable of efficiently improving the sensitivity and performance of image volume cameras, and specifically proposes several sampling strategies that use temporal coding to improve imaging system performance and enhance awareness in a variety of dynamic applications.

Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the frame rate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
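As a hedged illustration of the measurement model behind this kind of temporal pixel coding, the sketch below simulates compressing several video frames into one coded snapshot by applying a translated binary mask to each frame and summing; the mask, frame count, and per-frame shift are illustrative assumptions, not the actual CACTI hardware parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative video volume: T frames of H x W pixels (assumed sizes).
T, H, W = 8, 64, 64
video = rng.random((T, H, W))

# A single binary coded aperture, translated vertically by one pixel per frame,
# temporally codes the exposure of every pixel within one detector integration.
base_mask = (rng.random((H, W)) > 0.5).astype(float)
coded_snapshot = np.zeros((H, W))
masks = []
for t in range(T):
    mask_t = np.roll(base_mask, shift=t, axis=0)   # physical translation of the code
    masks.append(mask_t)
    coded_snapshot += mask_t * video[t]            # detector integrates the coded frames

# The camera records only `coded_snapshot` (plus the known masks);
# the T-frame video is later recovered by a compressive reconstruction algorithm.
print(coded_snapshot.shape, len(masks))
```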

Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to the exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works show that temporal coding extends to improve sensing performance in these other dimensions.

Geometrical-optics-related tradeoffs, such as the classic challenge of combining a wide field of view with high resolution in photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research and engineering challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and high resolutions. The fourth chapter presents advances in focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
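Automated focus assessment across many camera modules typically reduces to scoring local image sharpness. The sketch below uses a variance-of-Laplacian sharpness score as one common, generic focus metric; it is an illustrative stand-in, not necessarily the metric developed in the dissertation.

```python
import numpy as np

def variance_of_laplacian(image):
    """Generic sharpness score: variance of a discrete Laplacian response.
    Higher values indicate better focus for the same scene content."""
    img = image.astype(float)
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

# Example: a sharp random texture scores higher than a blurred copy of it.
rng = np.random.default_rng(1)
sharp = rng.random((128, 128))
blurred = sharp.copy()
for _ in range(5):   # crude blur via repeated neighbor averaging
    blurred = 0.25 * (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
                      + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1))
print(variance_of_laplacian(sharp) > variance_of_laplacian(blurred))  # True
```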

Along the same line of work, we explore methods for dynamic and adaptive control of focus via point spread function engineering. We demonstrate another form of temporal coding, physical translation of the image plane from its nominal focal position, and show that this technique can generate arbitrary point spread functions.
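One way to picture this form of temporal coding is that the effective point spread function becomes a time-weighted sum of the defocus blurs visited while the image plane is translated during the exposure. The sketch below builds such a composite PSF from Gaussian defocus kernels along an assumed sweep trajectory; the trajectory, the Gaussian blur model, and all parameters are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Isotropic Gaussian kernel used here as a simple defocus-blur model."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def swept_psf(size, sigmas, dwell_weights):
    """Effective PSF when the image plane visits several defocus states during
    one exposure; each state contributes in proportion to its dwell time."""
    weights = np.asarray(dwell_weights, dtype=float)
    weights /= weights.sum()
    return sum(w * gaussian_psf(size, s) for w, s in zip(weights, sigmas))

# Example: a sweep that lingers near best focus (small sigma) but also visits
# strongly defocused states, yielding an engineered, non-Gaussian PSF.
sigmas = [0.8, 2.0, 4.0, 6.0]          # assumed defocus blur radii (pixels)
dwell = [4.0, 1.0, 1.0, 2.0]           # assumed dwell times per state
psf = swept_psf(size=31, sigmas=sigmas, dwell_weights=dwell)
print(psf.shape, round(psf.sum(), 6))  # (31, 31) 1.0
```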