7 results for multiscale

at Duke University


Relevance: 20.00%

Abstract:

High-efficiency collection of photons emitted by a point source over a wide field of view (FoV) is crucial for many applications. Multiscale optics offer improved light collection by utilizing small optical components placed close to the optical source, while maintaining the wide FoV provided by conventional imaging optics. In this work, we demonstrate collection of 26% of the photons emitted by a pointlike source using a micromirror fabricated in silicon, with no significant decrease in collection efficiency over a 10 mm object space.
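For intuition, an ideal lossless reflector's collection efficiency is set by the solid angle it subtends at the emitter. A minimal sketch of that geometry, with illustrative numbers not drawn from the paper's actual optical design:

    import numpy as np

    def collection_efficiency(half_angle_deg):
        """Fraction of isotropically emitted photons captured by an ideal
        optic subtending a cone of the given half-angle."""
        theta = np.radians(half_angle_deg)
        solid_angle = 2 * np.pi * (1 - np.cos(theta))  # steradians
        return solid_angle / (4 * np.pi)

    # Under this simple model, 26% collection corresponds to a cone
    # half-angle of roughly 61.3 degrees:
    for angle in (30.0, 61.3, 90.0):
        print(f"half-angle {angle:5.1f} deg -> efficiency {collection_efficiency(angle):.2%}")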

Relevance: 20.00%

Abstract:

BACKGROUND: Computer simulations are of increasing importance in modeling biological phenomena. Their purpose is to predict behavior and guide future experiments. The aim of this project is to model the early immune response to vaccination with an agent-based immune response simulation that incorporates realistic biophysics and intracellular dynamics and is sufficiently flexible to accurately model the multiscale nature and complexity of the immune system, while maintaining the high performance critical to scientific computing. RESULTS: The Multiscale Systems Immunology (MSI) simulation framework is an object-oriented, modular simulation framework written in C++ and Python. The software implements a modular design that allows for flexible configuration of components and initialization of parameters, thus allowing simulations to be run that model processes occurring over different temporal and spatial scales. CONCLUSION: MSI addresses the need for a flexible and high-performing agent-based model of the immune system.
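The abstract does not expose the MSI API, so the following Python skeleton is hypothetical: it only illustrates the modular, configurable, stepped agent-based design the abstract describes, and every class and parameter name is invented.

    import random

    class Agent:
        """A minimal immune-cell agent; real MSI agents also carry
        biophysics and intracellular dynamics (not modeled here)."""
        def __init__(self, position, cell_type):
            self.position = position
            self.cell_type = cell_type

        def step(self, dt):
            # Placeholder dynamics: an unbiased random walk in 2-D.
            self.position = tuple(p + random.gauss(0, dt ** 0.5) for p in self.position)

    class Simulation:
        """Modular container: components are configured at initialization,
        then advanced at a chosen time resolution."""
        def __init__(self, agents, dt):
            self.agents = agents
            self.dt = dt

        def run(self, n_steps):
            for _ in range(n_steps):
                for agent in self.agents:
                    agent.step(self.dt)

    # Hypothetical usage: 100 T cells stepped at a 0.1-time-unit resolution.
    sim = Simulation([Agent((0.0, 0.0), "T-cell") for _ in range(100)], dt=0.1)
    sim.run(n_steps=50)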

Relevance: 20.00%

Abstract:

Photoacoustic tomography (PAT) of genetically encoded probes allows for imaging of targeted biological processes deep in tissues with high spatial resolution; however, high background signals from blood can limit the achievable detection sensitivity. Here we describe a reversibly switchable nonfluorescent bacterial phytochrome for use in multiscale photoacoustic imaging, BphP1, with the most red-shifted absorption among genetically encoded probes. BphP1 binds a heme-derived biliverdin chromophore and is reversibly photoconvertible between red and near-infrared light-absorption states. We combined single-wavelength PAT with efficient BphP1 photoswitching, which enabled differential imaging with substantially decreased background signals, enhanced detection sensitivity, increased penetration depth and improved spatial resolution. We monitored tumor growth and metastasis with ∼ 100-μm resolution at depths approaching 10 mm using photoacoustic computed tomography, and we imaged individual cancer cells with a suboptical-diffraction resolution of ∼ 140 nm using photoacoustic microscopy. This technology is promising for biomedical studies at several scales.
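The differential scheme rests on simple arithmetic: subtracting images acquired in the probe's two photostates cancels the photostate-independent blood background. A minimal NumPy sketch with synthetic data (all values invented):

    import numpy as np

    rng = np.random.default_rng(0)
    shape = (128, 128)

    background = rng.gamma(2.0, 1.0, shape)   # blood signal, photostate-independent
    probe = np.zeros(shape)
    probe[40:60, 40:60] = 5.0                 # hypothetical BphP1-labeled region

    # Photoacoustic images with the probe in its ON and OFF absorption states:
    img_on = background + probe + rng.normal(0, 0.1, shape)
    img_off = background + 0.2 * probe + rng.normal(0, 0.1, shape)

    diff = img_on - img_off                   # background cancels, label remains
    print("labeled region mean:", diff[40:60, 40:60].mean().round(2))     # ~4.0
    print("elsewhere mean:     ", np.abs(diff[:40, :40]).mean().round(2)) # ~0.1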

Relevance: 10.00%

Abstract:

Changes in land use, land cover, and land management present some of the greatest potential global environmental challenges of the 21st century. Urbanization, one of the principal drivers of these transformations, is commonly thought to be generating land changes that are increasingly similar. An implication of this multiscale homogenization hypothesis is that the ecosystem structure and function and human behaviors associated with urbanization should be more similar in certain kinds of urbanized locations across biogeophysical gradients than across urbanization gradients in places with similar biogeophysical characteristics. This paper introduces an analytical framework for testing this hypothesis and applies the framework to the case of residential lawn care, a set of land management behaviors often assumed, rather than demonstrated, to exhibit homogeneity. Multivariate analyses are conducted on telephone survey responses from a geographically stratified random sample of homeowners (n = 9,480), equally distributed across six US metropolitan areas. Two behaviors are examined: lawn fertilizing and irrigating. Support for strong homogenization at both scales (i.e., multi- and single-city) is found in only 2 of 36 cases, while homogenization is supported at only one scale in 22 cases and at neither scale in 12 cases. These results suggest that US lawn care behaviors are more differentiated in practice than in theory. Thus, even if the biophysical outcomes of urbanization are homogenizing, managing the associated sustainability implications may require a multiscale, differentiated approach, because the underlying social practices appear relatively varied. The analytical approach introduced here should also be productive for other facets of urban-ecological homogenization.
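The multivariate analyses themselves are not detailed in the abstract. As a hedged illustration of the framework's logic only, one can compare behavioral variance across cities within an urbanization stratum against variance across the urbanization gradient within one city, e.g., with a one-way ANOVA; all data and names below are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Invented fertilizing-frequency responses for three metros, each split
    # into urban-core and suburban respondents:
    cities = {name: {"core": rng.normal(3.0 + d, 1.0, 200),
                     "suburb": rng.normal(3.8 + d, 1.0, 200)}
              for name, d in [("Phoenix", 0.0), ("Boston", 0.4), ("Miami", -0.3)]}

    # Homogenization predicts more similarity across cities within a stratum
    # than across the urbanization gradient within a single city:
    across_cities = stats.f_oneway(*[c["suburb"] for c in cities.values()])
    within_city = stats.f_oneway(cities["Boston"]["core"], cities["Boston"]["suburb"])
    print(f"across-city F = {across_cities.statistic:.1f}, "
          f"within-city F = {within_city.statistic:.1f}")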

Relevance: 10.00%

Abstract:

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness in a variety of dynamic applications.

Video cameras and camcorders sample the video volume (x, y, t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x, y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
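A minimal sketch of the coded-aperture idea: each frame of the video volume is modulated by a translated binary mask, and the modulated frames sum into a single coded snapshot. The mask mechanics and the reconstruction step are simplified; the names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    T, H, W = 8, 64, 64                      # frames compressed into one snapshot
    video = rng.random((T, H, W))            # the (x, y, t) volume being sensed

    mask = (rng.random((H, W)) > 0.5).astype(float)                 # binary coded aperture
    codes = np.stack([np.roll(mask, t, axis=0) for t in range(T)])  # mask translated per frame

    snapshot = (codes * video).sum(axis=0)   # a single coded (x, y) measurement
    # Recovering the T frames from this underdetermined system requires a
    # sparsity or total-variation prior; only the forward model is shown here.
    print(snapshot.shape)                    # (64, 64)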

Since video cameras nominally integrate the remaining image-volume dimensions (e.g., spectrum and focus) at capture time, spectral (x, y, t, λ) and focal (x, y, t, z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works extend temporal coding to improve sensing performance in these other dimensions.

Geometrical-optics tradeoffs, such as the classic tension between wide field of view and high resolution in photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research and engineering challenges. One significant challenge is that of managing the focal volume (x, y, z) over wide fields of view and high resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
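Focus assessment across an array's many microcameras typically reduces to a per-tile sharpness score. The variance-of-Laplacian measure sketched below is a standard focus metric, offered as a generic example rather than the dissertation's specific method.

    import numpy as np

    def focus_score(img):
        """Variance of a discrete Laplacian: higher means sharper focus."""
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return lap.var()

    # Comparing a sharp tile against a box-blurred copy of itself:
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9
    print(focus_score(sharp) > focus_score(blurred))  # True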

Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding: physical translation of the image plane away from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
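One way to picture this: sweeping the image plane during a single exposure integrates a continuum of defocus states, and the dwell time at each displacement shapes the effective point spread function. A toy sketch under that assumption (ideal disk blurs, invented dwell-time weights):

    import numpy as np

    def defocus_psf(radius, size=33):
        """Disk-shaped defocus blur for a given image-plane displacement."""
        y, x = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
        psf = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
        return psf / psf.sum()

    # The exposure-time weights over defocus states shape the engineered PSF:
    weights = np.array([0.5, 0.3, 0.2])   # hypothetical dwell times (sum to 1)
    radii = [1, 4, 8]                     # displacement mapped to blur radius
    engineered = sum(w * defocus_psf(r) for w, r in zip(weights, radii))
    print(engineered.sum().round(3))      # still normalized: 1.0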

Relevance: 10.00%

Abstract:

Subspaces and manifolds are two powerful models for high-dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
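The principal angles that govern these error exponents can be computed from the singular values of Q1^T Q2, where Q1 and Q2 are orthonormal bases for the two subspaces; the product-of-sines and sum-of-squared-sines statistics then follow directly. A short NumPy sketch on random subspaces:

    import numpy as np

    def principal_angles(A, B):
        """Principal angles (radians) between the column spans of A and B."""
        Qa, _ = np.linalg.qr(A)
        Qb, _ = np.linalg.qr(B)
        cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
        return np.arccos(cosines)

    rng = np.random.default_rng(0)
    theta = principal_angles(rng.normal(size=(50, 3)), rng.normal(size=(50, 3)))

    # Vanishing mismatch: error governed by the product of sines.
    # Significant mismatch: error governed by the sum of squared sines.
    print("product of sines:    ", np.prod(np.sin(theta)))
    print("sum of squared sines:", np.sum(np.sin(theta) ** 2))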

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine, so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are demonstrated, showing an obvious advantage of the proposed approaches when the training set is small.
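A minimal sketch in the spirit of the first approach: build a k-nearest-neighbor graph over the data and penalize decision differences between graph neighbors via the graph Laplacian penalty. The loss form and all names are illustrative, not the dissertation's exact objective.

    import numpy as np

    def knn_adjacency(X, k=5):
        """Symmetric k-nearest-neighbor adjacency matrix over rows of X."""
        d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        A = np.zeros_like(d)
        rows = np.arange(len(X))[:, None]
        A[rows, np.argsort(d, axis=1)[:, :k]] = 1.0
        return np.maximum(A, A.T)

    def smoothness_penalty(f, A):
        """sum_ij A_ij (f_i - f_j)^2 = 2 f^T L f, the graph Laplacian penalty."""
        L = np.diag(A.sum(axis=1)) - A
        return 2.0 * f @ L @ f

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))    # data points near a manifold
    A = knn_adjacency(X)
    f = rng.normal(size=100)          # a hypothetical decision function on the data
    print(smoothness_penalty(f, A))   # added to the training loss as a regularizer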

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. Deviation (of each datum) from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
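At its core the anomaly statistic is the residual of each new datum against the currently tracked subspace; an abrupt jump flags a changepoint. A single-subspace sketch (the multiscale version organizes many such local fits in a tree):

    import numpy as np

    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.normal(size=(20, 3)))   # tracked 3-dim subspace basis

    def residual(x, U):
        """Distance from x to the tracked subspace: the anomaly statistic."""
        return np.linalg.norm(x - U @ (U.T @ x))

    stream = [U @ rng.normal(size=3) + 0.05 * rng.normal(size=20) for _ in range(50)]
    stream.append(rng.normal(size=20))              # abrupt change: off-subspace datum

    stats_series = [residual(x, U) for x in stream]
    print("typical residual:    ", np.median(stats_series[:-1]).round(3))
    print("changepoint residual:", stats_series[-1].round(3))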