9 results for virtual sensing
at Duke University
Abstract:
The authors explore nanoscale sensor processor (nSP) architectures. Their design includes a simple accumulator-based instruction-set architecture, sensors, limited memory, and instruction-fused sensing. Building the nSP on optical resonance energy transfer logic lets them shrink the design; their smallest design is about the size of the largest known virus. © 2006 IEEE.
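To make the two architectural terms concrete, the following is a minimal, hypothetical sketch of an accumulator machine in which a sensor read is fused directly into the instruction set. The opcode names, memory size, and the read_sensor hook are illustrative assumptions, not the published nSP design.

# Illustrative accumulator-machine interpreter. The opcodes and the SENSE
# instruction (which fuses a sensor read into the ISA) are assumptions for
# exposition, not the authors' actual nSP instruction set.
def run(program, read_sensor, mem_size=16):
    acc = 0                      # single accumulator register
    mem = [0] * mem_size         # tiny data memory
    pc = 0                       # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "SENSE":        # instruction-fused sensing: sensor -> accumulator
            acc = read_sensor()
        elif op == "LOAD":
            acc = mem[arg]
        elif op == "STORE":
            mem[arg] = acc
        elif op == "ADDI":
            acc += arg
        elif op == "JNZ":        # branch if the accumulator is non-zero
            pc = arg if acc != 0 else pc + 1
            continue
        elif op == "HALT":
            break
        pc += 1
    return acc, mem

# Example: sample the sensor once and store the reading at address 0.
prog = [("SENSE", None), ("STORE", 0), ("HALT", None)]
acc, mem = run(prog, read_sensor=lambda: 42)   # mem[0] == 42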
Abstract:
This study involves two aspects of our investigations of plasmonics-active systems: (i) theoretical and simulation studies and (ii) experimental fabrication of plasmonics-active nanostructures. Two types of nanostructures are selected as the model systems for their unique plasmonic properties: (1) nanoparticles and (2) nanowires on substrate. Special focus is devoted to regions where the electromagnetic field is strongly concentrated by the metallic nanostructures or between nanostructures. The theoretical investigations deal with dimers of nanoparticles and nanoshells using a semi-analytical method based on a multipole expansion (ME) and the finite-element method (FEM) in order to determine the electromagnetic enhancement, especially at the interface areas of two adjacent nanoparticles. The experimental study involves the design of plasmonics-active nanowire arrays on substrates that can provide efficient electromagnetic enhancement in regions around and between the nanostructures. Fabrication of these nanowire structures over large chip-scale areas (from a few millimeters to a few centimeters) as well as finite-difference time-domain (FDTD) simulations to estimate the EM fields between the nanowires are described. The application of these nanowire chips using surface-enhanced Raman scattering (SERS) for detection of chemicals and labeled DNA molecules is described to illustrate the potential of the plasmonics chips for sensing.
Abstract:
Light is a universal signal perceived by organisms, including fungi, in which light regulates common and unique biological processes depending on the species. Previous research has established that conserved proteins, originally called White collar 1 and 2 from the ascomycete Neurospora crassa, regulate UV/blue light sensing. Homologous proteins function in distant relatives of N. crassa, including the basidiomycetes and zygomycetes, which diverged as long as a billion years ago. Here we conducted microarray experiments on the basidiomycete fungus Cryptococcus neoformans to identify light-regulated genes. Surprisingly, only a single gene was induced by light above the commonly used twofold threshold. This gene, HEM15, is predicted to encode a ferrochelatase that catalyses the final step in haem biosynthesis from highly photoreactive porphyrins. The C. neoformans gene complements a Saccharomyces cerevisiae hem15Delta strain and is essential for viability, and the Hem15 protein localizes to mitochondria, three lines of evidence that the gene encodes ferrochelatase. Regulation of HEM15 by light suggests a mechanism by which bwc1/bwc2 mutants are photosensitive and exhibit reduced virulence. We show that ferrochelatase is also light-regulated in a white collar-dependent fashion in N. crassa and the zygomycete Phycomyces blakesleeanus, indicating that ferrochelatase is an ancient target of photoregulation in the fungal kingdom.
Abstract:
Previous studies have shown that the isoplanatic distortion due to turbulence and the image of a remote object may be jointly estimated from the 4D mutual intensity across an aperture. This Letter shows that decompressive inference on a 2D slice of the 4D mutual intensity, as measured by a rotational shear interferometer, is sufficient for estimation of sparse objects imaged through turbulence. The 2D slice is processed using an iterative algorithm that alternates between estimating the sparse objects and estimating the turbulence-induced phase screen. This approach may enable new systems that infer object properties through turbulence without exhaustive sampling of coherence functions.
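A hedged sketch of the alternating-estimation idea described above: iterate between a sparse object update (here plain iterative soft-thresholding) and a phase-screen update (here a generic gradient step). The forward model, step sizes, and phase parameterization are illustrative placeholders, not the Letter's rotational-shear-interferometer model.

import numpy as np

def soft(v, t):
    # Soft-thresholding operator used for the sparsity-promoting update.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alternate(y, forward, adjoint, grad_phase, n_pix, n_phase,
              outer=20, inner=50, step_x=0.5, step_p=0.1, lam=0.01):
    x = np.zeros(n_pix)          # sparse object estimate
    phi = np.zeros(n_phase)      # turbulence phase-screen estimate
    for _ in range(outer):
        # (1) update the sparse object with the phase screen held fixed (ISTA)
        for _ in range(inner):
            r = forward(x, phi) - y
            x = soft(x - step_x * adjoint(r, phi), lam * step_x)
        # (2) update the phase screen with the object held fixed (gradient step)
        phi = phi - step_p * grad_phase(x, phi, y)
    return x, phi

The caller supplies forward(x, phi), its adjoint, and a gradient of the data misfit with respect to the phase parameters; those operators carry the interferometer-specific physics.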
Abstract:
Hydrologic research is a very demanding application of fiber-optic distributed temperature sensing (DTS) in terms of precision, accuracy and calibration. The physics behind the most frequently used DTS instruments is considered as it applies to four calibration methods for single-ended DTS installations. The new methods presented are more accurate than the instrument-calibrated data, achieving accuracies on the order of tenths of a degree root mean square error (RMSE) and mean bias. Effects of localized non-uniformities that violate the assumptions of single-ended calibration are explored and quantified. Experimental design considerations such as selection of integration times or selection of the length of the reference sections are discussed, and the impacts of these considerations on calibrated temperatures are explored in two case studies.
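For context, single-ended Raman DTS calibration is commonly built on the relation T(z) = gamma / (ln(Ps(z)/Pas(z)) + C - dalpha * z), where Ps and Pas are the Stokes and anti-Stokes backscatter intensities and gamma, C and dalpha are fit from fiber reference sections held at known temperatures. The following is a minimal sketch of that fit; the variable names, initial guesses, and least-squares formulation are illustrative assumptions rather than any one of the paper's four methods.

import numpy as np
from scipy.optimize import least_squares

def dts_temperature(z, ps, pas, gamma, c, dalpha):
    # Single-ended calibration relation (temperature in kelvin).
    return gamma / (np.log(ps / pas) + c - dalpha * z)

def calibrate(z_ref, ps_ref, pas_ref, t_ref_kelvin):
    # Fit (gamma, C, dalpha) so the relation reproduces reference temperatures
    # measured along sections of fiber in baths of known temperature.
    def residual(p):
        return dts_temperature(z_ref, ps_ref, pas_ref, *p) - t_ref_kelvin
    fit = least_squares(residual, x0=[480.0, 1.5, 0.0])
    return fit.x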
Abstract:
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems. © 2013 IEEE.
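A hedged sketch of the adaptivity loop this abstract describes: estimate temporal complexity directly from successive compressed snapshots and raise or lower the number of frames integrated per measurement (the compression ratio). The complexity metric, thresholds, and ratio bounds below are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def temporal_complexity(y_prev, y_curr):
    # Relative change between consecutive compressed measurements,
    # used as a proxy for scene motion without reconstructing the video.
    return np.linalg.norm(y_curr - y_prev) / (np.linalg.norm(y_prev) + 1e-12)

def adapt_ratio(ratio, complexity, low=0.02, high=0.10, r_min=4, r_max=32):
    if complexity > high:        # fast motion: integrate fewer frames per snapshot
        return max(r_min, ratio // 2)
    if complexity < low:         # near-static scene: integrate more frames
        return min(r_max, ratio * 2)
    return ratio                 # otherwise keep the current compression ratio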
Abstract:
A framework for adaptive and non-adaptive statistical compressive sensing is developed, where a statistical model replaces the standard sparsity model of classical compressive sensing. We propose within this framework optimal task-specific sensing protocols jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed, where online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relating the sensed signals to a representation of the specific task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework when compared to more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity. © 1991-2012 IEEE.
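A minimal sketch of the two-step idea under a GMM signal model: first infer the class of the signal from compressed measurements y = Phi @ x + noise, then reconstruct x with the Gaussian posterior mean of the detected class. The sensing matrix, priors, and noise level are assumptions supplied by the caller; the paper additionally designs task-specific sensing matrices, which is not shown here.

import numpy as np
from scipy.stats import multivariate_normal

def classify(y, Phi, weights, means, covs, sigma2):
    # Step 1: posterior probability of each GMM component given y,
    # using y ~ N(Phi @ mu_k, Phi @ Sigma_k @ Phi.T + sigma2 * I) per class.
    logp = []
    for w, mu, S in zip(weights, means, covs):
        C = Phi @ S @ Phi.T + sigma2 * np.eye(Phi.shape[0])
        logp.append(np.log(w) + multivariate_normal.logpdf(y, mean=Phi @ mu, cov=C))
    logp = np.array(logp)
    post = np.exp(logp - logp.max())
    return post / post.sum()

def reconstruct(y, Phi, mu, S, sigma2):
    # Step 2: class-conditional Gaussian posterior mean (linear MMSE estimate).
    C = Phi @ S @ Phi.T + sigma2 * np.eye(Phi.shape[0])
    return mu + S @ Phi.T @ np.linalg.solve(C, y - Phi @ mu)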
Abstract:
© 2015 IEEE. In virtual reality applications, there is an aim to provide real-time graphics that run at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high persistence image artifact. The effect of this artifact is that movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high persistence frames caused by low refresh rates and compare it to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time each image would otherwise be displayed. In order to isolate the visual effects, we constructed a simulator for low and high persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low persistence display technique may not negatively impact user experience or performance as compared to the high persistence case. Directions for future work on the use of low persistence displays for low frame rate situations are discussed.
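A simplified sketch of the low-persistence idea: each low-frame-rate image is lit only briefly, and the display is blanked for the remainder of the frame period instead of persisting until the next frame arrives. The timing values and the render/show/blank callbacks are illustrative assumptions, not the authors' display simulator.

import time

def low_persistence_loop(render, show, blank, sim_hz=15.0, persistence_s=0.003):
    frame_period = 1.0 / sim_hz
    while True:
        start = time.monotonic()
        image = render()                 # slow simulation / rendering step
        show(image)                      # light the display briefly...
        time.sleep(persistence_s)
        blank()                          # ...then blank it for the rest of the period
        remaining = frame_period - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)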
Abstract:
This research validates a computerized dietary selection task (Food-Linked Virtual Response or FLVR) for use in studies of food consumption. In two studies, FLVR task responses were compared with measures of health consciousness, mood, body mass index, personality, cognitive restraint toward food, and actual food selections from a buffet table. The FLVR task was associated with variables which typically predict healthy decision-making and was unrelated to mood or body mass index. Furthermore, the FLVR task predicted participants' unhealthy selections from the buffet, but not overall amount of food. The FLVR task is an inexpensive, valid, and easily administered option for assessing momentary dietary decisions.