871 results for temporal compressive sensing ratio design
Resumo:
This paper introduces an area- and power-efficient approach for compressive recording of cortical signals in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system, in terms of area usage, can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed that exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that, using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact, and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
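A rough numerical sketch of the kind of recovery described above, assuming a generic random ±1 measurement matrix and a textbook orthogonal matching pursuit solver rather than the paper's actual circuit or reconstruction pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8                 # window length, measurements (4x compression), sparsity

    x = np.zeros(n)                      # synthetic k-sparse signal standing in for a channel window
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # random +/-1 sensing matrix
    y = Phi @ x                                               # compressed measurements

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: greedily select k atoms, then least-squares fit."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(Phi, y, k)
    snr_db = 10 * np.log10(np.sum(x**2) / (np.sum((x - x_hat) ** 2) + 1e-12))
    print(f"reconstruction SNR: {snr_db:.1f} dB")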
Resumo:
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.
This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications.
Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the frame rate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
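A toy version of the forward model behind this idea, assuming a generic shifting binary mask rather than the actual CACTI hardware: each video frame is modulated by a differently shifted code, and the detector integrates the coded frames into a single snapshot.

    import numpy as np

    rng = np.random.default_rng(1)
    H, W, T = 64, 64, 8                            # spatial size and frames per coded snapshot

    video = rng.random((T, H, W))                  # toy (x, y, t) video volume
    mask = rng.integers(0, 2, size=(H, W)).astype(float)   # binary coded aperture

    # Physical translation of the aperture applies a shifted copy of the code to each frame;
    # the sensor integrates the coded frames into one 2D measurement.
    codes = np.stack([np.roll(mask, shift=t, axis=0) for t in range(T)])
    snapshot = (codes * video).sum(axis=0)

    print(snapshot.shape)   # (64, 64): T temporal frames embedded in a single coded image

Recovering the T frames from the single snapshot is then a sparsity-regularized inverse problem.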
Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions.
Geometrical optics-related tradeoffs, such as the classic challenge of achieving both a wide field of view and high resolution in photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
Resumo:
Novel computer vision techniques have been developed to automatically detect unusual events in crowded scenes from the video feeds of surveillance cameras. The research is useful in the design of next-generation intelligent video surveillance systems. Two major contributions are the construction of a novel machine learning model for multiple instance learning through compressive sensing, and the design of novel feature descriptors in the compressed video domain.
Resumo:
In this paper, we present the design and characterization of a vibratory yaw-rate MEMS sensor that uses in-plane motion for both actuation and sensing. The design criteria for the rate sensor are high sensitivity and low bandwidth. The required sensitivity of the yaw-rate sensor is attained by using in-plane motion, in which the dominant damping mechanism is the fluid loss due to slide-film damping, which is two to three orders of magnitude less than the squeeze-film damping in other rate sensors with out-of-plane motion. The low bandwidth is achieved by matching the drive- and sense-mode frequencies. Based on these factors, the yaw-rate sensor is designed and finally realized using surface micromachining. The in-plane motion of the sensor is experimentally characterized to determine the sense- and drive-mode frequencies and the corresponding damping ratios. The experimental results match the numerical and analytical models well, with less than 5% error in the frequency measurements. The measured quality factor of the sensor is approximately 467, which is two orders of magnitude higher than that of a similar rate sensor with an out-of-plane sense direction.
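As a back-of-the-envelope illustration of how damping sets the quality factor, using the standard second-order resonator relation Q ≈ 1/(2ζ) and illustrative damping ratios rather than the authors' measured values:

    # Quality factor of a lightly damped resonator from its damping ratio: Q ~ 1/(2*zeta).
    zeta_slide_film = 1.1e-3     # illustrative slide-film (in-plane) damping ratio
    zeta_squeeze_film = 1.1e-1   # illustrative squeeze-film (out-of-plane) damping ratio

    for label, zeta in [("in-plane", zeta_slide_film), ("out-of-plane", zeta_squeeze_film)]:
        print(f"{label}: Q ~ {1 / (2 * zeta):.0f}")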
Resumo:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered an underdetermined data model (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, and so on, the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
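A small illustration of why such geometries help, using an arbitrarily chosen coprime pair rather than the thesis's specific designs: the difference co-array of a coprime array has many more distinct lags than physical sensors, which is what correlation-aware recovery exploits to identify more sources than sensors.

    import numpy as np

    M, N = 3, 5                                            # coprime pair
    # Prototype coprime geometry: N sensors at multiples of M and 2M sensors at multiples of N.
    positions = np.union1d(M * np.arange(N), N * np.arange(2 * M))

    lags = np.unique((positions[:, None] - positions[None, :]).ravel())
    print(f"physical sensors: {positions.size}, distinct co-array lags: {lags.size}")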
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Resumo:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication, distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
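A minimal sketch of the group-characterizable construction, using a small abelian group chosen purely for illustration: for a finite group G and subgroups G1..G4, the vector h(A) = log(|G| / |intersection of the G_i, i in A|) is an entropy vector, and the Ingleton expression can be evaluated directly. (Abelian examples like this one always satisfy Ingleton; violations require non-abelian groups.)

    import math

    # Toy example: G = Z_12 under addition, with four cyclic subgroups.
    G = set(range(12))
    subgroups = {
        1: set(range(0, 12, 2)),   # <2>
        2: set(range(0, 12, 3)),   # <3>
        3: set(range(0, 12, 4)),   # <4>
        4: set(range(0, 12, 6)),   # <6>
    }

    def h(*idx):
        """Group-characterizable entropy: h(A) = log2(|G| / |intersection of G_i, i in A|)."""
        inter = set.intersection(*(subgroups[i] for i in idx)) if idx else G
        return math.log2(len(G) / len(inter))

    # Ingleton inequality: I(X1;X2) <= I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4).
    lhs = h(1) + h(2) - h(1, 2)
    rhs = (h(1, 3) + h(2, 3) - h(1, 2, 3) - h(3)
           + h(1, 4) + h(2, 4) - h(1, 2, 4) - h(4)
           + h(3) + h(4) - h(3, 4))
    print(f"Ingleton slack: {rhs - lhs:.3f}  (nonnegative here; violations need non-abelian groups)")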
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
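A small sketch of the coherence computation for the simplest case, the cyclic group Z_n, whose group Fourier matrix is the ordinary DFT matrix; the row subset below is an arbitrary illustration, not one of the thesis's representation-theoretic constructions.

    import numpy as np

    n = 31                                     # frame vectors indexed by the cyclic group Z_31
    rows = np.array([1, 5, 11, 24, 25, 27])    # illustrative row subset of the DFT matrix

    # Keep the chosen rows of the n x n DFT (group Fourier) matrix; the columns become
    # unit-norm frame vectors in C^6.
    F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(len(rows))

    gram = np.abs(F.conj().T @ F)              # pairwise |inner products| between frame vectors
    np.fill_diagonal(gram, 0.0)
    print(f"{len(rows)} x {n} frame, coherence = {gram.max():.3f}")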
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
Resumo:
We propose an estimation-theoretic approach to the inference of an incoherent 3D scattering density from 2D scattered speckle field measurements. The object density is derived from the covariance of the speckle field. The inference is performed by a constrained optimization technique inspired by compressive sensing theory. Experimental results demonstrate and verify the performance of our estimates.
Resumo:
PURPOSE: To investigate whether myopia is becoming more common across Europe and explore whether increasing education levels, an important environmental risk factor for myopia, might explain any temporal trend.
DESIGN: Meta-analysis of population-based, cross-sectional studies from the European Eye Epidemiology (E(3)) Consortium.
PARTICIPANTS: The E(3) Consortium is a collaborative network of epidemiological studies of common eye diseases in adults across Europe. Refractive data were available for 61 946 participants from 15 population-based studies performed between 1990 and 2013; participants had a range of median ages from 44 to 78 years.
METHODS: Noncycloplegic refraction, year of birth, and highest educational level achieved were obtained for all participants. Myopia was defined as a mean spherical equivalent ≤-0.75 diopters. A random-effects meta-analysis of age-specific myopia prevalence was performed, with sequential analyses stratified by year of birth and highest level of educational attainment.
MAIN OUTCOME MEASURES: Variation in age-specific myopia prevalence for differing years of birth and educational level.
RESULTS: There was a significant cohort effect for increasing myopia prevalence across more recent birth decades; age-standardized myopia prevalence increased from 17.8% (95% confidence interval [CI], 17.6-18.1) to 23.5% (95% CI, 23.2-23.7) in those born between 1910 and 1939 compared with 1940 and 1979 (P = 0.03). Education was significantly associated with myopia; for those completing primary, secondary, and higher education, the age-standardized prevalences were 25.4% (CI, 25.0-25.8), 29.1% (CI, 28.8-29.5), and 36.6% (CI, 36.1-37.2), respectively. Although more recent birth cohorts were more educated, this did not fully explain the cohort effect. Compared with the reference risk of participants born in the 1920s with only primary education, higher education or being born in the 1960s doubled the myopia prevalence ratio (2.43 [CI, 1.26-4.17] and 2.62 [CI, 1.31-5.00], respectively), whereas individuals born in the 1960s and completing higher education had approximately 4 times the reference risk: a prevalence ratio of 3.76 (CI, 2.21-6.57).
CONCLUSIONS: Myopia is becoming more common in Europe; although education levels have increased and are associated with myopia, higher education seems to be an additive rather than explanatory factor. Increasing levels of myopia carry significant clinical and economic implications, with more people at risk of the sight-threatening complications associated with high myopia.
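As a hedged illustration of the meta-analytic pooling described in the methods above (a standard DerSimonian-Laird random-effects combination of study-level prevalences with made-up numbers, not the consortium's data or exact procedure):

    import numpy as np

    # Made-up age-specific myopia prevalences and sample sizes for a handful of studies.
    p = np.array([0.18, 0.22, 0.25, 0.20])
    n = np.array([4200, 6100, 3800, 5200])

    var = p * (1 - p) / n                      # within-study variance of each prevalence estimate
    w = 1 / var                                # fixed-effect (inverse-variance) weights

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    p_fixed = np.sum(w * p) / np.sum(w)
    Q = np.sum(w * (p - p_fixed) ** 2)
    tau2 = max(0.0, (Q - (len(p) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (var + tau2)                    # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"pooled prevalence: {p_re:.3f} (95% CI {p_re - 1.96*se:.3f} to {p_re + 1.96*se:.3f})")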
Resumo:
An introduction to the compressed sensing method for the sampling of sparse signals.
Resumo:
Techniques for low-power acquisition of sparse signals via compressed sensing.
Resumo:
In a statistical inference scenario, the estimation of a target signal or its parameters is done by processing data from informative measurements. The estimation performance can be enhanced if we choose the measurements according to criteria that direct our sensing resources so that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data in each measurement process. This approach fits well within the Bayesian inference model often used to produce successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. Adaptive sensing of both static and dynamic system models is performed by the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move with some distribution, and the prior distribution changes at each time step. The information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We have attempted to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
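A minimal sketch of the online measurement-selection idea for a static linear-Gaussian model (an illustrative setup and selection rule, not the thesis's exact formulation): at each step the measurement vector is pointed along the direction of largest posterior uncertainty, and the posterior is updated in closed form.

    import numpy as np

    rng = np.random.default_rng(2)
    d, noise_var, steps = 8, 0.05, 12

    theta = rng.standard_normal(d)             # unknown parameter vector
    mu, Sigma = np.zeros(d), np.eye(d)         # Gaussian prior on theta

    for _ in range(steps):
        # Pick the next measurement direction along the largest posterior eigenvector
        # (greatest expected information gain for a linear-Gaussian model).
        _, eigvec = np.linalg.eigh(Sigma)
        a = eigvec[:, -1]

        y = a @ theta + np.sqrt(noise_var) * rng.standard_normal()

        # Closed-form Gaussian posterior update for the scalar measurement y = a^T theta + noise.
        s = a @ Sigma @ a + noise_var
        gain = Sigma @ a / s
        mu = mu + gain * (y - a @ mu)
        Sigma = Sigma - np.outer(gain, a @ Sigma)

    print(f"estimation error after {steps} adaptive measurements: {np.linalg.norm(theta - mu):.3f}")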
Resumo:
The hyDRaCAT Spectral Reflectance Library for tundra provides surface reflectance data and the bidirectional reflectance distribution function (BRDF) of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites. The aim of this dataset is the hyperspectral and spectro-directional reflectance characterization of these communities as a basis for the extraction of vegetation parameters and for the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. The spectroscopic and field spectro-goniometric measurements were undertaken during the YAMAL2011 expedition at representative Siberian vegetation sites and during the North American Arctic Transect (NAAT2012) expedition at Alaskan vegetation sites, both belonging to the Greening of the Arctic (GOA) program. For the field spectroscopy, each 100 m² vegetation study grid was divided into quadrats of 1 × 1 m; the averaged reflectance of all quadrats represents the spectral reflectance of the whole grid at the 10 × 10 m scale. For the surface radiometric measurements, two GER1500 portable field spectroradiometers (Spectra Vista Corporation, Poughkeepsie, NY, USA) were used. The GER1500 measures radiance across the wavelength range of 350-1,050 nm, with sampling intervals of 1.5 nm and a radiance accuracy of 1.2 × 10⁻¹ W/cm²/nm/sr. In order to increase the signal-to-noise ratio, 32 individual measurements were averaged per target scan. To minimize variations in the target reflectance due to changes in sun zenith angle, all measurements at one study location were performed under similar sun zenith angles and during clear-sky conditions. The field spectrometer measurements were carried out with a GER1500 UV-VIS spectrometer, while the spectro-goniometer measurements were carried out with a self-designed spectro-goniometer: the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS; patent publication number DE 10 2011 117 713.A1). The ManTIS was equipped with the GER1500 spectrometer, allowing spectro-directional measurements at viewing zenith angles of up to 30° over the full 360° range of viewing azimuth angles. Measurements in central Yamal (Siberia) at the research site 'Vaskiny Dachi' were carried out in the late-summer phenological state from 12 to 28 August 2011. All measurements in Alaska along the north-south transect on the North Slope were taken between 29 June and 11 July 2012, ensuring that the vegetation was in the same phenological state near peak growing season.
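The rationale for averaging 32 scans follows the usual 1/sqrt(N) noise-reduction rule; a toy check with synthetic spectra (not the archived data) on the GER1500's nominal wavelength grid:

    import numpy as np

    rng = np.random.default_rng(3)
    wavelengths = np.arange(350, 1050, 1.5)                  # nominal GER1500 sampling grid (nm)
    true_radiance = 1.0 + 0.5 * np.sin(wavelengths / 120.0)  # synthetic target spectrum
    noise_sigma = 0.05

    scans = true_radiance + noise_sigma * rng.standard_normal((32, wavelengths.size))
    averaged = scans.mean(axis=0)

    rms_single = np.sqrt(np.mean((scans[0] - true_radiance) ** 2))
    rms_avg = np.sqrt(np.mean((averaged - true_radiance) ** 2))
    print(f"single-scan noise ~{rms_single:.3f}, 32-scan average ~{rms_avg:.3f} (about 1/sqrt(32) lower)")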
Resumo:
This work was developed from a temporal slice of design history spanning the period between the post-war era and post-modernism, focusing on the automotive design promoted by Scuderia Ferrari, the Italian luxury sports car brand. It addresses the changes that design underwent over time and how these influenced some of the brand's most iconic projects in the Granturismo category, also known by the designation GT.
Resumo:
Alginate polysaccharide forms viscous aqueous dispersions and has the ability to form gels in the presence of divalent cations such as calcium and copper. In this work, we have studied copper ion binding during Cu-alginate gelation, obtaining quantitative information about the amount and kinetics of cation binding. Our results indicate that copper binding during gelation proceeds until a Langmuir-type equilibrium is reached between bound ions and free ions in the gel-contacting solution. The kinetics of metal ion binding can be modeled using models derived from the Ritchie equation, allowing the prediction of the temporal evolution of ionic binding and gel formation. The ratio between the cationic and polysaccharide quantities in the gelation system determines the kinetics of gelation and the characteristics of the gel formed. The experimental results and models applied in this work give further insight into alginate gelation and contribute to the reliable design and control of production methods for alginate gel structures.
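A small sketch of the Langmuir-type equilibrium that the binding data are described as following, q = q_max K C / (1 + K C), fitted through its linearized form C/q = C/q_max + 1/(K q_max); the values below are illustrative, not the paper's measurements.

    import numpy as np

    # Illustrative equilibrium data: free Cu(2+) concentration C (mM) and bound copper q (mmol/g).
    C = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
    q = np.array([0.42, 0.71, 1.10, 1.52, 1.86, 2.10])

    # Linearized Langmuir isotherm: C/q = C/q_max + 1/(K*q_max), a straight line in C.
    slope, intercept = np.polyfit(C, C / q, 1)
    q_max = 1.0 / slope
    K = slope / intercept

    print(f"q_max ~ {q_max:.2f} mmol/g, Langmuir constant K ~ {K:.2f} 1/mM")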