936 results for Shannon Sampling Theorem
Abstract:
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. Moreover, it has been the cornerstone of a significant mathematical literature on the topic of sampling theorems associated with differential and difference problems. In this work we provide, in a unified way, new and old generalizations of this result corresponding to various different settings; all these generalizations are illustrated with examples. All the different situations throughout the paper share a basic approach: the functions to be sampled are obtained by duality in a separable Hilbert space H through an H-valued kernel K defined on an appropriate domain.
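For orientation (an illustrative special case, not quoted from the abstract above): Kramer's theorem with the kernel K(x, t) = e^{ixt} on [-\pi, \pi] and sampling points t_n = n recovers the classical Shannon series,

f(t) = \int_{-\pi}^{\pi} F(x)\, e^{\mathrm{i} x t}\, dx
\quad\Longrightarrow\quad
f(t) = \sum_{n \in \mathbb{Z}} f(n)\, \frac{\sin \pi (t - n)}{\pi (t - n)},

valid for every F \in L^2[-\pi, \pi], i.e., for every f bandlimited to [-\pi, \pi].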
Abstract:
2000 Mathematics Subject Classification: 94A12, 94A20, 30D20, 41A05.
Abstract:
We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low and involves only simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. We also discuss some concrete practical applications of the sampling technique.
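As plain background for the level-crossing setting, and not the sub-Nyquist technique of the abstract (whose details are not given here), the sketch below estimates crossing times from uniform samples by linear interpolation; the function name and test signal are illustrative assumptions.

import numpy as np

def crossing_times_linear(t, x, level):
    # Naive baseline: locate sign changes of (x - level) between consecutive
    # uniform samples and linearly interpolate the crossing instant.
    s = x - level
    idx = np.where(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]
    frac = s[idx] / (s[idx] - s[idx + 1])
    return t[idx] + frac * (t[idx + 1] - t[idx])

# Illustrative usage on a simple bandlimited test signal.
t = np.linspace(0.0, 1.0, 2001)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
print(crossing_times_linear(t, x, level=0.25))

Such interpolation is only approximate for a bilevel (comparator) output, which is the gap that the finite-rate-of-innovation approach above is designed to close.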
Abstract:
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. In particular, when the involved kernel is analytic in the sampling parameter, it can be stated in an abstract setting of reproducing kernel Hilbert spaces of entire functions which includes the classical Shannon sampling theory as a particular case. This abstract setting allows us to obtain a sort of converse result and to characterize when the sampling formula associated with an analytic Kramer kernel can be expressed as a Lagrange-type interpolation series. On the other hand, the de Branges spaces of entire functions satisfy orthogonal sampling formulas which can be written as Lagrange-type interpolation series. In this work some links between all these ideas are established.
Abstract:
Mobile sensor networks have unique advantages compared with wireless sensor networks. Their mobility enables mobile sensors to flexibly reconfigure themselves to meet sensing requirements. In this dissertation, an adaptive sampling method for mobile sensor networks is presented. In consideration of sensing resource constraints, computing abilities, and onboard energy limitations, the adaptive sampling method follows a down-sampling scheme, which reduces the total number of measurements and lowers the sampling cost. Compressive sensing is a recently developed down-sampling method that uses a small number of randomly distributed measurements for signal reconstruction. However, original signals cannot be reconstructed from such condensed measurements in the manner prescribed by Shannon sampling theory. The measurements have to be processed in a sparse domain, and convex optimization methods must be applied to reconstruct the original signals. The restricted isometry property guarantees that signals can be recovered with little information loss. While compressive sensing can effectively lower the sampling cost, signal reconstruction remains a great research challenge. Compressive sensing always collects random measurements, whose information content cannot be determined a priori. If each measurement is instead optimized to be the most informative one, reconstruction performance can be much better. Based on these considerations, this dissertation focuses on an adaptive sampling approach that finds the most informative measurements in unknown environments and reconstructs the original signals. With mobile sensors, measurements are collected sequentially, giving the chance to optimize each of them individually. When mobile sensors are about to collect a new measurement from the surrounding environment, existing information is shared among the networked sensors so that each sensor has a global view of the entire environment. The shared information is analyzed in the Haar wavelet domain, in which most natural signals appear sparse, to infer a model of the environment. The most informative measurements can then be determined by optimizing the model parameters. As a result, all the measurements collected by the mobile sensor network are the most informative measurements given the existing information, and a perfect reconstruction can be expected. To present the adaptive sampling method, a series of research issues is addressed, including measurement evaluation and collection, mobile network establishment, data fusion, sensor motion, and signal reconstruction. A two-dimensional scalar field is reconstructed using the proposed method. Both single mobile sensors and mobile sensor networks are deployed in the environment, and the reconstruction performance of the two is compared. In addition, a particular mobile sensor, a quadrotor UAV, is developed so that the adaptive sampling method can be used in three-dimensional scenarios.
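A minimal sketch of the compressive-sensing reconstruction idea referenced above, random projections plus a sparsity-promoting solver; the problem sizes, the Gaussian measurement matrix, and the use of ISTA are illustrative assumptions and not the dissertation's adaptive method.

import numpy as np

rng = np.random.default_rng(0)

# k-sparse signal of length n observed through m << n random projections.
n, m, k = 256, 64, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Iterative soft-thresholding (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)              # gradient of the data-fit term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("residual norm:", np.linalg.norm(A @ x - y))
print("indices of the k largest entries:", np.sort(np.argsort(np.abs(x))[-k:]))

In the adaptive scheme described above, the measurement matrix would instead be built up row by row, each new measurement chosen to be maximally informative given the information already shared across the network.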
Abstract:
The classical approach to A/D conversion has been uniform sampling, and we get perfect reconstruction for bandlimited signals by satisfying the Nyquist sampling theorem. We propose a non-uniform sampling scheme based on level-crossing (LC) time information. We show stable reconstruction of bandpass signals with the correct scale factor, and hence a unique reconstruction, from only the non-uniform time information. For reconstruction from the level crossings, we make use of sparse-reconstruction-based optimization by constraining the bandpass signal to be sparse in its frequency content. While the literature resorts to an overdetermined system of equations, we use an underdetermined approach along with a sparse reconstruction formulation. We obtain a reconstruction SNR > 20 dB and perfect support recovery with probability close to 1 in the noiseless case, and with lower probability in the noisy case. Randomly picking level crossings from different levels, over the same limited signal duration and for the same amount of information, is seen to be advantageous for reconstruction.
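Written in a generic form (illustrative notation, not necessarily the authors' exact formulation), the sparse reconstruction step solves

\min_{c} \|c\|_1 \quad \text{subject to} \quad \sum_{m} c_m\, e^{\mathrm{i} 2\pi f_m t_k} = \ell_k, \qquad k = 1, \dots, K,

where t_k are the recorded level-crossing times, \ell_k the corresponding level values, and f_m a grid of candidate frequencies covering the bandpass region; the system is underdetermined whenever the number of crossings K is smaller than the number of candidate frequencies.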
Abstract:
The discretization size is limited by the sampling theorem, and the limit is one half of the wavelength of the highest frequency of the problem. However, one half of the wavelength is an ideal value. In general, the discretization size that can ensure the accuracy of the simulation is much smaller than this value in the traditional finite element method. The possible reason for this phenomenon is analyzed in this paper, and an efficient method is given to improve the simulation accuracy.
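For concreteness (numbers chosen here purely as an illustration, not taken from the paper), the sampling-theorem limit on the element size h is

h \le \frac{\lambda_{\min}}{2} = \frac{c}{2 f_{\max}},

so a free-space electromagnetic problem with f_max = 1 GHz has \lambda_{\min} = 0.3 m and a theoretical limit of h = 0.15 m, whereas traditional finite-element practice commonly calls for roughly ten elements per wavelength (h on the order of 0.03 m) before the simulated results become accurate.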
Abstract:
The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, I present ICA fundamentals, algorithms, and original applications to the separation of natural earthquake signals and of survey seismic signals. For signal representation on deterministic bases, the paper proposes regularized least-squares inversion methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient methods, and their applications to seismic deconvolution, Radon transformation, and related problems. The core content is a de-aliasing algorithm for reconstructing unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. Their common goal is to provide pre-processing and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from mixed signals, or extract the basic structure from the analyzed data. I survey the fundamentals, algorithms, and applications of ICA. By comparison with the KL transformation, I propose the concept of an independent components transformation (ICT). On the basis of the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. By analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, a first in the geophysical community, and implement the separation of noise from the seismic signal. Synthetic and real data examples show the applicability of ICA to seismic signal processing, and initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, so a more reasonable interpretation of subsurface discontinuities is obtained. The results show the promise of applying ICA to geophysical signal processing. In view of the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospect of applying ICA to it, with two possible solutions. The relationship of PCA, ICA, and the wavelet transform is stated. It is proved that reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution, which is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data. As a main procedure, seismic interpolation and missing-data reconstruction are necessary. First, I review seismic imaging methods in order to argue the critical effect of regularization. Reviewing seismic interpolation algorithms, I conclude that de-aliased reconstruction of unevenly sampled data is still a challenge. The fundamentals of seismic reconstruction are discussed first. Then the sparseness constraint on least-squares inversion and the preconditioned conjugate gradient solver are studied and implemented.
Choosing a constraint term with a Cauchy distribution, I program the PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transformation by PCG, in preparation for seismic data reconstruction. Regarding seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well on their own; however, they cannot be combined with each other. In this work, a novel Fourier-transform-based method and algorithm are proposed, which can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem in which an adaptive DFT-weighted norm regularization term is used. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solutions stable and rapidly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via LS weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation of evenly spaced gap traces, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real-data numerical examples show that the proposed method is valid, efficient, and applicable. The research is valuable for seismic data regularization and cross-well seismic surveying. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data evenly sampled and consistent with the velocity dataset. The methods of this work are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result is improved. The results show the method's effectiveness and practicality.
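In notation chosen here for illustration, the band-limited reconstruction described above roughly amounts to a regularized least-squares problem of the form

\min_{m} \; \|d - A m\|_2^2 + \mu\, \|W m\|_2^2,

where d collects the observed (unevenly sampled) traces, A is the sampling/Fourier synthesis operator, W is the adaptive DFT-domain weighting estimated from the unaliased low frequencies, and \mu balances data fit against the weighted norm; the resulting normal equations (A^{H}A + \mu W^{H}W)\, m = A^{H} d are what the preconditioned conjugate gradient solver iterates on.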
Abstract:
The classical Kramer sampling theorem, which provides a method for obtaining orthogonal sampling formulas, can be formulated in a more general nonorthogonal setting. In this setting, a challenging problem is to characterize the situations in which the obtained nonorthogonal sampling formulas can be expressed as Lagrange-type interpolation series. In this article a necessary and sufficient condition is given in terms of the zero-removing property. Roughly speaking, this property concerns the stability of the sampled functions under removal of a finite number of their zeros.
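For reference, a Lagrange-type interpolation series as mentioned above has the generic form (standard notation, not quoted from the article)

f(t) = \sum_{n} f(t_n)\, \frac{G(t)}{(t - t_n)\, G'(t_n)},

where G is an entire function vanishing exactly at the sampling points \{t_n\}; the zero-removing property then asks, roughly, that dividing a function of the space by (t - a), where a is one of its zeros, again yields a function of the space.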
Abstract:
The Biscayne Bay Benthic Sampling Program was divided into two phases. In Phase I, sixty sampling stations were established in Biscayne Bay (including Dumfoundling Bay and Card Sound) representing diverse habitats. The stations were visited in the wet season (late fall of 1981) and in the dry season (midwinter of 1982). At each station certain abiotic conditions were measured or estimated. These included depth, sources of freshwater inflow and pollution, bottom characteristics, current direction and speed, surface and bottom temperature, salinity, and dissolved oxygen; water clarity was estimated with a Secchi disk. Seagrass blades and macroalgae were counted in a 0.1-m² grid placed so as to best represent the bottom community within a 50-foot radius. Underwater 35-mm photographs were made of the bottom using flash apparatus. Benthic samples were collected using a petite Ponar dredge. These samples were washed through a 5-mm mesh screen, fixed in formalin in the field, and later sorted and identified by experts to a pre-agreed taxonomic level. During the wet-season sampling period, a nonquantitative one-meter-wide trawl of the epibenthic community was made. These samples were also washed, fixed, sorted, and identified. During the dry-season sampling period, sediment cores were collected at each station not located on bare rock. These cores were analyzed for sediment size and organic composition by personnel of the University of Miami. Data resulting from the sampling were entered into a computer. These data were subjected to cluster analyses, Shannon-Weaver diversity analysis, multiple regression analysis of variance and covariance, and factor analysis. In Phase II of the program, fifteen stations were selected from among the sixty of Phase I. These stations were sampled quarterly. At each quarter, five petite Ponar dredge samples were collected from each station. As in Phase I, observations and measurements, including seagrass blade counts, were made at each station. In Phase II, polychaete specimens collected were given to a separate contractor for analysis to the species level. These analyses included the mean, standard deviation, coefficient of dispersion, percent of total, and numeric rank for each organism in each station, as well as the number of species, Shannon-Weaver taxa diversity, and dominance (the complement of Simpson's Index) for each station. Multiple regression analysis of variance and covariance, and factor analysis were applied to the data to determine the effect of the abiotic factors measured at each station. (PDF contains 96 pages)
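For reference, the diversity statistics named above are computed from the proportional abundances p_i of the taxa in a sample (standard definitions, not quoted from the report):

H' = -\sum_i p_i \ln p_i \quad \text{(Shannon-Weaver diversity)}, \qquad
\lambda = \sum_i p_i^2, \quad 1 - \lambda \ \text{(complement of Simpson's index)}.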
Abstract:
PURPOSE: Currently, in forensic medicine, cross-sectional imaging is gaining recognition and wide use as a non-invasive examination approach. Today, computed tomography (CT) and magnetic resonance imaging, as available for patients, are unable to provide tissue information at the cellular level in a non-invasive manner; moreover, diatom detection and DNA, bacteriological, chemical-toxicological, and other specific tissue analyses are impossible using radiology. We hypothesised that post-mortem minimally invasive tissue sampling using needle biopsies under CT guidance might significantly enhance the potential of virtual autopsy. The purpose of this study was to test the use of a clinically approved biopsy needle for minimally invasive post-mortem sampling of tissue specimens under CT guidance. MATERIAL AND METHODS: ACN III biopsy core needles (14 gauge x 160 mm) with an automatic pistol device were used on three bodies dedicated to research from the local anatomical institute. Tissue probes from the brain, heart, lung, liver, spleen, kidney and muscle tissue were obtained under CT fluoroscopy. RESULTS: CT fluoroscopy enabled accurate placement of the needle within the organs and tissues. The needles allowed sampling of tissue probes with a mean width of 1.7 mm (range 1.2-2 mm) and a maximal length of 20 mm at all locations. The obtained tissue specimens were of sufficient size and adequate quality for histological analysis. CONCLUSION: Our results indicate that, similar to the clinical experience but in many more organs, the tissue specimens obtained using the clinically approved biopsy needle are of sufficient size and adequate quality for a histological examination. We suggest that post-mortem biopsy using the ACN III needle under CT guidance may become a reliable method for targeted sampling of tissue probes from the body.