916 results for Sampling schemes


Relevance:

20.00%

Publisher:

Abstract:

This dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, I present the fundamentals and algorithms of independent component analysis (ICA) and its original applications to the separation of natural earthquake signals and of survey seismic signals. For representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient (PCG) methods, and their applications to seismic deconvolution, the Radon transform, etc. The core contribution is a de-aliasing algorithm for reconstructing unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. Both aim to provide pre-processing and post-processing methods for seismic pre-stack depth migration. ICA can separate the original signals from their mixtures, or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms, and applications of ICA. Comparing it with the Karhunen-Loève (KL) transform, I propose the concept of an independent component transform (ICT). Based on the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. By analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, a first in the geophysical community, and implement the separation of noise from seismic signal.
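The negentropy-based FastICA procedure mentioned above (whitening via the covariance matrix, a tanh fixed-point update, symmetric decorrelation) can be sketched as follows; this is a minimal illustration, not the dissertation's actual implementation, and the demo signals are made up:

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-12, seed=0):
    """Symmetric FastICA with the tanh (negentropy-based) nonlinearity.

    X : (n_components, n_samples) array of mixed signals.
    Returns (S_hat, W): estimated sources (up to permutation, sign and
    scale) and the unmixing matrix for the whitened data."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten using the covariance matrix (eigen-decomposition).
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n, m = Z.shape
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        # Fixed-point update maximizing a negentropy approximation.
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W, via SVD.
        U, _, Vt = np.linalg.svd(W_new)
        W_new = U @ Vt
        if np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < tol:
            W = W_new
            break
        W = W_new
    return W @ Z, W

# Demo: unmix a sine and a spiky (super-Gaussian) source.
rng = np.random.default_rng(1)
t = np.arange(5000)
S = np.vstack([np.sin(0.05 * t), rng.laplace(size=t.size)])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S   # unknown mixing
S_hat, _ = fastica(X)
```

Each recovered component should correlate strongly with one of the original sources, up to sign and ordering.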
Synthetic and real data examples show that ICA is usable for seismic signal processing, and promising initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities. The results show the promise of ICA for geophysical signal processing. Exploiting the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA, and the wavelet transform is established, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an oversampled wavelet transform is proposed to enhance seismic data resolution, and is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, in which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods in order to argue the critical role of regularization. Reviewing seismic interpolation algorithms, I conclude that de-aliased reconstruction of unevenly sampled data is still a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion and a preconditioned conjugate gradient (PCG) solver are studied and implemented. Choosing a constraint term with a Cauchy distribution, I program the PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well on their own, but the two could not previously be combined.
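A preconditioned conjugate gradient solver of the kind used for these inversions can be sketched as below. The tiny deconvolution demo, the wavelet, and the Jacobi preconditioner are illustrative choices only; a Cauchy-distributed constraint would replace the plain damping term with a model-dependent diagonal inside an iteratively reweighted loop:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for symmetric positive-definite A.

    M_inv is the preconditioner (here a Jacobi/diagonal inverse)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Damped-deconvolution demo: G convolves with a short wavelet, and we
# solve the normal equations (G^T G + lam*I) m = G^T d.
n = 60
w = np.array([0.2, 1.0, 0.2])                  # toy seismic wavelet
G = np.zeros((n + len(w) - 1, n))
for i in range(n):
    G[i:i + len(w), i] = w
m_true = np.zeros(n)
m_true[[10, 30, 45]] = [1.0, -0.7, 0.5]        # sparse reflectivity
d = G @ m_true
lam = 1e-6
A = G.T @ G + lam * np.eye(n)
M_inv = np.diag(1.0 / np.diag(A))              # Jacobi preconditioner
m = pcg(A, G.T @ d, M_inv)
```

In this noise-free setting the damped solution recovers the spike series almost exactly; the preconditioner's job is to speed convergence, not change the answer.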
This dissertation proposes a novel Fourier-transform-based method and algorithm that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved by a preconditioned conjugate gradient method, which makes the solution stable and quickly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data numerical examples show that the proposed method is valid, efficient, and practical. The research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data evenly sampled and consistent with the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate shot gathers instead of simply inserting zero traces, so the migration aperture is enlarged and the migration result is improved. The results demonstrate the method's effectiveness and practicality.
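The minimum-norm, band-limited reconstruction idea can be illustrated on a toy 1-D problem. In this sketch the direct damped solve stands in for the PCG iteration, and the fixed low-frequency band plays the role of the DFT weighting (out-of-band coefficients are weighted out entirely); none of the numbers come from the dissertation:

```python
import numpy as np

# A band-limited signal on a regular grid of N points is observed only at
# irregular positions idx; recover it by least-squares inversion for its
# in-band DFT coefficients (the minimum-norm choice pins the rest to zero).
N = 128
t = np.arange(N)
x_true = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.cos(2 * np.pi * 5 * t / N)

rng = np.random.default_rng(1)
idx = np.sort(rng.choice(N, size=50, replace=False))   # uneven sampling
d = x_true[idx]

band = np.r_[0:8, N - 7:N]                             # retained DFT bins
F = lambda rows: np.exp(2j * np.pi * np.outer(rows, band) / N)
A = F(idx)                                             # sampling ∘ inverse DFT
lam = 1e-8                                             # small damping
coef = np.linalg.solve(A.conj().T @ A + lam * np.eye(len(band)),
                       A.conj().T @ d)
x_rec = (F(t) @ coef).real                             # full regular grid
```

Because the true signal lies inside the retained band and the irregular samples over-determine the 15 unknown coefficients, the reconstruction is essentially exact.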

Abstract:

Different approaches to visual object recognition can be divided into two general classes: model-based vs. non-model-based schemes. In this paper we establish limitations on the class of non-model-based recognition schemes. We show that every function that is invariant to the viewing position of all objects is the trivial (constant) function. It follows that every consistent recognition scheme for recognizing all 3-D objects must in general be model-based. The result is extended to recognition schemes that are imperfect (allowed to make mistakes) or restricted to certain classes of objects.

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is the estimation of local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis that gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at very low energy cost and in constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
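The contention-to-density relationship can be illustrated with a toy slotted model. This is only in the spirit of DIP: the slot transmission probability p and the inversion formula below are illustrative assumptions, not the protocol's actual analysis:

```python
import numpy as np

# Each of n (unknown) neighbors transmits in a slot independently with a
# known probability p; the listening node only observes whether each slot
# was busy. Inverting the occupancy model q = 1 - (1-p)^n from the
# measured busy fraction estimates n with no identifier exchange.
rng = np.random.default_rng(0)
n_true, p, slots = 30, 0.05, 4000
busy = (rng.uniform(size=(slots, n_true)) < p).any(axis=1)
q_hat = busy.mean()                          # measured contention level
n_hat = np.log(1 - q_hat) / np.log(1 - p)    # inverted occupancy model
```

With a few thousand observed slots the estimate lands within a node or two of the true neighborhood size, at the cost of listening only.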

Abstract:

Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
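The rejection step can be illustrated in simulation. The code below is a centralized toy, not the distributed protocol: Voronoi cell areas are approximated on a grid (standing in for the distributed Voronoi computation), a uniform random point routed to its nearest sensor stands in for geographic routing, and von Neumann rejection flattens the resulting area-proportional bias:

```python
import numpy as np

rng = np.random.default_rng(7)
nodes = rng.uniform(0, 1, size=(20, 2))   # sensor positions, unit square

# Approximate each node's Voronoi cell area on a fine grid.
g = np.linspace(0, 1, 200)
gx, gy = np.meshgrid(g, g)
pts = np.column_stack([gx.ravel(), gy.ravel()])
nearest = np.argmin(((pts[:, None, :] - nodes[None]) ** 2).sum(-1), axis=1)
area = np.bincount(nearest, minlength=len(nodes)) / len(pts)

# A uniform random point delivered to its nearest sensor selects node i
# with probability ~ its Voronoi area; accepting with probability
# min(area)/area_i (von Neumann rejection) restores near-uniformity.
counts = np.zeros(len(nodes))
accepted = 0
while accepted < 5000:
    q = rng.uniform(0, 1, 2)                      # random target point
    i = int(np.argmin(((nodes - q) ** 2).sum(-1)))  # nearest sensor
    if rng.uniform() < area.min() / area[i]:      # rejection step
        counts[i] += 1
        accepted += 1
freq = counts / counts.sum()
```

After rejection, every node is selected with nearly equal frequency, despite the uneven cell areas; the residual deviation comes from the grid approximation of the areas and sampling noise.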

Abstract:

A novel technique to detect and localize periodic movements in video is presented. The distinctive feature of the technique is that it requires neither feature tracking nor object segmentation. Intensity patterns along linear sample paths in space-time are used to estimate the period of object motion in a given sequence of frames. Sample paths are obtained by connecting (in space-time) sample points from regions of high motion magnitude in the first and last frames. Oscillations in intensity values are induced at time instants when an object intersects the sample path. The locations of peaks in intensity are determined by parameters of both the cyclic object motion and the orientation of the sample path with respect to the object motion. The information about peaks is used in a least squares framework to obtain an initial estimate of these parameters. The estimate is further refined using the full intensity profile. The best estimate for the period of cyclic object motion is obtained by looking for consensus among estimates from many sample paths. The proposed technique is evaluated with synthetic videos where ground-truth is known, and with American Sign Language videos where the goal is to detect periodic hand motions.
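The period-from-intensity idea can be sketched with synthetic profiles. Here autocorrelation is a simple stand-in for the paper's least-squares peak model, and a median over several noisy profiles plays the role of consensus among sample paths; all signals are synthetic:

```python
import numpy as np

def estimate_period(signal, min_lag=5):
    """Estimate the dominant period of a 1-D intensity profile from the
    strongest autocorrelation peak past a minimum lag."""
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]   # lags 0..len-1
    return min_lag + int(np.argmax(ac[min_lag:len(s) // 2]))

# Nine noisy "sample path" intensity profiles with a common period but
# random phases, as an object would induce at different path orientations.
rng = np.random.default_rng(3)
t = np.arange(400)
true_period = 25
paths = [np.sin(2 * np.pi * t / true_period + ph)
         + 0.3 * rng.standard_normal(t.size)
         for ph in rng.uniform(0, 2 * np.pi, 9)]
est = int(np.median([estimate_period(p) for p in paths]))   # consensus
```

The consensus estimate is insensitive to the per-path phase and to moderate noise, which is the point of pooling many paths.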

Abstract:

Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute studies have led some authors to conclude that the router graph of the Internet is a scale-free graph, or more generally a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. In this paper we argue that the evidence to date for this conclusion is at best insufficient. We show that graphs appearing to have power-law degree distributions can arise surprisingly easily, when sampling graphs whose true degree distribution is not at all like a power-law. For example, given a classical Erdős–Rényi sparse random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can easily appear to show a degree distribution remarkably like a power-law. We explore the reasons why this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to distinguish measurements taken from Erdős–Rényi graphs from those taken from power-law random graphs. When we apply this distinction to a number of well-known datasets, we find that the evidence for sampling bias in these datasets is strong.
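The reported effect is easy to reproduce in a few lines. The graph size, source/destination counts, and the skewness metric below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np
from collections import deque, defaultdict

rng = np.random.default_rng(5)
n, p = 400, 0.025                     # Erdős–Rényi G(n, p), mean degree ~10
mask = np.triu(rng.random((n, n)) < p, k=1)
adj = defaultdict(list)
for i, j in zip(*np.nonzero(mask)):
    adj[int(i)].append(int(j))
    adj[int(j)].append(int(i))

def bfs_parents(src):
    """Shortest-path (BFS) tree from src, as a child -> parent map."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

# Union of shortest paths from a few sources to many destinations,
# mimicking a traceroute-style measurement study.
sampled = defaultdict(set)
for s in (0, 1, 2):
    parent = bfs_parents(s)
    for d in range(100, 300):
        u = d
        while u in parent and parent[u] is not None:
            sampled[u].add(parent[u])
            sampled[parent[u]].add(u)
            u = parent[u]

true_deg = np.array([len(adj[i]) for i in range(n)])
samp_deg = np.array([len(nbrs) for nbrs in sampled.values()])
# The sampled subgraph is far more skewed than the true binomial degrees:
skew_true = true_deg.max() / true_deg.mean()
skew_samp = samp_deg.max() / samp_deg.mean()
```

Even though the underlying degrees are tightly concentrated (binomial), the shortest-path union is dominated by chains and leaves with a few heavily reused hubs near the sources, producing the deceptive heavy-tailed appearance.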

Abstract:

In this paper, we propose generalized sampling approaches for measuring a multi-dimensional object using a compact compound-eye imaging system called thin observation module by bound optics (TOMBO). This paper shows the proposed system model, physical examples, and simulations to verify TOMBO imaging using generalized sampling. In the system, an object is modulated and multiplied by a weight distribution with physical coding, and the coded optical signal is integrated onto a detector array. A numerical estimation algorithm employing a sparsity constraint is used for object reconstruction.
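The measurement model (object multiplied by coded weights and integrated on a detector) and a sparsity-constrained estimator can be sketched generically. ISTA is used here as one standard sparse solver and the random Gaussian code is an assumption of the sketch, not necessarily the paper's algorithm or TOMBO's actual coding:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 100, 40, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # sparse object
A = rng.standard_normal((m, n)) / np.sqrt(m)  # coded weight distribution
y = A @ x                                     # integrated detector reading

# ISTA: solve min 0.5*||A v - y||^2 + lam*||v||_1 by a gradient step
# followed by soft thresholding (the sparsity constraint).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad
xh = np.zeros(n)
for _ in range(3000):
    xh = xh - A.T @ (A @ xh - y) / L
    xh = np.sign(xh) * np.maximum(np.abs(xh) - lam / L, 0.0)
```

With many fewer detector readings than object samples (40 vs. 100), the sparsity constraint is what makes the inversion well posed.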

Abstract:

Pianists of the twenty-first century have a wealth of repertoire at their fingertips. They busily study music from the different periods -- Baroque, Classical, Romantic, and some of the twentieth century -- trying to understand the culture and performance practice of the time and the stylistic traits of each composer so they can communicate their music effectively. Unfortunately, this leaves little time to notice the composers who are writing music today. Whether this neglect proceeds from lack of time or lack of curiosity, I feel we should be connected to music that was written in our own lifetime, when we already understand the culture and have knowledge of the different styles that preceded us. Therefore, in an attempt to promote today’s composers, I have selected piano music written during my lifetime, to show that contemporary music is effective and worthwhile and deserves as much attention as the music that preceded it. This dissertation showcases piano music composed from 1978 to 2005. A point of departure in selecting the pieces for this recording project is to represent the major genres in the piano repertoire in order to show a variety of styles, moods, lengths, and difficulties. Therefore, from these recordings, there is enough variety to successfully program a complete contemporary recital from the selected works, and there is enough variety to meet the demands of pianists with different skill levels and recital programming needs. Since we live in an increasingly global society, music from all parts of the world is included to offer a fair representation of music being composed everywhere. Half of the music in this project comes from the United States. The other half comes from Australia, Japan, Russia, and Argentina. The composers represented in these recordings are: Lowell Liebermann, Richard Danielpour, Frederic Rzewski, Judith Lang Zaimont, Samuel Adler, Carl Vine, Nikolai Kapustin, Akira Miyoshi and Osvaldo Golijov. 
With the exception of one piano concerto, all the works are for solo piano. This recording project dissertation consists of two 60-minute CDs of selected repertoire, accompanied by a substantial document of in-depth program notes. The recordings are documented on compact discs that are housed within the University of Maryland Library System.

Abstract:

Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world's oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities and their embedded functional traits.

Abstract:

X-ray mammography has been the gold standard for breast imaging for decades, despite the significant limitations posed by two-dimensional (2D) image acquisition. Difficulty in diagnosing lesions close to the chest wall and axilla, a high amount of structural overlap, and patient discomfort due to compression are only some of these limitations. To overcome these drawbacks, three-dimensional (3D) breast imaging modalities have been developed, including dual-modality single photon emission computed tomography (SPECT) and computed tomography (CT) systems. This thesis focuses on the development and integration of the next generation of such a device for dedicated breast imaging. The goals of this dissertation work are to: [1] understand and characterize any effects of fully 3-D trajectories on reconstructed image scatter correction, absorbed dose and Hounsfield Unit accuracy, and [2] design, develop and implement the fully flexible, third-generation hybrid SPECT-CT system capable of traversing complex 3D orbits about a pendant breast volume, without interference between the subsystems. Such a system would overcome artifacts resulting from incompletely sampled divergent cone beam imaging schemes and allow imaging closer to the chest wall, which other systems currently under research and development elsewhere cannot achieve.

The dependence of x-ray scatter radiation on object shape, size, material composition and the CT acquisition trajectory, was investigated with a well-established beam stop array (BSA) scatter correction method. While the 2D scatter to primary ratio (SPR) was the main metric used to characterize total system scatter, a new metric called ‘normalized scatter contribution’ was developed to compare the results of scatter correction on 3D reconstructed volumes. Scatter estimation studies were undertaken with a sinusoidal saddle (±15° polar tilt) orbit and a traditional circular (AZOR) orbit. Clinical studies to acquire data for scatter correction were used to evaluate the 2D SPR on a small set of patients scanned with the AZOR orbit. Clinical SPR results showed clear dependence of scatter on breast composition and glandular tissue distribution, otherwise consistent with the overall phantom-based size and density measurements. Additionally, SPR dependence was also observed on the acquisition trajectory where 2D scatter increased with an increase in the polar tilt angle of the system.
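The SPR metric itself is simple to state in code. In a beam-stop-array measurement, the detector behind an opaque stop sees scatter only, while an adjacent open region sees primary plus scatter; the readings below are made-up numbers purely to show the arithmetic:

```python
import numpy as np

# Toy beam-stop-array (BSA) scatter estimate, arbitrary units.
total = np.array([100.0, 95.0, 110.0, 105.0])    # open-field readings
behind_stop = np.array([20.0, 19.0, 22.0, 21.0]) # readings behind stops
primary = total - behind_stop                    # primary = total - scatter
spr = behind_stop.mean() / primary.mean()        # 2-D scatter-to-primary ratio
```

A larger SPR means a larger fraction of the detected signal is scatter, which is why the ratio tracks breast size, density, and (as reported above) the polar tilt of the acquisition orbit.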

The dose delivered by any imaging system is of primary importance from the patient’s point of view, and therefore trajectory related differences in the dose distribution in a target volume were evaluated. Monte Carlo simulations as well as physical measurements using radiochromic film were undertaken using saddle and AZOR orbits. Results illustrated that both orbits deliver comparable dose to the target volume, and only slightly differ in distribution within the volume. Simulations and measurements showed similar results, and all measured dose values were within the standard screening mammography-specific, 6 mGy dose limit, which is used as a benchmark for dose comparisons.

Hounsfield Units (HU) are used clinically in differentiating tissue types in a reconstructed CT image, and therefore the HU accuracy of a system is very important, especially when using non-traditional trajectories. Uniform phantoms filled with various uniform-density fluids were used to investigate differences in HU accuracy between saddle and AZOR orbits. Results illustrate the considerably better performance of the saddle orbit, especially close to the chest and nipple regions of what would clinically be a pendant breast volume. The AZOR orbit causes shading artifacts near the nipple, due to insufficient sampling, rendering a major portion of the scanned phantom unusable, whereas the saddle orbit performs exceptionally well and provides a tighter distribution of HU values in reconstructed volumes.

Finally, the third-generation, fully-suspended SPECT-CT system was designed and developed in our lab. A novel mechanical method using a linear motor was developed for tilting the CT system. A new x-ray source and a custom-made 40 × 30 cm² detector were integrated onto this system. The SPECT system was nested in the center of the gantry, orthogonal to the CT source-detector pair. The SPECT system tilts on a goniometer, and the newly developed CT tilting mechanism allows a maximum ±15° polar tilt of the CT system. The entire gantry is mounted on a rotation stage, allowing complex arbitrary trajectories for each subsystem, without interference between them, while maintaining a common field of view. This hybrid system shows potential for clinical use as a diagnostic tool for dedicated breast imaging.

Abstract:

There is no abstract for this record.

Abstract:

Semi-Lagrangian finite volume schemes for the numerical approximation of linear advection equations are presented. These schemes are constructed so that the conservation properties are preserved by the numerical approximation. This is achieved using an interpolation procedure based on area-weighting. Numerical results are presented illustrating some of the features of these schemes.
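A one-dimensional sketch of a conservative semi-Lagrangian finite volume step with area-weighting is given below (piecewise-constant reconstruction on a periodic grid; all parameters are illustrative and this is not the paper's scheme verbatim):

```python
import numpy as np

# Conservative semi-Lagrangian advection of cell averages q at speed c:
# each cell's new value is the integral of the old field over its
# departure interval, computed by area (length) weighting of the
# piecewise-constant reconstruction. This transfers mass exactly.
N, c, dt, dx = 100, 0.3, 0.02, 0.01     # CFL = c*dt/dx = 0.6
x = (np.arange(N) + 0.5) * dx           # cell centers, periodic domain [0,1)
q = np.exp(-((x - 0.3) / 0.05) ** 2)    # initial profile

def step(q):
    shift = c * dt / dx                  # departure displacement in cells
    k = int(np.floor(shift))
    f = shift - k                        # fractional overlap
    # The departure interval of cell i overlaps cells i-k-1 and i-k with
    # lengths f*dx and (1-f)*dx: exact area weighting for piecewise-
    # constant data, so the cell-average sum (total mass) is preserved.
    return f * np.roll(q, k + 1) + (1 - f) * np.roll(q, k)

mass0 = q.sum() * dx
for _ in range(200):
    q = step(q)
```

Because each update is a convex redistribution of the old cell averages, total mass is conserved to rounding error while the profile is transported (here 200 steps move the peak from x = 0.3 to x = 0.5 after periodic wrap-around), at the cost of the diffusion inherent in piecewise-constant reconstruction.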