971 results for scattered data interpolation
Abstract:
Interpolation techniques for spatial data have been applied frequently in various fields of geosciences. Although most conventional interpolation methods assume that first- and second-order statistics suffice to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions, and the interpolated value at the unsampled location is then determined as an expectation under this CPDF. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for the interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data, and this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
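To illustrate the kind of statistic involved, the sketch below estimates a single third-order spatial moment from scattered samples by matching neighbours within a tolerance of each lag vector. The template lags `h1`, `h2`, the tolerance, and the brute-force matching are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np
from scipy.spatial import cKDTree

def third_order_moment(coords, values, h1, h2, tol=0.25):
    """Estimate E[Z(u) * Z(u+h1) * Z(u+h2)] from scattered data by
    matching neighbours that lie within `tol` of each lag vector."""
    tree = cKDTree(coords)
    total, count = 0.0, 0
    for i, u in enumerate(coords):
        for a in tree.query_ball_point(u + h1, tol):
            for b in tree.query_ball_point(u + h2, tol):
                total += values[i] * values[a] * values[b]
                count += 1
    return total / count if count else float("nan")

# Toy data: a 6 x 6 unit grid with constant value 1, so the moment is 1.
xy = np.array([(x, y) for x in range(6) for y in range(6)], dtype=float)
z = np.ones(len(xy))
m3 = third_order_moment(xy, z, h1=np.array([1.0, 0.0]), h2=np.array([0.0, 1.0]))
```

On real scattered data, the tolerance and lag templates would need to be chosen against the sampling density; this brute-force pairing is only meant to show the quantity being estimated.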
Abstract:
Accurate measurement of intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion permit only limited measurement in vivo. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Intervertebral motion for the cervical segments was estimated by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS fitting error was about 0.2 degrees for rotation and 0.2 mm for displacement. © 2013 Paolo Bifulco et al.
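The smoothing-spline step can be sketched as follows; the frame rate, motion waveform, and noise level are assumed for illustration and are not taken from the study.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 120)                   # 4 s of fluoroscopy, 30 fps (assumed)
true_angle = 10.0 * np.sin(0.5 * np.pi * t)      # idealized intervertebral rotation (deg)
measured = true_angle + rng.normal(0.0, 0.3, t.size)  # tracking noise (assumed 0.3 deg)

# Smoothing spline: `s` bounds the residual sum of squares, trading fidelity
# against smoothness; s = n * sigma^2 is a common heuristic choice.
spline = UnivariateSpline(t, measured, s=t.size * 0.3 ** 2)
smoothed = spline(t)
rms_error = np.sqrt(np.mean((smoothed - true_angle) ** 2))
```

The fitted spline also gives a time-continuous representation, so derived quantities such as angular velocity can be evaluated between frames.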
Abstract:
The central aim of the research undertaken in this PhD thesis is to develop a model for simulating water droplet movement on a leaf surface and to compare the model's behaviour with experimental observations. A series of five papers is presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of a droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially, a laser scanner is used to capture the surface characteristics of two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed by numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces, including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface.
Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
Abstract:
The foliage of a plant performs vital functions. As such, leaf models need to be developed for modelling the plant architecture from a set of scattered data captured using a scanning device. The leaf model can be used for purely visual purposes or as part of a further model, such as a fluid movement model or a biological process. For these reasons, an accurate mathematical representation of the surface and boundary is required. This paper compares three approaches for fitting a continuously differentiable surface through a set of scanned data points from a leaf surface against a technique already used for reconstructing leaf surfaces. The techniques considered are discrete smoothing D2-splines [R. Arcangeli, M. C. Lopez de Silanes, and J. J. Torrens, Multidimensional Minimising Splines, Springer, 2004], the thin plate spline finite element smoother [S. Roberts, M. Hegland, and I. Altas, Approximation of a Thin Plate Spline Smoother using Continuous Piecewise Polynomial Functions, SIAM, 1 (2003), pp. 208--234] and the radial basis function Clough-Tocher method [M. Oqielat, I. Turner, and J. Belward, A hybrid Clough-Tocher method for surface fitting with application to leaf data, Appl. Math. Modelling, 33 (2009), pp. 2582--2595]. Numerical results show that discrete smoothing D2-splines produce reconstructed leaf surfaces which better represent the original physical leaf.
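Two of the ingredients named above, a C1 Clough-Tocher interpolant and an RBF interpolant, have SciPy implementations and can be compared on synthetic scattered data. The Gaussian test surface and point count are illustrative assumptions; this is not the papers' leaf data or their exact formulations.

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator, RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(400, 2))        # scattered "scan" sites
z = np.exp(-(pts[:, 0] ** 2 + pts[:, 1] ** 2))     # stand-in for leaf height

ct = CloughTocher2DInterpolator(pts, z)            # C1 piecewise-cubic interpolant
rbf = RBFInterpolator(pts, z, kernel="thin_plate_spline")

# Evaluate both on a probe grid well inside the convex hull and compare errors.
g = np.linspace(-0.7, 0.7, 25)
gx, gy = np.meshgrid(g, g)
probe = np.column_stack([gx.ravel(), gy.ravel()])
truth = np.exp(-(probe[:, 0] ** 2 + probe[:, 1] ** 2))
err_ct = np.max(np.abs(ct(probe) - truth))
err_rbf = np.max(np.abs(rbf(probe) - truth))
```

Note the practical difference: the Clough-Tocher interpolant is local (Delaunay-based) and undefined outside the convex hull, while the RBF interpolant is global and extrapolates, which matters near a leaf boundary.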
Abstract:
Ray tracing is a fast and effective method for the approximate calculation of seismic numerical simulations, with important theoretical and practical value in seismic simulation, inversion, migration, and imaging. Derived from seismic theory according to geometrical seismics, it assumes that, under the high-frequency asymptotic approximation, the main energy of the seismic wavefield propagates along ray paths. The calculation of ray paths and traveltimes is one of the key steps in seismic simulation, inversion, migration, and imaging. By integrating a triangular-grid layout on the wavefront with the wavefront-reconstruction ray tracing method, this thesis puts forward a wavefront-reconstruction ray tracing method based on triangular grids on the wavefront, achieving accurate and fast calculation of ray paths and traveltimes. The method produces a stable and reasonable ray distribution and overcomes the shadow-zone problems of conventional ray tracing methods. The triangular-grid layout on the wavefront keeps all the triangular grids stable and makes grid subdivision and the interpolation of new rays convenient. This reduces the number of grids and the memory required, improving computational efficiency, and it enhances accuracy through an accurate and effective description and subdivision of the wavefront. Ray-traced traveltime tables, which have the character of 2-D or 3-D scattered data, contain a great number of data points in seismic simulation, inversion, migration, and imaging. The traveltime table file must therefore be read frequently, and computational efficiency is very low, so a reasonable traveltime-table compression is necessary. This thesis proposes a surface-fitting and scattered-data-compression method based on B-spline functions and applies it to 2-D and 3-D traveltime table compression.
To compress a 2-D (3-D) traveltime table, we first construct the smallest rectangular (cuboidal) region with regular grids that covers all the traveltime data points, based on their coordinate range in the 2-D surface (3-D space). The values at the finite set of regular grid nodes, which are stored in memory, are then calculated by the least squares method. The traveltime table can be decompressed when necessary by linear interpolation of the 2-D (3-D) B-spline function. In this calculation, the coefficient matrix is stored in sparse form, and the linear system of equations is solved by LU decomposition based on the multifrontal method, exploiting the sparse character of the least squares matrix. The method has been applied successfully to several models, and the cubic B-spline proves to be the best basis function for surface fitting: it makes the constructed surface smooth and gives stable and effective compression with high approximation accuracy on regular grids. By constructing reasonable regular grids to ensure the efficiency and accuracy of compression and surface fitting, we achieve the aim of traveltime table compression, which greatly improves computational efficiency in seismic simulation, inversion, migration, and imaging.
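A rough analogue of the fit-then-evaluate compression scheme is SciPy's least-squares bicubic B-spline fit to scattered data: only the coefficient grid is kept, and "decompression" is spline evaluation. The smooth stand-in field and the smoothing factor are illustrative assumptions, not the thesis's solver.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(2)
n = 2000
x, y = rng.uniform(0.0, 10.0, n), rng.uniform(0.0, 10.0, n)
t = 0.1 * (x + y) + 0.01 * x * y        # stand-in for a smooth traveltime field (s)

# Least-squares cubic B-spline fit: the coefficient grid (spl.get_coeffs())
# is the compressed representation of the scattered table.
spl = SmoothBivariateSpline(x, y, t, kx=3, ky=3, s=n * 1e-6)
t_rec = spl.ev(x, y)                    # "decompression": evaluate the spline
max_err = np.max(np.abs(t_rec - t))
```

Real traveltime fields are far less polite than this stand-in, so knot density (and hence the compression ratio) would have to be tuned against the required accuracy.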
Abstract:
LiDAR (Light Detection and Ranging) technology, based on scanning the terrain with an airborne laser telemeter, allows the construction of Digital Surface Models (DSM) by simple interpolation, as well as Digital Terrain Models (DTM) by identifying and removing the objects present on the terrain (buildings, bridges or trees). The Geomatics Laboratory of the Politecnico di Milano - Como Campus - developed a LiDAR data filtering algorithm based on interpolation with bilinear and bicubic splines with Tychonov regularization in a least squares approximation. However, in many cases more refined and complex models are still necessary, in which differentiating between buildings and vegetation becomes mandatory. This may be the case for some hydrological risk prevention models, where vegetation is not needed, or for the three-dimensional modelling of urban centres, where vegetation is a problematic factor. (...)
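A minimal sketch of this kind of regularized least-squares spline fit, restricted to bilinear splines on a coarse grid; the grid size, regularization weight, and planar test surface are assumptions, and this is certainly not the laboratory's algorithm.

```python
import numpy as np
from scipy.sparse import lil_matrix, identity
from scipy.sparse.linalg import spsolve

def fit_bilinear_grid(x, y, z, nx, ny, lam=1e-6):
    """Least squares fit of a bilinear spline on a regular nx x ny grid to
    scattered points (x, y, z), with Tikhonov regularization lam * ||c||^2."""
    gx = (nx - 1) * (x - x.min()) / np.ptp(x)     # map points to grid coords
    gy = (ny - 1) * (y - y.min()) / np.ptp(y)
    i = np.clip(gx.astype(int), 0, nx - 2)
    j = np.clip(gy.astype(int), 0, ny - 2)
    u, v = gx - i, gy - j
    A = lil_matrix((len(x), nx * ny))
    for k in range(len(x)):                        # 4 bilinear hat weights per point
        A[k, i[k] * ny + j[k]] = (1 - u[k]) * (1 - v[k])
        A[k, (i[k] + 1) * ny + j[k]] = u[k] * (1 - v[k])
        A[k, i[k] * ny + j[k] + 1] = (1 - u[k]) * v[k]
        A[k, (i[k] + 1) * ny + j[k] + 1] = u[k] * v[k]
    A = A.tocsr()
    # Tikhonov-regularized normal equations: (A^T A + lam I) c = A^T z
    return spsolve((A.T @ A + lam * identity(nx * ny)).tocsc(), A.T @ z)

rng = np.random.default_rng(3)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)
z = 0.05 * x + 0.02 * y + 200.0          # a planar "ground" surface (m, assumed)
c = fit_bilinear_grid(x, y, z, nx=8, ny=8)
```

The resulting coefficients `c` are the grid-node heights; filtering (separating ground from buildings and trees) would then operate on the residuals between the raw points and this smooth surface.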
Abstract:
In recent years, the use of every type of Digital Elevation Model has increased. LiDAR (Light Detection and Ranging) technology, based on scanning the territory by airborne laser telemeters, allows the construction of Digital Surface Models (DSM) in an easy way, by simple data interpolation.
Abstract:
Modelling droplet movement on leaf surfaces is an important component in understanding how water, pesticide or nutrient is absorbed through the leaf surface. A simple mathematical model is proposed in this paper for generating a realistic, or natural looking trajectory of a water droplet traversing a virtual leaf surface. The virtual surface is comprised of a triangular mesh structure over which a hybrid Clough-Tocher seamed element interpolant is constructed from real-life scattered data captured by a laser scanner. The motion of the droplet is assumed to be affected by gravitational, frictional and surface resistance forces and the innovation of our approach is the use of thin-film theory to develop a stopping criterion for the droplet as it moves on the surface. The droplet model is verified and calibrated using experimental measurement; the results are promising and appear to capture reality quite well.
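The stopping behaviour described above can be illustrated with a toy analogue: a steepest-descent droplet on an analytic surface that stops when the downhill pull falls below a resistance threshold. The surface, the threshold `mu`, and the step size are illustrative assumptions, standing in for the paper's triangulated leaf interpolant and thin-film criterion.

```python
import numpy as np

def surface(p):                        # analytic stand-in for the leaf interpolant
    x, y = p
    return 0.5 * x ** 2 + 0.1 * np.sin(3.0 * y)

def gradient(p, h=1e-6):               # central-difference surface gradient
    x, y = p
    gx = (surface((x + h, y)) - surface((x - h, y))) / (2 * h)
    gy = (surface((x, y + h)) - surface((x, y - h))) / (2 * h)
    return np.array([gx, gy])

def droplet_path(start, mu=0.3, step=0.01, max_steps=5000):
    """Follow steepest descent; the droplet stops where the downhill pull no
    longer exceeds the resistance threshold mu (stand-in stopping rule)."""
    p = np.array(start, dtype=float)
    path = [p.copy()]
    for _ in range(max_steps):
        g = gradient(p)
        if np.linalg.norm(g) < mu:     # slope too small to overcome resistance
            break
        p = p - step * g / np.linalg.norm(g)
        path.append(p.copy())
    return np.array(path)

path = droplet_path(start=(1.0, 0.2))
```

In the paper's model the droplet instead moves over a triangular mesh under gravity, friction, and surface resistance, and the stopping rule comes from the thin-film thickness; the sketch only mirrors the qualitative behaviour of descending and halting.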
Abstract:
Three-dimensional geological modelling techniques have been applied since 1996 with the aim of characterising the lithological and chronological units of New Zealand's many diverse aquifers. Models of scattered property data have also been applied to assess the physical properties of aquifers and the distribution of groundwater chemistry, including groundwater age, to inform an understanding of groundwater systems. These models, fundamental to understanding groundwater recharge, flow and discharge, have found many uses, as outlined in this paper.
Abstract:
In the present work, we study the transverse vortex-induced vibrations of an elastically mounted rigid cylinder in a fluid flow. We employ a technique to accurately control the structural damping, enabling the system to take on both negative and positive damping. This permits a systematic study of the effects of system mass and damping on the peak vibration response. Previous experiments over the last 30 years indicate a large scatter in peak-amplitude data ($A^*$) versus the product of mass–damping ($\alpha$), in the so-called ‘Griffin plot’. A principal result in the present work is the discovery that the data collapse very well if one takes into account the effect of Reynolds number ($\mbox{\textit{Re}}$), as an extra parameter in a modified Griffin plot. Peak amplitudes corresponding to zero damping ($A^*_{{\alpha}{=}0}$), for a compilation of experiments over a wide range of $\mbox{\textit{Re}}\,{=}\,500-33000$, are very well represented by the functional form $A^*_{\alpha{=}0} \,{=}\, f(\mbox{\textit{Re}}) \,{=}\, \log(0.41\,\mbox{\textit{Re}}^{0.36})$. For a given $\mbox{\textit{Re}}$, the amplitude $A^*$ appears to be proportional to a function of mass–damping, $A^*\propto g(\alpha)$, which is a similar function over all $\mbox{\textit{Re}}$. A good best-fit for a wide range of mass–damping and Reynolds number is thus given by the following simple expression, where $A^*\,{=}\, g(\alpha)\,f(\mbox{\textit{Re}})$: \[ A^* \,{=}\,(1 - 1.12\,\alpha + 0.30\,\alpha^2)\,\log (0.41\,\mbox{\textit{Re}}^{0.36}). \] In essence, by using a renormalized parameter, which we define as the ‘modified amplitude’, $A^*_M\,{=}\,A^*/A^*_{\alpha{=}0}$, the previously scattered data collapse very well onto a single curve, $g(\alpha)$, on what we refer to as the ‘modified Griffin plot’. There has also been much debate over the last three decades concerning the validity of using the product of mass and damping (such as $\alpha$) in these problems.
Our results indicate that the combined mass–damping parameter ($\alpha$) does indeed collapse peak-amplitude data well, at a given $\mbox{\textit{Re}}$, independent of the precise mass and damping values, for mass ratios down to $m^*\,{=}\,1$.
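The compiled best-fit can be evaluated directly. The sketch below assumes the logarithm is base-10, which is consistent with the reported amplitude range but is an assumption, since the abstract writes only $\log$.

```python
import math

def peak_amplitude(alpha, Re):
    """Modified Griffin-plot fit: A* = g(alpha) * f(Re), with
    f(Re) = log(0.41 * Re^0.36) and g(alpha) = 1 - 1.12*alpha + 0.30*alpha^2.
    The logarithm is taken as base-10 (an assumption consistent with the
    reported peak-amplitude magnitudes)."""
    f = math.log10(0.41 * Re ** 0.36)
    g = 1.0 - 1.12 * alpha + 0.30 * alpha ** 2
    return g * f

# Zero-damping peak amplitude at the two ends of the compiled Re range.
a_low = peak_amplitude(0.0, 500)
a_high = peak_amplitude(0.0, 33000)
```

Dividing a measured $A^*$ by `peak_amplitude(0.0, Re)` gives the ‘modified amplitude’ $A^*_M$, which is what collapses the scattered data onto the single curve $g(\alpha)$.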
Abstract:
The detailed understanding of the electronic properties of carbon-based materials requires the determination of their electronic structure and, more precisely, the calculation of their joint density of states (JDOS) and dielectric constant. Low electron energy loss spectroscopy (EELS) provides a continuous spectrum which represents all the excitations of the electrons within the material with energies ranging between zero and about 100 eV. Therefore, EELS is potentially more powerful than conventional optical spectroscopy, which has an intrinsic upper information limit of about 6 eV due to absorption of light by the optical components of the system or the ambient. However, when analysing EELS data, the extraction of the single-scattered data needed for Kramers-Kronig calculations depends on the deconvolution of the zero-loss peak from the raw data. This procedure is particularly critical when attempting to study the near-bandgap region of materials with a bandgap below 1.5 eV. In this paper, we have calculated the electronic properties of three widely studied carbon materials, namely amorphous carbon (a-C), tetrahedral amorphous carbon (ta-C) and C60 fullerite crystal. The JDOS curve starts from zero for energy values below the bandgap and then rises at a rate depending on whether the material has a direct or an indirect bandgap. Extrapolating a fit to the data immediately above the bandgap, in the stronger energy-loss region, was used to obtain an accurate value for the bandgap energy and to determine whether the bandgap is direct or indirect in character. Particular problems relating to the extraction of the single-scattered data for these materials are also addressed. The ta-C and C60 fullerite materials are found to be direct bandgap-like semiconductors having bandgaps of 2.63 and 1.59 eV, respectively.
On the other hand, the electronic structure of a-C was unobtainable because its bandgap is so small that most of the information is contained in the first 1.2 eV of the spectrum, a region removed during the zero-loss deconvolution.
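The extrapolation step can be sketched on synthetic data: for a direct-gap-like JDOS rising as $(E - E_g)^{1/2}$, raising the curve to the reciprocal power linearizes the edge, so a straight-line fit just above the edge extrapolates to the gap energy (an indirect-gap-like edge would instead follow $(E - E_g)^2$). The exponent, energy window, and gap value here are illustrative assumptions, not the paper's measured spectra.

```python
import numpy as np

# Synthetic JDOS for a direct-gap-like material: zero below the gap,
# rising as (E - Eg)^0.5 above it (Eg = 2.6 eV assumed for illustration).
Eg_true, n = 2.6, 0.5
E = np.linspace(2.0, 5.0, 300)
jdos = np.where(E > Eg_true, (E - Eg_true) ** n, 0.0)

# Linearize: JDOS^(1/n) is a straight line in E above the edge, so a fit
# in a window just above the edge extrapolates to zero at the bandgap.
mask = (E > 2.8) & (E < 3.6)
slope, intercept = np.polyfit(E[mask], jdos[mask] ** (1.0 / n), 1)
Eg_est = -intercept / slope
```

In practice one would try both exponents and keep the one that linearizes the measured edge better, which is how the direct or indirect character is judged.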
Abstract:
Seismic techniques hold the leading position in discovering oil and gas traps and searching for reserves throughout the course of oil and gas exploration. They require high-quality processed seismic data: not only exact spatial positioning, but also true amplitude, AVO attribute, and velocity information. The acquisition footprint affects the precision and quality of imaging and of AVO attribute and velocity analysis. The acquisition footprint is a relatively new concept describing seismic noise in 3-D exploration, and it is not easy to understand. This paper begins with forward modelling of seismic data from a simple acoustic wave model, then processes the data and discusses the cause of the acquisition footprint. It is agreed that the recording geometry is the main cause, leading to asymmetric distributions of coverage, offset, and azimuth among the grid cells. The paper summarizes the characteristics and description methods of the acquisition footprint and analyses its influence on geological data interpretation and on seismic attribute and velocity analysis. Data reconstruction based on the Fourier transform is at present the main method for the interpolation and extrapolation of non-uniform data, but it is usually an ill-conditioned inverse problem. A Tikhonov regularization strategy, which includes a priori information on the class of solutions in the search, can reduce the computational difficulty caused by the ill-conditioned discrete kernel and the scarcity of observations. The method is quite statistical, does not require the selection of a regularization parameter, and hence has appropriate inversion coefficients. Programming and trial calculations verify that the acquisition footprint can be removed through prestack data reconstruction. This paper then applies migration-based processing to the removal of the acquisition footprint.
The fundamental principles and algorithms are surveyed; seismic traces are weighted according to the area occupied by each trace at different source-receiver distances. Adopting a grid method instead of computing the areas of a Voronoi map reduces the difficulty of calculating the weights. Results from processing model data and actual seismic data demonstrate that incorporating a weighting scheme, based on the relative area associated with each input trace with respect to its neighbours, acts to minimize the artifacts caused by irregular acquisition geometry.
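The grid-based stand-in for Voronoi areas can be sketched as follows; the grid size and the synthetic acquisition geometry are illustrative assumptions.

```python
import numpy as np

def grid_area_weights(x, y, nx=10, ny=10):
    """Approximate each trace's Voronoi-area weight by binning trace
    positions on a regular grid: each trace gets cell_area / traces_in_cell,
    so crowded cells are down-weighted and sparse cells up-weighted."""
    ix = np.clip(((x - x.min()) / np.ptp(x) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((y - y.min()) / np.ptp(y) * ny).astype(int), 0, ny - 1)
    counts = np.zeros((nx, ny))
    np.add.at(counts, (ix, iy), 1)                 # traces per cell
    cell_area = (np.ptp(x) / nx) * (np.ptp(y) / ny)
    return cell_area / counts[ix, iy]

rng = np.random.default_rng(4)
# Irregular geometry: one densely shot strip plus sparse background traces.
x = np.concatenate([rng.uniform(0, 100, 900), rng.uniform(0, 1000, 100)])
y = rng.uniform(0, 1000, 1000)
w = grid_area_weights(x, y)
```

Weighting each input trace by `w` before summation then plays the role of the relative-area weighting described above, at a fraction of the cost of building the Voronoi map.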
Abstract:
The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representations on natural bases, I present the fundamentals and algorithms of ICA and its original applications to the separation of natural earthquake signals and of survey seismic signals. For representations on deterministic bases, the thesis proposes least squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient (PCG) methods, and their applications to seismic deconvolution, the Radon transform, etc. The core content is a de-aliased algorithm for the reconstruction of unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. Both aim to provide pre-processing and post-processing methods for seismic prestack depth migration. ICA can separate the original signals from mixed signals, or extract the basic structure from the analysed data. I survey the fundamentals, algorithms, and applications of ICA. Compared with the KL transform, I propose the concept of the independent components transform (ICT). Based on the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. By analysing the characteristics of seismic signals, I introduce ICA into seismic signal processing for the first time in the geophysical community and implement the separation of noise from seismic signals.
Synthetic and real data examples show the usability of ICA for seismic signal processing, and initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, allowing a more reasonable interpretation of underground discontinuities. The results show the prospects of applying ICA to geophysical signal processing. Through the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss two possible ways of applying ICA to it. The relationship among PCA, ICA, and the wavelet transform is established, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution, validated by numerical examples. The key to prestack depth migration is the regularization of prestack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. I first review seismic imaging methods in order to argue the critical effect of regularization. Reviewing seismic interpolation algorithms, I note that de-aliased reconstruction of unevenly sampled data is still a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least squares inversion and a preconditioned conjugate gradient solver are studied and implemented. Choosing a constraint term with a Cauchy distribution, I program a PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well separately, but they could not previously be combined.
In this thesis, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least squares inversion problem in which an adaptive DFT-weighted norm regularization term is used. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solutions stable and quickly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least squares weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation of evenly gapped traces, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient, and applicable. The research is valuable for seismic data regularization and crosswell seismics. To meet the data requirements of 3-D shot-profile depth migration, schemes must be adopted to make the data regular and consistent with the velocity dataset. The methods of this thesis are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result is improved. The results show the effectiveness and practicability of the approach.
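A toy 1-D analogue of DFT-weighted minimum-norm reconstruction: irregular samples of a band-limited signal are inverted for DFT coefficients under a weighted-norm penalty whose weights come from a first-pass zero-filled spectrum. The thesis solves this with preconditioned conjugate gradients on much larger problems; for clarity this sketch forms and solves the small regularized normal equations directly, and the signal, weights, and regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64
t_full = np.arange(N)
signal = np.cos(2 * np.pi * 4 * t_full / N) + 0.5 * np.sin(2 * np.pi * 9 * t_full / N)

# Irregular decimation: keep ~60% of the traces at random positions.
keep = np.sort(rng.choice(N, size=38, replace=False))
d = signal[keep]

# Forward operator: restriction of the inverse DFT to the sampled positions.
F_inv = np.exp(2j * np.pi * np.outer(t_full, np.arange(N)) / N) / N
A = F_inv[keep, :]

# Adaptive DFT weights: a zero-filled first pass indicates where the energy
# lives; the regularizer then penalizes the low-energy coefficients hardest.
zero_filled = np.zeros(N)
zero_filled[keep] = d
p = np.abs(np.fft.fft(zero_filled))
w = 1.0 / (p / p.max() + 1e-2)          # small weight on energetic bins

# Minimum-weighted-norm least squares: (A^H A + lam * diag(w^2)) m = A^H d
lam = 1e-5
m = np.linalg.solve(A.conj().T @ A + lam * np.diag(w ** 2), A.conj().T @ d)
recon = np.real(F_inv @ m)               # reconstructed regular traces
```

The weighting is what suppresses aliased solutions: spectral bins that the first pass marks as empty are forced toward zero, so the energy of the missing traces is placed on the few bins the data actually support.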
Abstract:
Electromagnetic tomography has been applied to problems in nondestructive evaluation, ground-penetrating radar, synthetic aperture radar, target identification, electrical well logging, medical imaging, etc. The problem of electromagnetic tomography involves the estimation of the cross-sectional distribution of dielectric permittivity, conductivity, etc., based on measurements of the scattered fields. The inverse scattering problem of electromagnetic imaging is highly non-linear and ill-posed, and is liable to get trapped in local minima. The iterative solution techniques employed for computing the inverse scattering problem of electromagnetic imaging are highly computation intensive. Thus the solution of the electromagnetic imaging problem is beset with convergence and computational issues. The aim of this thesis is to develop methods for improving the convergence and reducing the total computation for tomographic imaging of two-dimensional dielectric cylinders illuminated by TM-polarized waves, where the scattering problem is defined using scalar equations. A multi-resolution frequency hopping approach is proposed, as opposed to the conventional frequency hopping approach employed to image large inhomogeneous scatterers. The strategy was tested on both synthetic and experimental data and gave results that were better localized, and it also accelerated the iterative imaging procedure. A Degree of Symmetry formulation was introduced to locate the scatterer in the investigation domain when the scatterer cross-section is circular. The investigation domain could thus be reduced, which reduced the degrees of freedom of the inverse scattering process, so that the entire measured scattered data was available for the optimization of a smaller number of pixels. This resulted in better and more robust reconstructions of the scatterer's cross-sectional profile.
The Degree of Symmetry formulation can also be applied to the practical problem of limited-angle tomography, as in the case of a buried pipeline, where the ill-posedness is much greater. The formulation was also tested using experimental data generated from an experimental setup designed for the purpose. The experimental results confirmed the practical applicability of the formulation.
Abstract:
Active microwave imaging is explored as an imaging modality for early detection of breast cancer. When exposed to microwaves, breast tumors exhibit electrical properties that are significantly different from those of healthy breast tissues. Two approaches to active microwave imaging are addressed here: the confocal microwave technique with measured reflected signals, and microwave tomographic imaging with measured scattered signals. Normal and malignant breast tissue samples of the same person were studied within 30 minutes of mastectomy. Corn syrup is used as the coupling medium, as its dielectric parameters match those of the normal breast tissue samples well. As the bandwidth of the transmitter is an important aspect of the time-domain confocal microwave imaging approach, a wideband bowtie antenna with a 2:1 VSWR bandwidth of 46% is designed for the transmission and reception of microwave signals. The same antenna is used for microwave tomographic imaging at a frequency of 3000 MHz. Experimentally obtained time-domain results are substantiated by finite-difference time-domain (FDTD) analysis. 2-D tomographic images are reconstructed from the collected scattered data using the distorted Born iterative method. Variations of dielectric permittivity in the breast samples are distinguishable in the obtained permittivity profiles.