974 results for Interpolation methods


Relevance:

100.00%

Publisher:

Abstract:

Mapping the spatial distribution of contaminants in soils is the basis of pollution evaluation and risk control. Interpolation methods are widely applied in the mapping process to estimate heavy metal concentrations at unsampled sites. The performance of four interpolation methods (inverse distance weighting, local polynomial, ordinary kriging and radial basis functions) was assessed and compared using the root mean square error under cross-validation. The results indicated that all interpolation methods predicted the mean concentration of soil heavy metals with high accuracy. However, the classic method based on percentages of polluted samples gave a pollution area 23.54-41.92% larger than that estimated by the interpolation methods, and the difference in contaminated-area estimates among the four methods reached 6.14%. According to the interpolation results, the spatial uncertainty of polluted areas was mainly located in three types of region: (a) local maxima of concentration surrounded by low-concentration (clean) sites, (b) local minima of concentration surrounded by highly polluted samples, and (c) the boundaries of the contaminated areas.
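
As a concrete illustration of the comparison described above, the minimal sketch below runs leave-one-out cross-validation of inverse distance weighting (IDW) and scores it with the RMSE; the coordinates and concentrations are synthetic stand-ins, not the study's data.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)                    # guard against zero distance
    w = 1.0 / d**power
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(60, 2))         # hypothetical sites (m)
z = 20 + 5 * np.sin(xy[:, 0] / 200) + rng.normal(0, 1, 60)  # concentration

# Leave-one-out cross-validation: predict each site from all the others.
preds = np.array([
    idw(np.delete(xy, i, axis=0), np.delete(z, i), xy[i:i + 1])[0]
    for i in range(len(z))
])
rmse = np.sqrt(np.mean((preds - z) ** 2))
print(f"leave-one-out RMSE: {rmse:.3f}")
```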

Relevance:

100.00%

Publisher:

Abstract:

Energy saving, reduction of greenhouse gases and increased use of renewables are key policies for achieving the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and on the power system; nevertheless, they still have a significant spatial footprint, and good spatial planning remains a necessity. Optimising the location of SMWTs requires detailed knowledge of the spatial distribution of the average wind speed. In this article, wind measurements and roughness maps were therefore used to create a reliable annual mean wind speed map of Flanders (Belgium) at 10 m above the Earth's surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data, which were then processed with seven different spatial interpolation methods to develop regional wind resource maps. Statistical analysis showed that the transformation into mesoscale wind, combined with Simple Kriging, was the most adequate method for creating reliable maps to support decision-making on optimal production sites for SMWTs in Flanders.
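
Roughness transformations of this kind are commonly done with the neutral logarithmic wind profile; the sketch below shows one such two-step transformation (measurement height up to a blending height over the local roughness, then back down over a regional roughness). The roughness lengths and blending height are illustrative values, not the article's.

```python
import numpy as np

def log_profile_transform(u_obs, z_obs=10.0, z_blend=60.0,
                          z0_local=0.03, z0_meso=0.10):
    """Transform a wind speed between roughness regimes via the log law."""
    # Step 1: scale the observation up to the blending height over the
    # local roughness; step 2: scale back down over the regional roughness.
    u_blend = u_obs * np.log(z_blend / z0_local) / np.log(z_obs / z0_local)
    return u_blend * np.log(z_obs / z0_meso) / np.log(z_blend / z0_meso)

print(log_profile_transform(5.2))   # mesoscale-equivalent 10 m wind speed
```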

Relevance:

70.00%

Publisher:

Abstract:

Interpolation techniques for spatial data have been applied frequently in various fields of geosciences. Although most conventional interpolation methods assume that first- and second-order statistics are sufficient to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data that can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions, and the interpolated value at the unsampled location is then determined as the expectation of this CPDF. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for the interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data; this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
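
A minimal sketch of the general idea, not the paper's method: approximate a conditional density by a Gram-Charlier polynomial expansion around a Gaussian and take the interpolated value as its expectation. The conditional moments below are placeholders standing in for the cross-variable higher-order statistics.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm

mu, sigma, skew = 2.0, 0.5, 0.8        # placeholder conditional moments

z = np.linspace(mu - 5 * sigma, mu + 5 * sigma, 2001)
x = (z - mu) / sigma
# Gram-Charlier A series truncated after the skewness term:
# f(z) = phi(x)/sigma * (1 + skew/6 * He3(x))
pdf = norm.pdf(x) / sigma * (1.0 + skew / 6.0 * hermeval(x, [0, 0, 0, 1]))
pdf = np.clip(pdf, 0.0, None)          # clip small negative lobes
dz = z[1] - z[0]
pdf /= pdf.sum() * dz                  # renormalise after clipping

z_hat = (z * pdf).sum() * dz           # interpolated value as expectation
print(f"expected value: {z_hat:.4f}")
```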

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a validation study on the application of a novel interslice interpolation technique for segmenting musculoskeletal structures (articulated joints and muscles) in human magnetic resonance imaging data. The technique combines morphological shape-based interpolation with intensity-based voxel classification. Shape-based interpolation in the absence of the original intensity image has been investigated intensively; however, in some applications of medical image analysis, the intensity image of the slice to be interpolated is available. For example, when manual segmentation is conducted on selected slices, the segmentation of the unselected slices can be obtained by interpolation. We propose a two-step interpolation method that utilizes both the shape information in the manual segmentation and the local intensity information in the image. The method was tested on segmentations of knee, hip and shoulder joint bones and hamstring muscles, and the results were compared with two existing interpolation methods. Based on the calculated Dice similarity coefficient and normalized error rate, the proposed method outperformed the other two.
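
The shape-based half of such a two-step method can be sketched with signed distance transforms: average the signed distance maps of the two known slices and threshold at zero. This sketch omits the intensity-based classification step, and the toy masks are illustrative only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Positive inside the object, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interp_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation between two binary slices (0 <= t <= 1)."""
    sd = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd > 0

# Toy example: a disc that grows between the two known slices.
yy, xx = np.mgrid[:64, :64]
a = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
b = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
mid = interp_slice(a, b)               # roughly a disc of radius ~15
print(mid.sum())
```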

Relevance:

70.00%

Publisher:

Abstract:

The problem of re-sampling spatially distributed data organized on regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned so as to conserve the overall integral, and if the quantity is positive definite, negative sampling values should be avoided. The gridding process redistributes the original data set onto a user-requested grid according to a distribution function, which can be determined from the given data by interpolation. In general, accurate interpolation of heavily fluctuating data subject to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data, so the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve is used to approximate the integrated data set. A single parameter lets the user control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to reduce these interpolation errors significantly. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and the quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
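
A minimal sketch of the core idea, with one substitution: the paper's parametrized Hermitian curve is replaced by SciPy's monotone PCHIP interpolant. Fitting the cumulative integral and differencing on the new edges conserves the integral, and monotonicity of the cumulative curve suppresses overshoot and keeps bin contents non-negative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

edges = np.array([0., 1., 2., 3., 4., 5.])        # original bin edges
counts = np.array([4., 9., 2., 7., 1.])           # histogrammed quantity

cum = np.concatenate(([0.0], np.cumsum(counts)))  # integrated data set
curve = PchipInterpolator(edges, cum)             # monotone Hermite fit

new_edges = np.linspace(0, 5, 11)                 # finer target grid
new_counts = np.diff(curve(new_edges))            # differencing re-bins

# The total integral is conserved and no bin goes negative.
print(new_counts, new_counts.sum(), counts.sum())
```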

Relevance:

70.00%

Publisher:

Abstract:

We have developed a statistical gap-filling method adapted to the specific coverage and properties of observed fugacity of surface ocean CO2 (fCO2), and have used it to interpolate the Surface Ocean CO2 Atlas (SOCAT) v2 database onto a 2.5°×2.5° global grid (south of 70°N) for 1985-2011 at monthly resolution. The method combines a spatial interpolation based on a 'radius of influence', which identifies nearby similar fCO2 values, with temporal harmonic and cubic-spline curve fitting; it also fits long-term trends and seasonal cycles. Interannual variability is established from the deviations of the observations from the fitted trends and seasonal cycles. An uncertainty is computed for all interpolated values based on the spatial and temporal range of the interpolation. Tests of the method using model data show that it performs as well as or better than previous regional interpolation methods, while additionally providing near-global and interannual coverage.
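
The trend-plus-seasonal-cycle fit can be sketched as ordinary least squares on a harmonic design matrix; the residuals then play the role of interannual variability. The data below are synthetic, not SOCAT values.

```python
import numpy as np

t = np.arange(1985, 2012, 1 / 12)                 # monthly time axis (yr)
rng = np.random.default_rng(1)
fco2 = (350 + 1.5 * (t - 1985) + 8 * np.sin(2 * np.pi * t)
        + rng.normal(0, 2, t.size))               # synthetic fCO2 (uatm)

# Design matrix: intercept, linear trend, annual sine/cosine pair.
A = np.column_stack([np.ones_like(t), t - 1985,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, fco2, rcond=None)
anomalies = fco2 - A @ coef           # deviations = interannual variability
print(f"trend: {coef[1]:.2f} uatm/yr, residual std: {anomalies.std():.2f}")
```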

Relevance:

70.00%

Publisher:

Abstract:

In this chapter we present the relevant mathematical background to address two well-defined signal and image processing problems: structured noise filtering and the interpolation of missing data. The former is addressed by techniques based on oblique projections, whilst the latter, which can be considered equivalent to impulsive noise filtering, is tackled by appropriate interpolation methods.
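
A minimal sketch of structured noise filtering by oblique projection: the projector onto the signal subspace along the noise subspace leaves the signal untouched and annihilates the structured component. The subspace bases below are arbitrary illustrations, not the chapter's examples.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.normal(size=(100, 3))       # basis of the signal subspace
N = rng.normal(size=(100, 2))       # basis of the structured-noise subspace

# Projector onto the orthogonal complement of range(N), then the oblique
# projector onto range(S) along range(N).
P_N_perp = np.eye(100) - N @ np.linalg.pinv(N)
E = S @ np.linalg.solve(S.T @ P_N_perp @ S, S.T @ P_N_perp)

sig = S @ np.array([1.0, -2.0, 0.5])   # clean signal
x = sig + N @ np.array([3.0, 4.0])     # signal plus structured noise
print(np.linalg.norm(E @ x - sig))     # ~0: noise component annihilated
```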

Relevance:

60.00%

Publisher:

Abstract:

Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or to present the human operator with likely matches from a database. A person tracker speeds up subject detection and super-resolution by tracking moving subjects and cropping a region of interest around each subject's face, reducing the number and size of the image frames to be super-resolved. In this paper, experiments demonstrate how the optical-flow super-resolution method used improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical-flow-based method has also been benchmarked against the 'hallucination' algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical-flow method produced fewer artifacts and more visually correct images suitable for human consumption.

Relevance:

60.00%

Publisher:

Abstract:

Background: The highly demanding computational requirements of protein motion simulation make it difficult to obtain information on protein motion. On the one hand, molecular dynamics simulation requires huge computational resources to achieve satisfactory motion simulations; on the other hand, less accurate procedures, such as interpolation methods, do not generate realistic morphs from the kinematic point of view. Analyzing a protein's movement is very similar to analyzing a serial robot; thus, it is possible to treat the protein chain as a serial mechanism composed of rotational degrees of freedom. Recently, based on this hypothesis, new methodologies have arisen from mechanism and robot kinematics to simulate protein motion. The probabilistic roadmap method, which discretizes the protein configurational space against a scoring function, and the kinetostatic compliance method, which minimizes the torques that appear in bonds, aim to simulate protein motion at a reduced computational cost. Results: In this paper a new viewpoint on protein motion simulation, based on mechanism kinematics, is presented. The paper describes a set of methodologies combining different techniques: structure normalization processes, simulation algorithms and secondary-structure detection procedures. The combination of all these procedures yields kinematic morphs of proteins with a very good trade-off between computational cost and error, while maintaining the biological meaning of the obtained structures and the kinematic viability of the obtained motion. Conclusions: The procedure presented in this paper implements different modules to simulate the conformational change undergone by a protein when performing its function. The combination of a main simulation procedure assisted by a secondary-structure process, together with a side-chain orientation strategy, provides fast and reliable simulations of protein motion.
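
The serial-mechanism view can be sketched by treating a backbone dihedral as a revolute joint: rotating about a bond axis (Rodrigues' formula) moves every downstream atom, exactly as in a robot arm. The chain below is a toy, not a real protein, and the joint choice is arbitrary.

```python
import numpy as np

def rotate_about_axis(points, origin, axis, angle):
    """Rotate points about the line (origin, axis) by angle (radians)."""
    k = axis / np.linalg.norm(axis)
    v = points - origin
    return (origin + v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * (v @ k)[:, None] * (1 - np.cos(angle)))

# Toy zigzag "backbone"; rotating the bond between atoms 2 and 3 moves
# every atom downstream of it, exactly like a revolute joint in a robot.
chain = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 0],
                  [3, 1, 0], [4, 0, 0], [5, 1, 0]], dtype=float)
chain[3:] = rotate_about_axis(chain[3:], chain[2],
                              chain[3] - chain[2], np.deg2rad(30))
print(chain.round(3))
```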

Relevance:

60.00%

Publisher:

Abstract:

Taking the Jiehe watershed, in the third sub-region of the loess hilly-gully region, as the study area, and based on data from 65 measured points, interpolation methods including ordinary kriging, inverse distance weighting and spline functions were used to analyse how the number of sampling points, the raster cell size and the choice of interpolation method affect the spatial interpolation of the soil steady infiltration rate, and to examine the uncertainty involved in spatial interpolation. The results show that: (1) the more stations participate in the interpolation, the smaller the uncertainty of the result; (2) varying the cell size between 25 and 800 m has only a weak influence on the interpolated soil steady infiltration rate; and (3) the interpolation method has a strong effect on the accuracy of the result, indicating that the choice of method contributes substantially to the uncertainty of the interpolation.

Relevance:

60.00%

Publisher:

Abstract:

Based on geostatistical principles and semivariogram theory, geographic information system techniques were used to quantify the spatial variability of soil factors related to water erosion in Shaanxi Province. The accuracy of inverse distance weighting, spline functions and ordinary kriging for the spatial interpolation of these soil factor indicators was analysed and compared. The results show that the soil anti-scouring coefficient in the study area exhibits moderate spatial correlation, with a nugget coefficient of 32.29%, whereas the steady infiltration rate, disintegration rate and shear strength all exhibit strong spatial correlation, with nugget coefficients of 13.19%, 11.61% and 12.98%, respectively. Taking the mean relative error, mean square error and interpolation quality together, ordinary kriging performed best: it reflects the spatial characteristics of the soil parameters most faithfully and satisfies the data requirements of the regional soil erosion model. For ordinary kriging, the lag distance for the steady infiltration rate is 30,000 m with an exponential semivariogram model, while the lag distance for the anti-scouring coefficient, disintegration rate and shear strength is 55,000 m with Gaussian semivariogram models. Spatially, all indicators display a clear zonal pattern with soil type from north to south.
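
A minimal sketch of the geostatistical step: estimate an empirical semivariogram from scattered samples and read off the nugget-to-sill ratio used above to classify the strength of spatial correlation (ratios below about 25% are conventionally read as strong). Data and lag settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100_000, size=(200, 2))              # sample sites (m)
z = np.sin(xy[:, 0] / 30_000) + rng.normal(0, 0.3, 200)  # soil indicator

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
sq = 0.5 * (z[:, None] - z[None, :]) ** 2                # semivariance pairs

width = 10_000                                           # lag bin width (m)
lags = np.arange(0, 60_000, width)
gamma = [sq[(d > 0) & (d >= lo) & (d < lo + width)].mean() for lo in lags]

# Crude nugget estimate from the first bin; sill from the plateau.
nugget, sill = gamma[0], max(gamma)
print(f"nugget/sill ratio: {100 * nugget / sill:.1f}%")
```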

Relevance:

60.00%

Publisher:

Abstract:

In regional soil erosion modelling, spatial interpolation provides the meteorological inputs for every computational grid cell. Because rainfall in the study area is only weakly correlated with elevation, the gradient-plus-inverse-distance-squared method (GIDS) is not appropriate; instead, inverse distance weighting (IDW) and ordinary kriging were used to interpolate the monthly rainfall from May to October of 2000-2003 at 50 stations in and around the Yan'an demonstration area. Cross-validation shows that, after logarithmic transformation, the mean relative errors (MRE) of the two methods were 8.30% and 7.67%, a reduction of 23.17% and 23.50%, respectively, compared with interpolating the raw data, demonstrating that the transformation improves the interpolation accuracy. For interpolating the monthly rainfall of a given year in the study area, kriging was more accurate than IDW.
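
A minimal sketch of the validation procedure: interpolate log-transformed rainfall (IDW here, as one of the two methods named above), back-transform, and score with the mean relative error under leave-one-out cross-validation. The station data are synthetic stand-ins.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """IDW estimate at a single query point."""
    d = np.maximum(np.linalg.norm(xy_known - xy_query, axis=1), 1e-12)
    w = 1.0 / d**power
    return np.sum(w * z_known) / np.sum(w)

rng = np.random.default_rng(4)
xy = rng.uniform(0, 100, size=(50, 2))            # 50 stations (km)
rain = np.exp(rng.normal(np.log(60), 0.4, 50))    # monthly rainfall (mm)

log_rain = np.log(rain)                           # variance-stabilising step
preds = np.array([
    np.exp(idw(np.delete(xy, i, axis=0), np.delete(log_rain, i), xy[i]))
    for i in range(50)
])
mre = np.mean(np.abs(preds - rain) / rain) * 100
print(f"MRE after log transform: {mre:.2f}%")
```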

Relevance:

60.00%

Publisher:

Abstract:

Based on fractal theory, contractive mapping principles and the fixed-point theorem, and by means of affine transforms, this dissertation develops a novel Explicit Fractal Interpolation Function (EFIF) that can be used to reconstruct seismic data with high fidelity and precision. Spatial trace interpolation is one of the important issues in seismic data processing. Under ideal circumstances, seismic data should be sampled with uniform spatial coverage. However, practical constraints such as complex surface conditions mean that the sampling density may be sparse, or that some traces may be lost. Wide spacing between receivers can result in sparse sampling along traverse lines and thus in spatial aliasing of short-wavelength features. Hence, the interpolation method is of great importance: it must recover not only the amplitude information but also the phase information, especially at points where the phase changes sharply.

Many interpolation methods have been put forward; this dissertation focuses on a special class of fractal interpolation function, referred to as the explicit fractal interpolation function, to improve the accuracy of interpolation-based reconstruction and to resolve local detail. The traditional fractal interpolation method is mainly based on the random fractional Brownian motion (fBm) model; furthermore, the vertical scaling factor, which plays a critical role in the implementation of fractal interpolation, is assigned the same value throughout the interpolation process, so local information cannot be resolved. In addition, the main defect of the traditional method is that it cannot provide function values at the interpolation nodes, so the node error cannot be analyzed quantitatively and the feasibility of the method cannot be evaluated. Detailed applications of fractal interpolation in seismology have not been given by previous authors, let alone the interpolation of single-trace seismograms. On the basis of previous work and fractal theory, this dissertation discusses fractal interpolation thoroughly, examines the stability of this special kind of interpolating function, and derives an explicit expression for the vertical scaling factor, which controls the precision of the interpolation. The new method extends the traditional fractal interpolation method and converts fractal interpolation with random algorithms into interpolation with deterministic algorithms. A binary-tree data structure is applied during the interpolation, which avoids the iteration that is inevitable in traditional fractal interpolation and improves computational efficiency. To demonstrate the validity of the new method, several theoretical models are developed; common shot gathers and seismograms are synthesized, and traces erased from the initial section are reconstructed using the explicit fractal interpolation method. To compare quantitatively the waveforms and amplitudes of the erased theoretical traces with the reconstructed traces, each missing trace is reconstructed and the residuals are analyzed.
The numerical experiments demonstrate that the new fractal interpolation method is applicable to the reconstruction of seismograms at both small and large offsets. The seismograms reconstructed by the explicit fractal interpolation method closely resemble the original ones: the waveforms of the missing traces are estimated very well, and the amplitudes of the interpolated traces are a good approximation of the originals. Its high precision and computational efficiency make explicit fractal interpolation a useful tool for reconstructing seismic data; it resolves local information while preserving the overall characteristics of the object investigated. To illustrate the influence of the explicit fractal interpolation method on the accuracy of imaging structure in the Earth's interior, the method is also applied to reverse-time migration. The imaging sections obtained from fractal-interpolated reflection data closely resemble the original ones. The numerical experiments demonstrate that, even with sparse sampling, accurate images of the structure of the Earth's interior can still be obtained by means of explicit fractal interpolation, so high-quality imaging results can be achieved with a relatively small number of seismic stations; the method thus improves both the efficiency and the accuracy of reverse-time migration under economic constraints. To verify the method on real data, it was tested using data provided by the Broadband Seismic Array Laboratory, IGGCAS. The results demonstrate that the accuracy of explicit fractal interpolation remains very high even for real data with large epicentral distances and large offsets: the amplitudes and phases of the reconstructed station data closely resemble the original ones erased from the initial section. Altogether, the novel fractal interpolation function provides a new and useful tool to reconstruct seismic data with high precision and efficiency, and presents an alternative for accurately imaging the deep structure of the Earth.
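
The classic affine fractal interpolation construction that this work builds on can be sketched as follows. The explicit, per-interval vertical scaling factor derived in the dissertation is replaced here by a single constant d, so this is only the standard Barnsley-style scheme, not the EFIF itself.

```python
import numpy as np

def fif(x, y, d=0.3, iters=6):
    """Points on the affine fractal interpolation curve through (x, y)."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    pts = np.column_stack([x, y]).astype(float)
    for _ in range(iters):
        new = []
        for i in range(1, len(x)):
            # Affine map w_i sending the whole curve into interval i;
            # d is the vertical scaling factor (|d| < 1 for contractivity).
            a = (x[i] - x[i - 1]) / (xN - x0)
            e = (xN * x[i - 1] - x0 * x[i]) / (xN - x0)
            c = (y[i] - y[i - 1]) / (xN - x0) - d * (yN - y0) / (xN - x0)
            f = ((xN * y[i - 1] - x0 * y[i]) / (xN - x0)
                 - d * (xN * y0 - x0 * yN) / (xN - x0))
            new.append(np.column_stack([a * pts[:, 0] + e,
                                        c * pts[:, 0] + d * pts[:, 1] + f]))
        pts = np.vstack(new)
    return pts[np.argsort(pts[:, 0])]

nodes_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # e.g. trace sample times
nodes_y = np.array([0.0, 0.8, -0.3, 0.5, 0.0])   # e.g. trace amplitudes
curve = fif(nodes_x, nodes_y)                    # dense interpolated curve
print(curve.shape)
```

Because the interpolation nodes lie on the attractor, every iterate stays exactly on the curve, and larger |d| yields a rougher (higher fractal dimension) interpolant.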

Relevance:

60.00%

Publisher:

Abstract:

Quantifying nutrient and sediment loads in catchments is difficult owing to diffuse controls related to storm hydrology. Coarse sampling and interpolation methods are prone to very high uncertainties due to under-representation of high-discharge, short-duration events. Additionally, important low-flow processes such as diurnal signals linked to point-source impacts are missed. Here we demonstrate a solution based on a time-integrated approach to sampling, with a standard 24-bottle autosampler configured to take a sample every 7 h over a week according to a Plynlimon design. This is evaluated against a number of other sampling strategies using a two-year dataset of sub-hourly discharge and phosphorus concentration data. The 24/7 solution is shown to be among the least uncertain in estimating load (inter-quartile range: 96% to 110% of actual load in year 1 and 97% to 104% in year 2), because the increased frequency raises the probability of sampling storm events and point-source signals. The 24/7 solution appears to be the most parsimonious in terms of data coverage and certainty, process-signal representation, potential laboratory commitment, technology requirements and the ability to be widely deployed in complex catchments.
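
A minimal sketch of the evaluation logic: generate a synthetic sub-hourly discharge and concentration record, subsample it every 7 h as in the 24/7 scheme, and compare the estimated load with the load from the full record. All values are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
dt_h = 0.5                                     # sub-hourly resolution (h)
t = np.arange(0, 2 * 365 * 24, dt_h)           # two years, in hours
q = (1.0 + 4.0 * (rng.random(t.size) < 0.01)
         * rng.exponential(2.0, t.size))       # baseflow plus storm pulses
c = 0.05 + 0.02 * q                            # concentration rises with flow

true_load = np.sum(c * q) * dt_h               # reference: full record

every_7h = slice(0, None, int(7 / dt_h))       # one sample every 7 hours
est_load = np.mean(c[every_7h] * q[every_7h]) * t.size * dt_h

print(f"estimated/true load: {100 * est_load / true_load:.1f}%")
```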