995 results for INTERPOLATION METHODS


Relevance:

100.00%

Publisher:

Abstract:

In this paper, scales of classes of stochastic processes are introduced. New interpolation theorems and the boundedness of some transforms of stochastic processes are proved. An interpolation method for generously monotonous processes is introduced. The conditions and statements of the interpolation theorems concern the fixed stochastic process, which differs from the classical results.

Relevance:

100.00%

Publisher:

Abstract:

Energy saving, reduction of greenhouse gases and increased use of renewables are key policies to achieve the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and the power system; nevertheless, a significant spatial footprint is still present, and good spatial planning remains a necessity. Optimising the location of SMWTs requires detailed knowledge of the spatial distribution of the average wind speed. In this article, wind measurements and roughness maps were therefore used to create a reliable annual mean wind speed map of Flanders at 10 m above the Earth’s surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data. The data were further processed using seven different spatial interpolation methods in order to develop regional wind resource maps. Based on statistical analysis, the transformation into mesoscale wind, combined with Simple Kriging, was found to be the most adequate method to create reliable maps for decision-making on optimal production sites for SMWTs in Flanders (Belgium).
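As a minimal illustration of the kind of spatial interpolator compared in such studies, here is inverse-distance weighting (IDW) in plain Python. The abstract does not name the seven methods it tested, so IDW, the station coordinates and the wind speeds below are all assumptions for illustration:

```python
import math

def idw(stations, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y).

    stations: list of (xi, yi, value) wind-speed observations.
    IDW is one of the simpler spatial interpolators of the kind
    compared in the study (alongside kriging variants).
    """
    num = den = 0.0
    for xi, yi, v in stations:
        d = math.hypot(x - xi, y - yi)
        if d < 1e-12:            # exactly on a station: return its value
            return v
        w = d ** -power          # closer stations weigh more
        num += w * v
        den += w
    return num / den

# Hypothetical 10 m wind-speed observations: (x km, y km, m/s)
obs = [(0.0, 0.0, 4.2), (10.0, 0.0, 5.1), (0.0, 10.0, 3.8)]
estimate = idw(obs, 5.0, 5.0)
```

Kriging methods such as the Simple Kriging retained by the authors additionally model spatial covariance through a variogram; IDW weights by distance alone.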

Relevance:

70.00%

Publisher:

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
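The overshoot behaviour described above can be reproduced in a few lines. The sketch below is a stand-in for the paper's parametrized Hermitian scheme rather than the actual algorithm: it compares a global cubic (Lagrange) fit with a piecewise Hermite interpolant whose slopes are limited in the Fritsch-Carlson manner, on invented step-like data:

```python
def lagrange(xs, ys, x):
    """Global polynomial (Lagrange) interpolation: prone to overshoot."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def monotone_hermite(xs, ys, x):
    """Piecewise-cubic Hermite with Fritsch-Carlson-style slopes: no overshoot."""
    n = len(xs)
    d = [(ys[i+1] - ys[i]) / (xs[i+1] - xs[i]) for i in range(n - 1)]  # secants
    m = [d[0]] + [0.0 if d[i-1] * d[i] <= 0 else 2 * d[i-1] * d[i] / (d[i-1] + d[i])
                  for i in range(1, n - 1)] + [d[-1]]  # harmonic-mean slopes
    i = max(j for j in range(n - 1) if xs[j] <= x)     # locate segment
    h = xs[i+1] - xs[i]
    t = (x - xs[i]) / h
    h00 = 2*t**3 - 3*t**2 + 1; h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2;    h11 = t**3 - t**2
    return h00*ys[i] + h10*h*m[i] + h01*ys[i+1] + h11*h*m[i+1]

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 1.0]  # a step-like profile
over = lagrange(xs, ys, 2.5)            # overshoots above the data maximum
bounded = monotone_hermite(xs, ys, 2.5) # stays within [0, 1]
```

The global cubic rises above the data maximum of 1, while the slope-limited Hermite stays within the data range, which is exactly the property the authors' parametrized curve is designed to control.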

Relevance:

70.00%

Publisher:

Abstract:

We have developed a statistical gap-filling method adapted to the specific coverage and properties of the observed fugacity of surface ocean CO2 (fCO2). We have used this method to interpolate the Surface Ocean CO2 Atlas (SOCAT) v2 database on a 2.5°×2.5° global grid (south of 70°N) for 1985-2011 at monthly resolution. The method combines a spatial interpolation based on a 'radius of influence' to determine nearby similar fCO2 values with temporal harmonic and cubic-spline curve fitting, and also fits long-term trends and seasonal cycles. Interannual variability is established using deviations of the observations from the fitted trends and seasonal cycles. An uncertainty is computed for all interpolated values based on the spatial and temporal range of the interpolation. Tests of the method using model data show that it performs as well as or better than previous regional interpolation methods, while in addition providing near-global and interannual coverage.
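The harmonic curve-fitting step can be sketched as follows: for regularly sampled monthly data over whole years, the annual harmonic is recovered by direct Fourier projection. This is a simplified stand-in (mean plus a single annual harmonic, with no trend term or radius-of-influence search) applied to synthetic data:

```python
import math

def fit_seasonal(y):
    """Fit mean + annual harmonic to monthly data covering whole years.

    y: list of monthly values. Returns (mean, cos amplitude, sin amplitude).
    With regular monthly sampling the Fourier projection is exact; this is
    a simplified stand-in for the harmonic curve-fitting step.
    """
    n = len(y)
    mean = sum(y) / n
    c = 2 / n * sum(v * math.cos(2 * math.pi * i / 12) for i, v in enumerate(y))
    s = 2 / n * sum(v * math.sin(2 * math.pi * i / 12) for i, v in enumerate(y))
    return mean, c, s

# Synthetic two years of monthly fCO2-like data: 360 plus a 15-unit annual cycle
y = [360 + 15 * math.cos(2 * math.pi * i / 12) for i in range(24)]
mean, c, s = fit_seasonal(y)
```

The fitted mean and cosine amplitude recover the constructed 360 and 15; deviations of real observations from such a fit would then feed the interannual-variability estimate.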

Relevance:

70.00%

Publisher:

Abstract:

In this chapter we present the relevant mathematical background to address two well-defined signal and image processing problems: structured noise filtering and interpolation of missing data. The former is addressed by recourse to oblique-projection-based techniques, whilst the latter, which can be considered equivalent to impulsive noise filtering, is tackled by appropriate interpolation methods.
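The oblique-projection idea for structured noise filtering can be shown in two dimensions: project the measurement onto the signal direction along the noise direction, so the structured component is annihilated exactly. The directions and coefficients below are invented for illustration:

```python
def oblique_project(m, v, s):
    """Project m onto span{v} along span{s} (2-D toy case).

    Annihilates any component along the structured-noise direction s
    while leaving the signal direction v untouched, which is the basic
    idea behind oblique-projection-based structured noise filtering.
    """
    w = (-s[1], s[0])                                  # w is orthogonal to s
    c = (w[0]*m[0] + w[1]*m[1]) / (w[0]*v[0] + w[1]*v[1])
    return (c * v[0], c * v[1])

v = (1.0, 0.5)                            # hypothetical signal direction
s = (0.2, 1.0)                            # hypothetical structured-noise direction
m = (3*v[0] + 2*s[0], 3*v[1] + 2*s[1])    # measurement: 3*v + 2*s
clean = oblique_project(m, v, s)          # recovers the 3*v component
```

Unlike an orthogonal projection, the recovery is exact even though v and s are not perpendicular, provided the signal and noise subspaces are disjoint.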

Relevance:

60.00%

Publisher:

Abstract:

The aim of this study was to select a digital elevation model and its horizontal resolution for interpolating the annual air temperature of the state of Alagoas by means of multiple linear regression models. A multiple linear regression model was fitted to series (11 to 34 years) of annual air temperatures from 28 weather stations in the states of Alagoas, Bahia, Pernambuco and Sergipe, in the Northeast of Brazil, as a function of latitude, longitude and altitude. The elevation models SRTM and GTOPO30 were used in the analysis, with original resolutions of 90 and 900 m, respectively. The SRTM was resampled to horizontal resolutions of 125, 250, 500, 750 and 900 m. To spatialize the annual mean air temperature for the state of Alagoas, a multiple linear regression model was applied for each elevation model and spatial resolution on a latitude-longitude grid. In Alagoas, estimates based on SRTM data resulted in a lower standard error of estimate (0.57 °C vs 0.93 °C) and a higher coefficient of determination (r² = 0.62 vs 0.20) than those obtained from GTOPO30. Among the SRTM resolutions, no significant differences were observed in either the standard error (0.55 °C at 750 m to 0.58 °C at 250 m) or the dispersion (0.60 at 500 m to 0.65 at 750 m). The spatialization of annual air temperature in Alagoas via multiple regression models applied to SRTM data thus showed higher concordance than that obtained with GTOPO30, independent of the spatial resolution.
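A minimal sketch of such a regression, fitting T = b0 + b1·lat + b2·lon + b3·alt by ordinary least squares via the normal equations; the station coordinates and the "true" coefficients (including a lapse-rate-like -0.006 °C/m) are hypothetical, not the study's values:

```python
def fit_mlr(X, y):
    """Least-squares fit of y = b0 + b1*lat + b2*lon + b3*alt.

    X: rows of (lat, lon, alt). Plain normal equations solved by
    Gaussian elimination with partial pivoting; a minimal sketch of
    the regression step, not the study's implementation.
    """
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                          # forward elimination
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        c[i], c[piv] = c[piv], c[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            for j in range(i, p):
                A[k][j] -= f * A[i][j]
            c[k] -= f * c[i]
    b = [0.0] * p                               # back substitution
    for i in reversed(range(p)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b

# Hypothetical stations: (lat deg, lon deg, altitude m)
stations = [(-9.5, -36.1, 100.0), (-9.0, -37.0, 250.0), (-10.0, -36.4, 50.0),
            (-9.2, -35.8, 400.0), (-9.8, -37.3, 300.0)]
true = [30.0, 0.5, -0.1, -0.006]               # invented coefficients
y = [true[0] + true[1]*la + true[2]*lo + true[3]*al for la, lo, al in stations]
coef = fit_mlr(stations, y)                     # recovers the true coefficients
```

Applying the fitted coefficients cell by cell over a DEM grid is what produces the spatialized temperature map.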

Relevance:

60.00%

Publisher:

Abstract:

In recent years, increased adoption of precision forestry techniques has been observed in planted forests in Brazil. Eucalyptus plantations are preferentially established in areas with low soil fertility and, consequently, low productivity. Hence, to optimise production, it is necessary to know how much this crop can produce at each location (site). The objective was to apply a methodology that uses statistical, geostatistical and geoprocessing techniques to map the spatial and temporal variability of chemical attributes of a soil cultivated with eucalyptus, in a 10.09 ha area located in the south of the state of Espírito Santo. The soil fertility attributes studied were phosphorus (P), potassium (K), calcium (Ca) and magnesium (Mg), in the year the eucalyptus stand was established, 2008, and three years later, in 2011. The soil was sampled at two depths, 0-0.2 m and 0.2-0.4 m, at the 94 points of a regular grid with 33 × 33 m spacing. The data were analysed by descriptive statistics and then by geostatistics, through the fitting of semivariograms. Different interpolation methods were tested to produce more precise thematic maps and to facilitate the map algebra operations used. With the aid of quantitative indices, an overall analysis of soil fertility was carried out through map algebra. The methodology used in this study made it possible to map the spatial and temporal variability of the soil chemical attributes. The variographic analysis showed that all the attributes studied were spatially structured, except P in Year Zero (0-0.2 m layer) and in Year Three (both layers). The best interpolation methods for mapping each soil chemical attribute were identified with the graphical aid of the Taylor diagram. The spherical and exponential models stood out in the interpolations for most of the soil chemical attributes evaluated.
Although the spatial and temporal variation of the attributes studied showed, on average, a small negative variation, the methodology revealed positive variations in soil fertility in several parts of the study area. Furthermore, the results demonstrate that the observed effects are mostly due to the crop, since no soil samples were collected at fertilised spots. The productivity of the forest site showed trends similar to the variations in soil fertility, except for magnesium, which showed spatial trends supporting high productivities of up to 50 m³ ha⁻¹ yr⁻¹. Besides clearly showing the observed trends in soil fertility variation, the methodology confirms an operationally accessible path for forestry companies and growers to manage nutrition in planted forests. The use of the maps facilitates the mobilisation of resources to improve the application of the fertilisers and soil amendments needed.
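The semivariogram fitting mentioned above can be sketched as follows: an empirical semivariogram computed from point pairs, plus the spherical model the study highlights. The sample points and values are invented:

```python
import math

def empirical_semivariogram(points, lags, tol):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose
    separation distance falls within tol of each lag h."""
    gammas = []
    for h in lags:
        sq, n = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                xi, yi, zi = points[i]
                xj, yj, zj = points[j]
                d = math.hypot(xi - xj, yi - yj)
                if abs(d - h) <= tol:
                    sq += (zi - zj) ** 2
                    n += 1
        gammas.append(sq / (2 * n) if n else float("nan"))
    return gammas

def spherical(h, nugget, sill, rng):
    """Spherical semivariogram model, one of the models the study highlights."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

# Hypothetical soil-P samples (x m, y m, mg/dm^3) along a 33 m transect
pts = [(0.0, 0.0, 1.0), (33.0, 0.0, 2.0), (66.0, 0.0, 4.0)]
gam = empirical_semivariogram(pts, lags=[33.0, 66.0], tol=1.0)
```

In practice the model parameters (nugget, sill, range) are fitted to the empirical values, and the fitted model drives the kriging interpolation of each attribute.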

Relevance:

60.00%

Publisher:

Abstract:

Dissertation presented as a partial requirement for the degree of Master in Geographic Information Systems and Science

Relevance:

60.00%

Publisher:

Abstract:

Final Master's project report for obtaining the degree of Master in Mechanical Engineering / Energy

Relevance:

60.00%

Publisher:

Abstract:

Final Master's project report for obtaining the degree of Master in Mechanical Engineering

Relevance:

60.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

60.00%

Publisher:

Abstract:

In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from the low-baseline stereo pairs that shall be available in the future from a new kind of Earth observation satellite. This setting makes both views of the scene similar, thus avoiding the occlusions and illumination changes that are the main disadvantages of the commonly accepted large-baseline configuration. Two crucial technological challenges remain: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact, this theory allows us to reduce the number of parameters to be adjusted and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose an extension of the classical 'quantized' Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.

Relevance:

60.00%

Publisher:

Abstract:

The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-to-top approach in method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision-making.
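One concrete step from the pipeline above, the data transformation that enables multi-Gaussian models, can be sketched as a rank-based normal-score transform; the radon values are hypothetical:

```python
from statistics import NormalDist

def normal_score(values):
    """Rank-based normal-score transform (Gaussian anamorphosis).

    Maps each value to the standard-normal quantile of its rank,
    the usual data transformation applied before multi-Gaussian
    modeling such as sequential Gaussian simulation.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)   # plotting-position quantile
    return scores

radon = [40.0, 85.0, 120.0, 300.0, 2500.0]   # hypothetical Bq/m^3, heavy-tailed
scores = normal_score(radon)
```

The heavy-tailed raw values become symmetric standard-normal scores; after simulation in Gaussian space, the inverse mapping restores the original distribution.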

Relevance:

60.00%

Publisher:

Abstract:

Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are limited in capturing coal-splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate the facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to conventional thickness map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal proportion threshold used to determine whether a block can be recovered, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and the splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserves predictions differ between the simulation methods, even with dense conditioning datasets. The simulation methods can be ranked according to the correlation of their outputs with predictions from the directly interpolated coal proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
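Point (a) above, the coal-proportion threshold, can be sketched in a toy form: given simulated per-block coal proportions, a block contributes to the reserves figure only if its proportion meets the cutoff. All numbers are invented:

```python
def recoverable_coal(block_props, threshold, block_volume):
    """Total coal volume over blocks whose simulated coal proportion
    meets the cutoff: a toy version of the proportion-threshold step
    used to turn a facies model into a reserves figure."""
    return sum(p * block_volume for p in block_props if p >= threshold)

sims = [0.9, 0.65, 0.3, 0.8, 0.1]   # hypothetical simulated coal proportions
reserves = recoverable_coal(sims, threshold=0.5, block_volume=1000.0)  # m^3
```

Raising the threshold excludes marginal blocks, which is why the chosen cutoff directly drives the reserve estimate, as the abstract notes.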

Relevance:

60.00%

Publisher:

Abstract:

Artificial Neural Networks (ANNs) are mathematical models capable of estimating non-linear response surfaces. The advantage of these models is that they can produce responses that differ from those of traditional statistical models. Thus, the objective of this study was to develop and test ANNs for estimating the rainfall erosivity index (EI30) as a function of geographical location for the state of Rio de Janeiro, Brazil, and to generate a thematic visualization map. Using latitude, longitude and altitude as inputs, the ANNs estimated EI30 acceptably, allowing visualization of the spatial variability of EI30. Thus, ANNs are a potential option for estimating climatic variables in substitution of traditional interpolation methods.
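The general form of such an ANN estimator, a single hidden layer mapping (latitude, longitude, altitude) to EI30, can be sketched as a forward pass; the weights below are illustrative placeholders, not trained on the study's data:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer feed-forward network (tanh hidden layer,
    linear output): the general form of an ANN estimator. The
    weights used here are illustrative, not trained values."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Hypothetical 3-2-1 network: inputs (lat deg, lon deg, altitude m)
W1 = [[0.2, -0.1, 0.001], [-0.3, 0.2, -0.002]]
b1 = [0.1, -0.2]
W2 = [500.0, -300.0]
b2 = 7000.0          # EI30 scale, MJ mm / (ha h yr)
ei30 = mlp_forward((-22.9, -43.2, 50.0), W1, b1, W2, b2)
```

Training would adjust W1, b1, W2 and b2 against observed EI30 series; evaluating the trained network over a coordinate grid yields the thematic map.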