944 results for Hermite interpolation


Relevance:

10.00%

Publisher:

Abstract:

Graduate Program in Geography - FCT


Categorical data cannot be interpolated directly because they are outcomes of discrete random variables. The categories are therefore transformed into indicator variables that interpolation methods can handle, and the interpolated indicator values are then back-transformed to the original categories. However, aspects such as the variability and uncertainty of interpolated categorical values have never been considered. In this paper we show that the interpolation variance can be used to map an uncertainty zone around the boundaries between categories. Moreover, we show that the interpolation variance is a component of the total variance of the categorical variables, as measured by the coefficient of unalikeability. (C) 2011 Elsevier Ltd. All rights reserved.
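A minimal sketch of the indicator workflow described above, with made-up data and inverse-distance weighting standing in for the paper's geostatistical interpolator: categories are one-hot encoded, the indicators are interpolated, the result is back-transformed by taking the most probable category, and the spread of the interpolated indicators (here the coefficient of unalikeability, u = 1 - sum(p_k^2)) flags the uncertainty zone near boundaries.

```python
import numpy as np

def indicator_transform(categories, levels):
    """One indicator column per category level (one-hot encoding)."""
    return (categories[:, None] == levels[None, :]).astype(float)

def idw_weights(xy_obs, xy_new, power=2.0, eps=1e-12):
    """Normalized inverse-distance weights from observations to new points."""
    d = np.linalg.norm(xy_obs[None, :, :] - xy_new[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
xy_obs = rng.random((50, 2))
cats = np.where(xy_obs[:, 0] < 0.5, 0, 1)   # two soil types split at x = 0.5
levels = np.unique(cats)

ind = indicator_transform(cats, levels)      # shape (50, 2)
xy_new = np.array([[0.1, 0.5], [0.5, 0.5], [0.9, 0.5]])
w = idw_weights(xy_obs, xy_new)
p = w @ ind                                  # interpolated indicator values

back = levels[p.argmax(axis=1)]              # back-transformed categories
unalikeability = 1.0 - (p ** 2).sum(axis=1)  # high near the boundary
```

The back-transformed map reproduces the two types away from the boundary, while the unalikeability surface is largest where the interpolated indicators are mixed.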


The objective of this work was to evaluate extreme water-table depths in a watershed using methods for geographical spatial data analysis. Groundwater spatio-temporal dynamics was evaluated in an outcrop of the Guarani Aquifer System. Water-table depths were estimated from water levels monitored in 23 piezometers from April 2004 to April 2011 and from time-series modeling. Spatial scenarios were generated with geostatistical techniques that incorporate, as ancillary information, the geomorphological patterns of the watershed given by a digital elevation model. This procedure improved the estimates, owing to the high correlation between water levels and elevation, and added physical meaning to the predictions. The scenarios showed differences regarding the extreme levels - too deep or too shallow - and can support water planning, efficient water use, and sustainable water management in the watershed.
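A simplified sketch of how elevation can serve as ancillary information (not the paper's exact geostatistical model; all numbers are synthetic): regress the water-table depth on elevation, interpolate only the residuals, and add the trend back, mimicking kriging with an external drift.

```python
import numpy as np

rng = np.random.default_rng(1)
xy = rng.random((30, 2))                       # piezometer locations
elev = 500 + 100 * xy[:, 0]                    # hypothetical DEM values (m)
depth = 0.08 * elev + rng.normal(0, 0.5, 30)   # depth correlated with elevation

# Trend from the ancillary variable (ordinary least squares).
A = np.column_stack([np.ones_like(elev), elev])
beta, *_ = np.linalg.lstsq(A, depth, rcond=None)
resid = depth - A @ beta

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse-distance interpolation of the residuals."""
    d = np.linalg.norm(xy_obs[None] - xy_new[:, None], axis=2)
    w = 1.0 / (d + eps) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

xy_new = rng.random((5, 2))
elev_new = 500 + 100 * xy_new[:, 0]
pred = beta[0] + beta[1] * elev_new + idw(xy, resid, xy_new)
```

Because the trend carries the elevation signal, the interpolated residual field only has to describe what the DEM cannot explain.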


Information about rainfall erosivity is important for soil and water conservation planning. The spatial variability of rainfall erosivity in the state of Mato Grosso do Sul was therefore analyzed using ordinary kriging. Three pluviograph stations were used to obtain regression equations between the erosivity index EI30 and the rainfall coefficient. The fitted equations were applied to 109 pluviometric stations, yielding EI30 values. These values were then analyzed with a geostatistical workflow comprising descriptive statistics, semivariogram fitting, cross-validation, and ordinary kriging to generate the erosivity map. The highest erosivity values were found in the central and northeastern regions of the state, and the lowest in the southern region. In addition, high annual precipitation does not necessarily produce high erosivity.
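A sketch of the regionalization step, with invented numbers: the rainfall (Fournier) coefficient Rc = p^2 / P (p = monthly rainfall, P = annual rainfall) requires only pluviometric data, so a regression fitted at the few pluviograph stations can be applied to the dense rain-gauge network. The power-law form and its coefficients below are illustrative, not the paper's fitted equations.

```python
import numpy as np

# Hypothetical monthly rainfall (mm) for one pluviometric station.
monthly = np.array([220.0, 180.0, 150.0, 90.0, 60.0, 30.0,
                    20.0, 25.0, 70.0, 120.0, 160.0, 200.0])
annual = monthly.sum()
rc = monthly ** 2 / annual                 # rainfall coefficient per month

# Illustrative regression EI30 = a * Rc ** b (coefficients are made up).
a, b = 100.0, 0.85
ei30_monthly = a * rc ** b                 # monthly erosivity estimates
ei30_annual = ei30_monthly.sum()           # annual erosivity (MJ mm / ha h yr)
```

The annual EI30 values computed this way at all 109 stations are what the semivariogram fitting and ordinary kriging then operate on.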


Yield mapping represents the spatial variability of a productive area and supports interventions in the following season, for example site-specific input application. This trial verified the influence of sampling density and interpolator type on the precision of yield maps produced by manual grain sampling, a solution usually adopted when a combine with a yield monitor cannot be used. A reference yield map was built from data obtained by a combine equipped with a yield monitor during corn harvesting. From this map, 84 sample grids were established, and 252 yield maps were created with three interpolators: inverse square distance, inverse distance, and ordinary kriging. These maps were compared with the original one using the coefficient of relative deviation (CRD) and the kappa index. The loss of yield-mapping information increased as sampling density decreased and also depended on the interpolation method. A multiple regression model was fitted to the CRD as a function of the spatial variability index and the sampling density. This model is intended to help the farmer define the sampling density for manual yield mapping whenever the yield monitor fails.
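The map comparison can be sketched as follows, with synthetic data: a thinned "manual" sample is interpolated back to the remaining points with inverse-distance (power 1) and inverse-square-distance (power 2) weighting, and each map is scored against the full data with a coefficient of relative deviation, taken here as CRD = mean(|est - ref| / ref). The CRD definition and all values are illustrative, not the paper's.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power, eps=1e-12):
    """Inverse-distance-weighted interpolation with the given power."""
    d = np.linalg.norm(xy_obs[None] - xy_new[:, None], axis=2)
    w = 1.0 / (d + eps) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

rng = np.random.default_rng(2)
xy_full = rng.random((200, 2))                 # full yield-monitor map
z_full = 8.0 + 2.0 * np.sin(3 * xy_full[:, 0]) + rng.normal(0, 0.2, 200)

sample = rng.choice(200, size=40, replace=False)   # manual sampling grid
others = np.setdiff1d(np.arange(200), sample)

results = {}
for power in (1.0, 2.0):
    est = idw(xy_full[sample], z_full[sample], xy_full[others], power)
    results[power] = np.mean(np.abs(est - z_full[others]) / z_full[others])
```

Repeating this over many grid densities, as the trial does with 84 grids, traces how map precision degrades as the sample thins.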


Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in a Guarani Aquifer System (GAS) outcrop area located in southeastern Brazil. Communication of the results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, like the GAS.


Piezoresistive sensors are commonly made of a piezoresistive membrane attached to a flexible substrate, a plate. They have been widely studied and used in several applications. It has been found that the size, position, and geometry of the piezoresistive membrane may affect the performance of the sensors. Based on this observation, in this work a topology optimization methodology for the design of piezoresistive plate-based sensors, in which both the piezoresistive membrane and the flexible substrate disposition can be optimized, is evaluated. Perfect coupling conditions between the substrate and the membrane, based on the `layerwise' theory for laminated plates, and a material model for the piezoresistive membrane, based on the solid isotropic material with penalization (SIMP) model, are employed. The design goal is to obtain the configuration of material that maximizes the sensor sensitivity to external loading, as well as the stiffness of the sensor to particular loads, which depend on the case (application) studied. The proposed approach is evaluated by studying two distinct examples: the optimization of an atomic force microscope probe and of a pressure sensor. The results suggest that the performance of the sensors can be improved by using the proposed approach.
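For readers unfamiliar with the SIMP material model mentioned above, a minimal sketch (penalty power and property values are illustrative, not taken from the paper): each element's material property is scaled by its pseudo-density raised to a penalty power, which makes intermediate densities structurally inefficient and drives the optimizer toward 0/1 designs.

```python
import numpy as np

def simp(rho, prop_min, prop_max, p=3.0):
    """Penalized interpolation between void and solid material properties."""
    return prop_min + rho ** p * (prop_max - prop_min)

rho = np.linspace(0.0, 1.0, 5)               # pseudo-densities per element
E = simp(rho, prop_min=1e-3, prop_max=1.0)   # e.g. Young's modulus ratio
```

With p = 3 an element at half density gets only about an eighth of the solid stiffness, so "gray" material is penalized relative to its cost in the volume constraint.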


An operational method, already employed to formulate a generalization of the Ramanujan master theorem, is applied to the evaluation of integrals of various types. This technique provides a very flexible and powerful tool yielding new results encompassing different aspects of the special function theory. Crown Copyright (C) 2012 Published by Elsevier Inc. All rights reserved.
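For reference, the classical statement that the operational method generalizes is Ramanujan's master theorem, valid under suitable growth conditions on the coefficient function:

```latex
f(x) \;=\; \sum_{n=0}^{\infty} \varphi(n)\,\frac{(-x)^{n}}{n!}
\quad\Longrightarrow\quad
\int_{0}^{\infty} x^{\,s-1} f(x)\,\mathrm{d}x \;=\; \Gamma(s)\,\varphi(-s).
```

The Mellin transform of the series is thus read off directly from the analytic continuation of the coefficients.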


This paper deals with the numerical solution of complex fluid dynamics problems using a new bounded high-resolution upwind scheme (henceforth called SDPUS-C1) for convection term discretization. The scheme is based on the TVD and CBC stability criteria and is implemented in the context of finite volume/difference methodologies, either in the CLAWPACK software package for compressible flows or in the Freeflow simulation system for incompressible viscous flows. The performance of the proposed non-oscillatory upwind scheme is demonstrated on two-dimensional compressible flow problems, such as shock-wave propagation, and on two-dimensional/axisymmetric incompressible moving free-surface flows. The numerical results demonstrate that this new cell-interface reconstruction technique works very well in several practical applications. (C) 2012 Elsevier Inc. All rights reserved.
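SDPUS-C1 itself is not defined in the abstract, so the sketch below illustrates the same ingredients with a standard minmod-limited upwind scheme for linear advection u_t + a u_x = 0: the limiter bounds the cell-interface reconstruction so the update is TVD and no new oscillations appear near the discontinuities of a square wave.

```python
import numpy as np

def minmod(r):
    """TVD flux limiter: clips the slope ratio to [0, 1]."""
    return np.clip(r, 0.0, 1.0)

def step(u, c):
    """One limited-upwind update for a > 0 on a periodic grid (c = CFL)."""
    d = np.roll(u, -1) - u                              # u_{i+1} - u_i
    dm = u - np.roll(u, 1)                              # u_i - u_{i-1}
    r = dm / np.where(np.abs(d) < 1e-30, 1e-30, d)      # smoothness ratio
    face = u + 0.5 * (1.0 - c) * minmod(r) * d          # limited state at i+1/2
    return u - c * (face - np.roll(face, 1))

N = 100
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)           # square wave
c = 0.5                                                  # CFL number a*dt/dx
for _ in range(40):
    u = step(u, c)
```

Mass is conserved exactly by the flux form, and the TVD limiter keeps the solution inside the initial data range, which is the boundedness property the SDPUS-C1 scheme is designed to guarantee.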


Since the first underground nuclear explosion, carried out in 1958, the analysis of the seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. In case a specific event is considered suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test, since high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two seismic location techniques. The first, the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method have high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation of digital waveforms to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary, simulation-based tests of the reliability of our location techniques, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. The marked geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by our technique. This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters, belonging to the same seismic structure, can be assumed. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced.
The introduction of cross-correlation did not bring evident improvements: the two sets of locations (with and without cross-correlation) are very similar to each other. Evidently the cross-correlation did not substantially improve the precision of the manual picks; the picks reported by the IDC are probably good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of the cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation to improve the time resolution. The resulting algorithm was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results point to the following conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable feature of our procedure is that it does not demand long processing times, so the user can check the results immediately. During a field survey, this makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if the early results suggest it.
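The two signal-processing ingredients described above can be illustrated with synthetic waveforms: the differential arrival time between two similar signals is measured by cross-correlation, and the time resolution is pushed below one sample by a simple parabolic interpolation through the correlation peak (one of several possible interpolation schemes; the thesis does not specify which it uses).

```python
import numpy as np

def xcorr_delay(a, b, dt):
    """Delay of b relative to a, in seconds, with subsample refinement."""
    c = np.correlate(b, a, mode="full")
    k = int(np.argmax(c))
    lag = float(k - (len(a) - 1))
    # Parabolic interpolation around the integer-lag peak.
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        if denom != 0:
            lag += 0.5 * (c[k - 1] - c[k + 1]) / denom
    return lag * dt

dt = 0.01                                   # 100 Hz sampling
t = np.arange(0.0, 2.0, dt)

def wavelet(tt):
    """Synthetic seismic phase: Gaussian-modulated oscillation."""
    return np.exp(-((tt - 0.5) / 0.05) ** 2) * np.sin(40.0 * tt)

a = wavelet(t)
b = wavelet(t - 0.237)                      # same phase, 0.237 s later
delay = xcorr_delay(a, b, dt)               # close to 0.237 s
```

With real data the same estimate is computed between waveform segments of the same seismic phase, which is exactly conclusion a) above.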


We present a new method to construct a trivariate T-spline representation of complex genus-zero solids for the application of isogeometric analysis. The proposed technique only demands a surface triangulation of the solid as input data. The key of this method lies in obtaining a volumetric parameterization between the solid and the parametric domain, the unit cube. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying a mesh untangling and smoothing procedure. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions at points situated both in the interior and on the surface of the solid...


We present a new method to construct a trivariate T-spline representation of complex solids for the application of isogeometric analysis. The proposed technique only demands the surface of the solid as input data. The key of this method lies in obtaining a volumetric parameterization between the solid and a simple parametric domain. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying the meccano method. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions at points situated both in the interior and on the surface of the solid...


We present a new strategy, based on the meccano method [1, 2, 3], to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between 2D objects and the parametric domain, the unit square. The key of the method lies in defining an isomorphic transformation between the parametric and physical T-meshes, finding the optimal positions of the interior nodes by applying a new T-mesh untangling and smoothing procedure. The bivariate T-spline representation is calculated by imposing the interpolation conditions at points situated both in the interior and on the boundary of the geometry…
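In all of these T-spline abstracts the control points are found by imposing interpolation conditions, i.e. by solving a collocation system. A minimal univariate analogue, assuming a cubic B-spline basis (a T-spline leads to the same kind of linear system with a different basis): evaluate the basis functions at the interpolation sites, assemble the collocation matrix, and solve for the control points.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline of order k (degree k-1)."""
    if k == 1:
        return float(knots[i] <= t < knots[i + 1])
    out = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

k = 4                                             # cubic B-splines
n = 6                                             # number of control points
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
sites = np.linspace(0.0, 3.0 - 1e-9, n)           # interpolation sites
q = np.sin(sites)                                 # values to interpolate

# Collocation matrix A[j, i] = N_i(sites[j]); solve A c = q for control points.
A = np.array([[bspline_basis(i, k, t, knots) for i in range(n)] for t in sites])
c = np.linalg.solve(A, q)
```

The solvability of this system depends on where the interpolation sites lie relative to the basis supports (the Schoenberg-Whitney condition), which is why the placement of the interior and boundary points matters in the abstracts above.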


In this talk we introduce a new methodology for wind field simulation or forecasting over complex terrain. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass-consistent wind model [1, 2]. The method has recently been implemented in the freely available Wind3D code [3]. A description of the HARMONIE Non-Hydrostatic Dynamics can be found in [4]. The results of HARMONIE (obtained with a maximum resolution of about 1 km) are refined by the finite element model at a local scale (a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…
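The coupling step can be sketched as follows, with invented grids and values (the abstract does not specify the interpolation used): a wind component on a coarse ~1 km HARMONIE grid is bilinearly interpolated to fine-scale node positions to provide the initial wind field, which the mass-consistent model would then adjust.

```python
import numpy as np

def bilinear(coarse, x, y):
    """Bilinear interpolation of a 2D field at fractional grid coordinates."""
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, coarse.shape[1] - 1)
    y1 = np.minimum(y0 + 1, coarse.shape[0] - 1)
    fx, fy = x - x0, y - y0
    return (coarse[y0, x0] * (1 - fx) * (1 - fy) +
            coarse[y0, x1] * fx * (1 - fy) +
            coarse[y1, x0] * (1 - fx) * fy +
            coarse[y1, x1] * fx * fy)

# Coarse u-component of the wind on a 5 x 5 (~1 km spacing) grid.
u_coarse = np.fromfunction(lambda j, i: 2.0 + 0.5 * i + 0.1 * j, (5, 5))

# Fine-scale node positions expressed in coarse-grid coordinates.
xf = np.array([0.0, 1.5, 2.25, 3.9])
yf = np.array([0.0, 0.5, 2.0, 3.9])
u_fine = bilinear(u_coarse, xf, yf)
```

In the full method this interpolated field is only the first guess; the finite element model then enforces mass consistency and the terrain-following boundary conditions.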


A new methodology for wind field simulation or forecasting over complex terrain is introduced. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass-consistent wind model. The method has recently been implemented in the freely available Wind3D code. A description of the HARMONIE Non-Hydrostatic Dynamics can be found in the literature. HARMONIE provides wind predictions with a maximum resolution of about 1 km, which are refined by the finite element model at a local scale (a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…