937 results for Lagrange interpolation
Abstract:
Information about rainfall erosivity is important for soil and water conservation planning. Thus, the spatial variability of rainfall erosivity in the state of Mato Grosso do Sul was analyzed using ordinary kriging interpolation. Three pluviograph stations were used to obtain regression equations between the erosivity index EI30 and the rainfall coefficient. The equations were then applied to 109 pluviometric stations, yielding EI30 values. These values were analyzed with geostatistical techniques: descriptive statistics, semivariogram fitting, cross-validation, and ordinary kriging to generate the erosivity map. The highest erosivity values were found in the central and northeast regions of the state, while the lowest values were observed in the southern region. In addition, high annual precipitation does not necessarily produce high erosivity.
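As a rough illustration of the ordinary kriging step described above, the Python sketch below predicts EI30 at an unsampled location from neighbouring stations, assuming a spherical semivariogram has already been fitted; the coordinates, values, and semivariogram parameters are placeholders, not data from the study:

    import numpy as np

    def spherical(h, nugget, sill, rng):
        # Spherical semivariogram model gamma(h).
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
        return np.where(h <= 0.0, 0.0, np.where(h >= rng, sill, g))

    def ordinary_kriging(coords, values, target, nugget, sill, rng):
        # Solve the ordinary kriging system (weights plus a Lagrange multiplier
        # enforcing that the weights sum to one) and return the prediction.
        n = len(values)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = spherical(d, nugget, sill, rng)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = spherical(np.linalg.norm(coords - target, axis=1), nugget, sill, rng)
        w = np.linalg.solve(A, b)
        return w[:n] @ values

    # Hypothetical example: three stations (km coordinates) with annual EI30 values.
    coords = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 50.0]])
    ei30 = np.array([8500.0, 9200.0, 7800.0])
    print(ordinary_kriging(coords, ei30, np.array([20.0, 20.0]),
                           nugget=0.0, sill=5.0e5, rng=100.0))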
Abstract:
Yield mapping represents the spatial variability concerning the features of a productive area and allows intervening on the next year production, for example, on a site-specific input application. The trial aimed at verifying the influence of a sampling density and the type of interpolator on yield mapping precision to be produced by a manual sampling of grains. This solution is usually adopted when a combine with yield monitor can not be used. An yield map was developed using data obtained from a combine equipped with yield monitor during corn harvesting. From this map, 84 sample grids were established and through three interpolators: inverse of square distance, inverse of distance and ordinary kriging, 252 yield maps were created. Then they were compared with the original one using the coefficient of relative deviation (CRD) and the kappa index. The loss regarding yield mapping information increased as the sampling density decreased. Besides, it was also dependent on the interpolation method used. A multiple regression model was adjusted to the variable CRD, according to the following variables: spatial variability index and sampling density. This model aimed at aiding the farmer to define the sampling density, thus, allowing to obtain the manual yield mapping, during eventual problems in the yield monitor.
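A minimal sketch of the two inverse-distance interpolators compared in the study (exponent p = 1 gives the inverse of the distance, p = 2 the inverse of the square distance); the sample coordinates and yields would come from the manual grain sampling:

    import numpy as np

    def idw(coords, yields, target, p=2):
        # Inverse-distance-weighted yield estimate at `target`;
        # p = 1 -> inverse of distance, p = 2 -> inverse of the square distance.
        d = np.linalg.norm(coords - target, axis=1)
        if np.any(d == 0.0):
            return yields[np.argmin(d)]      # target coincides with a sample point
        w = 1.0 / d ** p
        return np.sum(w * yields) / np.sum(w)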
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in an outcrop area of the Guarani Aquifer System (GAS) located in southeastern Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative for translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage such as the GAS.
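One way to read the time-series-plus-geostatistics framework is: fit a simple temporal model to each monitoring well, forecast the water level, and then interpolate the per-well forecasts in space (for example with a kriging step like the one sketched earlier). The AR(1) sketch below only illustrates the temporal step under that assumption; it is not the model used in the study:

    import numpy as np

    def fit_ar1(levels):
        # Least-squares fit of h_t = c + phi * h_{t-1} to one well's record.
        y, x = levels[1:], levels[:-1]
        phi, c = np.polyfit(x, y, 1)
        return c, phi

    def forecast_ar1(c, phi, last_level, steps):
        # Deterministic AR(1) forecast; per-well forecasts would then be
        # interpolated spatially to map the predicted water table.
        out, h = [], last_level
        for _ in range(steps):
            h = c + phi * h
            out.append(h)
        return np.array(out)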
Abstract:
Piezoresistive sensors are commonly made of a piezoresistive membrane attached to a flexible substrate, a plate. They have been widely studied and used in several applications. It has been found that the size, position and geometry of the piezoresistive membrane may affect the performance of the sensors. Based on this observation, this work evaluates a topology optimization methodology for the design of piezoresistive plate-based sensors in which both the piezoresistive membrane and the flexible substrate layout can be optimized. Perfect coupling conditions between the substrate and the membrane, based on the 'layerwise' theory for laminated plates, and a material model for the piezoresistive membrane based on the solid isotropic material with penalization (SIMP) model are employed. The design goal is to obtain the material configuration that maximizes the sensor sensitivity to external loading, as well as the stiffness of the sensor to particular loads, which depend on the case (application) studied. The proposed approach is evaluated through two distinct examples: the optimization of an atomic force microscope probe and of a pressure sensor. The results suggest that the performance of the sensors can be improved by using the proposed approach.
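For context, the solid isotropic material with penalization (SIMP) model mentioned above interpolates an element's material property from its density design variable, typically as E(rho_e) = E_min + rho_e^p (E_0 - E_min) with rho_e in [0, 1] and a penalization exponent p of about 3, so that intermediate densities are driven toward void or solid; how this interpolation is applied to the piezoresistive membrane properties is specific to the paper.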
Abstract:
This paper deals with the numerical solution of complex fluid dynamics problems using a new bounded high-resolution upwind scheme (called SDPUS-C1 henceforth) for convection-term discretization. The scheme is based on the TVD and CBC stability criteria and is implemented in the context of finite volume/difference methodologies, either in the CLAWPACK software package for compressible flows or in the Freeflow simulation system for incompressible viscous flows. The performance of the proposed non-oscillatory upwind scheme is demonstrated by solving two-dimensional compressible flow problems, such as shock wave propagation, and two-dimensional/axisymmetric incompressible moving free-surface flows. The numerical results demonstrate that this new cell-interface reconstruction technique works very well in several practical applications.
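The SDPUS-C1 scheme itself is not spelled out in the abstract; as a generic illustration of a TVD-bounded upwind reconstruction of the convection term, a standard minmod-limited MUSCL sketch (not the paper's scheme) looks like this:

    import numpy as np

    def minmod(a, b):
        # Classical minmod limiter; zero at extrema, so the reconstruction stays TVD.
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def left_states(u):
        # Second-order limited left (upwind) states at interfaces i+1/2
        # for a positive advection velocity.
        slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
        return u[1:-1] + 0.5 * slope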
Abstract:
We construct a consistent theory of a quantum massive Weyl field. We start with the formulation of the classical field theory approach for the description of massive Weyl fields. It is demonstrated that the standard Lagrange formalism cannot be applied to the study of massive first-quantized Weyl spinors. Nevertheless, we show that the classical field theory description of massive Weyl fields can be implemented within the Hamilton formalism or using the extended Lagrange formalism. Then we carry out a canonical quantization of the system. The independent ways to quantize a massive Weyl field are discussed. We also compare our results with previous approaches to the treatment of massive Weyl spinors. Finally, a new interpretation of the Majorana condition is proposed.
Abstract:
A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≤ 15 pc), with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AU, would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne, extremely-high-precision astrometric measurements at the 0.05 µas (1σ) accuracy level, sufficient to detect dynamical effects due to orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all the components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabola single-mirror telescope (D = 1 m), a detector with a large field of view located 40 m away from the telescope and made of 8 small movable CCDs located around a fixed central CCD, and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be located 40 m apart, leading to the choice of a formation-flying option as the reference mission and of a deployable-boom option as an alternative. The proposed mission architecture relies on the use of two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations. The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improve the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite astrometric precision, NEAT holds the promise to provide the first thorough census of Earth-mass planets around stars in the immediate vicinity of our Sun.
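As a rough order-of-magnitude check (not a figure taken from the mission study): the astrometric signature of a planet is alpha ≈ (M_p / M_*) (a_p / d), which for an Earth-mass planet (M_p ≈ 3 × 10^-6 M_sun) at 1 AU around a solar-mass star at 10 pc gives alpha ≈ 3 × 10^-6 × 1 AU / 10 pc ≈ 0.3 µas, several times the quoted 0.05 µas single-measurement accuracy.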
Abstract:
A decision-analytic model is presented and analysed to assess the effectiveness and cost-effectiveness of routine vaccination against varicella and herpes zoster (shingles). These diseases share a common aetiological agent, the varicella-zoster virus (VZV). Zoster occurs more readily in older people with declining cell-mediated immunity. The general concern is that universal varicella vaccination might lead to more cases of zoster: with more vaccinated children, exposure of the general population to varicella infectives becomes smaller, so a larger proportion of older people will have weaker immunity to VZV, leading to more cases of zoster reactivation. Our compartment model shows that only two equilibria are possible, one without varicella and one where varicella and zoster both thrive. Threshold quantities to distinguish these cases are derived. Cost estimates for a possible herd vaccination programme are discussed, indicating a possible trade-off.
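The threshold quantities mentioned above play the role that the basic reproduction number plays in simpler models. As an illustrative analogue (not the paper's model): in a standard SIR-type model with vaccination coverage p, the effective reproduction number is R_v = (1 - p) R_0, and the infection-free equilibrium is the only stable one when R_v < 1, while an endemic equilibrium exists and attracts when R_v > 1.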
The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems
Abstract:
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on the updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; therefore, the effectiveness of this approach is associated with the boundedness of the penalty parameters. In this paper, it is proved that, under assumptions more natural than those employed so far, penalty parameters are bounded. To prove the new boundedness result, the original algorithm is slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
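A minimal sketch of the kind of outer loop described above, for equality constraints h(x) = 0 plus simple bounds, with a first-order multiplier update and a penalty increase when the constraint violation does not decrease enough; the safeguards, stopping rules, and subproblem solver of the actual algorithm differ:

    import numpy as np
    from scipy.optimize import minimize

    def augmented_lagrangian(f, h, x0, lam0, bounds=None,
                             rho=10.0, tau=0.5, gamma=10.0, outer=30, tol=1e-8):
        # Outer loop: approximately minimize the augmented Lagrangian subject to
        # the simple (bound) constraints, then update multipliers and penalty.
        x, lam = np.asarray(x0, float), np.asarray(lam0, float)
        prev_viol = np.inf
        for _ in range(outer):
            L = lambda z: f(z) + lam @ h(z) + 0.5 * rho * np.sum(h(z) ** 2)
            x = minimize(L, x, bounds=bounds).x
            viol = np.linalg.norm(h(x), np.inf)
            if viol <= tol:
                break
            lam = lam + rho * h(x)            # first-order multiplier update
            if viol > tau * prev_viol:        # insufficient progress: raise penalty
                rho *= gamma
            prev_viol = viol
        return x, lam, rho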
Abstract:
This study presents an economic optimization method to design telescoping (multi-diameter) irrigation laterals with regularly spaced outlets. The proposed analytical hydraulic solution was validated by means of a pipeline composed of three different diameters. The minimum acquisition cost of the telescoping pipeline was determined by the ideal arrangement of lengths and corresponding diameters for each of the three segments. The mathematical optimization method, based on Lagrange multipliers, provides a strategy for finding the maximum or minimum of a function subject to constraints. In this case, the objective function describes the acquisition cost of the pipes, and the constraints are determined from hydraulic parameters such as the length of the irrigation lateral and the permitted total head loss. The developed analytical solution provides the ideal combination of each pipe segment length and its diameter, resulting in a reduced acquisition cost.
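In general terms (the cost coefficients and the head-loss model are left generic here, since their specific forms belong to the paper), the problem reads: minimize the acquisition cost C(L_1, L_2, L_3) = c_1 L_1 + c_2 L_2 + c_3 L_3 subject to L_1 + L_2 + L_3 = L (total lateral length) and h_f(L_1, L_2, L_3) = h_f,max (permitted total head loss). The Lagrangian is Lambda = C + lambda_1 (L_1 + L_2 + L_3 - L) + lambda_2 (h_f - h_f,max); setting dLambda/dL_i = 0 for i = 1, 2, 3 and adding the two constraints gives five equations in the five unknowns L_1, L_2, L_3, lambda_1, lambda_2.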
Abstract:
We propose a new Skyrme-like model with fields taking values on the sphere S^3 or, equivalently, on the group SU(2). The action of the model contains a quadratic kinetic term plus a quartic term which is the same as that of the Skyrme-Faddeev model. The novelty of the model is that it possesses a first-order Bogomolny-type equation whose solutions automatically satisfy the second-order Euler-Lagrange equations. It also possesses a lower bound on the static energy which is saturated by the Bogomolny solutions. This Bogomolny equation is equivalent to the so-called force-free equation used in plasma and solar physics, which possesses large classes of solutions. An old result due to Chandrasekhar prevents the existence of finite-energy solutions of the force-free equation on the entire three-dimensional space R^3. We construct new exact finite-energy solutions of the Bogomolny equation for the case where the space is the three-sphere S^3, using toroidal-like coordinates.
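For reference, the force-free condition referred to here is the requirement that the Lorentz force vanish, J × B = 0, i.e. ∇ × B = αB, with α constant in the linear case; the abstract's claim is that the model's Bogomolny equation is equivalent to an equation of this form.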
Abstract:
In this paper we study a variational problem derived from a computer vision application: video camera calibration with a smoothing constraint. By video camera calibration we mean estimating the location, orientation and lens zoom setting of the camera for each video frame, taking into account visible image features. To simplify the problem we assume that the camera is mounted on a tripod; in that case, for each frame captured at time t, the calibration is given by 3 parameters: (1) P(t) (pan), the rotation about the tripod's vertical axis; (2) T(t) (tilt), the rotation about the tripod's horizontal axis; and (3) Z(t), the camera lens zoom setting. The calibration function t -> u(t) = (P(t), T(t), Z(t)) is obtained as a minimizer of an energy functional I[u]. In this paper we study the existence of minima of this energy functional as well as the solutions of the associated Euler-Lagrange equations.
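Assuming the energy depends only on u(t) = (P(t), T(t), Z(t)) and its first derivative, I[u] = ∫ F(t, u, u') dt, the associated Euler-Lagrange equations take the standard form ∂F/∂u_k - d/dt (∂F/∂u'_k) = 0 for each component u_k; the specific form of F (data-fidelity term plus smoothing term) is defined in the paper.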
Abstract:
We present an energy-based approach to estimate a dense disparity map from a set of two weakly calibrated stereoscopic images while preserving its discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to estimate it from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known, and we include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. The resulting parabolic problem has a unique solution. In order to reduce the risk of being trapped in irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. Experimental results on both synthetic and real images are presented to illustrate the capabilities of this PDE- and scale-space-based method.
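A much-simplified sketch of the gradient-descent iteration described above, using a plain Laplacian in place of the Nagel-Enkelmann anisotropic operator and an abstract data-term derivative (both assumptions, not the paper's formulation):

    import numpy as np

    def laplacian(d):
        # 5-point Laplacian with replicated borders (isotropic smoothness term).
        p = np.pad(d, 1, mode="edge")
        return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * d

    def gradient_descent(d0, data_grad, alpha=10.0, dt=0.1, iters=500):
        # Explicit artificial-time evolution d_tau = -dE/dd for
        # E(d) = data term + alpha * |grad d|^2, iterated toward steady state.
        d = d0.copy()
        for _ in range(iters):
            d -= dt * (data_grad(d) - 2.0 * alpha * laplacian(d))
        return d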
Abstract:
In recent years we have developed several methods for 3D reconstruction. We began with the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals which produce dense disparity maps while preserving discontinuities at image boundaries. We then turned to the problem of reconstructing a 3D scene from multiple views (more than 2). The method for multiple-view reconstruction relies on the method for stereoscopic reconstruction: for every pair of consecutive images we estimate a disparity map and then apply a robust method that searches for good correspondences through the sequence of images. Recently we have proposed several methods for 3D surface regularization, a post-processing step necessary for smoothing the final surface, which may be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We have tackled all these problems with an energy minimization approach. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method.