18 results for Lagrange interpolation
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
We consider a recently proposed finite-element space that consists of piecewise affine functions with discontinuities across a smooth given interface Γ (a curve in two dimensions, a surface in three dimensions). Contrary to existing extended finite element methodologies, the space is a variant of the standard conforming P1 space that can be implemented element by element. Further, it neither introduces new unknowns nor deteriorates the sparsity structure. It is proved that, for u arbitrary in the broken space H²(Ω₁ ∪ Ω₂), the interpolant u_I defined by this new space satisfies an interpolation error bound in the L²(Ω)-norm in terms of the mesh size h, where Ω is the domain, Ω₁ and Ω₂ are the subdomains into which Γ divides Ω, and standard notation has been adopted for the function spaces. This result proves the good approximation properties of the finite-element space as compared to any space consisting of functions that are continuous across Γ, which would yield an error in the L²(Ω)-norm of order √h. These properties make this space especially attractive for approximating the pressure in problems with surface tension or other immersed interfaces that lead to discontinuities in the pressure field. Furthermore, the result still holds for interfaces that end within the domain, as happens for example in cracked domains.
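A minimal 1D sketch of why allowing the interpolant to jump across the interface matters (the function, the interface position gamma, and the helper names are illustrative assumptions; this is not the element-by-element space of the abstract): a continuous piecewise-linear interpolant of a discontinuous function loses accuracy in the L²-norm (observed order about √h), while an interpolant that carries the jump recovers second order.

```python
# Compare L2 interpolation errors for a function with a jump at gamma:
# (a) continuous piecewise-linear nodal interpolation, (b) piecewise-linear
# interpolation allowed to jump at gamma (one-sided traces on each subdomain).
import numpy as np

gamma = 0.5 + 1e-3                      # interface location, off the grid nodes

f_left  = lambda x: np.sin(x)           # smooth branch on [0, gamma)
f_right = lambda x: np.sin(x) + 1.0     # smooth branch on [gamma, 1], unit jump

def f(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < gamma, f_left(x), f_right(x))

def l2_error(interp):
    xq = np.linspace(0.0, 1.0, 400001)  # fine quadrature grid
    return np.sqrt(np.trapz((f(xq) - interp(xq))**2, xq))

def continuous_interp(h):
    """Standard continuous P1 nodal interpolant (ignores the jump)."""
    nodes = np.linspace(0.0, 1.0, int(round(1.0 / h)) + 1)
    vals = f(nodes)
    return lambda x: np.interp(x, nodes, vals)

def side_aware_interp(h):
    """Piecewise-linear interpolant allowed to jump at gamma: each side is
    interpolated with its own one-sided trace."""
    nodes = np.linspace(0.0, 1.0, int(round(1.0 / h)) + 1)
    ln = np.append(nodes[nodes < gamma], gamma)     # left nodes + interface
    rn = np.insert(nodes[nodes > gamma], 0, gamma)  # interface + right nodes
    lv, rv = f_left(ln), f_right(rn)
    def interp(x):
        x = np.asarray(x, dtype=float)
        return np.where(x < gamma, np.interp(x, ln, lv), np.interp(x, rn, rv))
    return interp

for h in [1/16, 1/32, 1/64, 1/128]:
    e_c = l2_error(continuous_interp(h))
    e_d = l2_error(side_aware_interp(h))
    print(f"h = {h:.5f}   continuous: {e_c:.3e}   side-aware: {e_d:.3e}")
# Expected trend: the continuous errors decay roughly like sqrt(h),
# the side-aware errors roughly like h**2.
```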
Abstract:
Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems in both two- and three-dimensional space. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the Petrov–Galerkin approach, and also for the Galerkin formulation, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared with velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
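For intuition on the Hermite-versus-Lagrange trade-off mentioned above, here is a small 1D sketch (an illustration only, not the Zienkiewicz triangle or tetrahedron of the paper; the test function and node counts are arbitrary): a cubic Hermite interpolant built from nodal values and derivatives is compared with a piecewise-linear Lagrange interpolant on the same nodes.

```python
# 1D illustration: cubic Hermite interpolation (values + derivatives at the nodes)
# versus piecewise-linear Lagrange interpolation on the same nodes.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

f  = np.sin           # test function
df = np.cos           # its derivative, used as the Hermite nodal data

xq = np.linspace(0.0, 2.0 * np.pi, 20001)   # evaluation grid

for n in (5, 9, 17, 33):                    # number of nodes
    x = np.linspace(0.0, 2.0 * np.pi, n)
    hermite = CubicHermiteSpline(x, f(x), df(x))
    err_h = np.max(np.abs(f(xq) - hermite(xq)))
    err_l = np.max(np.abs(f(xq) - np.interp(xq, x, f(x))))
    print(f"{n:3d} nodes   Hermite (cubic): {err_h:.2e}   Lagrange (linear): {err_l:.2e}")
# The Hermite errors decay roughly like h**4 and the linear Lagrange errors like
# h**2, at the price of carrying derivative degrees of freedom at the nodes.
```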
Abstract:
The present paper aims at contributing to a discussion, opened by several authors, on the proper equation of motion that governs the vertical collapse of buildings. The most striking and tragic example is that of the World Trade Center Twin Towers, in New York City, about 10 years ago. This is a very complex problem and, besides dynamics, the analysis involves several areas of knowledge in mechanics, such as structural engineering, materials science, and thermodynamics, among others. Therefore, the goal of this work is far from claiming to deal with the problem in its completeness, leaving aside discussions about the modeling of the resistive load to collapse, for example. However, the following analysis, restricted to the study of motion, shows that the problem in question holds great similarity to the classic falling-chain problem, addressed in a number of different versions since the pioneering one by von Buquoy and the one by Cayley. Following previous works, a simple single-degree-of-freedom model was readdressed and conceptually discussed. The form of Lagrange's equation that leads to a proper equation of motion for the collapsing building is a general, extended dissipative form, appropriate for systems whose mass varies explicitly with position. The additional dissipative generalized force term present in the extended form of the Lagrange equation was shown to be derivable from a Rayleigh-like energy function. DOI: 10.1061/(ASCE)EM.1943-7889.0000453. (C) 2012 American Society of Civil Engineers.
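As a worked illustration of the extended, dissipative form referred to above (a sketch under the assumption that the gained mass is at rest before impact, in the spirit of the von Buquoy falling chain; not necessarily the exact form used in the paper), consider a single coordinate x with position-dependent mass m(x):

```latex
% The naive Lagrange equation for L = (1/2) m(x) \dot{x}^2 - V(x) misses part of
% the momentum-transfer effect; an additional generalized force, derivable from a
% Rayleigh-like function R, restores the correct balance when the gained mass is
% initially at rest:
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{x}}\right)
  - \frac{\partial L}{\partial x}
  = -\,\frac{1}{2}\,\frac{\partial m}{\partial x}\,\dot{x}^{2}
  = -\,\frac{\partial R}{\partial \dot{x}},
\qquad
R = \frac{1}{6}\,\frac{\partial m}{\partial x}\,\dot{x}^{3}.
% For a chain picked up from rest, with m(x) = \rho x and V(x) = -\rho g x^{2}/2,
% this recovers the classical equation of motion  x\ddot{x} + \dot{x}^{2} = g x.
```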
Abstract:
Categorical data cannot be interpolated directly because they are outcomes of discrete random variables. Thus, the types of categorical variables are transformed into indicator functions that can be handled by interpolation methods. Interpolated indicator values are then back-transformed to the original types of categorical variables. However, aspects such as the variability and uncertainty of interpolated values of categorical data have never been considered. In this paper we show that the interpolation variance can be used to map an uncertainty zone around boundaries between types of categorical variables. Moreover, it is shown that the interpolation variance is a component of the total variance of the categorical variables, as measured by the coefficient of unalikeability. (C) 2011 Elsevier Ltd. All rights reserved.
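A compact sketch of the indicator workflow described above (hypothetical data and an inverse-distance stand-in for the interpolator; the paper's geostatistical estimator and its exact interpolation-variance definition may differ): categories are one-hot encoded, the indicators are interpolated, the result is back-transformed by taking the most probable category, and a spread measure of the interpolated indicators is reported alongside the sample coefficient of unalikeability.

```python
# Indicator approach for categorical data: encode categories as indicators,
# interpolate the indicators (IDW here, purely for illustration), back-transform
# by argmax, and report a variance-like spread of the interpolated indicators.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(40, 2))                   # sample locations
cats = np.array(["sand", "silt", "clay"])
z = cats[(xy[:, 0] * 3).astype(int).clip(0, 2)]            # synthetic categories

indicators = (z[:, None] == cats[None, :]).astype(float)   # one-hot encoding

def idw(target, points, values, power=2.0, eps=1e-12):
    """Inverse-distance weighting of each indicator column at one target point."""
    d = np.linalg.norm(points - target, axis=1) + eps
    w = 1.0 / d**power
    w /= w.sum()
    return w @ values                                       # estimated proportions

target = np.array([0.52, 0.50])
p = idw(target, xy, indicators)
p = np.clip(p, 0.0, None); p /= p.sum()

estimate = cats[np.argmax(p)]            # back-transformed category
spread = np.sum(p * (1.0 - p))           # simple spread of the estimated proportions
                                         # (stand-in, not the paper's exact definition)

# Sample coefficient of unalikeability: the chance that two random observations
# belong to different categories, 1 - sum(p_k^2) over observed proportions.
props = indicators.mean(axis=0)
unalikeability = 1.0 - np.sum(props**2)

print(estimate, round(spread, 3), round(unalikeability, 3))
```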
Abstract:
The objective of this work was to evaluate extreme water table depths in a watershed, using methods for geographical spatial data analysis. The groundwater spatio-temporal dynamics were evaluated in an outcrop of the Guarani Aquifer System. Water table depths were estimated from the monitoring of water levels in 23 piezometers and from time-series modeling, with records available from April 2004 to April 2011. For the generation of spatial scenarios, geostatistical techniques were used that incorporate into the prediction ancillary information related to the geomorphological patterns of the watershed, obtained from a digital elevation model. This procedure improved the estimates, due to the high correlation between water levels and elevation, and added physical meaning to the predictions. The scenarios showed differences regarding the extreme levels - too deep or too shallow ones - and can support water planning, efficient water use, and sustainable water management in the watershed.
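A minimal sketch of how elevation can be folded into the spatial prediction of water levels (illustrative regression-plus-residual-interpolation with synthetic arrays; the study's geostatistical estimator that uses ancillary DEM information may be a different formulation, such as kriging with external drift):

```python
# Sketch: predict the water-table level at a new location from (1) a linear trend
# on DEM elevation and (2) inverse-distance interpolation of the residuals at the
# monitored piezometers. All arrays below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 5000.0, size=(23, 2))                   # piezometer coordinates (m)
elev = 550.0 + 0.01 * xy[:, 0] + rng.normal(0.0, 2.0, 23)     # DEM elevation (m)
level = 0.8 * elev - 400.0 + rng.normal(0.0, 0.5, 23)         # observed water level (m)

# 1) Trend on elevation (ordinary least squares).
A = np.column_stack([np.ones_like(elev), elev])
beta, *_ = np.linalg.lstsq(A, level, rcond=None)
residuals = level - A @ beta

# 2) Spatial interpolation of the residuals (IDW stand-in for kriging).
def idw(target, points, values, power=2.0, eps=1e-9):
    d = np.linalg.norm(points - target, axis=1) + eps
    w = 1.0 / d**power
    return (w @ values) / w.sum()

target_xy, target_elev = np.array([2500.0, 2500.0]), 570.0    # taken from the DEM
prediction = beta[0] + beta[1] * target_elev + idw(target_xy, xy, residuals)
print(round(prediction, 2))
```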
Abstract:
The use of antiretroviral therapy has proven to be remarkably effective in controlling the progression of human immunodeficiency virus (HIV) infection and prolonging patients' survival. Therapy may fail, however, and these benefits can therefore be compromised by the emergence of HIV strains that are resistant to the therapy. In view of these facts, the question of why drug-resistant strains emerge during therapy has become a worldwide problem of great interest. This paper presents a deterministic HIV-1 model to examine the mechanisms underlying the emergence of drug resistance during therapy. The aim of this study is to determine whether, and how fast, antiretroviral therapy may drive the emergence of drug resistance, by calculating the basic reproductive numbers. The existence, feasibility and local stability of the equilibria are also analyzed. By performing numerical simulations we show that a Hopf bifurcation may occur. The model suggests that individuals with drug-resistant infection may play an important role in the epidemic of HIV. (C) 2011 Elsevier Ireland Ltd. All rights reserved.
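To make the role of the basic reproductive numbers concrete, here is a deliberately simplified two-strain sketch (a generic target-cell model with hypothetical parameter values; it is not the model analyzed in the paper): therapy reduces the sensitive strain's infectivity within the host, a resistant strain can arise by mutation, and each strain has its own basic reproductive number.

```python
# Simplified two-strain within-host sketch (NOT the paper's model): T = target
# cells, Is/Ir = cells infected with drug-sensitive/resistant virus. Therapy with
# efficacy eps reduces the sensitive strain's infectivity; a fraction mu of new
# sensitive infections mutates to the resistant strain. All values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

lam, d = 1e4, 0.01                    # target-cell production and death rates
beta_s, beta_r = 2.5e-6, 2.0e-6       # infection rates (resistance carries a fitness cost)
delta, mu, eps = 0.5, 3e-5, 0.7       # infected-cell death, mutation fraction, drug efficacy

def rhs(t, y):
    T, Is, Ir = y
    new_s = (1.0 - eps) * beta_s * T * Is
    new_r = beta_r * T * Ir
    return [lam - d * T - new_s - new_r,
            (1.0 - mu) * new_s - delta * Is,
            new_r + mu * new_s - delta * Ir]

# Basic reproductive numbers evaluated at the infection-free state T0 = lam / d.
T0 = lam / d
R0_s = (1.0 - eps) * beta_s * T0 / delta
R0_r = beta_r * T0 / delta
print(f"R0 (sensitive, on therapy) = {R0_s:.2f},  R0 (resistant) = {R0_r:.2f}")

sol = solve_ivp(rhs, (0.0, 2000.0), [T0, 1.0, 0.0], max_step=1.0)
print("final infected-cell levels (Is, Ir):", sol.y[1, -1], sol.y[2, -1])
# With these placeholder values R0_r > R0_s, so the resistant strain eventually
# dominates, illustrating how the reproductive numbers govern the outcome.
```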
Abstract:
The purpose of this study is to present a position-based tetrahedral finite element method of any order to accurately predict the mechanical behavior of solids constituted by functionally graded elastic materials and subjected to large displacements. The application of high-order elements makes it possible to overcome the volumetric and shear locking that appears in the usual homogeneous isotropic situations, as well as in non-homogeneous cases undergoing small or large displacements. The use of parallel processing to improve computational efficiency allows employing high-order elements instead of low-order ones with reduced integration techniques or strain enhancements. The Green-Lagrange strain is adopted and the constitutive relation is the functionally graded Saint Venant-Kirchhoff law. The equilibrium is achieved by the principle of minimum total potential energy. Examples of large displacement problems are presented and the results confirm the locking-free behavior of high-order elements for non-homogeneous materials. (C) 2011 Elsevier B.V. All rights reserved.
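For reference, the kinematic and constitutive choices named above can be written compactly as follows (standard definitions of the Green-Lagrange strain and of a Saint Venant-Kirchhoff law with position-dependent Lamé parameters; the grading law λ(x), μ(x) itself is problem-specific and not reproduced here):

```latex
% Green-Lagrange strain from the deformation gradient F, and a functionally
% graded Saint Venant-Kirchhoff energy with Lame parameters varying in space.
\mathbf{E} = \tfrac{1}{2}\left(\mathbf{F}^{\mathsf{T}}\mathbf{F} - \mathbf{I}\right),
\qquad
\psi(\mathbf{E},\mathbf{x}) = \frac{\lambda(\mathbf{x})}{2}\,(\operatorname{tr}\mathbf{E})^{2}
  + \mu(\mathbf{x})\,\mathbf{E}:\mathbf{E},
\qquad
\mathbf{S} = \frac{\partial \psi}{\partial \mathbf{E}}
  = \lambda(\mathbf{x})\,(\operatorname{tr}\mathbf{E})\,\mathbf{I} + 2\mu(\mathbf{x})\,\mathbf{E}.
```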
Abstract:
Information about rainfall erosivity is important for soil and water conservation planning. Thus, the spatial variability of rainfall erosivity in the state of Mato Grosso do Sul was analyzed using ordinary kriging interpolation. For this, three pluviograph stations were used to obtain regression equations between the erosivity index EI30 and the rainfall coefficient. The equations obtained were applied to 109 pluviometric stations, resulting in EI30 values. These values were analyzed with geostatistical techniques, comprising descriptive statistics, semivariogram fitting, cross-validation and the application of ordinary kriging to generate the erosivity map. The highest erosivity values were found in the central and northeast regions of the state, while the lowest values were observed in the southern region. In addition, high annual precipitation values do not necessarily produce higher erosivity values.
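A compact sketch of the ordinary kriging step described above (synthetic station data and an assumed spherical semivariogram; the parameter values are placeholders, not those fitted in the study). The kriging weights are obtained from the semivariogram by solving a small linear system that includes a Lagrange multiplier enforcing that the weights sum to one:

```python
# Ordinary kriging of point values (e.g., EI30 erosivity) with a spherical
# semivariogram. Station data and variogram parameters are synthetic placeholders.
import numpy as np

def spherical(h, nugget=10.0, sill=100.0, rng_a=150.0):
    """Spherical semivariogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h < rng_a, g, nugget + sill) * (h > 0)   # gamma(0) = 0

def ordinary_kriging(x0, xy, z, gamma=spherical):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)   # pairwise distances
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[:n, n] = A[n, :n] = 1.0                # unbiasedness: weights sum to one
    b = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)              # weights w and Lagrange multiplier m
    w, m = sol[:n], sol[n]
    estimate = w @ z
    variance = w @ b[:n] + m                 # ordinary kriging variance
    return estimate, variance

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 500.0, size=(109, 2))                    # station coordinates (km)
z = 7000.0 + 8.0 * xy[:, 1] + rng.normal(0.0, 300.0, 109)      # synthetic EI30 values

est, var = ordinary_kriging(np.array([250.0, 250.0]), xy, z)
print(f"kriged EI30 estimate: {est:.0f}   kriging variance: {var:.0f}")
```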
Abstract:
Yield mapping represents the spatial variability of the features of a productive area and allows intervening in the next year's production, for example, with site-specific input application. The trial aimed at verifying the influence of sampling density and of the type of interpolator on the precision of yield maps produced by manual sampling of grains, a solution usually adopted when a combine with a yield monitor cannot be used. A yield map was developed using data obtained from a combine equipped with a yield monitor during corn harvesting. From this map, 84 sampling grids were established and, through three interpolators - inverse square distance, inverse distance and ordinary kriging - 252 yield maps were created. They were then compared with the original map using the coefficient of relative deviation (CRD) and the kappa index. The loss of yield-mapping information increased as the sampling density decreased and also depended on the interpolation method used. A multiple regression model was fitted to the variable CRD as a function of the spatial variability index and the sampling density. This model is intended to help the farmer define the sampling density that still allows manual yield mapping to be obtained whenever problems occur with the yield monitor.
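A short sketch of the two map-comparison measures mentioned above, applied to a reference map and an interpolated one (synthetic arrays; the CRD is computed here as a mean absolute relative deviation and the kappa index as Cohen's kappa on yield classes, which are reasonable but assumed definitions):

```python
# Compare an interpolated yield map against the reference map using a coefficient
# of relative deviation (mean absolute relative deviation, in %) and Cohen's
# kappa computed on yield classes. The maps below are synthetic.
import numpy as np

rng = np.random.default_rng(3)
reference = 8.0 + rng.normal(0.0, 1.0, size=(50, 50))          # yield, t/ha
interpolated = reference + rng.normal(0.0, 0.4, size=(50, 50))

# Coefficient of relative deviation (assumed definition).
crd = 100.0 * np.mean(np.abs(interpolated - reference) / reference)

# Kappa index: classify both maps into the same yield classes, then compare.
bins = np.quantile(reference, [0.25, 0.5, 0.75])
c_ref = np.digitize(reference, bins).ravel()
c_int = np.digitize(interpolated, bins).ravel()

k = len(bins) + 1
confusion = np.zeros((k, k))
np.add.at(confusion, (c_ref, c_int), 1.0)
confusion /= confusion.sum()
po = np.trace(confusion)                              # observed agreement
pe = confusion.sum(axis=1) @ confusion.sum(axis=0)    # chance agreement
kappa = (po - pe) / (1.0 - pe)

print(f"CRD = {crd:.2f} %   kappa = {kappa:.3f}")
```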
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in an outcrop area of the Guarani Aquifer System (GAS) located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, like the GAS.
Abstract:
Piezoresistive sensors are commonly made of a piezoresistive membrane attached to a flexible substrate, a plate. They have been widely studied and used in several applications. It has been found that the size, position and geometry of the piezoresistive membrane may affect the performance of the sensors. Based on this observation, in this work a topology optimization methodology for the design of piezoresistive plate-based sensors is evaluated, in which both the piezoresistive membrane and the flexible substrate layout can be optimized. Perfect coupling conditions between the substrate and the membrane, based on the `layerwise' theory for laminated plates, and a material model for the piezoresistive membrane, based on the solid isotropic material with penalization (SIMP) model, are employed. The design goal is to obtain the configuration of material that maximizes the sensor sensitivity to external loading, as well as the stiffness of the sensor to particular loads, which depend on the case (application) studied. The proposed approach is evaluated by studying two distinct examples: the optimization of an atomic force microscope probe and of a pressure sensor. The results suggest that the performance of the sensors can be improved by using the proposed approach.
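The solid isotropic material with penalization (SIMP) model mentioned above interpolates a material property between void and solid through a density design variable raised to a penalization power; a minimal sketch follows (generic form with placeholder values, not the paper's specific parameterization):

```python
# SIMP interpolation: effective property as a function of the design density rho
# in [0, 1], with penalization power p pushing intermediate densities toward 0/1.
def simp(rho, prop_solid, prop_min=1e-9, p=3.0):
    """Effective property and its sensitivity d(property)/d(rho)."""
    value = prop_min + rho**p * (prop_solid - prop_min)
    sensitivity = p * rho**(p - 1.0) * (prop_solid - prop_min)
    return value, sensitivity

# Example: Young's modulus of the membrane material at an intermediate density.
E, dE = simp(0.5, prop_solid=210e9)
print(f"E(0.5) = {E:.3e} Pa   dE/drho = {dE:.3e}")
```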
Abstract:
This paper deals with the numerical solution of complex fluid dynamics problems using a new bounded high-resolution upwind scheme (henceforth called SDPUS-C1) for the discretization of convection terms. The scheme is based on the TVD and CBC stability criteria and is implemented in the context of finite volume/difference methodologies, either in the CLAWPACK software package for compressible flows or in the Freeflow simulation system for incompressible viscous flows. The performance of the proposed non-oscillatory upwind scheme is demonstrated by solving two-dimensional compressible flow problems, such as shock wave propagation, and two-dimensional/axisymmetric incompressible moving free surface flows. The numerical results demonstrate that this new cell-interface reconstruction technique works very well in several practical applications. (C) 2012 Elsevier Inc. All rights reserved.
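For context on what a bounded (TVD) upwind convection scheme does, here is a generic flux-limiter sketch for 1D linear advection (a standard van Leer limited upwind scheme, shown purely as an illustration; it is not the SDPUS-C1 scheme of the paper):

```python
# Generic TVD flux-limiter scheme for u_t + a u_x = 0 (a > 0, periodic domain):
# first-order upwind flux plus a limited anti-diffusive correction (van Leer
# limiter). Illustration only; this is not the SDPUS-C1 scheme.
import numpy as np

def van_leer(r):
    """van Leer flux limiter (satisfies the TVD region)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_tvd(u, a, dx, dt, n_steps):
    """Advance u_t + a u_x = 0 (a > 0, periodic) with a limited upwind scheme."""
    nu = a * dt / dx                                 # Courant number, keep <= 1
    for _ in range(n_steps):
        du = np.roll(u, -1) - u                      # u_{i+1} - u_i
        du_up = u - np.roll(u, 1)                    # u_i - u_{i-1}
        r = du_up / np.where(np.abs(du) > 1e-14, du, 1e-14)
        flux = a * u + 0.5 * a * (1.0 - nu) * van_leer(r) * du   # F at i+1/2
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
    return u

n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)       # square pulse
dx = 1.0 / n
dt = 0.5 * dx / a
u = advect_tvd(u0.copy(), a, dx, dt, n_steps=int(round(0.5 / (a * dt))))
print("min/max after advection (bounded, no over/undershoots):",
      round(u.min(), 4), round(u.max(), 4))
```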
Abstract:
We construct a consistent theory of a quantum massive Weyl field. We start with the formulation of the classical field theory approach for the description of massive Weyl fields. It is demonstrated that the standard Lagrange formalism cannot be applied to the study of massive first-quantized Weyl spinors. Nevertheless, we show that the classical field theory description of massive Weyl fields can be implemented in the framework of the Hamilton formalism or using the extended Lagrange formalism. We then carry out a canonical quantization of the system. The independent ways of quantizing a massive Weyl field are discussed. We also compare our results with previous approaches to the treatment of massive Weyl spinors. Finally, a new interpretation of the Majorana condition is proposed.
Abstract:
A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≤ 15 pc) with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AUs would be a major milestone in extrasolar planets astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne extremely-high-precision astrometric measurements at the 0.05 μas (1σ) accuracy level, sufficient to detect dynamical effects due to orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all the components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabola single-mirror telescope (D = 1 m), a detector with a large field of view located 40 m away from the telescope and made of 8 small movable CCDs located around a fixed central CCD, and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be located 40 m apart, leading to the choice of a formation flying option as the reference mission, and of a deployable boom option as an alternative choice. The proposed mission architecture relies on the use of two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations. The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improve the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite, surgical astrometric precision, NEAT holds the promise to provide the first thorough census for Earth-mass planets around stars in the immediate vicinity of our Sun.
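To relate the quoted 0.05 μas precision to the Earth-mass goal, the astrometric signature of a planet is approximately (M_planet/M_star)(a/d), which for an Earth analog around a Sun-like star at 10 pc is about 0.3 μas; a one-line check with the standard formula and illustrative numbers:

```python
# Astrometric signature alpha ~ (M_planet / M_star) * (a / d), in arcseconds when
# a is in AU and d in parsecs. Values below are illustrative (Earth-Sun at 10 pc).
m_planet_over_m_star = 3.003e-6      # Earth mass / Solar mass
a_au, d_pc = 1.0, 10.0
alpha_uas = m_planet_over_m_star * (a_au / d_pc) * 1e6   # micro-arcseconds
print(f"astrometric signature: {alpha_uas:.2f} uas")     # ~0.30 uas, vs 0.05 uas precision
```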
Abstract:
A decision analytical model is presented and analysed to assess the effectiveness and cost-effectiveness of routine vaccination against varicella and herpes zoster, or shingles. These diseases have as a common aetiological agent the varicella-zoster virus (VZV). Zoster is more likely to occur in aged people with declining cell-mediated immunity. The general concern is that universal varicella vaccination might lead to more cases of zoster: with more vaccinated children, the exposure of the general population to varicella infectives becomes smaller and thus a larger proportion of older people will have weaker immunity to VZV, leading to more cases of zoster reactivation. Our compartment model shows that only two possible equilibria exist, one without varicella and the other one where varicella and zoster both thrive. Threshold quantities to distinguish these cases are derived. Cost estimates for a possible herd vaccination program are discussed, indicating a possible trade-off choice.