158 results for Hierarchical dynamic models
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Computer viruses are an important risk to computational systems, endangering both corporations of all sizes and personal computers used for domestic applications. Here, classical epidemiological models for disease propagation are adapted to computer networks and, by using simple systems identification techniques, a model called SAIC (Susceptible, Antidotal, Infectious, Contaminated) is developed. Real data on computer viruses are used to validate the model. (c) 2008 Elsevier Ltd. All rights reserved.
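The compartmental structure named in the abstract can be sketched as a small ODE system. The transition rates and parameter values below are illustrative assumptions for the sketch, not the published SAIC equations.

```python
# Illustrative four-compartment model in the spirit of SAIC
# (Susceptible, Antidotal, Infectious, Contaminated). The rate
# structure and parameters are assumptions for this sketch only.

def saic_step(state, params, dt):
    """Advance the compartments by one explicit Euler step of length dt."""
    S, A, I, C = state
    beta, alpha, delta, gamma = params
    N = S + A + I + C
    dS = -beta * S * I / N - alpha * S           # infections + antidote installs
    dA = alpha * S + gamma * I                   # machines becoming protected
    dI = beta * S * I / N - (delta + gamma) * I  # new infections minus removals
    dC = delta * I                               # infected machines disabled
    return (S + dS * dt, A + dA * dt, I + dI * dt, C + dC * dt)

def simulate(state, params, dt=0.01, steps=5000):
    for _ in range(steps):
        state = saic_step(state, params, dt)
    return state

# 1000 machines, 10 initially infectious
final = simulate((990.0, 0.0, 10.0, 0.0), (0.5, 0.01, 0.05, 0.1))
```

Fitting beta, alpha, delta, and gamma to observed infection counts is where the systems identification step mentioned in the abstract would come in.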
Abstract:
This paper applies hierarchical Bayesian models to price farm-level yield insurance contracts. The methodology considers temporal effects, spatial dependence, and spatio-temporal models. One of the major advantages of this framework is that an estimate of the premium rate is obtained directly from the posterior distribution. These methods were applied to a farm-level data set of soybean in the State of Paraná (Brazil) for the period between 1994 and 2003. Model selection was based on a posterior predictive criterion. Given the small number of observations, this study considerably improves the estimation of fair premium rates.
Abstract:
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the uncertainties involved in predicting crop insurance premium rates, as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant in situations where data are limited.
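Once posterior-predictive yield draws are in hand, the premium rate of the two abstracts above follows by a one-line Monte Carlo average. The sketch below uses synthetic Gaussian draws as a stand-in for MCMC output, and the 70% coverage level is an assumption, not a figure from the papers.

```python
import random

def fair_premium_rate(yield_draws, guarantee):
    """Fair rate: expected indemnity E[max(0, guarantee - yield)] over the guarantee."""
    expected_indemnity = sum(max(0.0, guarantee - y) for y in yield_draws) / len(yield_draws)
    return expected_indemnity / guarantee

random.seed(42)
# stand-in for posterior-predictive county yield samples (kg/ha)
draws = [random.gauss(2800.0, 450.0) for _ in range(20000)]
rate = fair_premium_rate(draws, guarantee=0.7 * 2800.0)  # assumed 70% coverage
```

Because the rate is computed directly from posterior-predictive samples, parameter uncertainty propagates into the premium automatically, which is the advantage the abstracts emphasize over two-stage plug-in methods.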
Abstract:
Background: Detailed analysis of the dynamic interactions among biological, environmental, social, and economic factors that favour the spread of certain diseases is extremely useful for designing effective control strategies. Diseases like tuberculosis, which kills somebody in the world every 15 seconds, require methods that take the disease dynamics into account to design truly efficient control and surveillance strategies. The usual, well-established statistical approaches provide insights into the cause-effect relationships that favour disease transmission, but they only estimate risk areas and spatial or temporal trends. Here we introduce a novel approach that allows the dynamical behaviour of disease spreading to be determined. This information can subsequently be used to validate mathematical models of the dissemination process, from which the underlying mechanisms responsible for this spreading can be inferred. Methodology/Principal Findings: The method presented here is based on the analysis of the spread of tuberculosis in a Brazilian endemic city over five consecutive years. The detailed analysis of the spatio-temporal correlation of the yearly geo-referenced data, using different characteristic times of the disease evolution, allowed us to trace the temporal path of the aetiological agent, to locate the sources of infection, and to characterize the dynamics of disease spreading. Consequently, the method also allowed for the identification of socio-economic factors that influence the process. Conclusions/Significance: The information obtained can contribute to more effective budget allocation, drug distribution, and recruitment of skilled human resources, as well as guide the design of vaccination programs. We propose that this novel strategy can also be applied to the evaluation of other diseases as well as other social processes.
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary-state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z = D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random-walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we performed extensive Monte Carlo simulations and found σ_τ = 1.780 ± 0.005.
Abstract:
The machining of hardened steels has always been a great challenge in metal cutting, particularly for drilling operations. Generally, drilling is the machining process that is most difficult to cool, due to the tool's geometry. The aim of this work is to determine the heat flux and the coefficient of convection in drilling using the inverse heat conduction method. Temperature was assessed during the drilling of hardened AISI H13 steel using the embedded thermocouple technique. Dry machining and two cooling/lubrication systems were used, and thermocouples were fixed at distances very close to the hole's wall. Tests were replicated for each condition and were carried out with new and worn drills. An analytical heat conduction model was used to calculate the temperature at the tool-workpiece interface and to define the heat flux and the coefficient of convection. In all tests with new and worn drills, the lowest temperatures and the greatest decrease in heat flux were observed with the flooded system, followed by the minimum quantity lubrication (MQL) system, taking the dry condition as reference. The decrease in temperature was directly proportional to the amount of lubricant applied and was significant for the MQL system when compared to dry cutting. (C) 2011 Elsevier Ltd. All rights reserved.
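In its simplest form, the inverse step described above reduces to a one-parameter least-squares fit: given a unit-flux temperature response g(t) from a conduction model, the measured rise obeys T(t) - T0 = q·g(t) and the flux q follows directly. The response function and data below are synthetic placeholders, not the paper's analytical model or measurements.

```python
import math

def estimate_flux(times, temps, t0, unit_response):
    """Least-squares heat flux for a linear model: q = sum(g * dT) / sum(g * g)."""
    num = den = 0.0
    for t, temp in zip(times, temps):
        g = unit_response(t)
        num += g * (temp - t0)
        den += g * g
    return num / den

# placeholder unit-flux response (shape only; a real analysis would use the
# analytical conduction model for the drill/workpiece geometry)
g = lambda t: math.sqrt(t)

q_true = 250.0  # assumed flux used to generate the synthetic data
ts = [0.5 * i for i in range(1, 40)]
temps = [25.0 + q_true * g(t) for t in ts]
q_hat = estimate_flux(ts, temps, t0=25.0, unit_response=g)
```

The same normal-equation structure extends to two unknowns (flux and convection coefficient) by solving a 2x2 least-squares system instead of a scalar one.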
Abstract:
This study presents an alternative three-dimensional geometrically non-linear frame formulation, based on generalized unconstrained vectors and positions, to solve structures and mechanisms subjected to dynamic loading. The formulation is classified as total Lagrangian with an exact kinematic description. The resulting element presents warping and non-constant transverse strain modes, which guarantees locking-free behavior for the adopted three-dimensional constitutive relation (Saint-Venant-Kirchhoff, for instance). The application of generalized vectors is an alternative to the use of finite rotations and rigid-triad formulae. Spherical and revolute joints are considered, and selected dynamic and static examples are presented to demonstrate the accuracy and generality of the proposed technique. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
High-angle grain boundary migration during geometric dynamic recrystallization (GDRX) is predicted by two types of mathematical models. Both models consider the driving pressure due to curvature and a sinusoidal driving pressure owing to subgrain walls connected to the grain boundary. One model is based on the finite difference solution of a kinetic equation, and the other on a numerical technique in which the boundary is subdivided into linear segments. The models show that an initially flat boundary becomes serrated, with the peaks and valleys migrating into both adjacent grains, as observed during GDRX. When the sinusoidal driving pressure amplitude is smaller than 2π, the boundary stops migrating, reaching an equilibrium shape. When the amplitude is larger than 2π, equilibrium is never reached and the boundary migrates indefinitely, which would cause the protrusions of two serrated parallel boundaries to impinge on each other, creating smaller equiaxed grains.
Abstract:
Dynamic experiments in a nonadiabatic packed bed were carried out to evaluate the response to disturbances in wall temperature and in inlet airflow rate and temperature. A two-dimensional, pseudo-homogeneous, axially dispersed plug-flow model was numerically solved and used to interpret the results. The model parameters were fitted in distinct stages: the effective radial thermal conductivity (K_r) and the wall heat transfer coefficient (h_w) were estimated from steady-state data, and the characteristic packed-bed time constant (τ) from transient data. A new correlation for K_r in packed beds of cylindrical particles was proposed. It was experimentally verified that temperature measurements using radially inserted thermocouples and a ring-shaped sensor were not distorted by heat conduction across the thermocouple or by the thermal inertia of the temperature sensors.
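A characteristic time constant such as τ above can be recovered from a step-response transient by a log-linear fit, since for a first-order response ln((T_final - T)/(T_final - T0)) is linear in t with slope -1/τ. The synthetic data below stand in for the thermocouple measurements; the specific temperatures and τ value are assumptions.

```python
import math

def fit_time_constant(times, temps, t_final):
    """Slope of ln((T_final - T) / (T_final - T0)) versus t gives -1/tau."""
    t0_temp = temps[0]
    xs, ys = [], []
    for t, temp in zip(times, temps):
        frac = (t_final - temp) / (t_final - t0_temp)
        if frac > 1e-9:          # skip points already at steady state
            xs.append(t)
            ys.append(math.log(frac))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope

tau_true = 120.0  # s, assumed for the synthetic transient
ts = [10.0 * i for i in range(60)]
temps = [80.0 + (25.0 - 80.0) * math.exp(-t / tau_true) for t in ts]
tau_hat = fit_time_constant(ts, temps, t_final=80.0)
```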
Abstract:
The ideal conditions for the operation of tandem cold mills are connected to a set of references generated by models and used by dynamic regulators. Aiming at the optimization of the friction and yield stress coefficients, an adaptation algorithm is proposed in this paper. Experimental results obtained from an industrial cold rolling mill are presented. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Eight different models representing the effect of friction in control valves are presented: four based on physical principles and four empirical. The physical models, both static and dynamic, share the same structure. The models were implemented in Simulink/MATLAB® and compared using different friction coefficients and input signals. Three of the models were able to reproduce the stick-slip phenomenon and passed all the tests, which were applied following ISA standards. (C) 2008 Elsevier Ltd. All rights reserved.
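The stick-slip phenomenon the models are tested against can be reproduced in a few lines: a valve stem driven through a spring, with breakaway (static) friction above the kinetic level, moves in the characteristic stop-and-jump pattern. This is a generic Karnopp-style sketch with illustrative parameters, not one of the eight compared models.

```python
def simulate_stick_slip(steps=40000, dt=1e-3):
    """Valve stem driven through a spring, with static > kinetic friction."""
    m, k = 1.0, 50.0        # stem mass, actuator stiffness (illustrative)
    F_s, F_c = 6.0, 3.0     # static (breakaway) and kinetic friction levels
    v_cmd = 0.1             # slow commanded speed of the actuator end
    DV = 5e-3               # Karnopp stick band on velocity
    x = v = 0.0
    positions = []
    for i in range(steps):
        drive = k * (v_cmd * i * dt - x)         # spring force from actuator
        if abs(v) < DV and abs(drive) <= F_s:
            v = 0.0                              # stuck: friction balances drive
        else:
            direction = 1.0 if (v if abs(v) >= DV else drive) > 0 else -1.0
            v += (drive - F_c * direction) / m * dt
        x += v * dt
        positions.append(x)
    return positions

xs = simulate_stick_slip()
# samples where the stem did not move at all, i.e. stick phases
stick_samples = sum(1 for a, b in zip(xs, xs[1:]) if a == b)
```

The stem position stays flat while the spring winds up, then jumps forward once the drive exceeds the breakaway level, which is exactly the staircase response that makes stiction diagnosable in process data.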
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that the model can be solved either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these versions are based is solved recursively, and it is necessary to simplify some aspects of it to make an inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy-sector and CO₂-price behavior are similar in both versions (in the recursive version of the model we impose the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO₂ price rises at the interest rate). The main difference is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required to solve the model as an optimization problem, such as dropping the full vintaging of the capital stock and including fewer explicit technological options, likely affect the results.
Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, in which agents face high levels of uncertainty that likely leads to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This article addresses the interactions of the synthetic antimicrobial peptide dermaseptin 01 (GLWSTIKQKGKEAAIAAA-KAAGQAALGAL-NH₂, DS 01) with phospholipid (PL) monolayers comprising (i) a lipid-rich extract of Leishmania amazonensis (LRE-La), (ii) a zwitterionic PL (dipalmitoylphosphatidylcholine, DPPC), and (iii) a negatively charged PL (dipalmitoylphosphatidylglycerol, DPPG). The degree of interaction of DS 01 with the different biomembrane models was quantified from equilibrium and dynamic liquid-air interface parameters. At low peptide concentrations, interactions of DS 01 with the zwitterionic PL, as well as with the LRE-La monolayers, were very weak, whereas interactions with the negatively charged PL were stronger. For peptide concentrations above 1 μg/ml, a considerable expansion of the negatively charged monolayers occurred. In the case of DPPC, it was possible to return to the original lipid area in the condensed phase, suggesting that the peptide was expelled from the monolayer. In the case of DPPG, however, the average area per lipid molecule in the presence of DS 01 was higher than that of the pure PL even at high surface pressures, suggesting that at least part of the DS 01 remained incorporated in the monolayer. For the LRE-La monolayers, DS 01 also remained in the monolayer. This is the first report on the antiparasitic activity of antimicrobial peptides (AMPs) using Langmuir monolayers of a natural lipid extract from L. amazonensis. Copyright (C) 2011 European Peptide Society and John Wiley & Sons, Ltd.
Abstract:
This paper presents a new technique and two algorithms to bulk-load data into multi-way dynamic metric access methods, based on the covering radius of the representative elements employed to organize data in hierarchical data structures. The proposed algorithms are sample-based, and they always build a valid, height-balanced tree. We compare the proposed algorithms with existing ones, showing their behavior when bulk-loading data into the Slim-tree metric access method. Having identified the worst case of our first algorithm, we describe adequate counteractions that, in an elegant way, lead to the second algorithm. Experiments performed to evaluate their performance show that our bulk-loading methods build trees faster than the sequential insertion method with respect to construction time, and that they also significantly improve search performance. (C) 2009 Elsevier B.V. All rights reserved.
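The sample-based idea can be illustrated in a few lines: draw a small sample as node representatives, route every element to its nearest representative using only the metric, record each node's covering radius, and recurse. This generic sketch (on scalars, with metric |x - y|) conveys the routing-by-representative principle only; unlike the paper's algorithms, it does not enforce height balance or Slim-tree node invariants.

```python
import random

def bulk_load(points, fanout=3, leaf_size=4, rng=None):
    """Recursively build a metric tree from sampled representatives."""
    rng = rng or random.Random(7)
    if len(points) <= leaf_size:
        return {"leaf": True, "points": points}
    reps = rng.sample(points, fanout)
    buckets = [[] for _ in range(fanout)]
    for p in points:
        # metric-only routing decision: nearest representative wins
        nearest = min(range(fanout), key=lambda j: abs(p - reps[j]))
        buckets[nearest].append(p)
    children = []
    for rep, bucket in zip(reps, buckets):
        children.append({
            "rep": rep,
            "radius": max(abs(p - rep) for p in bucket),  # covering radius
            "child": bulk_load(bucket, fanout, leaf_size, rng),
        })
    return {"leaf": False, "children": children}

def count(node):
    return len(node["points"]) if node["leaf"] else \
        sum(count(c["child"]) for c in node["children"])

tree = bulk_load([float(i) for i in range(100)])
```

Storing the covering radius at each node is what later lets a range query prune whole subtrees whose representative is farther from the query than the query radius plus the node radius.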
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; those related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
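For the Laguerre particular case mentioned above, the basis is concrete enough to sketch: all filters share a single real pole a, the first is a scaled first-order lag, and each subsequent filter cascades one all-pass section. The pole value and signal length below are illustrative choices; orthonormality of the resulting impulse responses can be checked numerically.

```python
import math

def laguerre_impulse_responses(a, n_filters, length):
    """Impulse responses of discrete-time Laguerre filters with pole a.

    H1(z) = sqrt(1 - a^2) / (1 - a z^-1); each subsequent filter multiplies
    by the all-pass section A(z) = (z^-1 - a) / (1 - a z^-1).
    """
    impulse = [1.0] + [0.0] * (length - 1)
    gain = math.sqrt(1.0 - a * a)
    y, prev = [], 0.0
    for x in impulse:                       # first-order lag H1
        prev = a * prev + gain * x
        y.append(prev)
    responses = [y]
    for _ in range(n_filters - 1):          # cascade all-pass sections
        x_sig, y = responses[-1], []
        yp = xp = 0.0
        for x in x_sig:
            yn = a * yp - a * x + xp        # y[n] = a*y[n-1] - a*x[n] + x[n-1]
            y.append(yn)
            yp, xp = yn, x
        responses.append(y)
    return responses

h = laguerre_impulse_responses(a=0.6, n_filters=3, length=400)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))
```

In an OBF Volterra model, the filter outputs driven by the system input feed the static polynomial stage, and the pole a is exactly the kind of parameter the gradient-based Levenberg-Marquardt search of the paper would tune.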